Current through May 13, 2026

MSA-AI: AI-Specific Provisions

By Adam David Long

The AI Addendum Teardown: What Every Clause Is Actually Doing

Most SaaS vendors have added AI-specific terms to their agreements in the last 12-18 months. These provisions arrive as an AI Services Addendum, an updated Acceptable Use Policy, or new sections in the Data Processing Addendum. They share a common structure -- and common patterns.

Below is a composite AI addendum, built from the actual published terms of seven major vendors. Each clause type is annotated with the pattern it triggers and what to do about it -- from both sides of the table, because in-house counsel drafts and sends these addenda to their own customers just as often as they receive them from vendors.


Clause 1: "We Don't Train on Your Data (Unless We Do)"

What it looks like:

Some vendors are explicit: your data is not used for training unless you opt in. Others use your data by default and offer an opt-out. The difference is enormous and often buried.

The spectrum across real vendors:

  • Opt-in: Your data is not used for model training unless you affirmatively agree. This is the strongest position.
  • No-storage: Prompts and outputs are not stored beyond what's needed to generate the response. Functionally similar to opt-in.
  • Opt-out: Your data is used for training, improving services, and R&D by default. You can opt out, but the clock may already be running.
  • Anti-training (reversed): You are prohibited from using any AI output to train any AI system -- including your own. The restriction runs toward you, not them.

Pattern: The Silence Trap

The opt-out version is a Silence Trap. The vendor begins using your data for training the moment you start using the service. The opt-out mechanism may require a written request to a specific email address. If you don't know to ask, your silence is consent.

The anti-training clause is a different animal entirely. It restricts what you can do with the output -- not what the vendor does with your input. One vendor's terms prohibit customers from using "any content, data, output or other information received or derived from any generative AI features" to "directly or indirectly create, train, test, or otherwise improve any machine learning algorithms or artificial intelligence systems." If you're building internal AI tools, this clause could conflict with your roadmap.

If you're receiving this clause:

  • Check whether your vendor is opt-in or opt-out -- don't assume
  • If opt-out, submit the opt-out request immediately and confirm receipt in writing
  • Check for anti-training clauses if your company is building its own AI capabilities
  • Ask: does the vendor retain any derived data (aggregated insights, ML results) even after opt-out? One vendor's terms explicitly state that while the customer owns raw data, the vendor owns "aggregated machine learning results"

If you're drafting this clause:

  • Default to opt-in architecture for model improvement -- customers who want to opt out can request it, but your default should serve your data pipeline
  • Build a documented opt-out mechanism (specific email address, confirmation receipt) and enforce it; a discoverable opt-out protects both parties if the clause is later challenged (a minimal logging sketch follows this list)
  • For anti-training restrictions: prohibiting customers from using your output to train competing models is standard and defensible -- this is your IP position, not an overreach
  • If you retain aggregated ML results after opt-out, state this explicitly; burying it creates disputes at renewal
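
To make the opt-out mechanism concrete, here is a minimal sketch of the documented, confirmable registry described above. The file name, fields, and JSONL format are illustrative assumptions, not any vendor's actual system.

```python
import datetime
import json

def record_opt_out(customer_id: str, channel: str,
                   registry: str = "opt_out_registry.jsonl") -> dict:
    """Append a timestamped opt-out record; the entry doubles as the confirmation receipt."""
    entry = {
        "customer_id": customer_id,
        "channel": channel,  # e.g. "email:privacy@vendor.example" (hypothetical address)
        "received_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(registry, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry  # send this back to the customer as written confirmation
```

The point is the paper trail: a timestamp, a channel, and a receipt that survives a later dispute over whether and when the opt-out landed.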

Clause 2: "You Own the Output (But Good Luck Enforcing It)"

What it looks like:

Most vendors assign or attribute output ownership to the customer. This sounds protective. The caveats are where the pattern hides.

The standard language: "As between Customer and Vendor, Customer owns all Output" or "Output is Customer Data."

The caveats that follow:

  • "Output may not be unique, and other users may receive similar content" -- you own it, but so might someone else
  • "Output may not be protectable by Intellectual Property Rights" -- you own something that may have no legal protection
  • The vendor "assigns to Customer all right, title, and interest, if any, in and to Output" -- the "if any" is doing a lot of work

Pattern: The Illusory Protection

The ownership assignment looks like a right. But if the output isn't protectable IP (because it was generated by an AI, because it's not sufficiently original, because someone else got identical output), you own a right that may not exist. The clause gives you the label of ownership without the substance.

If you're receiving this clause:

  • Don't treat the output ownership clause as sufficient IP protection for your business
  • If the output is mission-critical (marketing copy, code, analysis), independently assess its protectability
  • The real protection is in the indemnification clause (next section), not the ownership clause

If you're drafting this clause:

  • "Customer owns output" plus "if any IP rights exist" is the right structure -- you give customers what they expect without warranting protectability you can't guarantee
  • Keep the "output may not be unique" language -- when multiple customers receive similar output from the same prompt pattern, you need this protection
  • Resist pressure to remove the "if any" qualifier; warranting ownership of something that may not be ownable creates exposure you can't quantify

Clause 3: "We'll Indemnify You (Within a Very Small Box)"

What it looks like:

The indemnification clause for AI outputs determines what happens when someone claims the AI's output infringes their intellectual property. The range across vendors is staggering.

The spectrum:

  • Uncapped IP indemnification: Vendor defends and pays for IP infringement claims from output, with no dollar cap. Not subject to the general liability limitation.
  • Two-pronged indemnification: Separate coverage for (1) output that infringes someone's IP and (2) training data that infringes someone's IP. The broadest structural protection available.
  • Capped at $10,000 per claim: Vendor indemnifies, but total liability is hard-capped at $10,000 per output or claim. If the infringement costs you $500K, your recovery is $10K.
  • Not addressed in AI terms: AI-specific indemnification doesn't exist. You fall back to the general MSA indemnification, which may not contemplate AI output at all.

That is not a range. That is a canyon. The difference between uncapped coverage and $10,000 could be the difference between a manageable legal expense and an existential one.

The universal kill switch: modification.

Every vendor that offers AI output indemnification includes an exclusion for modified output. The typical language: indemnification does not apply if the output was "modified, transformed, or used in combination with products or services not provided by Vendor."

Think about what that means in practice. Your team generates a first draft with the AI tool. They edit it -- add a sentence, swap a paragraph, insert a brand name. That draft is now "modified output." The indemnification may no longer apply to the final version that actually gets published.
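
One practical response is to preserve evidence of what the raw output was before anyone touched it, and to measure how far the published version drifted. A minimal sketch using Python's standard library; the threshold for what counts as "modified" is whatever the clause says, not this ratio.

```python
import difflib
import hashlib

def evidence_hash(raw_output: str) -> str:
    """Fingerprint the raw output at generation time, before editing begins."""
    return hashlib.sha256(raw_output.encode("utf-8")).hexdigest()

def modification_ratio(raw_output: str, published: str) -> float:
    """Similarity between raw output and the published version (1.0 = unmodified)."""
    return difflib.SequenceMatcher(None, raw_output, published).ratio()

raw = "The quarterly report shows strong growth."            # hypothetical raw output
final = "Our Q3 report shows strong growth across regions."  # edited before publication
print(modification_ratio(raw, final))  # < 1.0: arguably "modified output" under the exclusion
```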

Pattern: The Illusory Protection + Procedural Forfeiture

The indemnification exists on paper. The modification exclusion makes it structurally difficult to use. In practice, virtually all AI output gets modified before use. The indemnification protects the raw output that nobody publishes; it may not protect the finished work that actually creates liability.

Additional exclusions that appear across vendors:

  • The customer knew or should have known the output was infringing
  • The customer disabled or bypassed safety features, filters, or citation tools
  • The customer continued using output after receiving an infringement notice
  • Trademark-related claims (virtually every vendor excludes these)
  • Output generated from input the customer didn't have rights to use

If you're receiving this clause:

  • Know where your vendor falls on the spectrum before you sign
  • If your vendor offers AI indemnification, read every exclusion -- especially the modification clause
  • Push for a narrow definition of "modification" that excludes routine editing (formatting, minor revisions, integration into a larger document)
  • If your vendor doesn't offer AI-specific indemnification at all, your general MSA indemnification probably doesn't cover AI output -- raise this explicitly in negotiation
  • Use the uncapped/two-pronged vendors as benchmarks when negotiating with vendors who offer less

If you're drafting this clause:

  • The modification exclusion is your primary exposure management tool -- keep it broad; "modified, transformed, or used in combination with" is standard across the market
  • If you're offering AI output indemnification at all, cap it per claim; uncapped coverage is only viable if your training data sourcing is clean, documented, and defensible
  • Trademark exclusions are universal; enforce them -- AI output that happens to include a third-party mark is not your IP risk to bear
  • Add "output generated from input Customer did not have rights to use" as an exclusion -- customers who feed in third-party content they don't own have no indemnification claim against you for the resulting output

Clause 4: "You're Responsible for Everything the AI Does"

What it looks like:

Every vendor places compliance responsibility on the customer. The language varies from blunt to elaborate, but the effect is identical.

The blunt version: "Customer is solely responsible for all use of the Outputs and for evaluating the accuracy and appropriateness of Output for Customer's use case."

The elaborate version: Nine affirmative obligations including implementing abuse detection, output controls, AI disclosure, visible watermarking, content credentials, continuous testing, human oversight, feedback channels, and security measures.

The specific prohibitions that create liability exposure:

  • No automated decisions with legal effects unless a human makes the final call
  • No using AI for individualized professional advice (legal, medical, financial) without a qualified reviewer
  • No emotion recognition in the workplace
  • No social scoring or predictive profiling based on protected characteristics
  • No real-time biometric recognition in public spaces
  • Disclosure required when users interact with AI systems

Pattern: Compliance Burden Shift

The vendor built the AI system. The vendor chose the training data. The vendor designed the model architecture. The vendor decides when and how to update it. But the customer is "solely responsible" for ensuring that its use of the system complies with every applicable law -- including laws that are being written right now in a dozen jurisdictions.

The EU AI Act and an expanding body of state legislation across the U.S. are creating obligations for "deployers" of AI systems -- the entities that use AI to make or support decisions. That's your company when you're the customer. The vendor is the "developer" -- a different regulatory category with different obligations. The compliance burden shift clause in the contract mirrors and reinforces this regulatory split. For current status of state AI deployment laws, see the [LawSnap State AI Legislation Tracker] (coming soon).

What makes this dangerous: Some of these prohibitions (no emotion recognition, no decisions without human oversight, no social scoring) may conflict with how your company is actually using or plans to use the AI features. If your HR team is using an AI tool to screen resumes, and the vendor's AUP prohibits automated employment decisions without human oversight, your company may be in breach of the vendor's terms AND the emerging state AI laws -- simultaneously.

At least one major vendor's terms define an AUP violation as a "material breach" of the MSA. Material breach typically triggers termination rights. So using an AI feature in a way that violates the AUP could cost you the entire platform relationship, not just the AI feature.

If you're receiving this clause:

  • Read the AUP and AI-specific restrictions before deploying any AI feature -- not after
  • Map your company's actual use cases against the vendor's prohibited uses list (a minimal mapping sketch follows this list)
  • If your use case is close to a prohibited category (e.g., AI-assisted hiring, AI in healthcare workflows), get explicit written confirmation from the vendor that your specific use is permitted
  • Push for vendor cooperation obligations on compliance -- the vendor should provide documentation sufficient for your regulatory assessments
  • If the vendor's AUP changes and your previously-permitted use becomes prohibited, you need a transition period, not immediate breach exposure
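
As a starting point for that mapping, a minimal sketch: the category labels are hypothetical distillations of typical prohibited-uses lists, and the use-case inventory is whatever your own deployment list actually contains.

```python
# Hypothetical category labels distilled from typical AUP prohibited-uses lists.
PROHIBITED = {
    "automated_decision_no_human",
    "workplace_emotion_recognition",
    "social_scoring",
    "public_biometric_id",
}

# Illustrative internal inventory: use case -> categories it touches.
USE_CASES = {
    "resume_screening": {"automated_decision_no_human"},  # no human final call today
    "marketing_copy": set(),
}

for use_case, touched in USE_CASES.items():
    conflicts = touched & PROHIBITED
    if conflicts:
        print(f"{use_case}: review against the AUP -> {sorted(conflicts)}")
```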

If you're drafting this clause:

  • "Solely responsible" language is your shield -- customers who deploy your AI into their workflows are the "deployers" under the EU AI Act and emerging U.S. state law; your terms should reflect that regulatory split
  • A robust prohibited uses list in your AUP shifts liability to the party actually using the technology for prohibited purposes; update it as AI regulations evolve
  • Make AUP violation a material breach -- this is your enforcement mechanism and your exit if a customer is using your platform in ways that create regulatory exposure for you
  • The Dynamic Document structure works in your favor here: make sure your MSA's incorporation-by-reference clause covers AUP updates without requiring customer consent for each revision
  • Require customers to certify compliance with prohibited uses at renewal; a paper trail matters when regulators ask who was responsible for a particular deployment

Clause 5: "The AI Might Be Wrong, and That's Your Problem"

What it looks like:

Every vendor disclaims accuracy. The language is consistent across the industry:

  • "May provide inaccurate or offensive output"
  • "Emerging technology" not designed "to meet Customer's regulatory, legal, or other obligations"
  • "May be unpredictable, and may include inaccurate or harmful responses"
  • "No warranty of any kind... that output will meet Customer's requirements or expectations, that content will be accurate"

Pattern: Verification Impossibility

This is the one pattern that is genuinely new to AI contracts. In a traditional SaaS agreement, the vendor can warrant that the software performs as documented -- the behavior is deterministic. In an AI agreement, the output is probabilistic. The same input can produce different output on different days. The vendor literally cannot warrant accuracy because the system is not designed to be consistently accurate.

The result: the warranty section of your MSA may cover the platform (uptime, security, access) but explicitly disclaim the thing you're actually paying for (the AI output). The product works as designed. The design includes being wrong sometimes.

If you're receiving this clause:

  • Don't rely on the vendor warranty for AI output quality -- it doesn't exist
  • Build your own validation processes: human review, spot-checking, output logging (a minimal sketch follows this list)
  • If accuracy matters for your use case (and it usually does), negotiate for defined performance metrics -- not a vague "commercially reasonable" standard, but measurable thresholds on specific tasks with a testing protocol both parties agree to
  • Budget for the human review layer that every vendor's terms require but none of them provide
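
A minimal sketch of the logging-plus-sampling layer those bullets describe, assuming a JSONL log file and a 10% review rate; both are illustrative choices, not a vendor requirement.

```python
import datetime
import json
import random

REVIEW_RATE = 0.10  # fraction of outputs routed to a human reviewer (illustrative)

def log_output(prompt: str, output: str,
               log_path: str = "ai_output_log.jsonl") -> bool:
    """Log every AI output; return True when this one should be spot-checked by a human."""
    flagged = random.random() < REVIEW_RATE
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
        "human_review": flagged,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return flagged
```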

If you're drafting this clause:

  • The broad accuracy disclaimer is non-negotiable -- a probabilistic system cannot carry a deterministic accuracy warranty; resist customer pressure to soften this language
  • If a customer needs measurable AI performance guarantees, offer a use-case-specific testing protocol as a negotiated alternative to a warranty; this lets you bound the commitment to conditions you can actually control
  • Be clear in the warranty section: platform uptime and AI output quality are separate things; only the former is warranted
  • Keep "emerging technology" language -- it signals to courts and regulators that both parties understood the inherent unpredictability of the product at signing

The Composite Pattern

Read together, the five clauses of a typical AI addendum form a closed system:

  1. Your data may train their model (unless you opt out in time)
  2. You "own" the output (but it may not be protectable)
  3. They'll indemnify you (unless you edited it, which you will)
  4. You're responsible for compliance (for a system you can't see inside)
  5. The AI might be wrong (and that's not a bug, it's the product)

Each clause is individually defensible. Together, they shift substantially all risk from the party that built and controls the AI system to the party that uses it. That's not unusual in enterprise software -- liability allocation is the core function of any commercial agreement. But the AI-specific provisions add a new dimension: you're accepting liability for a system whose behavior is non-deterministic, whose capabilities change with each model update, and whose regulatory environment is being written in real time.

This is why the AI addendum is the most important document in your vendor stack right now -- and why it deserves the same scrutiny your MSA gets.

The Training Data Clause

Your vendor may be using your data to train their AI model right now. The mechanism is a clause that gives vendors the right to use customer data to "improve the Services" — language that predates AI but now covers model training. Whether you can stop it, and when the window to stop it closes, depends entirely on how your vendor structured the opt-out.

What the Clause Looks Like

The training data clause rarely announces itself. It is typically embedded in the Data Processing Addendum or a new AI Services Addendum under language like:

"Vendor may use anonymized and aggregated Customer Data to improve the Services, including to develop, train, and improve AI and machine learning features."

Or, more directly:

"By using the AI Features, Customer grants Vendor a non-exclusive license to use Inputs and Outputs to improve Vendor's models and Services."

The key variable is whether the clause is opt-in or opt-out.

  • Opt-in: Your data is not used for training unless you affirmatively agree. The strongest position for customers.
  • Opt-out: Your data is used by default. You can stop it, but the clock is already running from the addendum's effective date — not from when you read it.
  • No provision: Falls back to the general MSA or DPA data usage terms, which may not contemplate AI training at all — creating ambiguity about what the vendor can and cannot do.

The GDPR Problem

Under GDPR Article 28(3)(a), a data processor may process personal data only on documented instructions from the controller. "Improve the Services" as a vendor-defined purpose in the vendor's standard terms is not a documented instruction from you. It is the vendor instructing itself, using your data, under cover of a contractual term you accepted.

If the training uses EU personal data under that clause without a specific legal basis under GDPR Article 6 — and "improve the Services" is not itself a legal basis — the processing may be unlawful regardless of what the contract says. A contractual term cannot create a legal basis for processing that does not otherwise exist.

[GDPR Art. 28(3)(a), https://gdpr-info.eu/art-28-gdpr/ (processor processes personal data only on documented instructions from the controller); GDPR Art. 6(1), https://gdpr-info.eu/art-6-gdpr/ (lawfulness of processing requires a valid legal basis).]

The Opt-Out Timing Problem

Even where an opt-out exists, the window often closes before you have a practical opportunity to act.

The opt-out clock typically starts at the addendum's effective date — which is when the vendor posts the addendum to a URL, not when you receive notice, not when you read it. A 30-day opt-out window that begins the day the vendor updates a URL you were not monitoring may have already expired.
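
The arithmetic is unforgiving. A minimal sketch with hypothetical dates:

```python
from datetime import date, timedelta

def opt_out_deadline(effective: date, window_days: int = 30) -> date:
    """Deadline if the clock starts at the addendum's effective date, not at notice."""
    return effective + timedelta(days=window_days)

# Hypothetical dates: addendum posted to a URL on March 1; discovered April 10.
deadline = opt_out_deadline(date(2026, 3, 1))   # 2026-03-31
discovered = date(2026, 4, 10)
print((discovered - deadline).days)             # 10 -> the window closed 10 days before you found it
```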

Some vendors send email notice. Some post to a portal. Some update a URL incorporated by reference into your MSA. Review your MSA's notice provisions to determine what constitutes valid notice for addendum changes — and whether the vendor's current practice satisfies that standard.

The "Aggregated and Anonymized" Qualifier

Most training data clauses include a qualifier: "anonymized and aggregated Customer Data." This limits but does not eliminate the concern:

  1. Anonymization is not a binary state. Re-identification from aggregated datasets is a known technical risk, particularly for datasets with unusual or distinctive characteristics (a k-anonymity sketch follows this list).
  2. The qualifier typically applies to the training use, not to the initial processing necessary to prepare the data for training.
  3. Some vendors retain the right to "aggregated machine learning results" even after a customer opts out of the training use — meaning derivative value from your data may remain with the vendor even if your raw data is not used going forward.
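
To see why point 1 matters, here is a minimal k-anonymity check over made-up rows: if any combination of quasi-identifiers is unique in the "aggregated" dataset, that record is a re-identification candidate.

```python
from collections import Counter

def k_anonymity(records: list[dict], quasi_identifiers: list[str]) -> int:
    """Smallest group size over the quasi-identifier combination; k = 1 means a record is unique."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

rows = [  # hypothetical aggregated rows
    {"industry": "biotech", "region": "CH", "headcount": "10-50"},
    {"industry": "biotech", "region": "CH", "headcount": "10-50"},
    {"industry": "mining",  "region": "IS", "headcount": "10-50"},  # distinctive -> unique
]
print(k_anonymity(rows, ["industry", "region", "headcount"]))  # 1: the mining row stands alone
```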

Both Sides of the Table

If you're the buyer:

  1. Check whether your vendor is opt-in or opt-out — do not assume.
  2. If opt-out: submit the opt-out request immediately, in writing, to the specified contact. Confirm receipt. Do not rely on a web form submission without written confirmation.
  3. Push to replace opt-out with opt-in for any new or renewed AI addendum — frame it as a data governance requirement, not a negotiating position.
  4. Add a clause requiring the vendor to delete any data used for training purposes if you subsequently opt out.
  5. Confirm whether the vendor retains any derivative rights (ML results, aggregated insights) after opt-out.

If you're the vendor:

  1. Opt-out architecture maximizes your training data pool but creates trust erosion and, in EU contexts, potential GDPR exposure.
  2. Opt-in is increasingly the market expectation among enterprise buyers; adopting it before it becomes a regulatory requirement positions you as the trust-forward option.
  3. Be precise about what "improve the Services" covers — ambiguity creates negotiation friction and legal risk.

The Pattern Signal

The training data clause frequently appears alongside:

  • The Silence Trap — the opt-out mechanism is one: your inaction constitutes consent to data use for model training.
  • The Dynamic Document — the AI addendum that contains the training data clause is often incorporated by reference and updatable without reopening the MSA.
  • The Compliance Burden Shift — the vendor takes your data to train the model; you bear the compliance obligations for the model's outputs.

The Compliance Burden Shift

The vendor built the model. The vendor chose the training data. The vendor updates it without telling you. Under the standard AI vendor MSA, the compliance obligations for everything the model does are yours.

The Compliance Burden Shift pattern — where regulatory risk falls on the party that does not control the product — exists in traditional SaaS agreements, but AI has made it structurally more dangerous. AI regulation is expanding faster than contract cycles. The obligations being shifted onto customers are real, affirmative, and not trivially satisfied.

What the Clause Looks Like

The blunt version:

"Customer is solely responsible for ensuring that Customer's use of the AI Features complies with all applicable laws and regulations, including without limitation laws governing automated decision-making, data protection, and artificial intelligence."

The elaborate version includes nine affirmative customer obligations: implementing abuse detection, output controls, AI disclosure, visible watermarking, content credentials, continuous testing, human oversight, feedback channels, and security measures.

Both versions accomplish the same thing: the compliance risk for a black-box system you cannot audit is placed entirely on you.

The EU AI Act Deployer Problem

Under Regulation (EU) 2024/1689 — the EU AI Act — a deployer is any natural or legal person using an AI system under its authority (except where used in the course of a personal non-professional activity). Article 3(4) sets that broad definitional threshold; the substantive obligations under Article 26 then attach when that deployer is using a high-risk AI system.

Article 26 — which applies from 2 August 2026 — imposes affirmative obligations on deployers of high-risk AI systems, including:

  • Implementing appropriate technical and organizational measures to ensure use in accordance with the instructions of use
  • Maintaining operational logs for at least six months (a retention sketch follows this list)
  • Informing affected natural persons that they are subject to AI-based decisions
  • Carrying out data protection impact assessments where required under GDPR Art. 35 (cross-referenced in Art. 26(9))
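
For the logging obligation specifically, a minimal retention-floor sketch; the 183-day figure is an assumption standing in for "at least six months," and the actual period depends on the system and applicable law.

```python
import pathlib
from datetime import datetime, timedelta, timezone

RETENTION_FLOOR = timedelta(days=183)  # stand-in for the six-month minimum

def purge_expired(log_dir: str) -> None:
    """Delete log files older than the retention floor; never touch anything newer."""
    cutoff = datetime.now(timezone.utc) - RETENTION_FLOOR
    for path in pathlib.Path(log_dir).glob("*.jsonl"):
        mtime = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
        if mtime < cutoff:
            path.unlink()
```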

For specific Annex III deployers — public bodies and certain private actors deploying high-risk AI in contexts such as employment, education, or essential services — Article 27 additionally requires a fundamental rights impact assessment.

These obligations attach to the deployer — your company — regardless of the fact that the vendor controls the model, the training data, and the update schedule. The vendor's compliance burden shift clause simply acknowledges in the contract what the regulation already imposes: the deployer bears the compliance weight.

[Regulation (EU) 2024/1689, Art. 3(4), https://eur-lex.europa.eu/eli/reg/2024/1689/oj (definition of deployer); Art. 26 (obligations of deployers of high-risk AI systems, applicable from 2 August 2026); Art. 27 (fundamental rights impact assessment for specific Annex III deployers). Readable mirror: https://artificialintelligenceact.eu/article/3/, https://artificialintelligenceact.eu/article/26/, and https://artificialintelligenceact.eu/article/27/.]

The US Context

Colorado SB 24-205 (2024), codified at Colo. Rev. Stat. § 6-1-1701 et seq., creates analogous deployer obligations in the US context. The original effective date was February 1, 2026; SB 25B-004 (signed August 28, 2025) postponed the operative date to June 30, 2026. The statute applies to developers and deployers of "high-risk artificial intelligence systems" used in consequential decisions and imposes requirements for risk management, impact assessments, and consumer notifications.

A growing number of states have introduced or enacted AI-specific deployer obligations. A bill-by-bill list committed to contract prose decays immediately; see the LawSnap State AI Legislation Tracker for current status.

[Colo. Rev. Stat. § 6-1-1701 et seq., https://leg.colorado.gov/bills/sb24-205 (Colorado AI Act, original SB 24-205); SB 25B-004, https://leg.colorado.gov/bills/sb25b-004 (postponing operative date to June 30, 2026).]

Why the Shift Is Not Commercially Reasonable

The compliance burden shift is defensible in limited form — your company should be responsible for how it uses outputs and what decisions it makes. It becomes commercially unreasonable when it extends to obligations that require visibility into the model you do not have:

  • You cannot conduct an algorithmic impact assessment for a model whose weights you cannot inspect
  • You cannot confirm bias testing for a model the vendor updates unilaterally and without notice (a version drift check is sketched after this list)
  • You cannot provide meaningful human oversight of a model that changes between the oversight review and the next deployment
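
One partial mitigation for silent updates is to pin and verify whatever model identifier the vendor exposes. A minimal sketch, assuming (and it is an assumption) that the vendor's API echoes a model identifier in response metadata:

```python
def assert_model_unchanged(response_meta: dict, validated: str) -> None:
    """Fail loudly when the vendor-reported model identifier drifts from the one you validated."""
    seen = response_meta.get("model", "<missing>")
    if seen != validated:
        raise RuntimeError(f"Model changed without notice: validated {validated!r}, got {seen!r}")

# Hypothetical response metadata and identifier.
assert_model_unchanged({"model": "vendor-llm-2026-01"}, validated="vendor-llm-2026-01")
```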

Accepting full compliance liability for a system you cannot audit is not a balanced allocation of risk. It is a full transfer of risk from the party with control (the vendor) to the party without it (you).

Both Sides of the Table

If you're the buyer:

  1. Add vendor cooperation obligations — the vendor must provide technical documentation sufficient for your regulatory compliance assessment on request.
  2. Require advance notice of material model changes — at minimum 30 days, with a right to test before the change goes live in your environment.
  3. Negotiate shared responsibility: the vendor bears compliance obligations for how the model was built and trained; you bear compliance obligations for how you deploy and use outputs.
  4. For EU AI Act compliance specifically: confirm whether the vendor has conducted the required conformity assessments for high-risk AI systems, and require the vendor to provide CE documentation if applicable.

If you're the vendor:

  1. Full customer-side liability is not commercially reasonable for AI systems in regulated use cases. Customers who understand their exposure are increasingly rejecting it.
  2. Providing technical documentation for customer compliance assessments is a competitive differentiator — most vendors don't, which means doing so is low-cost differentiation.
  3. Material model change notices protect the vendor as much as the customer — silent updates that break customer workflows create churn and support costs.

The Pattern Signal

The Compliance Burden Shift co-occurs with:

  • Verification Impossibility — you cannot verify compliance with obligations for a system you cannot inspect. The shift is most dangerous when combined with black-box architecture.
  • The Training Data Clause — the vendor uses your data to train the model; you bear the compliance obligations for the model's outputs. The data flows one way; the liability flows the other.
  • Template Contamination — the compliance burden shift clause arrived in the vendor's standard AI addendum. It was not negotiated; it was accepted because nobody knew what "standard" looked like.
