Updated 2026-04-23
Current through April 23, 2026

MSA AI Provisions — Practitioner Guide

MSA: AI Edition — At a Glance

At a Glance

What changed: Vendors are adding AI provisions through incorporated documents — AUPs, DPAs, AI addenda — not through MSA redlines. The MSA looks identical to last year. The terms governing AI features may not.
Who this affects: Any company using a SaaS vendor that has added AI features or an AI addendum since your last renewal. That is now most enterprise SaaS vendors.
The new document stack: MSA → Order Form → DPA → AUP → AI Services Addendum (often added mid-term, incorporated by reference, updatable without notice)
Top 3 AI-specific risks:
  1. AI outputs carved out of IP indemnification — the work your team publishes may have no coverage
  2. Compliance burden shift — you are the EU AI Act "deployer" for a model you cannot audit
  3. Training data clause — "improve the Services" may mean your data is training the vendor's model right now
What the standard MSA doesn't cover: AI output quality in the SLA; AI-specific regulatory compliance obligations; model change notice; indemnification for AI-generated content your team modifies before use
The renewal trap: AI addenda added mid-term carry forward at renewal. If you missed the AI addendum opt-out window, renewal locks in those terms for another year.
Where to start: Check whether your vendor's AI addendum is opt-in or opt-out for training data. Submit the opt-out request if it exists. Confirm receipt in writing. Do this before the next renewal.
Tags: msa · ai-provisions

AI-WATCHPOINTS

How AI Changed Your MSA: 8 Traps That Got Worse

Where sophisticated counsel still get caught.

The MSA you negotiated is not AI-ready by default. AI provisions arrive through incorporated documents, not redlines — which means the terms governing your vendor's AI features may have changed after you signed, without your knowledge and without reopening negotiation. The eight watchpoints below are the places where standard MSA risk calculus breaks down when AI is in the picture.


1. The AI addendum that changed your terms may have arrived through a URL in your existing MSA — not through a redline, not through negotiation, and not with your knowledge.

Vendors are not updating the MSA itself to add AI provisions. They are updating the documents the MSA incorporates by reference — the Acceptable Use Policy, the Data Processing Addendum, or a new AI Services Addendum added to the document stack after signing. A clause making the agreement “subject to the Acceptable Use Policy at [URL], as may be updated from time to time” gives the vendor the right to add AI-specific data usage rights, indemnification exclusions, and compliance obligations without your consent. Your continued use of the platform constitutes acceptance. The MSA you redlined last year may look identical — because it is. The terms governing the AI features you are now using are elsewhere.

[By analogy from the consumer context, courts have enforced terms incorporated by hyperlink where the user had reasonable notice. Meyer v. Uber Technologies, Inc., 868 F.3d 66, 75 (2d Cir. 2017), https://law.justia.com/cases/federal/appellate-courts/ca2/16-2750/16-2750-2017-08-17.html (enforcing arbitration clause in terms incorporated by hyperlink). The principle is well-developed in the consumer context; commercial B2B authority for hyperlink-incorporation under modern enterprise SaaS conditions remains thin.]

Why your MSA is not one document →


2. A warranty that AI features will deliver “commercially reasonable” performance, with termination as your exclusive remedy, means your recovery for an AI-driven failure is a fraction of your annual fee — regardless of what that failure actually cost your business.

UCC § 2-719(1)(b) permits commercial parties to make a specified remedy exclusive. Most SaaS MSAs do. The warranty-remedy mismatch was always a structural problem, but AI makes it worse: “commercially reasonable” is indefinable for a probabilistic system whose outputs change with every model update, and the vendor controls when and how the model changes. You cannot verify whether the warranty is being met. If the AI system fails — producing inaccurate outputs, triggering a regulatory violation, or making flawed recommendations at scale — your recovery under the exclusive remedy clause is termination and a pro-rata refund of prepaid fees. Read the warranty and the exclusive remedy clause together, as a unit.

[UCC § 2-719(1)(b), https://www.law.cornell.edu/ucc/2/2-719 (parties may agree that a remedy is exclusive; if expressly agreed to be exclusive, it is the sole remedy).]

The warranty that doesn't protect you →


3. The consequential damages exclusion removes exactly the costs that AI failures generate — and the liability cap was sized for software bugs, not for an AI system making consequential decisions at scale.

UCC § 2-719(3) allows commercial parties to exclude consequential damages unless unconscionable. Most enterprise MSAs do. Before AI, this mattered. Now it matters more: the harms most likely to follow an AI vendor failure — regulatory fines for automated decision-making violations, customer harm from inaccurate outputs, reputational damage, business interruption during remediation — are all consequential damages. The liability cap limits your total recovery; the consequential damages exclusion removes the categories that make up that total. A cap of 12 months of fees was calibrated to software bug risk. It was not calibrated to the risk of an AI system that touches your customers, your compliance posture, or your operations at scale.

[UCC § 2-719(3), https://www.law.cornell.edu/ucc/2/2-719 (consequential damages may be limited or excluded unless the limitation or exclusion is unconscionable; limitation of damages where loss is commercial is not prima facie unconscionable).]

The liability cap — what it actually covers →


4. Standard IP indemnification covers copyright claims from the vendor's software — but AI vendor MSAs now carve AI-generated outputs out of that coverage, cap it at $10,000 per claim, or leave it unaddressed, and the modification exclusion in nearly every AI indemnification clause can void coverage the moment your team edits the output.

Section 501 of the Copyright Act makes anyone who reproduces protected expression a potential infringer. Standard IP indemnification covers that risk when the infringement arises from the vendor's software. But AI vendor MSAs have diverged: some offer uncapped indemnification for AI outputs, some cap it at $10,000 per claim, some exclude AI outputs from indemnification entirely. The structural kill switch present in nearly every AI indemnification clause is the modification exclusion — indemnification is voided when output is “modified.” Your team edits AI output before publishing. That editing is modification. The work you actually use may not be covered by any indemnification at all.

[17 U.S.C. § 501, https://www.law.cornell.edu/uscode/text/17/501 (anyone who violates the exclusive rights of the copyright owner under sections 106 through 122 is an infringer).]

The AI addendum teardown — indemnification →


5. EU AI Act Article 26 imposes affirmative compliance obligations on you as a deployer — implementing technical measures, maintaining logs, conducting assessments — for a model the vendor built, controls, and updates without telling you.

Under Regulation (EU) 2024/1689, Article 3(4), a “deployer” is any entity using an AI system under its authority (except in personal non-professional use) — your company, when you use a vendor's AI tool. Article 26, which applies from 2 August 2026, attaches affirmative obligations whenever that deployer is using a high-risk AI system: implementing appropriate technical and organizational measures, maintaining operational logs for at least six months, informing affected natural persons that they are subject to AI-based decisions, and carrying out data protection impact assessments where required under GDPR Art. 35. For specific Annex III deployers, Article 27 additionally requires a fundamental rights impact assessment. The vendor's standard MSA disclaims responsibility for all of these obligations and places them on the customer. Colorado SB 24-205 (2024), codified at Colo. Rev. Stat. § 6-1-1701 et seq. and effective June 30, 2026, creates analogous deployer obligations in the US context. The compliance obligation is yours regardless of whether you can see inside the model, audit the training data, or receive advance notice of model changes.

[Regulation (EU) 2024/1689, Art. 3(4), https://eur-lex.europa.eu/eli/reg/2024/1689/oj (definition of deployer); Art. 26 (obligations of deployers of high-risk AI systems, applicable from 2 August 2026); Art. 27 (fundamental rights impact assessment for specific Annex III deployers). Colo. Rev. Stat. § 6-1-1701 et seq., https://leg.colorado.gov/bills/sb24-205 (Colorado AI Act); SB 25B-004, https://leg.colorado.gov/bills/sb25b-004 (postponing operative date to June 30, 2026).]

The compliance burden shift →


6. “Improve the Services” is not a documented instruction from you to the vendor — and using your data to train the vendor's AI model under that clause may lack a valid legal basis under GDPR.

Under GDPR Article 28(3)(a), a data processor may process personal data only on documented instructions from the controller. “Improve the Services” as a purpose in the vendor's standard terms is a vendor-defined purpose, not a documented controller instruction. If the AI training uses EU personal data under that clause without a specific legal basis under Article 6, the processing may be unlawful. Opt-out windows that begin at the addendum's effective date — often the date the vendor posts the update to a URL, not the date you receive notice — mean the processing may have already begun before you had a practical opportunity to stop it. The absence of a proactive opt-in is the tell: opt-out architecture places the burden on you to prevent processing that the vendor initiated.

[GDPR Art. 28(3)(a), https://gdpr-info.eu/art-28-gdpr/ (processor processes personal data only on documented instructions from the controller); GDPR Art. 6(1), https://gdpr-info.eu/art-6-gdpr/ (lawfulness of processing requires a valid legal basis).]

The training data clause →


7. AI features are typically not covered by the standard SLA — so when the AI output is degraded, inaccurate, or unavailable, service credits may not apply.

UCC § 2-719(1)(b) permits SLAs to designate service credits as the exclusive remedy for downtime. Most do. But the SLA covers platform uptime — the availability of the software infrastructure. AI output quality and AI feature availability are separate questions. A vendor whose platform is running at 99.9% uptime while its AI features produce degraded or inaccurate outputs is technically in compliance with the SLA. Read the SLA definition of “downtime” and “service credit” carefully: if the definition requires the platform to be fully unavailable, partial degradation of AI features — the kind of failure that may be most common and most damaging — may fall entirely outside SLA coverage and inside the consequential damages exclusion.

[UCC § 2-719(1)(b), https://www.law.cornell.edu/ucc/2/2-719 (parties may agree that a remedy is exclusive; if expressly agreed to be exclusive, it is the sole remedy).]
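To make the gap concrete, here is a minimal sketch, assuming a hypothetical SLA that counts only full platform unavailability as downtime; the incident log and figures are invented for illustration:

```python
# Minimal sketch of why 99.9% uptime and degraded AI output can coexist,
# assuming an SLA that counts only full platform unavailability as
# "downtime". The incident log and figures are illustrative.
MINUTES_IN_MONTH = 30 * 24 * 60  # 43,200

incidents = [
    {"minutes": 20,    "platform_down": True},   # outage: counts toward SLA
    {"minutes": 4_000, "platform_down": False},  # AI outputs degraded: ignored
]

downtime = sum(i["minutes"] for i in incidents if i["platform_down"])
uptime = 1 - downtime / MINUTES_IN_MONTH

print(f"Reported uptime: {uptime:.3%}")  # 99.954% -- SLA met, no credits owed
```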

The SLA trap →


8. If your AI addendum arrived mid-term, the MSA's auto-renewal carries its terms forward automatically — and the opt-out window for the AI provisions may have already closed before you had a practical opportunity to act.

N.Y. Gen. Oblig. Law § 5-903(2) makes automatic renewal provisions in service contracts unenforceable against the customer unless the contractor gave written notice between 15 and 30 days before the cancellation deadline. Many enterprise MSAs are New York-governed and auto-renew annually without any reminder notice. The AI addendum compounds this: a mid-term addendum adds AI-specific provisions — data usage rights, compliance obligations, indemnification carve-outs — that were not present when you originally signed. The auto-renewal carries those provisions forward as part of the standing agreement. If you did not opt out of the AI addendum's data usage terms within the addendum's own window (often 30 days from posting), and you did not exercise the MSA's auto-renewal exit, you have now renewed into terms you may never have actively accepted.

[N.Y. Gen. Oblig. Law § 5-903, https://www.nysenate.gov/legislation/laws/GOB/5-903 (automatic renewal unenforceable absent 15-30 day written notice).]

The auto-renewal trap →


These eight watchpoints cover the places where standard MSA risk calculus breaks down when AI provisions enter the picture. For the full pattern analysis — every named pattern, the cross-vendor indemnification spectrum, and the both-sides-of-the-table playbook — see the MSA Pattern Guide below.

MSA: How AI Changed Your MSA

How AI Changed Your MSA

The MSA you signed is a framework contract — it sets the terms that govern everything that goes wrong across the life of a vendor relationship. Most in-house counsel have reviewed enough of them to know where the risk lives. AI changed that.

Vendors are not modifying the MSA itself to add AI provisions. They are updating the documents the MSA incorporates by reference: the Acceptable Use Policy, the Data Processing Addendum, a new AI Services Addendum. This means the MSA you negotiated last year may look identical today — while the terms governing the AI features you are now actively using have been materially updated without your knowledge or consent.

This page covers the eight places where that shift creates real exposure: not new traps invented for AI, but existing MSA patterns that AI makes significantly more dangerous.

If you're looking at a specific vendor agreement right now, start with the MSA Review Checklist →. If you want to understand the full pattern behind any watchpoint, each one links to a dedicated explainer.

Tags: msa · ai-provisions

MSA: AI Provisions — The New Curveballs

The AI Provisions Changing Your MSA

Every major SaaS vendor has added or is adding AI-specific terms to their agreements. Most are not modifying the MSA itself — they're updating incorporated documents (the Acceptable Use Policy, a new AI Addendum, or the Data Processing Addendum). This means you can renew an MSA that looks identical to last year's and inherit AI terms you've never negotiated.

Three patterns from the 37-pattern library are firing at accelerated rates in AI-specific provisions.

1. Template Contamination: "Standard" AI Terms That Aren't

The pattern: Bad templates propagate at scale. When a new contract type emerges (like an AI Services Addendum), there's no established market standard. Vendors draft terms optimized for their position and present them as "our standard AI addendum." Because nobody has seen enough of these to know what's normal, the template goes unchallenged.

How it shows up: Irene receives an AI Addendum from her CRM vendor. It's the first one she's seen from this vendor. She can't benchmark it because she hasn't seen enough AI addenda from other vendors to know what "standard" looks like. The vendor's sales team says "everyone signs this." Maybe they do. That doesn't make it balanced.

What to watch for:

  • Data usage clauses that grant the vendor rights to use customer data for model training, improvement, or benchmarking
  • Output ownership language that's ambiguous about who owns AI-generated deliverables
  • Broad "AI-generated content" disclaimers that may exclude the core product functionality from warranty coverage

The move: Ask the vendor for a redline showing what changed from their pre-AI terms. If the AI Addendum is new, ask which provisions are standard across their customer base and which are negotiable. Collect AI addenda from multiple vendors — cross-vendor comparison is the fastest way to identify outliers.

2. Verification Impossibility: Warranties You Can't Check

The pattern: The vendor warrants something that neither party can practically verify. This pattern appears in 100% of cases alongside the Illusory Protection pattern — when you can't verify the warranty, the remedy is structurally unreachable.

How it shows up in AI provisions:

"Vendor warrants that the AI Features will produce outputs that are commercially reasonable and materially consistent with the Documentation."

What does "commercially reasonable" mean for a probabilistic system? The model's outputs change with every update. The documentation describes capabilities at a point in time. The vendor can't warrant accuracy because the system is inherently non-deterministic. You can't verify the warranty because you can't see inside the model.

What to watch for:

  • Accuracy warranties on AI outputs (what's the benchmark? who measures?)
  • "Consistent with documentation" when the documentation is a marketing webpage that changes quarterly
  • Bias or fairness representations without defined metrics or testing protocols

The move: Replace vague AI warranties with measurable commitments: defined accuracy thresholds on specific use cases, with a testing protocol both parties agree to, and a remedy that triggers automatically when the threshold isn't met. If the vendor won't commit to measurables, the warranty is decorative.
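As a sketch of what "measurable" can look like in practice, the harness below checks a negotiated accuracy threshold against an agreed test set. The threshold values, the test-set format, and the evaluate() wrapper are hypothetical placeholders for negotiated terms, not any vendor's actual commitment:

```python
# Sketch of a contract-grade acceptance test. ACCURACY_THRESHOLD, the test-set
# format, and evaluate() are hypothetical placeholders for negotiated terms.
ACCURACY_THRESHOLD = 0.95  # warranted accuracy on the agreed test set
REMEDY_TRIGGER = 0.90      # below this, the negotiated remedy fires

def run_acceptance_test(test_cases, evaluate):
    """test_cases: (input, expected) pairs both parties signed off on.
    evaluate: a callable wrapping the vendor's AI feature."""
    correct = sum(1 for inp, expected in test_cases if evaluate(inp) == expected)
    accuracy = correct / len(test_cases)
    return {
        "accuracy": accuracy,
        "meets_warranty": accuracy >= ACCURACY_THRESHOLD,
        "remedy_triggered": accuracy < REMEDY_TRIGGER,
    }
```

The point of the structure: both the benchmark and the consequence are defined in advance, so the warranty is testable rather than decorative.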

3. Compliance Burden Shift: Their Black Box, Your Liability

The pattern: The regulatory compliance burden falls on the party that doesn't control the product. In AI provisions, this is the most commercially dangerous pattern because AI regulation is expanding faster than contract cycles.

How it shows up:

"Customer is responsible for ensuring that Customer's use of the AI Features complies with all applicable laws and regulations, including without limitation laws governing automated decision-making, data protection, and artificial intelligence."

The vendor built the black box. The vendor trained the model. The vendor chose the training data. But you're liable for the outputs. If the AI feature produces a biased hiring recommendation, a hallucinated legal citation, or a privacy violation — that's your problem, not the vendor's.

What makes this urgent: The EU AI Act and a growing wave of state legislation across the U.S. are creating affirmative compliance obligations for "deployers" of AI systems — the companies that use AI tools to make or support consequential decisions. That's Irene's company. The vendor's compliance burden shift clause means Irene's company bears compliance risk for a system it can't audit, can't modify, and may not fully understand. For current status of state AI deployment laws, see the [LawSnap State AI Legislation Tracker] (coming soon).

What to watch for:

  • Unilateral compliance obligations on the customer for AI-specific regulations
  • Absence of vendor cooperation obligations (will the vendor provide documentation needed for your compliance assessment?)
  • No commitment to algorithmic impact assessments or bias testing
  • Indemnification exclusions for "AI-generated outputs" — check whether the core product functionality now falls into this exclusion

The move: Add vendor cooperation obligations: the vendor must provide technical documentation sufficient for the customer's regulatory compliance assessment. Require notice of material model changes. Negotiate shared responsibility for AI-specific regulatory compliance — the vendor controls the system, so pure customer-side liability is not commercially reasonable regardless of what the template says.

The Compound Risk

These three patterns don't operate in isolation. In practice, they compound:

  1. Template Contamination means the AI Addendum arrives as a take-it-or-leave-it document
  2. Verification Impossibility means you can't check whether the AI warranty is being met
  3. Compliance Burden Shift means when something goes wrong, it's your problem

The result: Irene's company accepts AI terms it can't benchmark, can't verify, and can't defend against when regulators come calling. This is why the AI provisions are the most important section to negotiate in any MSA renewal happening right now — not because they're the largest financial exposure, but because they're the least understood.

MSA: The AI Addendum Teardown

The AI Addendum Teardown: What Every Clause Is Actually Doing

Most SaaS vendors have added AI-specific terms to their agreements in the last 12-18 months. These provisions arrive as an AI Services Addendum, an updated Acceptable Use Policy, or new sections in the Data Processing Addendum. They share a common structure — and common patterns.

Below is a composite AI addendum, built from the actual published terms of seven major vendors. Each clause type is annotated with the pattern it triggers and what you should do about it.


Clause 1: "We Don't Train on Your Data (Unless We Do)"

What it looks like:

Some vendors are explicit: your data is not used for training unless you opt in. Others use your data by default and offer an opt-out. The difference is enormous and often buried.

The spectrum across real vendors:

  • Opt-in: Your data is not used for model training unless you affirmatively agree. This is the strongest position.
  • No-storage: Prompts and outputs are not stored beyond what's needed to generate the response. Functionally similar to opt-in.
  • Opt-out: Your data is used for training, improving services, and R&D by default. You can opt out, but the clock may already be running.
  • Anti-training (reversed): You are prohibited from using any AI output to train any AI system — including your own. The restriction runs toward you, not them.

Pattern: [[The Silence Trap]]

The opt-out version is a Silence Trap. The vendor begins using your data for training the moment you start using the service. The opt-out mechanism may require a written request to a specific email address. If you don't know to ask, your silence is consent.

The anti-training clause is a different animal entirely. It restricts what you can do with the output — not what the vendor does with your input. One vendor's terms prohibit customers from using "any content, data, output or other information received or derived from any generative AI features" to "directly or indirectly create, train, test, or otherwise improve any machine learning algorithms or artificial intelligence systems." If you're building internal AI tools, this clause could conflict with your roadmap.

Irene's move:

  • Check whether your vendor is opt-in or opt-out — don't assume
  • If opt-out, submit the opt-out request immediately and confirm receipt in writing
  • Check for anti-training clauses if your company is building its own AI capabilities
  • Ask: does the vendor retain any derived data (aggregated insights, ML results) even after opt-out? One vendor's terms explicitly state that while the customer owns raw data, the vendor owns "aggregated machine learning results"

Clause 2: "You Own the Output (But Good Luck Enforcing It)"

What it looks like:

Most vendors assign or attribute output ownership to the customer. This sounds protective. The caveats are where the pattern hides.

The standard language: "As between Customer and Vendor, Customer owns all Output" or "Output is Customer Data."

The caveats that follow:

  • "Output may not be unique, and other users may receive similar content" — you own it, but so might someone else
  • "Output may not be protectable by Intellectual Property Rights" — you own something that may have no legal protection
  • The vendor "assigns to Customer all right, title, and interest, if any, in and to Output" — the "if any" is doing a lot of work

Pattern: [[The Illusory Protection]]

The ownership assignment looks like a right. But if the output isn't protectable IP (because it was generated by an AI, because it's not sufficiently original, because someone else got identical output), you own a right that may not exist. The clause gives you the label of ownership without the substance.

Irene's move:

  • Don't treat the output ownership clause as sufficient IP protection for your business
  • If the output is mission-critical (marketing copy, code, analysis), independently assess its protectability
  • The real protection is in the indemnification clause (next section), not the ownership clause

Clause 3: "We'll Indemnify You (Within a Very Small Box)"

What it looks like:

The indemnification clause for AI outputs determines what happens when someone claims the AI's output infringes their intellectual property. The range across vendors is staggering.

The spectrum:

  • Uncapped IP indemnification: Vendor defends and pays for IP infringement claims from output, with no dollar cap. Not subject to the general liability limitation.
  • Two-pronged indemnification: Separate coverage for (1) output infringes someone's IP and (2) training data infringes someone's IP. The broadest structural protection available.
  • Capped at $10,000 per claim: Vendor indemnifies, but total liability is hard-capped at $10,000 per output or claim. If the infringement costs you $500K, your recovery is $10K.
  • Not addressed in AI terms: AI-specific indemnification doesn't exist. You fall back to the general MSA indemnification, which may not contemplate AI output at all.

That is not a range. That is a canyon. The difference between uncapped coverage and $10,000 could be the difference between a manageable legal expense and an existential one.

The universal kill switch: modification.

Every vendor that offers AI output indemnification includes an exclusion for modified output. The typical language: indemnification does not apply if the output was "modified, transformed, or used in combination with products or services not provided by Vendor."

Think about what that means in practice. Irene's marketing team generates a first draft with the AI tool. They edit it — add a sentence, swap a paragraph, insert a brand name. That draft is now "modified output." The indemnification may no longer apply to the final version that actually gets published.

Pattern: [[The Illusory Protection]] + [[Procedural Forfeiture]]

The indemnification exists on paper. The modification exclusion makes it structurally difficult to use. In practice, virtually all AI output gets modified before use. The indemnification protects the raw output that nobody publishes; it may not protect the finished work that actually creates liability.

Additional exclusions that appear across vendors:

  • Knew or should have known the output was infringing
  • Disabled or bypassed safety features, filters, or citation tools
  • Continued using output after receiving an infringement notice
  • Trademark-related claims (virtually every vendor excludes these)
  • Output generated from input the customer didn't have rights to use

Irene's move:

  • Know where your vendor falls on the spectrum before you sign
  • If your vendor offers AI indemnification, read every exclusion — especially the modification clause
  • Push for a definition of "modification" that excludes routine editing (formatting, minor revisions, integration into a larger document)
  • If your vendor doesn't offer AI-specific indemnification at all, your general MSA indemnification probably doesn't cover AI output — raise this explicitly in negotiation
  • Use the uncapped/two-pronged vendors as benchmarks when negotiating with vendors who offer less

Clause 4: "You're Responsible for Everything the AI Does"

What it looks like:

Every vendor places compliance responsibility on the customer. The language varies from blunt to elaborate, but the effect is identical.

The blunt version: "Customer is solely responsible for all use of the Outputs and for evaluating the accuracy and appropriateness of Output for Customer's use case."

The elaborate version: Nine affirmative obligations including implementing abuse detection, output controls, AI disclosure, visible watermarking, content credentials, continuous testing, human oversight, feedback channels, and security measures.

The specific prohibitions that create liability exposure:

  • No automated decisions with legal effects unless a human makes the final call
  • No using AI for individualized professional advice (legal, medical, financial) without a qualified reviewer
  • No emotion recognition in the workplace
  • No social scoring or predictive profiling based on protected characteristics
  • No real-time biometric recognition in public spaces
  • Disclosure required when users interact with AI systems

Pattern: [[Compliance Burden Shift]]

The vendor built the AI system. The vendor chose the training data. The vendor designed the model architecture. The vendor decides when and how to update it. But you are "solely responsible" for ensuring that your use of the system complies with every applicable law — including laws that are being written right now in a dozen jurisdictions.

The EU AI Act and an expanding body of state legislation across the U.S. are creating obligations for "deployers" of AI systems — the entities that use AI to make or support decisions. That's Irene's company. The vendor is the "developer" — a different regulatory category with different obligations. The compliance burden shift clause in your contract mirrors and reinforces this regulatory split. For current status of state AI deployment laws, see the [LawSnap State AI Legislation Tracker] (coming soon).

What makes this dangerous: Some of these prohibitions (no emotion recognition, no decisions without human oversight, no social scoring) may conflict with how your company is actually using or plans to use the AI features. If your HR team is using an AI tool to screen resumes, and the vendor's AUP prohibits automated employment decisions without human oversight, your company may be in breach of the vendor's terms AND the emerging state AI laws — simultaneously.

At least one major vendor's terms define a customer's violation of the AUP as a “material breach” of the MSA. Material breach typically triggers termination rights. So using an AI feature in a way that violates the AUP could cost you the entire platform relationship, not just the AI feature.

Irene's move:

  • Read the AUP and AI-specific restrictions before deploying any AI feature — not after
  • Map your company's actual use cases against the vendor's prohibited uses list
  • If your use case is close to a prohibited category (e.g., AI-assisted hiring, AI in healthcare workflows), get explicit written confirmation from the vendor that your specific use is permitted
  • Push for vendor cooperation obligations on compliance — the vendor should provide documentation sufficient for your regulatory assessments
  • If the vendor's AUP changes and your previously-permitted use becomes prohibited, you need a transition period, not immediate breach exposure

Clause 5: "The AI Might Be Wrong, and That's Your Problem"

What it looks like:

Every vendor disclaims accuracy. The language is consistent across the industry:

  • "May provide inaccurate or offensive output"
  • "Emerging technology" not designed "to meet Customer's regulatory, legal, or other obligations"
  • "May be unpredictable, and may include inaccurate or harmful responses"
  • "No warranty of any kind... that output will meet Customer's requirements or expectations, that content will be accurate"

Pattern: [[Verification Impossibility]]

This is the one pattern that is genuinely new to AI contracts. In a traditional SaaS agreement, the vendor can warrant that the software performs as documented — the behavior is deterministic. In an AI agreement, the output is probabilistic. The same input can produce different output on different days. The vendor literally cannot warrant accuracy because the system is not designed to be consistently accurate.

The result: the warranty section of your MSA may cover the platform (uptime, security, access) but explicitly disclaim the thing you're actually paying for (the AI output). The product works as designed. The design includes being wrong sometimes.

Irene's move:

  • Don't rely on the vendor warranty for AI output quality — it doesn't exist
  • Build your own validation processes: human review, spot-checking, output logging
  • If accuracy matters for your use case (and it usually does), negotiate for defined performance metrics — not a vague "commercially reasonable" standard, but measurable thresholds on specific tasks with a testing protocol both parties agree to
  • Budget for the human review layer that every vendor's terms require but none of them provide

The Composite Pattern

Read together, the five clauses of a typical AI addendum form a closed system:

  1. Your data may train their model (unless you opt out in time)
  2. You "own" the output (but it may not be protectable)
  3. They'll indemnify you (unless you edited it, which you will)
  4. You're responsible for compliance (for a system you can't see inside)
  5. The AI might be wrong (and that's not a bug, it's the product)

Each clause is individually defensible. Together, they shift substantially all risk from the party that built and controls the AI system to the party that uses it. That's not unusual in enterprise software — liability allocation is the core function of any commercial agreement. But the AI-specific provisions add a new dimension: you're accepting liability for a system whose behavior is non-deterministic, whose capabilities change with each model update, and whose regulatory environment is being written in real time.

This is why the AI addendum is the most important document in your vendor stack right now — and why it deserves the same scrutiny your MSA gets.

MSA: The Training Data Clause

The Training Data Clause

Your vendor may be using your data to train their AI model right now. The mechanism is a clause that gives vendors the right to use customer data to "improve the Services" — language that predates AI but now covers model training. Whether you can stop it, and when the window to stop it closes, depends entirely on how your vendor structured the opt-out.

What the Clause Looks Like

The training data clause rarely announces itself. It is typically embedded in the Data Processing Addendum or a new AI Services Addendum under language like:

"Vendor may use anonymized and aggregated Customer Data to improve the Services, including to develop, train, and improve AI and machine learning features."

Or, more directly:

"By using the AI Features, Customer grants Vendor a non-exclusive license to use Inputs and Outputs to improve Vendor's models and Services."

The key variable is whether the clause is opt-in or opt-out.

  • Opt-in: Your data is not used for training unless you affirmatively agree. The strongest position for customers.
  • Opt-out: Your data is used by default. You can stop it, but the clock is already running from the addendum's effective date — not from when you read it.
  • No provision: Falls back to the general MSA or DPA data usage terms, which may not contemplate AI training at all — creating ambiguity about what the vendor can and cannot do.

The GDPR Problem

Under GDPR Article 28(3)(a), a data processor may process personal data only on documented instructions from the controller. "Improve the Services" as a vendor-defined purpose in the vendor's standard terms is not a documented instruction from you. It is the vendor instructing itself, using your data, under cover of a contractual term you accepted.

If the training uses EU personal data under that clause without a specific legal basis under GDPR Article 6 — and "improve the Services" is not itself a legal basis — the processing may be unlawful regardless of what the contract says. A contractual term cannot create a legal basis for processing that does not otherwise exist.

[GDPR Art. 28(3)(a), https://gdpr-info.eu/art-28-gdpr/ (processor processes personal data only on documented instructions from the controller); GDPR Art. 6(1), https://gdpr-info.eu/art-6-gdpr/ (lawfulness of processing requires a valid legal basis).]

The Opt-Out Timing Problem

Even where an opt-out exists, the window often closes before you have a practical opportunity to act.

The opt-out clock typically starts at the addendum's effective date — which is when the vendor posts the addendum to a URL, not when you receive notice, not when you read it. A 30-day opt-out window that begins the day the vendor updates a URL you were not monitoring may have already expired.

Some vendors send email notice. Some post to a portal. Some update a URL incorporated by reference into your MSA. Review your MSA's notice provisions to determine what constitutes valid notice for addendum changes — and whether the vendor's current practice satisfies that standard.
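The arithmetic is unforgiving. A minimal sketch of the timing gap described above, with invented dates:

```python
# Sketch of the opt-out timing gap: the 30-day window runs from the vendor's
# posting date (the addendum's effective date), not from actual notice.
# All dates are illustrative.
from datetime import date, timedelta

posted = date(2026, 1, 5)               # vendor updates the URL
deadline = posted + timedelta(days=30)  # 2026-02-04

first_noticed = date(2026, 2, 20)       # surfaced in a quarterly vendor review
days_late = (first_noticed - deadline).days

print(f"Opt-out deadline: {deadline}")          # 2026-02-04
print(f"Discovered {days_late} days too late")  # 16 days too late
```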

The "Aggregated and Anonymized" Qualifier

Most training data clauses include a qualifier: "anonymized and aggregated Customer Data." This limits but does not eliminate the concern:

  1. Anonymization is not a binary state. Re-identification from aggregated datasets is a known technical risk, particularly for datasets with unusual or distinctive characteristics.
  2. The qualifier typically applies to the training use, not to the initial processing necessary to prepare the data for training.
  3. Some vendors retain the right to "aggregated machine learning results" even after a customer opts out of the training use — meaning derivative value from your data may remain with the vendor even if your raw data is not used going forward.

Both Sides of the Table

If you're the buyer:

  1. Check whether your vendor is opt-in or opt-out — do not assume.
  2. If opt-out: submit the opt-out request immediately, in writing, to the specified contact. Confirm receipt. Do not rely on a web form submission without written confirmation.
  3. Push to replace opt-out with opt-in for any new or renewed AI addendum — frame it as a data governance requirement, not a negotiating position.
  4. Add a clause requiring the vendor to delete any data used for training purposes if you subsequently opt out.
  5. Confirm whether the vendor retains any derivative rights (ML results, aggregated insights) after opt-out.

If you're the vendor:

  1. Opt-out architecture maximizes your training data pool but creates trust erosion and, in EU contexts, potential GDPR exposure.
  2. Opt-in is increasingly the market expectation among enterprise buyers; adopting it before it becomes a regulatory requirement positions you as the trust-forward option.
  3. Be precise about what "improve the Services" covers — ambiguity creates negotiation friction and legal risk.

The Pattern Signal

The training data clause frequently appears alongside:

  • The Silence Trap — the opt-out mechanism is itself a Silence Trap: your inaction constitutes consent to data use for model training.
  • The Dynamic Document — the AI addendum that contains the training data clause is often incorporated by reference and updatable without reopening the MSA.
  • The Compliance Burden Shift — the vendor takes your data to train the model; you bear the compliance obligations for the model's outputs.
Tags: msa · ai-provisions

COMPLIANCE-BURDEN

The Compliance Burden Shift

The vendor built the model. The vendor chose the training data. The vendor updates it without telling you. Under the standard AI vendor MSA, the compliance obligations for everything the model does are yours.

The Compliance Burden Shift pattern — where regulatory risk falls on the party that does not control the product — exists in traditional SaaS agreements, but AI has made it structurally more dangerous. AI regulation is expanding faster than contract cycles. The obligations being shifted onto customers are real, affirmative, and not trivially satisfied.

What the Clause Looks Like

The blunt version:

"Customer is solely responsible for ensuring that Customer's use of the AI Features complies with all applicable laws and regulations, including without limitation laws governing automated decision-making, data protection, and artificial intelligence."

The elaborate version includes nine affirmative customer obligations: implementing abuse detection, output controls, AI disclosure, visible watermarking, content credentials, continuous testing, human oversight, feedback channels, and security measures.

Both versions accomplish the same thing: the compliance risk for a black-box system you cannot audit is placed entirely on you.

The EU AI Act Deployer Problem

Under Regulation (EU) 2024/1689 — the EU AI Act — a deployer is any natural or legal person using an AI system under its authority (except where used in the course of a personal non-professional activity). Article 3(4) sets that broad definitional threshold; the substantive obligations under Article 26 then attach when that deployer is using a high-risk AI system.

Article 26 — which applies from 2 August 2026 — imposes affirmative obligations on deployers of high-risk AI systems, including:

  • Implementing appropriate technical and organizational measures to ensure use in accordance with the instructions of use
  • Maintaining operational logs for at least six months
  • Informing affected natural persons that they are subject to AI-based decisions
  • Carrying out data protection impact assessments where required under GDPR Art. 35 (cross-referenced in Art. 26(9))

For specific Annex III deployers — public bodies and certain private actors deploying high-risk AI in contexts such as employment, education, or essential services — Article 27 additionally requires a fundamental rights impact assessment.

These obligations attach to the deployer — your company — regardless of the fact that the vendor controls the model, the training data, and the update schedule. The vendor's compliance burden shift clause simply acknowledges in the contract what the regulation already imposes: the deployer bears the compliance weight.

[Regulation (EU) 2024/1689, Art. 3(4), https://eur-lex.europa.eu/eli/reg/2024/1689/oj (definition of deployer); Art. 26 (obligations of deployers of high-risk AI systems, applicable from 2 August 2026); Art. 27 (fundamental rights impact assessment for specific Annex III deployers). Readable mirror: https://artificialintelligenceact.eu/article/3/, https://artificialintelligenceact.eu/article/26/, and https://artificialintelligenceact.eu/article/27/.]

The US Context

Colorado SB 24-205 (2024), codified at Colo. Rev. Stat. § 6-1-1701 et seq., creates analogous deployer obligations in the US context. The original effective date was February 1, 2026; SB 25B-004 (signed August 28, 2025) postponed the operative date to June 30, 2026. The statute applies to developers and deployers of "high-risk artificial intelligence systems" used in consequential decisions and imposes requirements for risk management, impact assessments, and consumer notifications.

A growing number of states have introduced or enacted AI-specific deployer obligations. Any bill-by-bill status list printed here would decay immediately; see the LawSnap State AI Legislation Tracker for current status.

[Colo. Rev. Stat. § 6-1-1701 et seq., https://leg.colorado.gov/bills/sb24-205 (Colorado AI Act, original SB 24-205); SB 25B-004, https://leg.colorado.gov/bills/sb25b-004 (postponing operative date to June 30, 2026).]

Why the Shift Is Not Commercially Reasonable

The compliance burden shift is defensible in limited form — your company should be responsible for how it uses outputs and what decisions it makes. It becomes commercially unreasonable when it extends to obligations that require visibility into the model you do not have:

  • You cannot conduct an algorithmic impact assessment for a model whose weights you cannot inspect
  • You cannot confirm bias testing for a model the vendor updates unilaterally and without notice
  • You cannot provide meaningful human oversight of a model that changes between the oversight review and the next deployment

Accepting full compliance liability for a system you cannot audit is not a balanced allocation of risk. It is a full transfer of risk from the party with control (the vendor) to the party without it (you).

Both Sides of the Table

If you're the buyer:

  1. Add vendor cooperation obligations — the vendor must provide technical documentation sufficient for your regulatory compliance assessment on request.
  2. Require advance notice of material model changes — at minimum 30 days, with a right to test before the change goes live in your environment.
  3. Negotiate shared responsibility: the vendor bears compliance obligations for how the model was built and trained; you bear compliance obligations for how you deploy and use outputs.
  4. For EU AI Act compliance specifically: confirm whether the vendor has conducted the required conformity assessments for high-risk AI systems, and require the vendor to provide CE documentation if applicable.

If you're the vendor:

  1. Full customer-side liability is not commercially reasonable for AI systems in regulated use cases. Customers who understand their exposure are increasingly rejecting it.
  2. Providing technical documentation for customer compliance assessments is a competitive differentiator — most vendors don't, which means doing so is low-cost differentiation.
  3. Material model change notices protect the vendor as much as the customer — silent updates that break customer workflows create churn and support costs.

The Pattern Signal

The Compliance Burden Shift co-occurs with:

  • Verification Impossibility — you cannot verify compliance with obligations for a system you cannot inspect. The shift is most dangerous when combined with black-box architecture.
  • The Training Data Clause — the vendor uses your data to train the model; you bear the compliance obligations for the model's outputs. The data flows one way; the liability flows the other.
  • Template Contamination — the compliance burden shift clause arrived in the vendor's standard AI addendum. It was not negotiated; it was accepted because nobody knew what "standard" looked like.
Tags: msa

MSA: The Silence Trap

The Silence Trap: What Happens When Nobody Says Anything

The Silence Trap appears in 28% of commercial contracts. The mechanism: inaction is treated as consent, or a process is defined so vaguely that exercising your rights becomes practically impossible.

The Disputed Charges Pattern

Salesforce's MSA (Section 5.5, last updated September 15, 2025) addresses disputed charges with language requiring parties to act in "good faith" to resolve billing disputes. It defines no process, no timeline, no escalation path, and no standard for what constitutes good faith.

In practice, this means:

  • You dispute a charge
  • The vendor's billing team responds (eventually)
  • There's no deadline for resolution
  • There's no mechanism to pause payment during the dispute
  • If you withhold payment pending resolution, you may trigger a breach provision
  • The vendor has no contractual incentive to resolve quickly

The silence isn't in the contract language — it's in what the contract doesn't say. The provision exists so both parties can point to it. It doesn't function as a remedy.

The Auto-Renewal Version

The auto-renewal clause is a Silence Trap by design. Your contract renews unless you affirmatively opt out within a narrow window — typically 30 to 60 days before the term ends. Miss the window and you're locked in for another year at a price you didn't negotiate.

What makes it a trap, not just a deadline:

  • The notification window is pegged to the contract end date, not the calendar (Irene has 20 contracts with different end dates)
  • Renewal pricing defaults to list price (your negotiated discount expires)
  • The vendor has no obligation to remind you the window is approaching
  • Enterprise SaaS switching costs make the "just don't renew" remedy theoretical rather than practical
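A minimal sketch of the calendar problem described in this list, assuming a hypothetical 60-to-30-day opt-out window pegged to each contract's own end date; vendor names and dates are invented:

```python
# Sketch of opt-out windows pegged to contract end dates rather than the
# calendar. Vendor names, dates, and the 60/30-day window are illustrative.
from datetime import date, timedelta

contracts = {
    "CRM vendor":       date(2026, 7, 31),
    "Analytics vendor": date(2026, 9, 15),
    "HR platform":      date(2027, 1, 10),
}

for name, term_end in contracts.items():
    opens  = term_end - timedelta(days=60)
    closes = term_end - timedelta(days=30)  # miss this and you renew at list price
    print(f"{name}: opt out between {opens} and {closes}")
```

With twenty contracts, that is twenty separate deadlines drifting across the calendar, none of which the vendor is obligated to flag.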

The AI Consent Version

The newest form of the Silence Trap is in AI data usage provisions:

"Vendor may use anonymized and aggregated Customer Data to improve the Services, including AI and machine learning features. Customer may opt out by submitting a request to [email address] within thirty (30) days of the effective date of this Addendum."

The clock starts when the addendum takes effect — which may be when the vendor posts it to a URL, not when Irene reads it. Silence equals consent to data usage for model training.

Both Sides of the Table

If you're the buyer:

  1. Replace "good faith" dispute language with a defined process: written notice → 15-day response → escalation to [named roles] → mediation if unresolved at 30 days
  2. Add a right to withhold disputed amounts during the resolution period without triggering breach
  3. Extend auto-renewal notice windows to 90 days and require the vendor to provide written renewal notice 120 days before term end
  4. For AI opt-outs: replace opt-out with opt-in — data usage for model training requires affirmative consent, not silence

If you're the vendor:

  1. Vague dispute resolution protects your cash flow; detailed dispute resolution protects the relationship. Choose based on whether you want renewals.
  2. Auto-renewal is your revenue predictability mechanism — defend the concept but consider whether 30-day windows create more churn (angry surprise renewals) than 90-day windows (planned retention conversations)
  3. Opt-out AI data clauses work today because the market hasn't standardized. When it does, you'll need to retrofit opt-in — consider moving early as a trust differentiator

MSA: The Warranty That Doesn't Protect You

The Warranty That Doesn't Protect You

60% of technology agreements contain the Illusory Protection pattern: a remedy that appears meaningful but can't structurally function when you need it.

The most common form in MSAs is the warranty-remedy mismatch.

How It Works

Section 8.2 of a typical SaaS MSA says something like:

"Vendor warrants that the Services will perform substantially in accordance with the Documentation during the Subscription Term."

That looks protective. Your vendor is warranting that the product works as described. But the warranty is only as strong as the remedy for breach. Scroll to the limitation of liability section:

"Customer's exclusive remedy for any breach of the foregoing warranty shall be, at Vendor's option, (a) correction of the non-conforming Services, or (b) termination of the affected Order Form and a pro-rata refund of prepaid fees for the remainder of the Subscription Term."

Read that again. Your exclusive remedy is the vendor's choice: fix it or let you leave. If the product failure costs you $2M in lost revenue, your recovery is capped at a pro-rata refund of what you paid. If you paid $200K annually and the breach happened 6 months in, your remedy is $100K.

The warranty exists. The protection doesn't.
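The arithmetic from the example above, worked through (the figures are the ones in the text, not from any real contract):

```python
# Worked pro-rata refund arithmetic from the example above. Illustrative only.
annual_fee = 200_000
months_elapsed = 6

pro_rata_refund = annual_fee * (12 - months_elapsed) / 12   # $100,000
business_loss = 2_000_000

print(f"Refund recovered: ${pro_rata_refund:,.0f}")                 # $100,000
print(f"Loss you absorb:  ${business_loss - pro_rata_refund:,.0f}") # $1,900,000
```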

The Data Security Version

The same pattern appears in data security provisions, often with more severe consequences.

In the SolarWinds breach — one of the most significant supply chain attacks in enterprise software history — the contractual structure followed this exact pattern (per Contract Teardown Show, "SolarWinds Software Services Agreement", featuring Otto Hanson of TermScout):

  • Section 7.2 committed to security measures
  • Section 11 eliminated meaningful remedies through an indirect damages waiver
  • The liability cap was 12 months of fees with no exception for data security breaches

Most harm from a data breach is consequential (investigation costs, notification costs, regulatory fines, business interruption, reputational damage). The indirect damages waiver excludes exactly those categories. The warranty promises security. The remedy structure ensures you can't recover when security fails.

Benchmarking reality: Of 327 vendor contracts benchmarked by TermScout, 100% waive indirect damages. Only 7 — roughly 2% — offer an elevated liability cap for data security breaches. If your MSA doesn't have a carve-out, you're in the 98%. (Source: Otto Hanson, Founder & CEO of TermScout, via Contract Teardown Show. Verify specific report at termscout.com for current figures.)

Both Sides of the Table

If you're the buyer:

  1. Read the warranty AND the exclusive remedy AND the limitation of liability as a single unit — they're designed to work together
  2. Push for a data breach carve-out from the liability cap (Snowflake's Terms of Service Section 12(C), last updated January 28, 2026, establishes a 2x "Data Protection Claims Cap" separate from the general liability cap — use it as a benchmark)
  3. Reject "at Vendor's option" remedy language — you should choose whether to accept a fix or terminate
  4. If the vendor won't move on the cap, negotiate for an insurance requirement instead

If you're the vendor:

  1. The warranty-remedy structure IS the business model for enterprise SaaS — unlimited liability is not commercially viable at scale
  2. Offering a modest data breach carve-out (2x cap) is a competitive differentiator that costs you almost nothing in most scenarios
  3. "At Vendor's option" protects you from customers who want both a fix and a refund; it's worth defending

The Pattern Signal

The Illusory Protection has the strongest co-occurrence with the Missing Provision pattern (Jaccard similarity 0.529 — the second-tightest pair in the entire 37-pattern library, based on LawSnap's co-occurrence analysis across 110 Contract Teardown Show episodes; the metric is sketched in code after the list below). When the warranty is illusory, check: is there a provision that should exist but doesn't? Common missing provisions in MSAs:

  • No SLA with financial teeth (uptime warranty without credits)
  • No transition assistance on termination (your data is hostage)
  • No restrictions on vendor's use of your data for model training
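A sketch of the Jaccard metric cited above, computed over the sets of episodes in which each pattern appears. The episode IDs are invented, chosen only so the result lands on the cited figure:

```python
# Jaccard similarity: |A & B| / |A | B| over episode sets. 1.0 means the two
# patterns always appear together. Episode IDs below are illustrative.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

illusory_protection = set(range(1, 13))   # appears in episodes 1-12
missing_provision   = set(range(4, 18))   # appears in episodes 4-17

print(round(jaccard(illusory_protection, missing_provision), 3))  # 0.529
```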

MSA: IP Indemnification — What the Vendor Covers

IP Indemnification — What the Vendor Covers

The standard IP indemnification clause protects you if a third party claims the vendor's software infringes their intellectual property. Under 17 U.S.C. § 501, anyone who reproduces protected expression without authorization is a potential infringer — the IP indemnification clause shifts that risk to the vendor for infringement arising from the vendor's own product. That protection is real, but its scope is defined by three variables: what triggers coverage, what excludes it, and whether the general liability cap applies.

[17 U.S.C. § 501, https://www.law.cornell.edu/uscode/text/17/501 (anyone who violates the exclusive rights of the copyright owner under sections 106 through 122 is an infringer).]

What It Covers

The standard clause covers claims that the vendor's software (or the vendor's delivery of services) infringes a third party's patent, copyright, or trademark. The vendor defends the claim, pays any judgment, and typically has three options: modify the infringing component, procure a license for continued use, or terminate the affected service and refund prepaid fees.

That last option — terminate and refund — is the vendor's exit valve. If a real IP claim arises and the vendor cannot fix or license the infringing component, they can end the relationship and refund your prepaid fees. The indemnification obligation does not require the vendor to keep you running.

What Excludes Coverage

Three exclusions appear in nearly every IP indemnification clause:

1. Customer modifications. If you modify the vendor's software or combine it with products or services the vendor did not authorize or recommend, the infringement arising from that combination is yours, not the vendor's.

2. Third-party components. If you specified or provided the allegedly infringing component, coverage does not apply.

3. Continued use after notice. If the vendor notifies you of an infringement risk and you continue using the product, coverage for that specific known risk may be voided.

The Cap Question

Indemnification obligations are frequently excluded from the general liability cap — making IP indemnification among the more valuable protections in the MSA when a real claim arises. But this is not universal. Check whether IP indemnification is:

  • Excluded from the general cap (most protective — vendor's defense and payment obligation has no ceiling from the general limit)
  • Subject to the general cap (vendor's total exposure for IP claims is the same 12-months-of-fees ceiling as everything else)
  • Independently capped at a separate amount (a separate, lower ceiling — common in AI indemnification clauses)

The AI Addendum Version

AI vendors have diverged sharply from the standard IP indemnification structure. Some offer uncapped coverage for AI-generated outputs; some cap it at $10,000 per claim; some exclude AI outputs from indemnification entirely. The near-universal modification exclusion voids coverage for any output your team edits before publishing — which is virtually all output that creates real IP risk. If AI features are central to your use of the product, the AI addendum's indemnification terms require separate analysis. The standard IP indemnification clause was written for deterministic software; it was not written with AI outputs in mind.

Both Sides of the Table

If you're the buyer:

  1. Confirm IP indemnification is excluded from the general liability cap. If it is not, your maximum recovery for a real IP judgment is the same ceiling as a warranty breach — likely inadequate.
  2. Review each exclusion carefully. "Customer modifications" should not void coverage for routine configuration, branding, or integration the vendor recommends or supports.
  3. Add a right to continued use during the resolution period — the vendor's three-option exit should not include immediate termination while a claim is being defended.
  4. If AI features matter, address AI output indemnification separately from the standard IP indemnification clause.

If you're the vendor:

  1. The modification exclusion is defensible and worth maintaining — it protects you from infringement your customer caused by combining your software with third-party components.
  2. Uncapped indemnification excluded from the general liability cap is a significant exposure at scale; a separate, higher cap for IP indemnification is a reasonable middle ground.
  3. The terminate-and-refund exit valve is your risk management mechanism for scenarios where you cannot cure the infringement — preserve it, but consider whether it is commercially appropriate for mission-critical, mature deployments.

The Pattern Signal

IP indemnification co-occurs with:

  • The Illusory Protection — if IP indemnification is subject to the general liability cap, the protection may be inadequate for the actual cost of a real IP claim.
  • AI output carve-outs — AI-specific indemnification terms have diverged dramatically from the standard structure and deserve separate analysis at any AI vendor renewal.

LIABILITY-CAP

The Liability Cap — What It Actually Covers

The liability cap does not limit what can go wrong. It limits what you can recover when it does. Most enterprise MSAs combine two provisions that work together: a cap on total recovery (typically 12 months of fees) and an exclusion of consequential damages (the categories that represent most of the actual harm). Either one alone is significant. Together, they mean your practical recovery for a major vendor failure may be close to zero relative to your actual loss.

How It Works

The liability cap sets your ceiling. A typical clause:

"In no event shall either party's aggregate liability arising out of or related to this Agreement exceed the amounts paid or payable by Customer in the twelve (12) months preceding the claim."

For a $200K/year SaaS contract, your maximum recovery is $200K — regardless of what the failure cost your business.

The consequential damages exclusion removes the floor. A typical clause:

"In no event shall either party be liable for any indirect, incidental, special, exemplary, consequential, or punitive damages, including but not limited to loss of profits, loss of revenue, loss of data, loss of goodwill, or cost of substitute goods or services."

The categories explicitly excluded — lost profits, lost revenue, lost data, cost of substitute goods — are the categories that represent the actual financial harm from most vendor failures. Investigation costs after a data breach are consequential. Regulatory fines are consequential. Customer churn, business interruption, and reputational damage are consequential.

Read the two clauses together: the cap limits your total recovery; the exclusion removes the largest line items from that total. What remains is typically direct damages — the fees you paid for services you didn't receive. In practice, a fraction of your annual fee.
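
A minimal sketch of that two-step reduction, using hypothetical loss line items against a $200K/year contract:

```python
# Step 1: the consequential damages exclusion removes the largest
# line items. Step 2: the cap limits what survives.

ANNUAL_FEES = 200_000  # 12-months-of-fees cap

# Hypothetical loss line items from a major vendor failure.
losses = {
    "prepaid fees for undelivered service": ("direct", 35_000),
    "lost revenue during outage": ("consequential", 900_000),
    "customer churn": ("consequential", 400_000),
    "cost of substitute service": ("consequential", 120_000),
}

direct_only = sum(amt for kind, amt in losses.values() if kind == "direct")
recovery = min(direct_only, ANNUAL_FEES)

total = sum(amt for _, amt in losses.values())
print(f"Actual loss: ${total:,}; recoverable: ${recovery:,}")
# Actual loss: $1,455,000; recoverable: $35,000
```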

The Data Breach Version

This structure becomes most consequential in a data breach. The costs of a breach are almost entirely consequential: forensic investigation, breach notification, regulatory fines under state notification laws and GDPR, credit monitoring, customer churn, reputational harm. These costs can reach multiples of annual contract value. Under a standard MSA, they are all excluded.

Of 327 vendor contracts benchmarked by TermScout, 100% waive indirect damages. Only approximately 2% offer an elevated liability cap for data security breaches. If your MSA does not have a data breach carve-out, you are in the majority — not the exception. (Source: Otto Hanson, Founder & CEO of TermScout, via Contract Teardown Show. Verify current figures at termscout.com.)

The AI Version

AI failures compound the liability cap problem in two ways.

First, AI failures generate exactly the costs that consequential damages exclusions remove. An AI system that produces inaccurate outputs affecting customer decisions, triggers a regulatory enforcement action under state AI law, or causes business interruption during a model failure produces losses that are almost entirely consequential in nature.

Second, the cap was sized for software bug risk, not for AI-scale consequential harm. A 12-months-of-fees cap calibrated for a $200K SaaS contract was priced against the risk that the software might not work correctly. It was not priced against the risk of an AI system making consequential decisions at scale across your customer base or operations.

Both Sides of the Table

If you're the buyer:

  1. Push for a data breach carve-out from the liability cap — Snowflake's Terms of Service Section 12(C) (https://www.snowflake.com/en/legal/terms-of-service/, verified 2026-04-20) establishes a 2× "Data Protection Claims Cap" as a separate ceiling from the general cap. Use it as a benchmark.
  2. For AI vendors specifically, push for an AI-incident carve-out analogous to the data breach carve-out — the risk profile is comparable.
  3. If the vendor won't move on the cap, negotiate for a cyber insurance requirement and verify you are named as additional insured.
  4. Make sure the consequential damages exclusion does not apply to the vendor's breach of its own confidentiality or data security obligations — that carve-out is achievable.

If you're the vendor:

  1. Unlimited liability is not commercially viable at scale — the cap is the business model.
  2. A modest data breach carve-out (2× annual fees) costs almost nothing in most scenarios and is a competitive differentiator.
  3. The consequential damages exclusion is not negotiable for core liability structure, but offering limited carve-outs for specific high-stakes categories signals confidence in your product.

The Pattern Signal

The liability cap and consequential damages exclusion co-occur in nearly every enterprise MSA. When reviewing this provision, also check:

  • The warranty-remedy unit — the exclusive remedy clause (termination + pro-rata refund) often makes the liability cap redundant. In practice, the cap only matters if you can get past the exclusive remedy clause first. Read both.
  • The SLA remedy — if the SLA designates service credits as your exclusive remedy for downtime, the liability cap may never even be reached for outage-related claims.
  • Indemnification carve-outs — indemnification obligations are typically excluded from the general liability cap. Confirm whether AI output indemnification, if any, is also excluded from the cap.

Tags: msa

SLA-REMEDY

The SLA Trap

"99.9% uptime" sounds like a real commitment. It is a real commitment — to a remedy worth a fraction of one month's fee, claimable only if you file within 30 days of the incident, applicable only to platform availability and not to AI feature quality. The SLA uptime guarantee is real. What it is worth is defined by the remedy clause, not the headline number.

How It Works

A standard SLA designates service credits as the exclusive remedy for downtime. UCC § 2-719(1)(b) permits commercial parties to make a specified remedy exclusive, and most SLAs do exactly this:

"Service credits are Customer's sole and exclusive remedy for any failure by Vendor to meet the Service Level Commitments set forth in this SLA."

The credit formula typically produces a recovery of a small fraction of one month's fee:

Uptime achieved     Credit (typical)
99.0% – 99.9%       10% of monthly fee
95.0% – 99.0%       25% of monthly fee
Below 95.0%         50% of monthly fee

For a $200K/year contract ($16,667/month), a qualifying outage at the worst tier generates a credit of $8,333. If that outage caused a day of lost operations — customer-facing downtime, employee productivity loss, emergency response costs — the credit is likely a small fraction of the actual harm.

[UCC § 2-719(1)(b), https://www.law.cornell.edu/ucc/2/2-719 (parties may agree that a remedy is exclusive; if expressly agreed to be exclusive, it is the sole remedy).]
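
A minimal sketch of the tiered credit formula, using the sample table above; real SLAs vary in both the tier boundaries and the percentages:

```python
# Tiered service-credit formula for a $200K/year contract.

MONTHLY_FEE = 200_000 / 12  # about $16,667

def service_credit(uptime_pct: float) -> float:
    if uptime_pct >= 99.9:
        return 0.0                  # commitment met: no credit
    if uptime_pct >= 99.0:
        return 0.10 * MONTHLY_FEE
    if uptime_pct >= 95.0:
        return 0.25 * MONTHLY_FEE
    return 0.50 * MONTHLY_FEE       # worst tier

print(f"${service_credit(94.0):,.0f}")  # $8,333 for a month below 95%
```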

The 30-Day Claim Window

Most SLAs require the customer to submit a credit request within 30 days of the incident. This window is easy to miss:

  • During an outage, your team is focused on remediation, not paperwork
  • The credit request process often requires technical documentation of downtime duration
  • The 30-day window runs from the incident date, not from when the incident is resolved (made concrete in the sketch after this list)
  • Missing the window typically waives the credit entirely — most SLAs contain a "no credits for claims submitted outside the request window" provision

If the outage caused losses an order of magnitude above the credit amount, those losses are also excluded as consequential damages under the MSA. You have no recourse outside the SLA remedy.

The AI Version

AI features are typically excluded from SLA coverage in two ways.

First, most SLAs define "downtime" as the platform being fully unavailable, or returning errors above a threshold rate. AI feature degradation — the model producing lower-quality outputs, slower response times, increased hallucination rates — does not meet the definition of "downtime." The platform is up. The AI is underperforming. No credit applies.

Second, some vendors explicitly exclude AI features from SLA coverage:

"The Service Level Commitments in this SLA do not apply to: (a) AI Features, (b) Beta features, or (c) features that Vendor identifies as experimental or subject to change."

AI features, beta features, and experimental features often overlap significantly. If the AI functionality your company has integrated into its workflows is classified as any of these, the SLA provides no remedy when it fails.

Both Sides of the Table

If you're the buyer:

  1. Read the "exclusive remedy" language in the SLA — confirm that service credits are not your only recourse for outages that cause business-level harm.
  2. Extend the credit request window to 60 or 90 days — the standard 30 days is operationally unrealistic during an incident.
  3. Push for AI features to be covered under the SLA, or negotiate a separate AI availability commitment with financial teeth.
  4. If the vendor won't move on the SLA remedy, make sure the consequential damages exclusion in the MSA has a carve-out for major outages or data incidents.

If you're the vendor:

  1. The service credit structure is your risk management mechanism — predictable, capped, actuarially priced.
  2. The 30-day window protects against long-tail claims; extending it to 60 days has minimal financial impact but improves customer goodwill.
  3. Excluding AI features from SLA coverage is defensible for early-stage AI functionality; it becomes a negotiating liability as AI features mature into core product.

The Pattern Signal

The SLA trap co-occurs with:

  • The Illusory Protection — the SLA commitment is the warranty; the service credit is the exclusive remedy. Same structure as the warranty-remedy mismatch in the main MSA.
  • The Liability Cap and consequential damages exclusion — the credit is your remedy inside the SLA; business losses from the outage are excluded outside it.
  • The 30-day window — a procedural Silence Trap: failure to file within the window waives the credit, even if the underlying outage was real and documented.

Tags: msa

AUTO-RENEWAL

The Auto-Renewal Trap

The auto-renewal clause is one of the most expensive missed deadlines in commercial contracting. The window is short, the notice obligation is on you, the renewal locks in terms at list price, and the clause is typically in the termination section — not the pricing section where you'd expect to find it. And under New York law, where the statute applies, an auto-renewal provision is unenforceable against the customer if the contractor failed to give the required 15–30 day advance notice, meaning the vendor cannot compel you to honor a renewal that occurred without it.

What the Clause Looks Like

A standard auto-renewal clause:

"This Agreement will automatically renew for successive twelve (12) month periods at Vendor's then-current list price unless either party provides written notice of non-renewal at least sixty (60) days prior to the end of the then-current term."

Three things to notice:

  1. "Then-current list price" — your negotiated discount expires at renewal. Year 1 discount was a customer acquisition cost; the auto-renewal recovers it.
  2. "Sixty days prior" — the notice deadline is measured from the contract end date, not from when the vendor sends a reminder. Most vendors send no reminder.
  3. Location — this clause is almost never in the pricing section. It is in the termination section, often at the end of the agreement, where it receives the least scrutiny during negotiation.

The New York Statute

Under N.Y. Gen. Oblig. Law § 5-903(2), a provision for automatic renewal in a service contract is unenforceable against the customer unless the contractor gave written notice of the renewal between 15 and 30 days before the deadline to cancel. (Applicability to pure SaaS subscriptions is contested — § 5-903 is more likely to apply where the service involves processing customer data qualifying as personal property than where it is pure platform access.)

Many enterprise MSAs are New York-governed. Many auto-renew annually without any reminder notice to the customer. Where § 5-903 applies and the required notice was not given, the auto-renewal provision is unenforceable against you — the contractor cannot compel you to honor the renewal.

[N.Y. Gen. Oblig. Law § 5-903, https://www.nysenate.gov/legislation/laws/GOB/5-903 (auto-renewal provision in a service contract is unenforceable against the customer absent the contractor's written notice given between 15 and 30 days before the cancellation deadline).]
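
The statutory timing test reduces to date arithmetic. A minimal sketch with hypothetical dates; whether § 5-903 reaches your contract at all is the separate applicability question flagged above:

```python
from datetime import date

def notice_within_statutory_window(notice_sent: date,
                                   cancellation_deadline: date) -> bool:
    """§ 5-903 test: notice must arrive at least 15 and not more
    than 30 days before the customer's deadline to cancel."""
    days_before = (cancellation_deadline - notice_sent).days
    return 15 <= days_before <= 30

deadline = date(2026, 10, 2)  # hypothetical last day to send non-renewal notice
print(notice_within_statutory_window(date(2026, 9, 10), deadline))  # True (22 days)
print(notice_within_statutory_window(date(2026, 9, 25), deadline))  # False (7 days)
```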

The AI Addendum Version

The AI addendum creates a compound version of the auto-renewal trap. The sequence:

  1. You sign an MSA with standard auto-renewal terms
  2. Mid-term, the vendor adds an AI Services Addendum, incorporated by reference into the MSA
  3. The AI addendum includes a 30-day opt-out window for data training, compliance obligations you did not negotiate, and indemnification carve-outs for AI outputs
  4. The 30-day opt-out window closes before you act
  5. The MSA auto-renews at year end, carrying all AI addendum terms forward
  6. You have now renewed into AI-specific terms you never actively accepted, at AI-tier pricing you never negotiated

Each step is defensible in isolation. The sequence produces a result that most in-house counsel would consider unreasonable if they saw it laid out in advance.

Practical Defense

The most reliable defense against the auto-renewal trap is operational, not contractual:

  1. Calendar every contract end date on the day you sign. Set a reminder 90 days before the end date — not 60, not 30. You need time to evaluate, decide, and send proper notice. (A date sketch follows this list.)
  2. Track the full document stack at renewal, not just the MSA. At each renewal, audit every incorporated document for changes since the last renewal, particularly any new AI addenda or AUP updates.
  3. Confirm notice requirements in writing. If you send a non-renewal notice, send it via the method specified in the MSA's notice provision and confirm receipt.
  4. If you missed the window: Check whether the MSA is New York-governed and whether the vendor provided the required 15–30 day notice. If not, the auto-renewal provision may be unenforceable against you under § 5-903 — meaning the vendor cannot compel you to honor the renewal.
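
A minimal sketch of that calendaring step, assuming a hypothetical end date and the 60-day notice window from the sample clause above:

```python
from datetime import date, timedelta

contract_end = date(2027, 1, 31)  # hypothetical
notice_window_days = 60           # from the auto-renewal clause

notice_deadline = contract_end - timedelta(days=notice_window_days)
review_reminder = contract_end - timedelta(days=90)

print(f"Audit the document stack by: {review_reminder}")      # 2026-11-02
print(f"Last day for non-renewal notice: {notice_deadline}")  # 2026-12-02
```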

Both Sides of the Table

If you're the buyer:

  1. Negotiate the non-renewal notice window to 90 days minimum — 60-day windows are industry standard but operationally tight.
  2. Add a vendor obligation to provide written renewal notice 90-120 days before the contract end date.
  3. Lock renewal pricing in the Order Form — cap increases at CPI or a fixed percentage, and specify that the cap applies at auto-renewal, not just initial renewal.
  4. For AI addenda: treat the addendum opt-out window with the same urgency as a contract signing deadline — the window is real and consequential.

If you're the vendor:

  1. Auto-renewal at list price is your revenue predictability mechanism — defend the concept but consider whether aggressive pricing at renewal creates more churn than it recovers.
  2. Short notice windows protect against budget-cycle cancellations; the trade-off is customer resentment and, in New York, potential unenforceability exposure.
  3. Proactive renewal notice — even when not contractually required — reduces churn and is increasingly a customer expectation at the enterprise tier.

The Pattern Signal

The auto-renewal trap co-occurs with:

  • The Silence Trap — auto-renewal is structurally a Silence Trap: your inaction constitutes consent to another year of the agreement at list price.
  • The Invisible Operative Document — the auto-renewal clause is in the termination section, not the pricing section. The document stack means the terms you're renewing into include incorporated documents you may not have reviewed since signing.
  • The Dynamic Document — AI addenda added mid-term and incorporated by reference are carried forward at renewal. The renewal covers terms that did not exist when you signed.
Tags: msa
