MSA-AI: Watchpoints — Patterns That Get Worse with AI
8 Patterns That Get Worse When AI Enters Your MSA
Where standard MSA risk calculus breaks down.
The MSA you negotiated is not AI-ready by default. AI provisions arrive through incorporated documents, not redlines -- which means the terms governing your vendor's AI features may have changed after you signed, without your knowledge and without reopening negotiation. The eight patterns below are not invented for AI. They are structural patterns that already existed in your MSA -- that AI makes significantly more dangerous.
1. The Dynamic Document / The Living Agreement
The AI addendum that changed your terms may have arrived through a URL in your existing MSA -- not through a redline, not through negotiation, and not with your knowledge. Vendors are not updating the MSA itself to add AI provisions. They are updating the documents the MSA incorporates by reference: the Acceptable Use Policy, the Data Processing Addendum, a new AI Services Addendum. A clause making the agreement "subject to the Acceptable Use Policy at [URL], as may be updated from time to time" gives the vendor the right to add AI-specific data usage rights, indemnification exclusions, and compliance obligations without your consent. Your continued use of the platform constitutes acceptance.
Recognition signal: Any incorporated document referenced by URL rather than version number or date. If you can't name every document that governs this relationship -- including when each was last updated -- The Dynamic Document is in play.
[By analogy from the consumer context, courts have enforced terms incorporated by hyperlink where the user had reasonable notice. Meyer v. Uber Technologies, Inc., 868 F.3d 66, 75 (2d Cir. 2017), https://law.justia.com/cases/federal/appellate-courts/ca2/16-2750/16-2750-2017-08-17.html (enforcing arbitration clause in terms incorporated by hyperlink). The principle is well-developed in the consumer context; commercial B2B authority for hyperlink-incorporation under modern enterprise SaaS conditions remains thin.]
Why your MSA is not one document ->
2. The Illusory Protection / The Liability Ceiling -- Warranty Layer
A warranty that AI features will meet a "commercially reasonable" performance standard, with termination as your exclusive remedy, means your recovery for an AI-driven failure is a fraction of your annual fee -- regardless of what that failure actually cost your business. UCC § 2-719(1)(b) permits commercial parties to make a specified remedy exclusive. The warranty-remedy mismatch was always a structural problem; AI makes it worse: "commercially reasonable" is indefinable for a probabilistic system whose outputs change with every model update, and the vendor controls when and how the model changes. Read the warranty and the exclusive remedy clause together, as a unit.
Recognition signal: "Customer's exclusive remedy for any failure of the foregoing warranty shall be..." following a warranty section. For AI specifically: a "commercially reasonable" standard on a non-deterministic system paired with termination as the exclusive remedy.
[UCC § 2-719(1)(b), https://www.law.cornell.edu/ucc/2/2-719 (parties may agree that a remedy is exclusive; if expressly agreed to be exclusive, it is the sole remedy).]
The warranty that doesn't protect you ->
3. The Illusory Protection / The Liability Ceiling -- Cap Layer
The consequential damages exclusion removes exactly the costs that AI failures generate -- and the liability cap was sized for software bugs, not for an AI system making consequential decisions at scale. UCC § 2-719(3) allows commercial parties to exclude consequential damages unless unconscionable. The harms most likely to follow an AI vendor failure -- regulatory fines, customer harm from inaccurate outputs, reputational damage, business interruption -- are all consequential damages. A cap of 12 months of fees calibrated to software bug risk was not calibrated to AI-scale exposure.
Recognition signal: Liability cap + consequential damages exclusion appearing together. For AI: check whether the cap was set before AI features were added -- it may not reflect the current risk profile of the platform.
[UCC § 2-719(3), https://www.law.cornell.edu/ucc/2/2-719 (consequential damages may be limited or excluded unless the limitation or exclusion is unconscionable; limitation of damages where loss is commercial is not prima facie unconscionable).]
The liability cap -- what it actually covers ->
4. The Illusory Protection / The Liability Ceiling -- Indemnification Layer
Standard IP indemnification covers copyright claims from the vendor's software. AI vendor MSAs have diverged: some offer uncapped indemnification for AI outputs, some cap it at $10,000 per claim, some exclude AI outputs from indemnification entirely. The structural kill switch present in nearly every AI indemnification clause is the modification exclusion -- the indemnification is voided when output is "modified." Your team edits AI output before publishing. That editing is modification. The work you actually publish may not be covered by any indemnification at all.
Recognition signal: IP indemnification with a carve-out for output that was "modified, transformed, or used in combination with products or services not provided by Vendor." The question to ask: what percentage of AI output gets published without any editing? That is the fraction actually covered.
[17 U.S.C. § 501, https://www.law.cornell.edu/uscode/text/17/501 (anyone who violates the exclusive rights of the copyright owner under sections 106 through 122 is an infringer).]
The AI addendum teardown -- indemnification ->
5. The Compliance Burden Shift
EU AI Act Article 26 imposes affirmative compliance obligations on you as a deployer -- implementing technical measures, maintaining logs, conducting assessments -- for a model the vendor built, controls, and updates without telling you. Under Regulation (EU) 2024/1689, Article 3(4), a "deployer" is any entity using an AI system under its authority (except in personal non-professional use) -- your company, when you use a vendor's AI tool. Article 26, which applies from 2 August 2026, attaches affirmative obligations whenever that deployer is using a high-risk AI system: implementing appropriate technical and organizational measures, maintaining operational logs for at least six months, informing affected natural persons that they are subject to AI-based decisions, and carrying out data protection impact assessments where required under GDPR Art. 35. For specific Annex III deployers, Article 27 additionally requires a fundamental rights impact assessment. Colorado SB 24-205 creates analogous deployer obligations in the US context. The vendor's standard MSA disclaims all of these obligations and places them on the customer. The compliance obligation is yours regardless of whether you can see inside the model, audit the training data, or receive advance notice of model changes.
Recognition signal: "Customer is solely responsible for ensuring that Customer's use of the AI Features complies with all applicable laws and regulations, including without limitation laws governing automated decision-making." The word "solely" does the structural work -- the vendor built the system; you bear the regulatory exposure for using it.
[Regulation (EU) 2024/1689, Art. 3(4), https://eur-lex.europa.eu/eli/reg/2024/1689/oj (definition of deployer); Art. 26 (obligations of deployers of high-risk AI systems, applicable from 2 August 2026); Art. 27 (fundamental rights impact assessment for specific Annex III deployers). Colo. Rev. Stat. § 6-1-1701 et seq., https://leg.colorado.gov/bills/sb24-205 (Colorado AI Act); SB 25B-004, https://leg.colorado.gov/bills/sb25b-004 (postponing operative date to June 30, 2026).]
The compliance burden shift ->
6. The Dynamic Document / The Living Agreement -- Training Data Layer
"Improve the Services" is not a documented instruction from you to the vendor -- and using your data to train the vendor's AI model under that clause may lack a valid legal basis under GDPR. Under GDPR Article 28(3)(a), a data processor may process personal data only on documented instructions from the controller. "Improve the Services" as a purpose in the vendor's standard terms is a vendor-defined purpose, not a documented controller instruction. Opt-out windows that begin at the addendum's effective date -- often the date the vendor posts the update to a URL, not the date you receive notice -- mean the processing may have already begun before you had a practical opportunity to stop it.
Recognition signal: "Vendor may use Customer Data to improve the Services, including to develop, train, and improve AI and machine learning features" with opt-out architecture (rather than opt-in). The absence of a proactive opt-in is the structural signal; your silence from the effective date is treated as consent.
[GDPR Art. 28(3)(a), https://gdpr-info.eu/art-28-gdpr/ (processor processes personal data only on documented instructions from the controller); GDPR Art. 6(1), https://gdpr-info.eu/art-6-gdpr/ (lawfulness of processing requires a valid legal basis).]
7. The Illusory Protection / The Liability Ceiling -- SLA Layer
AI features are typically not covered by the standard SLA -- so when AI output is degraded, inaccurate, or unavailable, service credits may not apply. UCC § 2-719(1)(b) permits the parties to designate service credits in an SLA as the exclusive remedy for downtime. Most SLAs do. But the SLA covers platform uptime -- the availability of the software infrastructure. A vendor whose platform is running at 99.9% uptime while its AI features produce degraded outputs is in technical SLA compliance. The feature you're actually paying for may have no remedy at all.
Recognition signal: SLA definitions of "Downtime" or "Service Unavailability" that require complete platform unavailability. If partial degradation of AI features doesn't trigger the definition, AI output quality falls outside SLA coverage -- and inside the consequential damages exclusion.
[UCC § 2-719(1)(b), https://www.law.cornell.edu/ucc/2/2-719 (parties may agree that a remedy is exclusive; if expressly agreed to be exclusive, it is the sole remedy).]
8. The Dynamic Document / The Living Agreement -- Renewal Layer
If your AI addendum arrived mid-term, the MSA's auto-renewal carries its terms forward automatically -- and the opt-out window for the AI provisions may have already closed before you had a practical opportunity to act. N.Y. Gen. Oblig. Law § 5-903(2) makes automatic renewal provisions in service contracts unenforceable unless written notice is given 15-30 days before the cancellation deadline. A mid-term addendum adds AI-specific provisions -- data usage rights, compliance obligations, indemnification carve-outs -- that were not present when you originally signed. The auto-renewal carries those provisions forward. The compound question: when did the AI addendum's opt-out window close, and when does the MSA next renew?
Recognition signal: AI addendum with its own opt-out window, incorporated by reference into an MSA that auto-renews. If the addendum's opt-out window closed before your next renewal review, you may have renewed into terms you never actively accepted.
[N.Y. Gen. Oblig. Law § 5-903, https://www.nysenate.gov/legislation/laws/GOB/5-903 (automatic renewal unenforceable absent 15-30 day written notice; applies to contracts for service, maintenance, or repair to real or personal property).]
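The compound question above is date arithmetic, and it is worth doing explicitly. A minimal sketch, assuming a hypothetical mid-term addendum with a 30-day opt-out window and a year-end renewal (all dates and the function name are illustrative, not drawn from any actual agreement):

```python
from datetime import date, timedelta

def renewal_exposure(
    addendum_effective: date,
    optout_window_days: int,
    next_renewal: date,
) -> dict:
    """Answer the compound question for a mid-term AI addendum:
    when did its opt-out window close, and did it close before
    the next renewal review?"""
    optout_closes = addendum_effective + timedelta(days=optout_window_days)
    return {
        "optout_closes": optout_closes,
        "closed_before_renewal": optout_closes < next_renewal,
    }

# Hypothetical dates: addendum posted mid-term on March 1 with a
# 30-day opt-out; the MSA renews on December 31.
result = renewal_exposure(date(2025, 3, 1), 30, date(2025, 12, 31))
print(result["optout_closes"])          # 2025-03-31
print(result["closed_before_renewal"])  # True -> renewal carries the terms forward
```

If `closed_before_renewal` is true, the renewal review is the first practical opportunity to renegotiate terms you never actively accepted.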
These eight patterns cover the places where standard MSA risk calculus breaks down when AI provisions enter the picture. For the full pattern analysis -- every named pattern, the cross-vendor indemnification spectrum, and both-sides-of-the-table playbook -- see the MSA Pattern Guide below.
The Hyperlink Trap: Why Your MSA Is Not One Document
The Hidden Complexity Trap is the single most common pattern in technology contracts, appearing more consistently across technology agreements than any other.
The mechanism is simple: the agreement you're reading incorporates other documents by reference. Those documents incorporate still others. By the time you've followed every link, you're reading 4-5 separate documents, and the terms that actually govern your relationship are scattered across all of them.
What It Looks Like
A typical SaaS MSA contains language like:
"Customer's use of the Services is subject to the Acceptable Use Policy available at [vendor URL], the Data Processing Addendum available at [vendor URL], and the Service Level Agreement available at [vendor URL], each as may be updated from time to time."
That last clause — "as may be updated from time to time" — transforms a static agreement into a living one. The vendor can modify the AUP, DPA, or SLA after you sign, without your consent, and your continued use constitutes acceptance. This triggers a second pattern: the Dynamic Document.
The Real-World Stack
For a major SaaS vendor like Salesforce, the full contractual relationship requires reading:
- The MSA — framework terms (liability, IP, confidentiality)
- The Order Form — commercial terms (pricing, scope, renewal)
- The Data Processing Addendum — privacy and security obligations
- The Acceptable Use Policy — usage restrictions (including competitive intelligence prohibitions)
- The Service Level Agreement — uptime commitments and remedy for downtime
Each document has its own definitions, its own limitation of liability provisions, and its own termination triggers. Conflicts between documents are resolved by a hierarchy of precedence that's usually buried in a boilerplate section of the MSA.
Most practitioners read documents 1 and 2 carefully. Document 3 gets a skim. Documents 4 and 5 are often accepted as-is.
Why This Matters for AI Provisions
The Hidden Complexity Trap has intensified since vendors began adding AI-specific terms. Many vendors are not modifying the MSA itself — they're adding an AI Services Addendum or updating the Acceptable Use Policy. This means:
- The MSA you redlined last year hasn't changed
- The AUP it incorporates by reference has changed significantly
- Your continued use of the platform after the AUP update may constitute acceptance of the new AI terms
- Those AI terms may include data usage rights, output ownership provisions, and indemnification exclusions that didn't exist when you signed
If you're reviewing a renewal and the MSA looks identical to last year's, that's not reassuring — it's a signal. Check every incorporated document for changes, especially the AUP and any new AI-specific addenda.
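One practical control for the check above is to fingerprint each incorporated document at signing and compare at renewal review. A minimal sketch of the comparison logic, with hypothetical document names and placeholder texts (in practice the "current" texts would be fetched from the vendor's URLs):

```python
import hashlib

def fingerprint(doc_text: str) -> str:
    """Stable fingerprint of an incorporated document's text."""
    return hashlib.sha256(doc_text.encode("utf-8")).hexdigest()

def changed_documents(baseline: dict[str, str], current: dict[str, str]) -> list[str]:
    """Compare fingerprints recorded at signing against the documents
    as published today; return names that differ or appeared since
    signing (e.g. a new AI addendum)."""
    return sorted(
        name for name, fp in current.items()
        if baseline.get(name) != fp
    )

# Fingerprints recorded when the MSA was signed (placeholder texts).
baseline = {
    "aup": fingerprint("AUP v2.1 ..."),
    "dpa": fingerprint("DPA v1.4 ..."),
    "sla": fingerprint("SLA v1.0 ..."),
}

# What the vendor's URLs serve at renewal review: the AUP changed,
# and a new AI addendum has appeared.
current = {
    "aup": fingerprint("AUP v3.0 with AI data-usage terms ..."),
    "dpa": fingerprint("DPA v1.4 ..."),
    "sla": fingerprint("SLA v1.0 ..."),
    "ai_addendum": fingerprint("AI Services Addendum v1.0 ..."),
}

print(changed_documents(baseline, current))  # ['ai_addendum', 'aup']
```

An unchanged MSA plus a changed fingerprint on any incorporated document is exactly the signal described above: the terms moved without a redline.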
Both Sides of the Table
If you're receiving this clause:
- Request a complete list of all documents incorporated by reference — don't hunt for them
- Add a "no unilateral modification" clause for incorporated documents, or at minimum require notice of material changes
- Define the hierarchy of precedence explicitly — if the DPA conflicts with the MSA on data usage, which controls?
- Calendar a review of incorporated documents at each renewal, not just the Order Form
If you're drafting this clause:
- Incorporated-by-reference documents are your flexibility mechanism — they let you update operational terms without reopening the MSA
- Burying controversial terms in the AUP works in the short term but creates trust erosion and churn risk
- Proactive disclosure of material changes (even when not contractually required) reduces negotiation friction at renewal
The Pattern Signal
When you find the Hidden Complexity Trap, these patterns are frequently nearby:
- The Illusory Protection — commonly co-occurs in tech contracts. The warranty looks protective but the exclusive remedy (buried in a different section or document) guts it.
- Template Contamination — strongly associated with Hidden Complexity; the two patterns frequently appear together. The template was designed for a different deal; nobody adapted the incorporated documents.
- Speed-Pressure Waiver — commonly co-occurs. Quarter-end pressure means you sign before reading the full document stack.
The Silence Trap: What Happens When Nobody Says Anything
The Silence Trap appears in 28% of commercial contracts. The mechanism: inaction is treated as consent, or a process is defined so vaguely that exercising your rights becomes practically impossible.
The Disputed Charges Pattern
Salesforce's MSA (Section 5.5, last updated September 15, 2025) addresses disputed charges with language requiring parties to act in "good faith" to resolve billing disputes. It defines no process, no timeline, no escalation path, and no standard for what constitutes good faith.
In practice, this means:
- You dispute a charge
- The vendor's billing team responds (eventually)
- There's no deadline for resolution
- There's no mechanism to pause payment during the dispute
- If you withhold payment pending resolution, you may trigger a breach provision
- The vendor has no contractual incentive to resolve quickly
The silence isn't in the contract language — it's in what the contract doesn't say. The provision exists so both parties can point to it. It doesn't function as a remedy.
The Auto-Renewal Version
The auto-renewal clause is a Silence Trap by design. Your contract renews unless you affirmatively opt out within a narrow window — typically 30 to 60 days before the term ends. Miss the window and you're locked in for another year at a price you didn't negotiate.
What makes it a trap, not just a deadline:
- The notification window is pegged to the contract end date, not the calendar (in-house counsel commonly manage dozens of contracts with different end dates)
- Renewal pricing defaults to list price (your negotiated discount expires)
- The vendor has no obligation to remind you the window is approaching
- Enterprise SaaS switching costs make the "just don't renew" remedy theoretical rather than practical
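Because each deadline is pegged to a contract end date rather than the calendar, a portfolio view has to be computed. A minimal sketch, with hypothetical contract names, end dates, and notice windows:

```python
from datetime import date, timedelta

def optout_deadline(term_end: date, notice_days: int) -> date:
    """Last day to serve non-renewal notice -- pegged to the
    contract end date, not the calendar."""
    return term_end - timedelta(days=notice_days)

def windows_closing(
    contracts: dict[str, tuple[date, int]],
    today: date,
    horizon_days: int = 45,
) -> list[str]:
    """Contracts whose opt-out deadline falls within the next
    `horizon_days` -- the view a single calendar reminder misses."""
    cutoff = today + timedelta(days=horizon_days)
    return sorted(
        name for name, (term_end, notice_days) in contracts.items()
        if today <= optout_deadline(term_end, notice_days) <= cutoff
    )

# Hypothetical portfolio: (term end, required notice days).
portfolio = {
    "crm_msa": (date(2026, 3, 31), 60),   # deadline 2026-01-30
    "hr_saas": (date(2026, 6, 30), 30),   # deadline 2026-05-31
    "bi_tool": (date(2026, 2, 15), 90),   # deadline 2025-11-17 (already passed)
}

print(windows_closing(portfolio, today=date(2026, 1, 2)))  # ['crm_msa']
```

Note that the already-missed deadline (`bi_tool`) silently drops out of the closing-soon list, which is why the computation needs to run continuously, not once.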
The AI Consent Version
The newest form of the Silence Trap is in AI data usage provisions:
"Vendor may use anonymized and aggregated Customer Data to improve the Services, including AI and machine learning features. Customer may opt out by submitting a request to [email address] within thirty (30) days of the effective date of this Addendum."
The clock starts when the addendum takes effect — which may be when the vendor posts it to a URL, not when you read it. Silence equals consent to data usage for model training.
Both Sides of the Table
If you're the buyer:
- Replace "good faith" dispute language with a defined process: written notice → 15-day response → escalation to [named roles] → mediation if unresolved at 30 days
- Add a right to withhold disputed amounts during the resolution period without triggering breach
- Extend auto-renewal notice windows to 90 days and require the vendor to provide written renewal notice 120 days before term end
- For AI opt-outs: replace opt-out with opt-in — data usage for model training requires affirmative consent, not silence
If you're the vendor:
- Vague dispute resolution protects your cash flow; detailed dispute resolution protects the relationship. Choose based on whether you want renewals.
- Auto-renewal is your revenue predictability mechanism — defend the concept but consider whether 30-day windows create more churn (angry surprise renewals) than 90-day windows (planned retention conversations)
- Opt-out AI data clauses work today because the market hasn't standardized. When it does, you'll need to retrofit opt-in — consider moving early as a trust differentiator