Intellectual Property

Top Legal Issues Facing Fashion & Retail in 2026

Rather than a single core event, this headline summarizes the ongoing legal pressures shaping fashion and retail operations in early 2026, continuing 2025 trends that are expected to persist. Key developments include escalating tariffs and trade enforcement, AI and digital-commerce risks, e-commerce scrutiny, sustainability mandates (e.g., PFAS restrictions, climate disclosures, extended producer responsibility), labor and immigration issues, Proposition 65 enforcement, financial distress with rising bankruptcies, and shifts in private equity.[2][5][6][11] Specific cases testing legal boundaries include IP disputes (e.g., Naghedi’s push to protect its woven neoprene trademark amid a wave of "dupes," and Quince v. Deckers, which frames UGG trade dress claims as alleged monopolization), origin-labeling scrutiny, and regulatory actions such as Texas suing Shein under the DTPA over toxic chemicals and data risks.[1][3][13]

The AI Knows Too Much: When Employees Feed Trade Secrets into Generative AI Tools

Employees who feed trade secrets into public generative AI tools like ChatGPT, Claude, or Google Gemini risk waiving legal protections, because those inputs may constitute voluntary disclosure to a third party without confidentiality guarantees.[1][2] The core event is a February 2026 ruling in United States v. Heppner (U.S. District Court, Southern District of New York), in which the court held that attorney-client privilege did not apply to documents prepared using Anthropic's Claude because its privacy policy allowed data sharing with third parties; commentators expect the same reasoning to extend to trade secrets under the Defend Trade Secrets Act (DTSA).[1][2]

The White House Releases National AI Legislative Framework

Core Event: On March 20, 2026, the White House under President Donald J. Trump released a four-page "National Policy Framework for Artificial Intelligence" (also called the National AI Legislative Framework), providing legislative recommendations to Congress for a unified federal AI policy.[1][2][3][4][5][6][7] It outlines 6-7 high-level objectives, emphasizing U.S. AI dominance, innovation, national security, child safety, consumer protection, IP rights, free speech, workforce development, and community safeguards, while advocating preemption of conflicting state AI laws (except in areas like child protection, fraud prevention, consumer laws, zoning for AI infrastructure, and state AI procurement/use).[1][2][3][4][6][7]

Attorney Accountability Is the Missing Layer in Legal AI

The headline "Attorney Accountability Is the Missing Layer in Legal AI," published March 23, 2026, highlights a growing call for lawyers to bear direct responsibility for AI-generated errors, such as hallucinations in filings or unverified outputs, amid rapid AI adoption in legal practice.[2][3] The core development is the emphasis on attorney accountability as an overlooked safeguard, building on ABA Formal Opinion 512, which mandates lawyers verify AI outputs for competence, confidentiality, candor to tribunals, and supervision under Model Rules 1.1, 1.6, 3.3, and 5.1-5.3.[2][3]

Policy Week in Review – March 20, 2026

On March 20, 2026, the White House released the National Policy Framework for Artificial Intelligence, a document with legislative recommendations urging Congress to enact a unified federal AI policy that preempts state regulations, promotes innovation, and addresses key issues like child safety, intellectual property, free speech, workforce development, and national security.[1][4][5][7][9] The framework outlines seven policy areas, including regulatory sandboxes, access to federal datasets, reliance on existing sector-specific regulators (e.g., FTC, FDA, SEC), protections against AI-enabled fraud, and streamlined permitting for AI infrastructure while preventing states from regulating AI development or penalizing developers for third-party misuse.[1][4][6][9]

White House Outlines AI Policy Agenda in New National Framework

On March 20, 2026, the White House under President Donald J. Trump released the National Policy Framework for Artificial Intelligence, a set of non-binding legislative recommendations to Congress for a unified federal AI approach emphasizing innovation, safety, and oversight.[1][3][4]

China Raises the Stakes on Trade Secret Protection: What Companies and Counsel Need to Know About the New Rules

On February 24, 2026, China's State Administration for Market Regulation (SAMR) issued the Provisions on the Protection of Trade Secrets, a major overhaul replacing the outdated Several Provisions on Prohibiting Infringement of Trade Secrets from 1995 (last amended 1998), which had 12 articles; the new Provisions expand to 31 articles and take effect June 1, 2026.[1][2][4][8] Key developments include an expanded definition of trade secrets covering technical information (e.g., algorithms, computer programs, code, AI datasets) and business information (e.g., customer data, sales strategies, financial plans) with actual or potential commercial value, plus lower enforcement barriers like a presumption of infringement if substantial similarity and access are shown.[1][4][5] They also introduce extraterritorial reach, detailed confidentiality measures (e.g., tiered access, data encryption, employee exit protocols), and alignment with the digital economy.[1][4][7]

Grammarly’s AI tool mimicked experts without their consent. Now it’s being sued

Grammarly, owned by Superhuman Platform Inc., launched its "Expert Review" AI tool in August 2025, charging users $12/month for real-time writing feedback that mimics the styles and advice of prominent figures, including journalist Julia Angwin, author Stephen King, and Neil deGrasse Tyson, without obtaining their consent.[1] On March 12, 2026, Angwin filed a class-action lawsuit in the U.S. District Court for the Southern District of New York, alleging misappropriation of names and identities for commercial gain in violation of New York and California privacy and publicity-rights laws; the suit seeks class certification, an injunction, and damages for affected journalists, authors, editors, and others.[1]

Goodwin Launches OC Office With 3 Ex-Jones Day Partners

Core event: Goodwin Procter LLP launched its first Orange County office in Newport Beach, California, on March 17, 2026, by recruiting three partners—Richard Grabowski, John Vogt, and Ryan Ball—from Jones Day to lead it.[1][3][7][9] These attorneys specialize in cybersecurity, privacy, technology litigation, trade secrets, and consumer financial services, bringing elite trial experience including top defense verdicts and summary judgments in high-stakes cases.[1][6][8][10]

Emory Law School Launching An AI Study Program

Emory University School of Law in Atlanta is launching a new AI and the Law concentration starting Fall 2026 (academic year 2026–27), providing students with specialized coursework and interdisciplinary training on AI's legal implications, including regulation, liability, intellectual property, and ethical issues in areas like healthcare, work, and data science.[1][2][3][4][5]

AI Vendor Contracts: The Terms And Conditions Trap

An Above the Law article warns in-house lawyers of hidden risks in standard AI vendor contracts, exemplified by one agreement that granted the vendor co-ownership of customer-generated content and a perpetual license to customer data via broadly defined "Aggregated Statistics," with no anonymization standards or data-recovery options.[1]

TRUMP America AI Act Bill Sets Direction for Future US AI Regulation

Core event: On March 18, 2026, Sen. Marsha Blackburn (R-TN) released a 291- to 391-page discussion draft of the TRUMP AMERICA AI Act (The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act), proposing the first comprehensive federal AI regulatory framework to preempt state laws.[1][3][4][11][13]
