Law And Technology

49 entries in Legal Intelligence Tracker

The AI Knows Too Much: When Employees Feed Trade Secrets into Generative AI Tools

Employees who feed trade secrets into public generative AI tools such as ChatGPT, Claude, or Google Gemini risk waiving legal protections, because those inputs may constitute voluntary disclosure to third parties without confidentiality guarantees.[1][2] The core event is a February 2026 U.S. District Court ruling in United States v. Heppner (Southern District of New York), which held that attorney-client privilege did not apply to documents prepared using Anthropic's Claude because its privacy policy allowed data sharing with third parties, reasoning now being extended to trade secrets under the Defend Trade Secrets Act (DTSA).[1][2]

The White House Releases National AI Legislative Framework

Core Event: On March 20, 2026, the White House under President Donald J. Trump released a four-page "National Policy Framework for Artificial Intelligence" (also called the National AI Legislative Framework), providing legislative recommendations to Congress for a unified federal AI policy.[1][2][3][4][5][6][7] It outlines seven high-level objectives, emphasizing U.S. AI dominance, innovation, national security, child safety, consumer protection, IP rights, free speech, workforce development, and community safeguards, while advocating preemption of conflicting state AI laws (except in areas such as child protection, fraud prevention, consumer laws, zoning for AI infrastructure, and state AI procurement and use).[1][2][3][4][6][7]

Tencent integrates WeChat with OpenClaw AI agent amid China tech battle - Reuters

Tencent launched ClawBot on March 22, 2026, integrating the open-source OpenClaw AI agent into WeChat as a chat contact, enabling over 1 billion users to automate tasks like file transfers and email sending directly in the app.[1][2][3][4] This embeds advanced, autonomous AI capabilities, beyond traditional chatbots, into WeChat's messaging, payments, and mini-program ecosystem, supporting multimodal interactions with text, images, videos, and files.[1][2][3]

Key Considerations When Using AI for Clinical Documentation

Physicians are increasingly adopting AI tools for clinical documentation to automate note generation from patient conversations, using ambient listening, NLP, speech recognition, and machine learning for structured records, EHR integration, and reduced burnout.[1][2][3][4] The core development is the mainstream implementation of these systems in 2026, with platforms such as HealOS AI Scribe, OptiMantra, and Heidi Health, alongside dedicated AI scribes (e.g., athenaAmbient, Nuance DAX, Abridge, Suki), delivering 20-40% time savings, 98%+ transcription accuracy, and features like predictive analytics and billing code suggestions.[1][2][4][6]

Attorney Accountability Is the Missing Layer in Legal AI

The headline "Attorney Accountability Is the Missing Layer in Legal AI," published March 23, 2026, highlights a growing call for lawyers to bear direct responsibility for AI-generated errors, such as hallucinations in filings or unverified outputs, amid rapid AI adoption in legal practice.[2][3] The core development is the emphasis on attorney accountability as an overlooked safeguard, building on ABA Formal Opinion 512, which mandates lawyers verify AI outputs for competence, confidentiality, candor to tribunals, and supervision under Model Rules 1.1, 1.6, 3.3, and 5.1-5.3.[2][3]

Policy Week in Review – March 20, 2026

On March 20, 2026, the White House released the National Policy Framework for Artificial Intelligence, a document with legislative recommendations urging Congress to enact a unified federal AI policy that preempts state regulations, promotes innovation, and addresses key issues like child safety, intellectual property, free speech, workforce development, and national security.[1][4][5][7][9] The framework outlines seven policy areas, including regulatory sandboxes, access to federal datasets, reliance on existing sector-specific regulators (e.g., FTC, FDA, SEC), protections against AI-enabled fraud, and streamlined permitting for AI infrastructure while preventing states from regulating AI development or penalizing developers for third-party misuse.[1][4][6][9]

Big Week on the AI Legislation Front

On March 17, 2026, the Colorado AI Policy Working Group released a proposed framework titled "Concerning the Use of Automated Decision Making Technology in Consequential Decisions" (Proposed ADMT Framework), unanimously endorsed by Governor Jared Polis, to repeal and replace the existing Colorado AI Act before its June 30, 2026 effective date.[1][3][4][6][9] This rewrite shifts from the original law's heavy governance requirements—such as AI impact assessments, risk management policies, and reporting algorithmic discrimination—to a lighter transparency-focused regime emphasizing up-front consumer notices, post-adverse decision disclosures, rights to correct information, and human review for "Covered ADMT" materially influencing consequential decisions in areas like employment, housing, healthcare, education, finance, insurance, and government services.[1][3][5][6][7][9] It narrows scope with a higher "materially influence" threshold (vs. original "substantial factor"), carves out low-stakes uses (e.g., spellcheck, advertising), and mandates Attorney General rulemaking by December 31, 2026.[1][3][4][5][8]

Trump Administration Unveils New AI Policy Framework Calling on Congress to Act

On March 20, 2026, the Trump Administration released the “National Policy Framework for Artificial Intelligence: Legislative Recommendations,” a blueprint urging Congress to enact federal laws promoting AI innovation, preempting state regulations, and avoiding new agencies.[1][5][9][10] Organized around seven pillars (protecting children/communities/creators/free speech, U.S. competitiveness, workforce/education, and state preemption), it recommends sector-specific oversight by existing regulators, industry-led standards, regulatory sandboxes, AI resources for small businesses (grants/tax incentives), child safety measures (e.g., age-gating), anti-censorship protections, energy cost safeguards for data centers, and streamlined permitting.[1][3][5][7][9]

The Backlash Against AI Devices That Are Always Watching

Core event: Mounting privacy backlash against always-on AI devices and data practices, highlighted by a U.S. class-action lawsuit against Meta over its AI smart glasses, in which Kenya-based workers reviewed user footage, including nudity, sex, and toilet use, contradicting Meta's privacy claims.[2] This joins EU probes into Meta's scraping of public Facebook/Instagram posts for AI training via short opt-out windows, covert Pixel tracking, and noyb cease-and-desist actions under the GDPR.[1]

A Reminder About Florida’s Ban on Offshore Health Data Storage: What Providers and Vendors Should Know

Core event: In May 2023, Florida enacted Senate Bill 264, amending the Florida Electronic Health Records Exchange Act (codified at Fla. Stat. § 408.051(3)) to ban healthcare providers using certified electronic health record technology (CEHRT) from storing patient information outside the continental United States, its territories, or Canada—including in third-party cloud services or subcontracted facilities.[1][2][4][9]

Colorado Moves to Replace AI Law’s Bias Audit Requirements With Transparency Framework: 5 Action Steps for Employers

On March 17, 2026, the Colorado AI Policy Work Group unanimously approved a proposed rewrite of the state's landmark 2024 AI law (SB24-205, the Colorado AI Act), replacing mandatory bias audits, risk impact assessments, and algorithmic discrimination reporting with a streamlined transparency-and-notice framework for "Automated Decision Making Technology" (ADMT) in consequential decisions (e.g., employment, housing, education, insurance).[1][2][4][5] Key changes include upfront public notices of AI use (via links or postings), 30-day post-adverse-decision disclosures with rights to data correction and human review, recordkeeping for three years, exclusions for common tools like spell-checkers or general LLMs, and a delayed effective date of January 1, 2027.[1][2][4]

White House Outlines AI Policy Agenda in New National Framework

On March 20, 2026, the White House under President Donald J. Trump released the National Policy Framework for Artificial Intelligence, a set of non-binding legislative recommendations to Congress for a unified federal AI approach emphasizing innovation, safety, and oversight.[1][3][4]

Indiana Establishes Digital Asset Framework and Requires Cryptocurrency Options in Public Retirement Plans

Core Event: On March 3, 2026, Indiana Governor Mike Braun signed House Enrolled Act 1042 (HEA 1042) into law, establishing a comprehensive digital asset framework and mandating cryptocurrency investment options in select state-administered public retirement plans by July 1, 2027.[2][3][4][5][7] The law requires self-directed brokerage accounts with at least one cryptocurrency option (excluding payment stablecoins) in plans like Hoosier START (457(b)/401(a) deferred compensation), legislators’ defined contribution plan, and specified public employees’/teachers’ funds; boards set guidelines, valuations, and fees.[3][4][5][6][7] It also prohibits most state/local agencies (except Indiana Department of Financial Institutions) from restricting digital asset use as payment, self-custody in wallets, blockchain activities (e.g., nodes, staking, mining), or imposing unequal taxes/fees, while clarifying noncustodial software use isn't money transmission.[1][2][3][7]

China Raises the Stakes on Trade Secret Protection: What Companies and Counsel Need to Know About the New Rules

On February 24, 2026, China's State Administration for Market Regulation (SAMR) issued the Provisions on the Protection of Trade Secrets, a major overhaul replacing the outdated Several Provisions on Prohibiting Infringement of Trade Secrets from 1995 (last amended 1998), which had 12 articles; the new Provisions expand to 31 articles and take effect June 1, 2026.[1][2][4][8] Key developments include an expanded definition of trade secrets covering technical information (e.g., algorithms, computer programs, code, AI datasets) and business information (e.g., customer data, sales strategies, financial plans) with actual or potential commercial value, plus lower enforcement barriers like a presumption of infringement if substantial similarity and access are shown.[1][4][5] They also introduce extraterritorial reach, detailed confidentiality measures (e.g., tiered access, data encryption, employee exit protocols), and alignment with the digital economy.[1][4][7]

Grammarly’s AI tool mimicked experts without their consent. Now it’s being sued

Grammarly, owned by Superhuman Platform Inc., launched its "Expert Review" AI tool in August 2025, allowing users to pay $12/month for real-time writing feedback mimicking the styles and advice of prominent figures such as journalist Julia Angwin, author Stephen King, and astrophysicist Neil deGrasse Tyson, without obtaining their consent.[1] On March 12, 2026 (noted as "Wednesday" in reports), Angwin filed a class-action lawsuit in the U.S. District Court for the Southern District of New York, alleging misappropriation of names and identities for commercial gain in violation of New York and California privacy and publicity rights laws; the suit seeks class certification, an injunction, and damages for affected journalists, authors, editors, and others.[1]

Feedback on Frictionless Opt-Outs: CalPrivacy Seeks Comments from Businesses on Experience with Opt-Out Preference Signals

Core event: The California Privacy Protection Agency (CalPrivacy) issued an invitation for preliminary comments on March 6, 2026, seeking stakeholder input, especially from businesses, on experiences with Opt-Out Preference Signals (OOPS) and reducing friction in exercising privacy rights under the CCPA; comments are due by April 6, 2026.[1][3][7]

Goodwin Launches OC Office With 3 Ex-Jones Day Partners

Core event: Goodwin Procter LLP launched its first Orange County office in Newport Beach, California, on March 17, 2026, by recruiting three partners—Richard Grabowski, John Vogt, and Ryan Ball—from Jones Day to lead it.[1][3][7][9] These attorneys specialize in cybersecurity, privacy, technology litigation, trade secrets, and consumer financial services, bringing elite trial experience including top defense verdicts and summary judgments in high-stakes cases.[1][6][8][10]

DCP+ Podcast Episode 5: Georgina Merhom on How Quality Data Can Transform Financial Services, Part 2

DCP+ Podcast Episode 5 (Part 2), released on March 20, 2026, features Georgina Merhom discussing structural gaps in financial data ecosystems, data traceability, and evolving data ownership models in financial services. Hosted by Kaylee Cox Bankston and Boris Segalis of Morrison & Foerster LLP, the episode continues their earlier interview with Merhom, founder and CEO of SOLO, a fintech platform focused on business financial management, real-time data consolidation, and first-party credit reporting.[1]

Emory Law School Launching An AI Study Program

Emory University School of Law in Atlanta is launching a new AI and the Law concentration starting Fall 2026 (academic year 2026–27), providing students with specialized coursework and interdisciplinary training on AI's legal implications, including regulation, liability, intellectual property, and ethical issues in areas like healthcare, work, and data science.[1][2][3][4][5]

South Dakota Enacts Licensing Framework for Virtual Currency Kiosks

On March 11, 2026, South Dakota Governor Larry Rhoden signed Senate Bill 98 (SB 98) into law, establishing a licensing framework for virtual currency kiosks by classifying their operations as money transmission and imposing anti-fraud measures.[1][2][6][8] The law requires operators to obtain a money transmission license, cap daily transactions at $1,000 and 30-day limits at $10,000 per user, limit fees to 3% of transaction value, issue full refunds (including fees) within 72 hours for verified fraud victims, display fraud warnings, maintain live customer service from 8 a.m. to 10 p.m., use blockchain analytics to block high-risk addresses, verify user identities with government ID, comply with Bank Secrecy Act/AML rules, and submit annual reports on volumes, complaints, refunds, and suspicious activities.[1][2][3][4][6]
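The numeric limits above lend themselves to a simple pre-transaction check. The sketch below is illustrative only: the function and constant names (`check_kiosk_transaction`, `DAILY_CAP`) are hypothetical, not drawn from SB 98's text or any operator's actual compliance system, and it assumes rolling 24-hour and 30-day windows, which the statute may define differently.

```python
from datetime import datetime, timedelta

# Hypothetical encoding of SB 98's per-user limits: $1,000 per day,
# $10,000 per rolling 30 days, and fees capped at 3% of transaction value.
DAILY_CAP = 1_000.00
ROLLING_30_DAY_CAP = 10_000.00
MAX_FEE_RATE = 0.03

def check_kiosk_transaction(amount, fee, prior_txns, now=None):
    """Return (allowed, reason). prior_txns is a list of (timestamp, amount)."""
    now = now or datetime.utcnow()
    day_total = sum(a for t, a in prior_txns if now - t < timedelta(days=1))
    month_total = sum(a for t, a in prior_txns if now - t < timedelta(days=30))
    if fee > amount * MAX_FEE_RATE:
        return False, "fee exceeds 3% cap"
    if day_total + amount > DAILY_CAP:
        return False, "daily $1,000 cap exceeded"
    if month_total + amount > ROLLING_30_DAY_CAP:
        return False, "30-day $10,000 cap exceeded"
    return True, "ok"
```

A real system would also need the law's identity-verification, refund, and reporting obligations, which this sketch deliberately omits.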

Four Standards Law Firms Should Use to Evaluate AI Marketing Tools

The article "Four Standards Law Firms Should Use to Evaluate AI Marketing Tools," published March 23, 2026, by Jamie Adams of Scorpion, outlines four key criteria for law firms to assess AI marketing solutions amid hype and rapid adoption. It argues that effective AI must deliver measurable business outcomes, such as increased signed cases, reduced cost per client, and revenue growth, rather than superficial metrics like website traffic or form submissions.[1]

CalPrivacy Issues $1.1 Million Fine for CCPA Violations Involving Student Privacy

The California Privacy Protection Agency (CalPrivacy) fined PlayOn Sports (2080 Media, Inc.) $1.1 million on March 3, 2026, for CCPA violations stemming from its GoFan digital ticketing platform used by about 1,400 California schools. PlayOn collected personal information via tracking technologies (e.g., cookies, Meta Pixel) for targeted advertising, constituting "sale" and "sharing" under CCPA, but failed to provide effective opt-out mechanisms.[1][4][5][9] Violations included phone/email opt-outs that didn't block website trackers, reliance on third-party tools like NAI/DAA, non-recognition of Global Privacy Control signals, coercive "agree-only" banners forcing consent (especially from students), misleading privacy notices claiming no sales, and inadequate disclosures.[1][3][5][7]
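For context on the Global Privacy Control (GPC) finding above: GPC is a browser signal sent as a `Sec-GPC: 1` request header, which California regulators treat as a valid opt-out of sale/sharing under the CCPA. The minimal sketch below, with hypothetical helper and tag names (`tags_to_load`, `TRACKING_TAGS`), shows the kind of server-side gating that phone/email opt-outs alone cannot provide; it is an assumption-laden illustration, not a compliance recipe.

```python
# Hypothetical tag names for third-party trackers that would constitute
# a "sale" or "sharing" of personal information under the CCPA.
TRACKING_TAGS = {"meta_pixel", "ad_cookies"}

def honors_gpc(headers):
    """True when the browser sends the Global Privacy Control opt-out signal."""
    return headers.get("Sec-GPC") == "1"

def tags_to_load(headers, user_opted_out=False):
    """Suppress sale/share trackers when GPC or a recorded opt-out applies."""
    if honors_gpc(headers) or user_opted_out:
        return set()  # serve the page with no targeted-advertising trackers
    return set(TRACKING_TAGS)
```

The design point: the opt-out must be enforced where the trackers are injected, not merely recorded in a back-office system disconnected from the website.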

Court Allows Discovery Into Insurer’s Use of AI to Deny Claims

A Minnesota federal court ruled on March 9, 2026, in the Lokken case, granting plaintiffs' motion to compel discovery from UnitedHealth Group (UHC) into its use of the AI tool nH Predict—developed by Optum subsidiary naviHealth—for evaluating and denying Medicare Advantage post-acute care claims.[1][2][4] The court approved broad production of documents on nH Predict's development, goals, use policies, employee training, oversight, and certain government investigations, but denied requests for source code, internal probes, financial data, and employee incentives.[1][2] UHC maintains nH Predict is a non-generative care-support tool aiding recovery planning, with final decisions by physicians per CMS guidelines.[2]

What If We’re Just Mad This March? — See Generally

Above the Law's "What If We’re Just Mad This March? — See Generally" (published March 22, 2026) is a satirical newsletter aggregating recent legal controversies, framed as a "March anger bracket" parodying NCAA March Madness, in which readers vote on which Trump administration lawyers deserve disbarment first across four regions.

Privacy Tip #483 – Whistleblower Alleges DOGE Employee Stole Social Security Data on a Thumb Drive

A whistleblower complaint alleges that a former Department of Government Efficiency (DOGE) software engineer stole two highly sensitive U.S. Social Security Administration (SSA) databases—"Numident" and the "Master Death File"—containing records on over 500 million living and dead Americans, including Social Security numbers, birth data, citizenship, race, ethnicity, and parents' names, by copying them onto a thumb drive.[1][2][3] The unnamed engineer reportedly boasted to colleagues at his new employer, a government contractor, about possessing the data and planning to use it there, prompting an investigation by SSA's independent inspector general.[1][2][3] SSA denies the theft, calling it "fake news" from The Washington Post aimed at scaring seniors, while all named parties—SSA, the ex-employee, and the contractor—refute the claims.[1][2]

AI Vendor Contracts: The Terms And Conditions Trap

An Above the Law article warns in-house lawyers of hidden risks in standard AI vendor contracts, exemplified by one agreement granting the vendor co-ownership of customer-generated content and a perpetual license to data via broadly defined "Aggregated Statistics," with no anonymization standards or data recovery options.[1]

State Enforcers Step Up Scrutiny of Foreign Data Transfers: What Organizations Should Know

State enforcers, particularly in Florida and Texas, have intensified scrutiny of organizations' foreign data transfers involving sensitive personal information like precise location, biometrics, and genomic data, amid a broader shift to enforcement of existing U.S. state privacy laws in 2026.[6][7] The core development builds on a 2024 federal Department of Justice (DOJ) rule restricting outbound transfers of "covered data" to "countries of concern" (e.g., China, Iran, North Korea), now amplified by state-level actions targeting vendor relationships, data brokers, and service providers potentially exposing U.S. data to foreign adversaries.[6]

Opinion | The Economics of Regulating AI

The March 20, 2026, opinion piece "The Economics of Regulating AI" is tied to no single core event; it critiques government overreach in regulating unfamiliar industries like AI and advocates alternatives to heavy-handed rules. It reflects broader 2026 debates amid surging global AI investment exceeding $2 trillion, which is driving economic growth but risking bubbles from inflation, high interest rates, and unmet productivity expectations.[2][7]

TRUMP America AI Act Bill Sets Direction for Future US AI Regulation

Core event: On March 18, 2026, Sen. Marsha Blackburn (R-TN) released a discussion draft (variously reported at 291 to 391 pages) of the TRUMP AMERICA AI Act (The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act), proposing the first comprehensive federal AI regulatory framework to preempt state laws.[1][3][4][11][13]

Utah SB 275’s “Digital Identity Bill of Rights”: What It Could Mean for Businesses

Utah SB 275 passed unanimously in both legislative chambers, establishing the nation's first "Digital Identity Bill of Rights" and a voluntary state-endorsed digital identity program. The bill creates rights for residents to control their digital identities, including selective disclosure of attributes (e.g., name without birthdate or address), protection from compelled digital ID use, and safeguards like explicit consent, purpose limitation, and a "duty of loyalty" prohibiting exploitation by providers.[1][2][3][5] It mandates standards for digital wallets, verifiers, and relying parties, with requirements for tamper-resistant tech, secure processing, and minimal data use; the program endorses specific attributes like name, birthdate, image, and Utah address.[1][4][5]
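Selective disclosure, as described above, can be illustrated with a toy sketch: a wallet releases only the attributes a verifier requests and the holder consents to. All names here (`disclose`, the sample `IDENTITY` record) are hypothetical, and the sketch simplifies away the cryptographic tamper-resistance SB 275 contemplates.

```python
# Hypothetical identity record held in a digital wallet.
IDENTITY = {
    "name": "Jane Doe",
    "birthdate": "1990-01-01",
    "image": "<photo-bytes>",
    "address": "Salt Lake City, UT",
}

def disclose(requested, consented):
    """Release only attributes both requested by the verifier and
    explicitly consented to by the holder (purpose limitation +
    minimal data use)."""
    allowed = set(requested) & set(consented)
    return {k: v for k, v in IDENTITY.items() if k in allowed}
```

So a verifier asking for name and birthdate, where the holder consents only to name, receives the name alone, mirroring the bill's example of sharing a name without a birthdate or address.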

GSA Proposes New Contract Clause Focused on the Government Use of AI

Core event: On March 6, 2026, the U.S. General Services Administration (GSA) released a draft contract clause, GSAR 552.239-7001, "Basic Safeguarding of Artificial Intelligence Systems" (Feb 2026 GSAR Deviation), for inclusion in GSA solicitations and contracts involving AI capabilities, particularly ahead of the Multiple Award Schedule (MAS) refresh planned for late March or April 2026.[1][3][4][5]

Navigating Global Background Checks- Key Insights for Employers

No single core event underlies this headline, which summarizes ongoing 2026 regulatory updates and trends in global background checks for employers. These center on "Clean Slate" and "Fair Chance" reforms tightening access to criminal history, alongside rising demand for compliant international screening amid global hiring.[1][5][6]

The New York Congressional Race Turning Into a Bitter AI War

Core event: In New York's 12th Congressional District race for the 2026 Democratic primary, AI industry opponents of strict regulations, via the super PAC Leading the Future, launched over $1 million in negative ads targeting Democratic state Assemblymember Alex Bores, who co-sponsored New York's RAISE Act imposing safety and transparency rules on frontier AI models.[1][2][5][10]

New Surveillance Tools in Retail Stores Pose Legal Risks

Retailers are increasingly deploying AI-powered surveillance tools like facial recognition, AI-enhanced cameras, and body cameras on security guards to combat rising theft rates, but these raise significant legal risks under privacy laws.[1][7] The core event is a March 19, 2026, legal alert highlighting that audio recording without consent can violate the Federal Wiretap Act and state all-party consent laws (e.g., in California and Illinois), while biometric data collection implicates state restrictions and FTC rules against unfair practices.[1][7]
