Washington Governor Signs AI Companion Chatbot Regulation into Law
What Happened
49 entries in Corporate Counsel Tracker
On March 27, 2026, New York Governor Kathy Hochul signed chapter amendments finalizing the Responsible AI Safety and Education (RAISE) Act, regulating developers of frontier AI models—defined as models trained with over 10^26 FLOPs and compute costs exceeding $100 million, including those via knowledge distillation.[1][3][8] The law takes effect January 1, 2027, applying to developers with annual revenues over $500 million operating in New York, requiring safety protocols, 72-hour incident reporting, transparency reports, annual frameworks, and assessments by a new DFS office; accredited universities are exempt.[1][3][5][8]
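The statutory thresholds above can be read as a simple conjunction of tests. The sketch below is purely illustrative, not statutory text; the function names and the "and" logic joining the two model thresholds are assumptions based on the summary's wording.

```python
# Hypothetical encoding of the RAISE Act thresholds as described above.
# Names and structure are illustrative, not the law's actual text.

FLOP_THRESHOLD = 10**26                 # training compute threshold
COMPUTE_COST_THRESHOLD = 100_000_000    # $100M training cost
REVENUE_THRESHOLD = 500_000_000         # $500M annual developer revenue

def is_frontier_model(training_flops: float, compute_cost_usd: float) -> bool:
    """Per the summary, a model qualifies if it crosses both thresholds."""
    return (training_flops > FLOP_THRESHOLD
            and compute_cost_usd > COMPUTE_COST_THRESHOLD)

def is_covered_developer(annual_revenue_usd: float,
                         is_accredited_university: bool) -> bool:
    """Developers over the revenue threshold are covered; universities are exempt."""
    return annual_revenue_usd > REVENUE_THRESHOLD and not is_accredited_university
```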
Core event: Major AI companies have launched public relations and engagement initiatives, described as a "charm offensive," to counter widespread unpopularity of artificial intelligence revealed by recent polls, aiming to ease public concerns about risks like misinformation, job loss, privacy invasion, and bias.[1][3][5]
A survey reveals that 15% of employees over 55 report an increased desire to retire due to rising AI adoption at work, with many viewing it as the final technological shift after personal computing, the internet, and smartphones.[1][5] This trend emerges as companies accelerate AI integration, leading to layoffs and role changes disproportionately affecting mid-career workers aged 30-50, particularly in tech and white-collar sectors.[7][8]
University of Houston law professor Renee Knake Jefferson's "Legal Ethics Roundup" (LER No. 126, published April 6, 2026) summarizes recent U.S. legal ethics developments, including Pam Bondi's departure from a role, Emil Bove's recusal, a "Strip Law" issue, widespread judge AI use amid lawyer sanctions, and viral judge misconduct videos.[1][2]
Core event: Sen. Bernie Sanders published an op-ed on April 3, 2026, expressing dire concerns about AI's threats to jobs, democracy, privacy, and human survival, prompting a counter-op-ed on April 6 titled "Opinion | Bernie Sanders Is Wrong About AI Innovation," which argues AI combined with human ingenuity drives progress.[3][1]
Core event: Lower-ranking employees, such as executive assistants, recruiters, coders, and valets, are driving AI adoption in companies through self-taught experiments, creating efficient workflows that spread bottom-up to executives, rather than top-down mandates.[headline summary]
Core Event
Anthropic is in discussions to invest approximately $200 million in a new private-equity-backed joint venture aimed at deploying its Claude AI models across portfolio companies of major buyout firms to accelerate enterprise adoption.[1][2][5][10] The venture, targeting up to $1 billion in total funding, would function as a hybrid consulting and implementation platform, embedding forward-deployed engineers—modeled on Palantir's approach—to integrate AI for automation, workflow transformation, and operational efficiencies beyond basic subscriptions.[1][2][4]
Core event: OpenAI and Anthropic are projected to spend nearly $65 billion combined in 2026 on training and operating AI models, exemplifying the spiraling costs of AI development amid explosive infrastructure demands.[8] This ties into Anthropic's April 6-7 announcements of expanded deals with Broadcom and Google, securing 3.5 gigawatts of TPU-based compute capacity starting 2027 to fuel growth, plus Broadcom's long-term supply of custom AI chips to Google through 2031.[3][5]
Core event: AI agents are shifting e-commerce from human-controlled interfaces (websites/apps) to autonomous machine-mediated transactions, where agents handle browsing, querying inventory, comparisons, and purchases on users' behalf without visiting brand sites.[1][2] This "agentic AI" era prioritizes machine-readable data, protocols, and structured APIs over optimized funnels, as exemplified by OpenAI's Operator (browser-based task execution), Anthropic's Model Context Protocol (MCP) for tool/data connections, and Google's Universal Commerce Protocol (UCP) enabling direct sales in AI environments like Gemini and Copilot.[headline]
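The shift described above from human-facing storefronts to machine-readable data can be sketched with a toy example. This is not the actual MCP or UCP specification; the catalog schema and `query_inventory` function are hypothetical illustrations of the kind of structured endpoint an agent would query instead of browsing a website.

```python
# Illustrative sketch (not the real MCP/UCP specs): structured product data
# an agent-facing endpoint might expose instead of a human-oriented page.

CATALOG = [
    {"sku": "SHOE-001", "name": "Trail Runner", "price_usd": 89.00, "in_stock": True},
    {"sku": "SHOE-002", "name": "Road Racer", "price_usd": 129.00, "in_stock": False},
]

def query_inventory(max_price_usd: float, in_stock_only: bool = True) -> list[dict]:
    """What an agent does on a user's behalf: filter structured data, no UI."""
    return [
        item for item in CATALOG
        if item["price_usd"] <= max_price_usd
        and (item["in_stock"] or not in_stock_only)
    ]
```

The design point is that fields like `price_usd` and `in_stock` are unambiguous to a machine, whereas an optimized marketing funnel is not.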
Goldman Sachs released a report on April 6, 2026, analyzing 40 years of labor market data from over 20,000 workers since 1980, warning that AI-displaced workers face prolonged economic hardship, including a 3% average pay cut upon reemployment, 10 percentage points less real earnings growth over a decade compared to non-displaced workers, and higher unemployment risk, with effects worsening during recessions.[1]
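The report's two headline figures (a 3% pay cut at reemployment and 10 percentage points less real earnings growth over a decade) compound. The worked example below assumes a hypothetical $80,000 salary and an illustrative 20% decade of real growth for non-displaced workers; only the 3% and 10pp figures come from the summary.

```python
# Worked example of the Goldman Sachs figures above. The $80,000 salary and
# the 20% baseline decade of real growth are assumptions for illustration.

baseline_salary = 80_000.0
nondisplaced_growth = 0.20                      # assumed baseline decade of growth
displaced_growth = nondisplaced_growth - 0.10   # 10 pp less, per the report

reemployed_salary = baseline_salary * (1 - 0.03)        # 3% average pay cut
nondisplaced_after_decade = baseline_salary * (1 + nondisplaced_growth)
displaced_after_decade = reemployed_salary * (1 + displaced_growth)

gap = nondisplaced_after_decade - displaced_after_decade  # compounding shortfall
```

Under these assumptions the displaced worker ends the decade more than $10,000 per year behind, before accounting for any unemployment spells.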
Core event: In 2025, AI directly contributed to over 55,000 job cuts across U.S. tech firms, a 12-fold increase from two years prior, with the trend accelerating into 2026 via major announcements like Block's 4,000 roles, Amazon's 16,000 corporate positions, and ongoing cuts at Meta, Atlassian, and Pinterest.[1][7][9]
The State AG Report – 04.02.2026 is a curated newsletter by Cozen O'Connor summarizing recent state attorneys general (AGs) and federal regulatory actions across the US, published on April 6, 2026.[3][7][11]
Marketing leaders in financial services are shifting strategies amid tighter budgets, leaner teams, and rapid tech changes, prioritizing data-driven proof of impact over traditional efforts. The core development is a transition from asking "are we doing enough?" to demanding quantifiable results, as highlighted in Integreon's April 6, 2026 analysis[INPUT]. Budgets reflect the squeeze: 39% are flat and 7% are decreasing, 60% of institutions spend 5% or less of revenue on marketing, and digital channels (62% of budgets) are favored for their measurable attribution[7][11].
Core event: Broadcom announced on April 6, 2026, an expanded collaboration to develop and supply custom AI chips—specifically Google's Tensor Processing Units (TPUs)—along with networking components for Google's next-generation AI data centers through 2031, and to provide Anthropic with approximately 3.5 gigawatts of TPU-based computing capacity starting in 2027, contingent on Anthropic's commercial success.[1][3][5]
Dreamix CTO Denis Danov published an article on April 6, 2026, outlining three key mistakes clients make when selecting software development partners: (1) prescribing team composition before scoping the problem, (2) assuming AI is the solution without validation, and (3) leaving business outcomes undefined at kickoff. The piece draws from Dreamix's client experiences, including a case where a client-mandated senior team led to engineer demotivation, project delays, and knowledge loss; AI misapplications where rule-based workflows sufficed; and over-engineering beyond business needs, as in a case where 90% accuracy already exceeded the prior 80% baseline, making further optimization unnecessary.[headline summary]
Core Event: A Reuters Breakingviews column published April 6, 2026, argues that the AI boom's lofty ambitions are colliding with a harsh economic reality: building the necessary infrastructure—data centers, power grids, and chips—could cost up to $7 trillion globally over the next decade, far exceeding current projections and funding.
The "Tech, Media & Telecom Roundup: Market Talk" on April 6, 2026, compiles insights on sector developments, including Shopify's database re-architecture for AI-driven commerce, strong performance in communication services fueled by AI investments and M&A, and emerging trends like headless commerce and unified retail platforms.[1][2][3]
What Happened
Samsung Electronics forecast a record Q1 2026 operating profit of 57.2 trillion won ($38 billion), an more than eightfold jump from 6.69 trillion won a year earlier and nearly triple its prior record of 20 trillion won from Q4 2025. This exceeded analyst estimates (e.g., LSEG's 40.6 trillion won) amid revenue growth of 68% to 133 trillion won.[1][3][5][6]
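The multiples quoted above check out arithmetically, as a quick calculation with the summary's figures (in trillions of won) shows:

```python
# Verifying the multiples quoted for Samsung's Q1 2026 guidance
# (figures in trillions of won, taken from the summary above).

q1_2026_profit = 57.2
q1_2025_profit = 6.69
prior_record = 20.0   # Q4 2025

yoy_multiple = q1_2026_profit / q1_2025_profit   # ~8.6x, "eightfold jump"
vs_record = q1_2026_profit / prior_record        # ~2.9x, "nearly triple"
```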
Nvidia acquired SchedMD, developer of the open-source Slurm workload manager, in December 2025 to bolster its AI and supercomputing ecosystem.[2][3] Slurm schedules computing tasks across hardware from Nvidia, AMD, Intel, and others, powering AI model training at companies like Meta, Mistral, and Anthropic, as well as government supercomputers for weather and security research.[2][3] Nvidia pledged to keep Slurm open-source and vendor-neutral, but AI executives, supercomputing specialists, and analysts worry it could prioritize Nvidia hardware in updates or roadmaps, eroding competition.[1][2][3]
What Happened
In 2025, companies directly attributed approximately 55,000 job cuts to artificial intelligence—a more than 1,100% increase from 2023 levels[2]. The layoffs have accelerated into 2026, with major tech companies announcing significant cuts: Block eliminated 4,000 roles, Amazon cut 16,000 corporate positions, and Meta, Atlassian, and Pinterest have announced additional reductions[6]. Simultaneously, career advice is shifting focus from traditional job-search tactics to relationship-building as the primary determinant of reemployment speed[1].
The core event is the publication of an opinion article on April 6, 2026, arguing that AI can and must be governed like past technologies such as nuclear weapons and recombinant DNA, contrasting this with Silicon Valley's resistance amid rapid AI development.[INPUT]
Lean In, the nonprofit founded by Sheryl Sandberg, published new research on March 2-6, 2026, surveying 1,015 U.S. adults, revealing a gender gap in workplace AI use: 78% of men vs. 73% of women have used AI, with 33% of men using it daily compared to 27% of women.[1][3][6][8] Women face barriers including 32% higher concern about being seen as cheating, greater ethical and accuracy worries, and less support—only 30% of women vs. 37% of men report manager encouragement to use AI, while men are 27% more likely to receive praise for it.[2][3][6][8]
Brands like Aerie (American Eagle Outfitters) are adopting "No AI" disclaimers in marketing to differentiate from AI-generated "slop" and appeal to skeptical consumers[1][3][5][7]. The core event is Aerie's ad campaign last month (March 2026) promising "We commit: No AI-generated bodies or people," explicitly labeling content as human-made to build trust[1][3][7].
Core event: On April 6, 2026, Arnall Golden Gregory LLP (AGG), an Am Law 200 law firm based in Atlanta with offices in Washington, D.C., promoted four leaders to chief officer roles to advance its AI strategy, talent growth, and future expansion.[1][3][8]
No specific core event on April 5, 2026, matches the headline "The Singularity has learned to teach itself," as search results lack direct reports of such a development; the phrase likely refers hyperbolically to ongoing AI self-improvement advances, like OpenAI's GPT-5.4 (released March 5, 2026), which autonomously executes multi-step workflows and exceeds human benchmarks on desktop tasks (75% vs. 72.4% human baseline on OSWorld-V).[2][6] Skeptics argue no true "Singularity"—a point of uncontrollable superintelligence—is occurring, dismissing it as incremental software improvements in larger, faster systems without fundamental leaps.[1]
Older workers, particularly those aged 55+, are increasingly choosing early retirement over adapting to AI integration in their jobs, viewing it as a final technological disruption after waves of personal computing, the internet, and smartphones.[1][2] This trend manifests as professionals exiting amid company-mandated AI adoption, early retirement offers, or perceived threats to autonomy and skills, with U.S. workforce participation for those 55+ hitting a record low of 37.2% in March 2026 per Bureau of Labor Statistics data.[2] Examples include Luke Michel, a 68-year-old content strategist at Dana-Farber Cancer Institute who retired via an early offer rather than learn AI tools, and an unnamed 40-year IT veteran who left post-acquisition when required to use the parent company's AI systems.[2]
OpenAI urged the attorneys general of California and Delaware to investigate Elon Musk and associates for alleged "improper and anti-competitive behavior," claiming his ongoing lawsuit—seeking over $100 billion in damages—could cripple its nonprofit foundation and hinder efforts to develop artificial general intelligence (AGI) for humanity's benefit.[1][2][3][4]
The core event is the US military's use of AI to conduct over 1,000 strikes on Iranian targets in the first 24 hours of a war, exceeding 3,000 by week's end, twice the pace of the "shock and awe" phase of the 2003 Iraq invasion, while U.S. Central Command (Centcom) maintains that humans approve every target. This mirrors Israel's Lavender AI system in Gaza, where operators spent roughly 20 seconds per target, often just confirming the individual was male, reducing the human role to approval. In parallel, Cigna's 2023 algorithm led physicians to deny claims in 1.2 seconds on average, with one doctor rejecting 60,000 claims monthly via batch approvals.[Input]
Core event: On April 6, 2026, Sidley Austin LLP announced that Scott Bennett, former co-head of Cravath, Swaine & Moore LLP's Venture Capital & Growth Equity practice and Digital Assets practice, joined Sidley as a partner in its New York office and Head of Technology Capital Markets.[1][3]
Apple's App Store saw 235,800 new apps submitted in Q1 2026, an 84% increase from Q1 2025, reversing a 48% decline from 2016-2024, driven by vibe coding: AI tools like Cursor and Anthropic's Claude Code that generate code from natural language prompts.[1][5][6] This follows 557,000-600,000 new apps in 2025, with Sensor Tower noting 56% monthly submission growth by Dec 2025 and 54.8% in Jan 2026.[1][2][3][5] Vibe coding, coined by OpenAI cofounder Andrej Karpathy in Feb 2025, enables rapid app creation even by novices, flooding submissions and straining review processes, with review times rising from about 24 hours to as long as 30 days.[1][3][6]
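The 84% growth figure implies a Q1 2025 baseline of roughly 128,000 submissions, which a one-line calculation confirms:

```python
# Implied Q1 2025 baseline from the figures above: 235,800 submissions
# representing an 84% year-over-year increase.

q1_2026_submissions = 235_800
growth = 0.84                                              # 84% YoY, per Sensor Tower
q1_2025_submissions = q1_2026_submissions / (1 + growth)   # ~128,000 implied baseline
```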
The National Highway Traffic Safety Administration (NHTSA) closed its investigation into Tesla's "Actually Smart Summon" (ASS) feature on April 6, 2026, after determining that reported incidents were rare, low-speed, and involved only minor property damage with no injuries or fatalities.[1][2][4][5] The probe covered approximately 2.59-2.6 million Tesla vehicles (Models S, X, 3, Y), reviewing about 100-159 incidents out of millions of sessions, where vehicles typically struck stationary objects like parked cars, garage doors, or gates early in operation due to detection failures.[1][2][4][7][8]
Broadcom announced a long-term agreement with Google on April 6, 2026, to develop and supply custom Tensor Processing Units (TPUs) and networking components for Google's next-generation AI racks through 2031[1][2][3][5]. The deal positions Broadcom as Google's primary design partner for TPUs, which power advanced AI models like Gemini, and includes supply assurance for hardware connecting large-scale chip clusters[1][3]. Separately, Broadcom signed a tripartite arrangement providing Anthropic access to ~3.5 gigawatts of TPU-based computing capacity starting in 2027[1][2][3][4].
Core Event: Anthropic announced Claude Mythos, its most powerful AI model to date, alongside Project Glasswing, an initiative granting early access to over 40 major technology companies to identify and patch vulnerabilities in their systems before the model's broader release.[1][6] The announcement came approximately two weeks after Anthropic accidentally leaked internal documents describing the model due to a misconfigured content management system.[4][6]
Intel announced on April 7, 2026, its partnership with Tesla, SpaceX, and xAI on the Terafab project, a massive semiconductor initiative to produce 1 terawatt of compute annually for AI and robotics. The core event involves Intel contributing expertise in chip design, fabrication, packaging, and its 18A process node to two planned facilities in Austin, Texas: one for terrestrial AI chips (AI5/AI6 for Tesla's Full Self-Driving, Optimus robots, and Cybercab) and another for space-optimized D3 chips for orbital data centers launched via SpaceX Starship.[1][2][3][6][9]
No specific core event ties directly to the headline; it addresses ongoing trends in AI-powered attacks, supply chain vulnerabilities, and regulatory pressures reshaping cybersecurity. Recent developments include a supply chain attack on the widely used AI package LiteLLM that put thousands of companies at risk[15], AI-assisted attacks targeting GitHub repositories[13], and predictions of autonomous AI agents executing multi-stage attacks at machine speed, as seen in Anthropic-documented cases affecting 30 organizations[5]. Supply chain attacks have surged 67% since 2021 per IBM data, with some recent measures showing growth of over 700%, and malicious package uploads to open-source repositories are up 156%[1][5][9].
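One standard defense against the package-tampering risks described above is hash pinning: record the digest of a known-good artifact and refuse anything that does not match. This is a minimal standalone sketch of the underlying check (pip offers this natively via `--require-hashes`); the function name and sample bytes are illustrative.

```python
# Minimal hash-pinning check against tampered packages. Illustrative sketch;
# real deployments use pip's --require-hashes or a lockfile with digests.
import hashlib

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Accept a downloaded artifact only if it matches the pinned digest."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

# Usage: pin the hash of a known-good release at review time, then verify
# every subsequent download against it before installing.
good_release = b"known-good wheel contents"
pin = hashlib.sha256(good_release).hexdigest()
```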
Core event: Elon Musk amended his ongoing lawsuit against OpenAI on April 7, 2026, requesting that OpenAI's nonprofit arm receive zero damages and seeking the removal of Sam Altman from its board, amid claims of breach of nonprofit principles during OpenAI's for-profit shift.[1][2][5][7]
Core event: OpenAI announced on January 16, 2026, that it would test advertisements in ChatGPT starting in February 2026, with ads appearing at the bottom of relevant responses for free and low-tier users, clearly labeled and separated from organic content; no ads near sensitive topics like health or politics[1][7][9].
Core event: Recent releases of unflattering body cam footage from Tiger Woods' March 2026 Florida DUI arrest—showing him surprised during handcuffing, admitting to medications (zero alcohol breath test), and mentioning a call to "the president"—and Justin Timberlake's June 2024 Sag Harbor, NY DWI arrest have gone viral, fueling memes and jokes despite Woods' not guilty plea and Timberlake's guilty plea to impaired driving (fine and community service).[1][6][7] Timberlake sued Sag Harbor Village and police on March 1, 2026, to block full release of 8 hours of footage under FOIL, citing privacy invasion, stigma, and "irreparable harm" from exposing vulnerable moments like his "ruin the tour" comment.[2][4]
Taiwan's National Security Bureau reported to lawmakers that China is intensifying efforts to acquire advanced semiconductor technology and talent from Taiwan to circumvent international "containment" measures, including U.S.-led export controls.[1][3][5][11] The agency detailed China's use of direct luring of high-tech industries like AI and semiconductors, indirect poaching via networks of firms, technology theft, and procurement of controlled goods to obtain Taiwan's advanced-process chips.[1][5] Taiwan has repeatedly busted such illegal networks and enforces strict laws to block tech transfers.[1][3]
OpenAI launched a pilot ad program embedding sponsored links directly below ChatGPT responses for free-tier U.S. users, with expansion to other countries underway; ads are clearly labeled "Sponsored," appear conversationally based on user prompts (e.g., "best PI lawyer in Vegas"), and may include interactive "Chat with Sponsor" features[1][3][5]. This targets high-intent queries like legal advice on divorce or accidents, positioning advertisers in front of users at decision moments, unlike sidebar ads on traditional platforms[1][5]. The program is invite-only, operates on a high-cost CPM model (~$60, potentially $200K+ monthly minimum), and lacks a confirmed broad launch date[3].
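A back-of-envelope calculation shows what the reported pricing implies: at a ~$60 CPM (cost per thousand impressions), a $200,000 monthly minimum buys roughly 3.3 million sponsored impressions.

```python
# Implied impression volume from the reported pricing above.
cpm_usd = 60.0                    # ~cost per 1,000 impressions
monthly_minimum_usd = 200_000.0   # reported potential monthly minimum

impressions_per_month = monthly_minimum_usd / cpm_usd * 1_000   # ~3.3M
```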
What Happened
Cities and states are proposing bans, surcharges, zoning restrictions, and environmental regulations on data centers due to their surging electricity demand (projected to reach 130 GW or 12% of U.S. total by 2030) and environmental impacts like 105 million metric tons of annual CO2 emissions, mostly from fossil fuels (56% of power source).[2][4][7][13]
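The projection above implies a figure for total U.S. electricity demand: if 130 GW of data center load is 12% of the national total by 2030, the total is about 1,080 GW.

```python
# Implied U.S. total from the projection above: 130 GW at a 12% share.
datacenter_gw = 130.0
datacenter_share = 0.12
fossil_share = 0.56   # share of data center power from fossil fuels, per the summary

implied_us_total_gw = datacenter_gw / datacenter_share   # ~1,083 GW
```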
Mohr Marketing launched a high-precision mesothelioma lead program on April 6, 2026, designed exclusively for plaintiff law firms nationwide. The program uses real-time consumer search intelligence, proprietary data modeling, and strict validation for medical diagnoses (e.g., biopsy-confirmed) and asbestos exposure (e.g., high-risk trades pre-1980s) to deliver intake-ready cases, filtering from ~25,000 monthly searchers into ~73% actionable profiles targeting terms like mesothelioma litigation, asbestos exposure, and FDA treatments such as Pemetrexed (Alimta), Ipilimumab (Yervoy), and Nivolumab (Opdivo).[1][5][6][7]
In U.S. v. Heppner (No. 25-cr-503, S.D.N.Y.), Judge Jed S. Rakoff ruled on February 10, 2026 (bench ruling), and February 17, 2026 (written opinion) that defendant Bradley Heppner's 31 documents—generated via Anthropic's consumer Claude AI using info from his counsel—lacked attorney-client privilege or work-product protection, making them discoverable by prosecutors.[1][2][4][5][8][11] Rakoff cited three privilege failures: Claude is not a lawyer, thus no client-attorney communication; no confidentiality expectation per Claude's policy allowing data use for training and disclosure to third parties (including regulators); and Heppner used it voluntarily for advice Claude disclaims providing, not at counsel's direction.[1][3][5][8][9][11] The work-product claim failed as materials weren't prepared anticipatorily for litigation under counsel's supervision.[1][6][8]