What to Know About OpenAI’s Ideas for a World With ‘Superintelligence’

Why it matters

On April 6, 2026, OpenAI released policy proposals outlining ideas for governing, and benefiting from, superintelligence (AI systems far surpassing human intelligence), which the company expects by 2026-2028. The proposals include massive energy infrastructure (e.g., 100 GW annually), tax incentives, legal protections, and an international oversight body modeled on the IAEA to manage risks such as monopolies, misalignment, and safety failures.[1][5][7]

Key players include OpenAI CEO Sam Altman, Chief Scientist Jakub Pachocki, Chief Futurist Joshua Achiam, and VP of Global Affairs Chris Lehane, with ties to Microsoft (a $135B partnership) and potential U.S. regulatory involvement amid the 2026 midterms. The proposals follow the roadmap OpenAI laid out in an October 28, 2025 live broadcast: AI research interns by September 2026 and fully automated AI researchers by March 2028, backed by $25B in new commitments and infrastructure such as compute factories adding 1 GW of capacity weekly.[2][3]

This builds on OpenAI's prior calls for governance (e.g., its 2023 blog post on superintelligence) and on Altman's warnings of a "governance emergency" by 2028, driven by recursive AI self-improvement and $100B+ investments. Recent shifts include a product rebranding to "AGI Deployment," cuts to Sora, and internal debates over regulation.[3][5][8]

The proposals are newsworthy for their timing: they coincide with OpenAI's "Spud" model launch, the U.S. elections, declining public support for AI, and leadership rifts (e.g., Achiam vs. anti-regulation PACs), and they have sparked controversy over "rethinking the social contract" amid job-displacement fears and the global race with China.[1][3]
