OpenAI wants a New Deal for AI. An attack on Sam Altman’s home made it urgent
Days before someone threw a Molotov cocktail at Sam Altman’s San Francisco home, OpenAI had published a 13-page policy document warning that the technology it is building could reshape society faster than society is prepared to handle. The document called for a public wealth fund, a robot tax, and a four-day workweek — a New Deal for the AI age, authored by a company responsible for making the old one feel precarious.
Then came the 3:45 a.m. attack. Police arrested a suspect within hours; no one was hurt. But Altman, writing that morning soon after the attack, acknowledged that there was a growing sense of anxiety around AI. “The fear and anxiety about AI is justified,” he wrote. “We are in the process of witnessing the largest change to society in a long time, and perhaps ever.”
The Chronicle has since reported that the suspect, Daniel Alejandro Moreno-Gama, had written extensively online about his fear that AI would “lead to human extinction.” In a series of Substack essays, Moreno-Gama reportedly described AI as an existential threat and wrote that Altman was “consistently reported to be a pathological liar.” Moreno-Gama also appeared to have posted, in December, in the Discord server of PauseAI, an activist group focused on halting powerful AI development: “We are close to midnight, it’s time to actually act.” A moderator responded: “Advocating violence in any form is grounds for a ban.”
Altman’s admission — hours after an apparent attempt on his life — that there was justified anxiety around the speed of AI development brought into sharp focus the central problem with OpenAI’s ambitious policy agenda: the company calling for a new social contract is also responsible for potentially making the old one obsolete.
Days earlier, OpenAI’s “Industrial Policy for the Intelligence Age: Ideas to Keep People First” had offered the company’s most explicit acknowledgment yet that the technology it is building could reshape society in ways that society is unprepared for — and that the company itself bears some responsibility for what happens next.
A recent Quinnipiac poll found that 80% of Americans are concerned about AI, 55% believe it does more harm than good in daily life, and 70% think AI advances will reduce job opportunities. Altman’s company is at the heart of a growing public sense that tech companies are driving profound change faster than governments can govern it, or protect those most likely to bear the costs.
Altman’s early morning blog post was a visceral reaction to a physical threat, but the report is a considered, strategic reaction to a systemic issue — the fear that AI will create broad disruption politically, culturally, and economically.
The report has been characterized in some quarters as an (overly) ambitious New Deal for the AI age. Its headline proposals — a public wealth fund that would give every American a dividend stake in AI-driven growth, higher taxes on capital gains and corporate income, a robot tax, and a four-day workweek — have raised eyebrows across Washington, D.C. Axios noted that, with Senator Bernie Sanders now the leading congressional antagonist of the AI industry, much of the document reads as “distinctly Sanders-coded.”
The blueprint shares policy positions with Sanders on capital gains taxes, strengthening Medicare and Medicaid, and a tax on automated labor. The business site Quartz declared that “OpenAI is having a Bernie Sanders moment,” which is striking, given that Sanders had recently toured Silicon Valley.
Critics, though, are asking whether OpenAI — which completed a for-profit restructuring last year and has cultivated close ties to the Trump administration — is genuinely supportive of the New Deal-style proposals in its report or simply wants to be seen as getting ahead of growing public anxiety.
Lucia Velasco, a former head of AI policy at the U.N., noted in Fortune that OpenAI is “the most interested party in how this conversation turns out” and that its proposals shape an environment “in which OpenAI operates with significant freedom under constraints it has largely helped define.” She added that this was not a reason to dismiss the document, but “it is a reason to ensure that the conversation it is trying to start does not end with the same company that started it.”
That concern is not abstract. Reporting by The Standard revealed that OpenAI quietly funded a child-safety coalition without some of its own members realizing it — and at least two organizations quit the coalition when they found out. A University of Michigan professor who reviewed the coalition’s website said it met “the classic definition of astroturfing.” It is, critics say, another example of the company secretly attempting to undermine the very debate it claims to be engaged with in good faith.
Soribel Feliz, an independent AI policy adviser who previously served as a senior AI and tech policy adviser for the U.S. Senate, was more blunt still, telling Fortune that the ideas themselves are not wrong — but that she had heard every single one of them in Senate policy sessions back in 2023-24.
“The problem is the gap between naming the solutions and building real mechanisms to achieve them,” she said.
Anton Leicht, a visiting scholar at the Carnegie Endowment, argued that the policy document only makes sense if you understand what OpenAI is really asking for. Writing on X, he framed it as a bid for “deployment absorption instead of development friction” — rather than accepting regulatory brakes on how fast AI is built and released, OpenAI wants policymakers to build a society capable of absorbing the speed at which it plans to move.
The problem, as Leicht noted, is that a public wealth fund or a robot tax is a far heavier political lift than simply regulating the industry a bit, and such measures are not just going to emerge as an organic alternative.
“On that read, this is comms work to provide cover for regulatory nihilism,” he wrote.
But he held out the possibility that OpenAI could redeploy its substantial lobbying muscle and political funding toward making progress on this expansive agenda — and stop spending money to defeat the very candidates who might champion it.
To Leicht’s point, OpenAI’s ambitious policy proposals are unfolding against a backdrop that makes the likelihood of realizing them remote. In December 2025, President Donald Trump signed an executive order doubling down on AI deregulation.
David Sacks, until recently the White House’s AI and crypto czar, has been explicit about the stakes, warning at Davos in January that “if we have 1,200 different AI laws in the states clamping down on innovation, I worry that we could lose the AI race” — dismissing safety concerns as a “fit of pessimism” and a “self-inflicted injury.”
And yet, Sacks’ position is not a settled one on the right. Steve Bannon, Trump’s former chief strategist, has emerged as a vocal opponent of Sacks’ deregulatory approach, calling for a pause on AI labs’ pursuit of superintelligence and arguing that there are “10 times more regulations to open a nail salon on Capitol Hill than there are on one of the most promising yet most dangerous technologies ever invented.”
The deregulatory push has drawn criticism from many conservative organizations too. Michael Toscano of the Institute for Family Studies, a conservative think tank, called the executive order “a huge lost opportunity by the Trump administration to lead the Republican Party into a broadly consultative process.”
The OpenAI document sits alongside a series of lengthy public interventions by Dario Amodei, CEO of rival AI company Anthropic. Where OpenAI’s report is focused on the economic fallout and the civic policy response, Amodei has been sounding a more existential alarm.
In a January essay titled “The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI,” Amodei offered a detailed warning about AI systems he described as a “country of geniuses in a data center,” capable of displacing half of all entry-level white-collar jobs within five years and, unlike past technological shifts, acting as “a general labor substitute for humans.”
The motivations behind the frightening attack on Altman’s family home remain unclear. But it is impossible for that incident — which could have been far graver — not to bring the debate about AI, its impact, and society’s response sharply into focus.
It may yet concentrate the minds of both sides — tech companies and public officials — on the recognition that public anxiety, left unaddressed, may have a trajectory of its own.
