Here's this week's free edition of Platformer: a look at the Trump administration's remarkable about-face on AI safety in the wake of Anthropic's new Mythos model, and what it tells us about the waning power of accelerationists in the administration. We'll soon post an audio version of this column: Just search for Platformer wherever you get your podcasts, including Spotify and Apple. Want to support more independent reporting like this? If so, consider upgrading your subscription today. We'll email you all our scoops first, like our recent piece about the potential end of the Meta Oversight Board. Plus you'll be able to discuss each day's edition with us in our chatty Discord server, and we'll send you a link to read subscriber-only columns in the RSS reader of your choice. You'll also get access to Platformer+: a custom podcast feed in which you can get every column read to you in my voice. Sound good?
This is a column about AI. My fiancé works at Anthropic. See my full ethics disclosure here.

In February 2025, Vice President JD Vance took the stage at the Paris AI Action Summit to share the administration's views on AI regulation. "The AI future is not going to be won by hand-wringing about safety," he warned. Excessive regulations might "kill a transformative industry just as it's taking off," Vance said, suggesting that AI companies asking to be regulated might simply be trying to crush their future competitors.

Vance's remarks reflected the idea, then common among Trump officials, that fears about AI capabilities are dramatically overstated. David Sacks, the White House's AI and crypto czar, has referred to a "doomer industrial complex" enacting a "sophisticated regulatory capture strategy based on fearmongering." Michael Kratsios, who leads the Office of Science and Technology Policy, has complained that international efforts to govern AI "maintain a general atmosphere of fear."

The administration has backed up its rhetoric with a lobbying push intended to block most state-level AI regulation. Axios reported last month that Trump officials are pressuring Republican lawmakers in Nebraska and Tennessee to weaken or abandon bills in their respective states that would introduce safety and transparency requirements for AI companies.

Which is what makes the administration's latest move so striking. Trump is quietly reviving a Biden-era idea his own officials once mocked: pre-release government review of powerful new AI models. Here are Tripp Mickle, Julian E. Barnes, Sheera Frenkel and Dustin Volz in the New York Times:

The administration is discussing an executive order to create an A.I. working group that would bring together tech executives and government officials to examine potential oversight procedures, according to U.S. officials, who declined to be identified in order to discuss deliberations over sensitive policies.
Among the potential plans is a formal government review process for new A.I. models. [...]
The working group is likely to consider a number of oversight approaches, officials said. But a review process could be similar to one being developed in Britain, which has assigned several government bodies to ensure that A.I. models meet certain safety standards, people in the tech industry and the administration said.
The Biden administration had issued its own executive order instructing AI companies to perform safety testing and share the results with the government before releasing new models. Trump revoked that order on the first day of his second term. Three days later, he issued a new order titled "Removing Barriers to American Leadership in Artificial Intelligence" that effectively ended safety testing requirements.

What changed? Mythos. Anthropic's latest large language model, now available in preview to a small number of companies, has proven capable enough at developing cybersecurity exploits that the government believes it poses national security risks. The White House now opposes, on security grounds, the company's plan to expand access from roughly 50 companies to 120. (It also says it worries Anthropic doesn't have enough compute available to serve the model to government customers; Anthropic denies this.)

All of this is complicated, of course, by the fact that the Trump administration has also sought to designate Anthropic as a "supply chain risk" because it refused to amend its contract with the Pentagon to enable "all lawful use" of its technologies. While continuing to defend that designation in court, the administration has simultaneously been working to expand access to Mythos throughout the government. Trump officials are now in the nonsensical position of trying to help agencies get around the legal roadblock they themselves set up to stop them from using Anthropic's models. One set of officials is working to phase out the use of Anthropic models over the next six months; another is working to expand agencies' access to its technology throughout the government.

In the meantime, the rest of the industry now faces a regulatory environment that looks awfully similar to the one Democrats had implemented under Biden: a world where they submit their models to the government for review before releasing them widely.
On Tuesday, Google, Microsoft and xAI all said that they would give the government early access to their models. The reviews will be handled by the US Commerce Department's Center for AI Standards and Innovation. Before Trump 2.0, by the way, that body was known as the US AI Safety Institute. Its name changed last June. "For far too long, censorship and regulations have been used under the guise of national security," Commerce Secretary Howard Lutnick said at the time. "Innovators will no longer be limited by these standards."

Less than a year later, the administration's sneering dismissal of safety concerns has transformed into something that resembles a mild panic. The National Security Agency is now using the model to look for vulnerabilities in Microsoft products — and, one assumes, contemplating the fact that foreign nations will soon be using similarly capable technology against US critical infrastructure, if they aren't already.

Meanwhile, the public backlash against data centers and other symbols of AI power is putting the Trump administration increasingly at odds with its own base. And the government's half-baked AI sales pitch to the general public, which has amounted to little more than "get rich and beat China," has failed to resonate much beyond the venture-capital offices where it was originally conceived.

One result of this is that Trump's effort to place a moratorium on most state-level regulations of AI now seems even less likely to pass than it did before. Another likely effect of the accelerationists' declining influence is that we'll see a push for expanded export controls on powerful chips to China. (Sacks, who recently left his job as AI czar for a role on the Council of Advisors on Science and Technology, had been a vocal proponent of loosening those controls.)
A less likely but welcome development would be for the US to re-engage with the United Kingdom, Japan, Korea and other allies to develop a shared strategy for governing more powerful models.

Still, less democratic possibilities exist as well. Critics of the White House's plans to subject frontier models to safety evaluations worry that the Trump administration will use any licensing regime for censorship — denying releases to models whose output is deemed "woke," for example, or simply to pressure companies into doing other favors for the administration. Imagine Brendan Carr's Federal Communications Commission, but for AI.

Some of that worry is warranted. But after Vance's speech in Paris, I noted here the dangerous negligence of an AI policy that amounted to little more than "let's see what happens." A year later, the administration has come to realize that all those AI safety concerns were no mere hand-wringing. The models are getting more capable — and more dangerous.

What Sacks once dismissed as the doomer industrial complex now includes a growing number of federal agencies and Trump administration officials. And while they should have taken these fears seriously all along, I will settle for the administration taking them seriously now.

A MESSAGE FROM OUR SPONSOR

Become an AI-native team with Rovo

Atlassian Rovo is AI that knows your projects, code, and people so it can bring context (and guardrails) to every workflow.
And because Rovo lives where your teams already work, it doesn’t just find the answers — it helps you do the work.
See how Sprout Social is becoming an AI-native team with Rovo. Learn more.

Following

The OpenAI-Elon Musk trial enters week two
This week in court: OpenAI co-founder Greg Brockman testified in federal court today that he didn't want Elon Musk to be OpenAI's CEO because "he did not – and I believe does not – know AI." Brockman added that he and co-founder Ilya Sutskever "did not think that he was going to spend the time required to actually get good at it."

Brockman told jurors that Musk called a predecessor to ChatGPT "stupid," and said that "kids on the internet could do a better job of it," which raised concerns within OpenAI about his ability to run the company. During discussions about a potential for-profit conversion, Brockman said, Musk demanded a majority stake, saying he needed $80 billion to start a city on Mars. When Brockman pushed back, Musk allegedly said he could start another AI company tomorrow with "one Tweet."

Musk, who also owns AI company xAI, is suing OpenAI for unlawful enrichment. He claims his original charitable donation to OpenAI should not have contributed to the for-profit venture OpenAI eventually created. OpenAI claims the suit is a "jealous" bid to attack a competitor to Musk's xAI.

During his time on the stand, Brockman was grilled about his personal journal, which included such musings on OpenAI's for-profit conversion as: "Financially what will take me to $1B?" Musk lawyer Steven Molo asked why — if Brockman's goal was a mere billion dollars — he hadn't donated the rest of his $30 billion stake to OpenAI's nonprofit. "It takes 30 billion dollars to get you out of bed in the morning?" Molo asked. Brockman said Molo was twisting his words.

In one of the trial's more operatic twists, Brockman testified that when then-OpenAI board member Shivon Zilis had twins, she didn't initially tell him Musk was the father. The trial proceedings previously revealed that Zilis secretly funneled information about OpenAI to Musk.
Brockman said he found out Musk was the father of her children through public reporting — and that Zilis told him at the time that her relationship with Musk was "platonic" and that the children were born via IVF. Elsewhere at trial, Brockman testified that OpenAI will spend $50 billion on compute this year.

Why we're following: Musk is asking the court to remove Brockman and OpenAI CEO Sam Altman from their leadership positions, and is seeking as much as $134 billion in damages, which he says he will donate to the nonprofit foundation that controls OpenAI. While the stakes for OpenAI's future are high, we are admittedly more attuned to the various petty dramas unfolding in court. (Just two days before the trial began, after Brockman rebuffed Musk's text suggesting that the parties settle, Musk responded: "By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be.") In any case, let Brockman's experience be a reminder to all of us: never, ever write your diary in a Google Doc.

What people are saying: The "takeaway from Greg Brockman['s] testimony at Elon vs. OpenAI trial today is that no grown man should have a diary," wrote Sources' Alex Heath.

Meanwhile, Musk agreed to pay $1.5 million to settle SEC allegations that he deceived Twitter shareholders when he failed to disclose his growing stake in the company, which the SEC alleged led to an artificially low stock price. Fascinatingly, notorious Silicon Valley fraudster Elizabeth Holmes congratulated Elon on his Twitter settlement, writing, "I had an SEC settlement too" (you don't say). She added, "Elon's $1.5M settlement is basically a parking ticket. No admission. No criminal conviction," concluding, "Big win for @elonmusk." —Ella Markianos

Those good posts

For more good posts every day, follow Casey's Instagram stories. (Link) (Link) (Link)

Talk to us

Send us tips, comments, questions, and feedback on these changes: casey@platformer.news.
Read our ethics policy here.