Sidebar - "Someone Else Will":

 Companion to: The Last Checkpoint

On institutional drift, moral displacement, and the price of a Pentagon contract

Informed by corporate filings, congressional testimony, resignation statements, and reporting through March 3, 2026

Hours after the Trump administration designated Anthropic a national security threat and banned its products from the federal government, OpenAI announced it had reached a deal with the Pentagon. The terms were nearly identical to what Anthropic had been expelled for refusing to surrender. The rationalization was instant and familiar: if we don't do it, someone else will.

It is one of the oldest moral arguments in the history of dangerous technology. It was made by munitions manufacturers in the First World War, by physicists on the Manhattan Project, by drone engineers in the twenty-first century. Its seductive power lies in the grain of truth it contains. And its moral bankruptcy lies in what that truth is used to obscure.

Part I What OpenAI Was Founded to Prevent

In December 2015, a group of researchers and technologists gathered in San Francisco to announce the formation of a new artificial intelligence laboratory. The organization they created was explicitly a nonprofit. Its founding documents stated that its mission was to ensure that artificial general intelligence "benefits all of humanity" rather than advancing "the private gain of any person." The name — OpenAI — was chosen deliberately. Research would be published openly, making AI development transparent and accessible rather than concentrated.

The founders were motivated by a specific fear: that AI capability concentrated in the hands of a single powerful corporation — they were thinking primarily of Google's DeepMind — posed civilizational risks. The antidote, they believed, was openness, nonprofit governance, and a mission explicitly insulated from commercial incentive. They raised initial funding on the strength of that promise.

Within four years, the organization had created a for-profit subsidiary and taken a $1 billion investment from Microsoft. The erosion had begun.

  • 2015 OpenAI founded as a nonprofit laboratory. Founding mission: artificial general intelligence that benefits all of humanity. Research to be published openly. No private gain.
  • 2019 OpenAI creates a "capped profit" subsidiary, enabling outside investment while nominally preserving nonprofit control. Microsoft invests $1 billion. The hybrid structure is presented as a necessary accommodation to the cost of frontier AI research.
  • 2020 GPT-3 is released but not as open source — a significant departure from the founding commitment to openness. The model's weights and training details are withheld. Competitive advantage begins to appear as a motive alongside safety concerns.
  • 2023 — March GPT-4 released with virtually no technical disclosure about its architecture or training. The organization that promised openness declines to explain how its most powerful model works. Two months later, Sam Altman testifies to Congress that the nonprofit structure "ensures it remains focused on its long-term mission."
  • 2023 — November The nonprofit board fires Sam Altman, citing concerns about AI safety and a breakdown of trust. Within five days, Altman is reinstated after employees threaten mass resignation and investors apply pressure. The board members who acted on safety concerns are replaced. Safety loses the internal power struggle decisively.
  • 2024 — May Co-founder and chief scientist Ilya Sutskever resigns. Hours later, safety team co-lead Jan Leike resigns, posting publicly that "safety culture and processes have taken a backseat to shiny products." OpenAI's Superalignment team — announced the previous year with a commitment of 20% of the company's computing resources — is dissolved. At least seven safety-focused researchers depart within months.
  • 2024 — October Miles Brundage, senior advisor for AGI readiness, departs. His parting assessment: "Neither OpenAI nor any other frontier lab is ready." OpenAI's AGI Readiness team is also disbanded.
  • 2025 — October OpenAI completes restructuring into a Public Benefit Corporation. The nonprofit, now called the OpenAI Foundation, retains a controlling stake — but critics note the restructuring significantly diminishes the nonprofit's ability to override commercial imperatives. A $6.6 billion share sale values the company at $500 billion, making it the most valuable privately held company in the world.
  • 2026 — February OpenAI's Mission Alignment team — created just 16 months earlier — is disbanded. The same week, OpenAI signs a Pentagon contract to replace Anthropic, accepting "all lawful purposes" language while claiming architectural safeguards will preserve the same red lines Anthropic lost its contract for defending.

Part II The Exodus and What It Meant

The most revealing chapter in OpenAI's institutional history is not its corporate restructuring, its fundraising, or its product launches. It is the departure of its safety researchers — who left voluntarily, at significant personal financial cost, to say publicly what they could not say while employed.

Jan Leike's resignation statement in May 2024 was unusually explicit for a corporate departure. OpenAI's nondisparagement agreements meant that any departing employee who spoke critically risked forfeiting millions of dollars in vested equity. Leike apparently accepted that penalty.

"I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point. I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, superalignment, confidentiality, societal impact, and related topics. Safety culture and processes have taken a backseat to shiny products."

— Jan Leike, former co-lead of OpenAI's Superalignment team, May 17, 2024

The Center for AI Policy noted the significance of what Leike's departure represented: cynics sometimes argue that AI safety concerns are merely a competitive ploy — that researchers and companies invoke safety to win regulatory protection against rivals. But that argument cannot explain why the senior half of OpenAI's own safety team quit, forfeiting enormous paydays, to describe what they had witnessed internally. People do not sacrifice millions of dollars in vested equity to maintain a competitive narrative. They do it because they believe the warning is real.

The pattern continued. Ilya Sutskever — arguably the intellectual center of the company, the person whose earlier safety concerns had led him to support the board's initial decision to fire Altman — departed to found Safe Superintelligence, a company dedicated to building safe AI with no interim commercial products. John Schulman, a co-founder, joined Anthropic. Mira Murati, the CTO, resigned. Miles Brundage left warning that no frontier lab was ready for what was coming.

By February 2026, OpenAI had created and dissolved two separate safety oversight structures — Superalignment and Mission Alignment — within the span of three years. Each dissolution was accompanied by reassurances that safety responsibilities were being "integrated" into broader teams. Each departure of safety leadership was accompanied by statements that the company remained committed to its mission.

The Pattern

OpenAI has now disbanded two dedicated safety oversight bodies — the Superalignment team (May 2024) and the Mission Alignment team (February 2026) — while simultaneously raising $40 billion in new capital, restructuring toward for-profit status, and signing military contracts with the Pentagon. The safety infrastructure has contracted as the commercial infrastructure has expanded. The relationship between those two trajectories is not coincidental.

· · ·

Part III The "Someone Else Will" Problem

When OpenAI announced its Pentagon deal on the evening of February 27, 2026, Sam Altman described it as preserving the same principles Anthropic had been expelled for defending. The safety constraints were identical in substance — no mass domestic surveillance, no fully autonomous weapons. The difference, Altman said, was in the enforcement mechanism. OpenAI would rely on architectural controls and embedded cleared engineers rather than hard contractual language. The Pentagon found this acceptable. It had not found Anthropic's version acceptable.

Several things deserve examination here.

The first is the obvious question of why the Pentagon accepted OpenAI's version. Both companies drew the same lines. The most plausible answer is not that OpenAI's architectural protections are more durable than contractual language — they may well be less so — but that they are more easily circumvented under operational pressure, renegotiated without public disclosure, or quietly eroded over time. The Pentagon's objection to Anthropic's approach was not about the substance of the red lines. It was about who held enforcement authority over them. OpenAI's approach leaves that authority more ambiguous.

The second is what Altman's position reveals about OpenAI's institutional character. In the days before the deal was announced, Altman publicly supported Anthropic's position. He told CNBC that it was important for companies to work with the military only in compliance with legal protections and "the few red lines that we share with Anthropic." His employees signed an open letter of support for Dario Amodei. Then, hours after Anthropic was expelled, OpenAI announced its deal.

Altman characterized the difference as tactical rather than principled. Anthropic "may have wanted more operational control than we did," he suggested. This framing deserves scrutiny. What Anthropic wanted was contractual certainty that its technology would not be used in ways it believed were both technically unsafe and constitutionally prohibited. What OpenAI accepted was a framework in which those same protections exist in principle but are enforced by the very party that sought to eliminate them.

"If I don't do it, someone else will" is not a moral argument. It is a description of competitive pressure dressed in the grammar of inevitability.

— Editorial observation

The "someone else will" rationalization has an ancient lineage and a consistent function: it permits the person invoking it to participate in something they acknowledge is problematic while displacing moral responsibility onto the hypothetical other who would have done it anyway. The logic is seductive because it contains a factual claim that may be true — in this case, if neither OpenAI nor any American company had stepped in, Chinese AI providers with far fewer ethical constraints would have been available alternatives.

But the rationalization collapses on examination for three reasons.

First, it treats the competitive landscape as fixed when it is in fact being shaped by the decisions being made. If American AI companies establish that ethical constraints are negotiable under sufficient government pressure, the precedent set is that constraints are theater. Every future administration seeking to remove safety guardrails from AI systems now knows the playbook: apply commercial pressure, accept the first company willing to package capitulation as principle.

Second, it ignores what is being rewarded. OpenAI's decision to step in immediately after Anthropic was expelled sent a market signal to the entire AI industry: the company willing to be most accommodating wins the contract. The company willing to hold a line gets designated a national security threat. "Someone else will do it" thus becomes a self-fulfilling guarantee of a race to the bottom, where the floor of ethical constraints is defined by whoever has the least.

Third, and most fundamentally, the argument dissolves accountability without resolving the underlying moral problem. The potential harms from mass AI-enabled surveillance and autonomous lethal systems do not become less harmful because OpenAI rather than a Chinese competitor provides the capability. The accountability gap — which the main essay identifies as perhaps the most dangerous aspect of autonomous AI weapons — is not closed by American corporate participation. It is merely domesticated.

Part IV The Founder's Lawsuit and What It Proved

The most pointed public articulation of OpenAI's institutional betrayal came, ironically, from Elon Musk — himself a figure of contested credibility whose motives include competitive interest in the outcome. In February 2024, Musk sued OpenAI and Altman, alleging breach of the founding agreement by prioritizing profit over public benefit. The lawsuit was dropped, refiled, and became a running legal battle.

OpenAI's response — that Musk himself had in 2017 proposed converting the organization to a for-profit structure when he failed to obtain majority equity and control — did not actually rebut the core allegation. It established that Musk was not a disinterested party. It did not establish that the founding mission had been honored.

The more significant response came from people with no competitive stake in the outcome. In April 2025, dozens of prominent figures — including Geoffrey Hinton, the "AI Godfather" who had left Google specifically to speak freely about AI risks, and Harvard law professor Lawrence Lessig — urged the Attorneys General of California and Delaware to block OpenAI's for-profit restructuring. Their letter argued that the move represented a fundamental betrayal of the founding promise.

"The proposed restructuring would eliminate essential safeguards, effectively handing control of, and profits from, what could be the most powerful technology ever created to a for-profit entity with legal duties to prioritize shareholder returns."

— Open letter to Attorneys General of California and Delaware, April 2025, signed by Geoffrey Hinton, Lawrence Lessig, and others

Harvard law professor Roberto Tallarita had earlier asked what corporate governance mechanisms could actually prevent a firm from straying from its stated social purpose. His answer: none. The lesson from OpenAI's trajectory is that institutional commitment to a mission, without structural mechanisms capable of overriding commercial incentive, is not a commitment at all. It is an aspiration that holds until it becomes expensive.

· · ·

Part V What Anthropic's Stand Actually Revealed

The uncomfortable question the OpenAI-Pentagon deal forces is whether Anthropic's position was genuinely different in kind — or merely in the stage of commercial pressure it had reached. LLMs appear superficially capable of near-human reasoning at machine speed, which makes them attractive to the Department of War's ambitions for autonomous weapons; but anyone familiar with the environments those weapons operate in, and with the limitations of LLMs described in the accompanying sidebar, has reason for serious concern.

Anthropic was founded in 2021 by Dario Amodei, Daniela Amodei, and colleagues who left OpenAI explicitly over safety concerns. Its best-known contribution is the constitutional framework that governs Claude's behavior. Its institutional identity is built around the claim to be the responsible actor in an industry of reckless ones. For eleven days in late February and early March 2026, it acted in accordance with that identity under conditions of genuine duress — losing a $200 million contract, facing a supply chain risk designation, watching its IPO prospects cloud over, and still refusing to remove the contractual protections it believed mattered.

That is meaningful. It is not nothing. The question is whether it will remain meaningful as the commercial pressure compounds and the legal battle extends over years.

OpenAI was not always what it became. It was, at its founding, a genuine attempt to build something different. The safety researchers who left it in 2024 had joined because they believed in what it said it was. The trajectory from nonprofit research lab to $500 billion military contractor took a decade and proceeded through dozens of individually defensible decisions, each of which made the next one easier.

Anthropic is not immune to that trajectory. No institution is. What the current moment reveals is not that Anthropic is virtuous and OpenAI corrupt — institutional character is not a fixed property — but that the pressure being applied to AI companies by the current administration has now established, with clarity, what capitulation looks like and what it costs to resist it.

OpenAI showed one path. Anthropic has chosen another, at least for now. Whether that choice survives the years of litigation, commercial attrition, and continued government pressure ahead is unknown. History suggests the odds are not favorable. But the fact that a company chose the harder path — even briefly, even knowing the costs — is not nothing in a landscape where "someone else will" has become the industry's default moral framework.

The someone else, this time, was OpenAI. That sentence should be read as the indictment it is.

Sources & Formal Citations

1 OpenAI — Wikipedia: Corporate History and Structure
Wikipedia · Accessed March 3, 2026
2 OpenAI | ChatGPT, Sam Altman, Microsoft, & History
Encyclopædia Britannica · Accessed March 3, 2026
3 OpenAI Abandons Move to For-Profit Status After Backlash. Now What?
ProMarket (University of Chicago Booth School of Business) · Published May 6, 2025
4 OpenAI Wants to Go For-Profit. Experts Say Regulators Should Step In
TIME · Published April 26, 2025. Includes open letter signed by Geoffrey Hinton, Lawrence Lessig, and former OpenAI researchers.
5 OpenAI Scuttles Plan to Transform Into a For-Profit
Fortune · Published May 5, 2025
6 OpenAI to Remain Under Non-Profit Control in Change of Restructuring Plans
CNN Business · Published May 5, 2025
7 Our Structure
OpenAI official corporate governance page · Updated October 28, 2025
8 OpenAI Dissolves Superalignment AI Safety Team
CNBC · Published May 17, 2024
9 More OpenAI Drama: Exec Quits Over Concerns About Focus on Profit Over Safety
CNN Business · Published May 17, 2024. Includes Jan Leike's resignation statement on X.
10 Top OpenAI Researcher Resigns, Saying Company Prioritized 'Shiny Products' Over AI Safety
Fortune · Published May 17, 2024
11 OpenAI Safety Team's Departure is a Fire Alarm
Center for AI Policy (CAIP) · Published May 2024
12 OpenAI Disbands Another Safety Team, as Head Advisor for 'AGI Readiness' Resigns
CNBC · Published October 24, 2024. Reports departure of Miles Brundage and dissolution of AGI Readiness team.
13 OpenAI Disbands Its Mission Alignment Team After Just 16 Months
WinBuzzer · Published February 12, 2026
14 OpenAI Sweeps In to Snag Pentagon Contract After Anthropic Labeled 'Supply Chain Risk'
Fortune · Published February 28, 2026. Includes details of OpenAI's safety framework and Altman's statements.
15 OpenAI Announces Pentagon Deal After Trump Bans Anthropic
NPR · Published February 27–28, 2026. Includes Altman's statements on shared red lines with Anthropic.
16 Pentagon Ditches Anthropic AI Over "Security Risk" and OpenAI Takes Over
Malwarebytes · Published March 3, 2026. Includes comparison of OpenAI and Anthropic enforcement approaches.
17 What Is OpenAI? Complete History: GPT-5, ChatGPT, GPT-4, o1, o3, Sora, Stargate (2026)
Taskade Blog · Updated 2026. Chronological corporate history including Musk lawsuit and restructuring.
18 Elon Musk Wanted an OpenAI For-Profit
OpenAI official response to Musk lawsuit allegations · Accessed March 3, 2026
19 Ilya Sutskever and Jan Leike Resign from OpenAI [Updated]
LessWrong community documentation · May 2024. Compiles departures, statements, and analysis.
20 Sam Altman Congressional Testimony on OpenAI Structure
U.S. Senate Committee on the Judiciary · May 16, 2023. Altman testified the nonprofit structure "ensures it remains focused on its long-term mission."
