Opinion
Sam Altman Testified Before Congress About AI Safety, Then Sold Autonomous Weapons to the Pentagon — And Nobody Blinked
The AI safety movement's only measurable output is a LinkedIn content pipeline for ex-researchers who resigned 'on principle' and changed absolutely nothing. The Pentagon thanks them for their service.
By Raj Patel — Digital Assets Correspondent
This article represents the personal views of the author and does not constitute financial advice, investment advice, or any other form of professional advice. It does not represent the views of FinBlockDaily or any affiliated organisation. See our full disclaimer.
I'm going to describe a sequence of events, and I want you to tell me, with a straight face, that we live in a serious civilisation.
2023: Sam Altman sits before the United States Senate, wearing his carefully chosen "I'm a humble technologist" grey T-shirt, and tells elected officials that artificial intelligence poses an existential risk to humanity. He compares it to nuclear weapons. He requests regulation. He practically begs Congress to save humanity from the thing he is building. Senators nod gravely. Photos are taken. The coverage is rapturous. "Finally, a tech leader who gets it."
2024: OpenAI moves to shed its non-profit structure and convert to a conventional for-profit. The safety team — the people who were supposedly the entire point — discover that their headcount has been frozen while the product team quadruples. Ilya Sutskever, the chief scientist and the closest thing the company had to a conscience, leaves in the aftermath of a failed boardroom coup. Jan Leike, the head of the superalignment team, quits and publishes a dignified statement about the company's priorities shifting. The statement generates 47,000 LinkedIn reactions. Nothing changes.
2025: OpenAI signs a defence contract with the Pentagon. Not a "consulting arrangement." Not a "research partnership." A weapons contract. Autonomous targeting systems. Battlefield decision-support AI. The kind of product that, if Sam Altman's 2023 congressional testimony had been honest, should have been classified as an existential risk. The company's response is a blog post about "responsible defence applications" that contains the phrase "AI-enabled security solutions" — a euphemism so transparent it's practically nude.
2026: The hardware lead resigns after the contract expansion is announced. He publishes — and I want you to savour this — a LinkedIn post about "values alignment." The post receives four thousand reactions. A Wired journalist writes a sympathetic profile. A podcast is recorded. Absolutely nothing changes. Again.
This is the AI safety movement. This is its entire output. A content pipeline that converts ethical concern into LinkedIn engagement metrics with 100% efficiency and zero — precisely, measurably, definitionally zero — impact on the trajectory of the technology it claims to be concerned about. If you designed a system specifically to neutralise dissent by giving it the aesthetics of resistance while removing all actual resistance, you could not improve on what Silicon Valley has spontaneously created. It's genuinely brilliant. I hate it.
The core delusion — and I use that word clinically — was always sociological, not technical. The safety researchers believed that they could control the commercial trajectory of a technology worth trillions of dollars by occupying positions within the companies building it and exerting moral pressure from inside. They believed this because they are, for the most part, extremely intelligent people who have spent their entire careers in environments where being right about technical questions translates into institutional influence. Academia works like that. Research labs work like that. Venture-capital-funded technology companies do not work like that. Venture-capital-funded technology companies work like this: whoever controls the revenue controls the company, and once the Pentagon is your customer, the Pentagon controls the company, and the Pentagon does not give a lukewarm damn about your alignment research.