Doomsday: Pentagon demand for AI-controlled nuclear weapons
Hegseth tries to twist Anthropic's arm to allow non-human launch prerogatives
Image caption: An unarmed Trident II D5 missile launches from the ballistic missile submarine USS Nebraska.
As a result of Anthropic’s refusal to grant permission for such madness, Trump has ordered the federal government to stop using Anthropic’s products. That order will likely result in lawsuits alleging breach of contract and arbitrary and capricious behavior by the increasingly unstable and unelected pedophile, rapist, business cheat, and pathological liar serving as chief puppet of the Anglo-Euro-American banking cartel (here, here, here, here, and here).
The government then quickly signed a contract with OpenAI, which claims that the limitations it agreed to exceed the ones the Department of War rejected in its negotiations with Anthropic:
We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs:
No use of OpenAI technology for mass domestic surveillance.
No use of OpenAI technology to direct autonomous weapons systems.
No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as “social credit”).
While this seems to raise the question of why the Pentagon would accept AI limitations from OpenAI that it would not accept from Anthropic, apparently there are holes in the agreement that provide the Department of War with the leeway it wanted:
(Sam) Altman stated that he had received guarantees that OpenAI’s models wouldn’t be used for mass surveillance or autonomous weapons either, but given Hegseth’s unwillingness to concede these points with Anthropic, observers speculated that the safeguards in Altman’s contract must be weaker or, in a worst-case scenario, completely toothless.
The debate centers on the Department of War’s demand that AIs be permitted for “all lawful use”. Anthropic worried that mass surveillance and autonomous weaponry would de facto fall in this category; Hegseth and Altman have tried to reassure the public that they won’t, and the parts of their agreement that have leaked to the public cite the statutes that Altman expects to constrain this category. Altman’s initial statement seemed to suggest additional prohibitions, but on a closer read it provides little tangible evidence of meaningful further restrictions.
* * *
Ever since the Epstein files surfaced and pressure mounted to release their massive trove of evidence, the Trump administration (specifically the Department of Justice and the FBI), operated by the intelligence services serving the global for-profit corporate pyramid, has done its utmost to delay releasing the files while it redacts, suppresses, and deletes key evidence.
Meanwhile, various military actions (Venezuela, Iran, and the Levant) and international disputes (tariffs, EU NATO payments, Greenland, the Chagos Archipelago, etc.) serve as distractions from exposing and indicting those who participated in Epstein’s sex crimes and are therefore blackmailed by those who control the international pedophile network; the controllers themselves have likewise avoided scrutiny.
Inaction on the part of Congress and the UN speaks for itself.
Clearly, just as Bondi has stated, a full and unedited release of the Epstein files, with only the victims’ names redacted, would provide the opportunity to bring down the current power structure.
Bondi stated this as a threat, insinuating that it would lead to a global collapse, but that is a bluff. The current nation-state institutions would not go away. Under the right conditions, they would be repopulated by persons outside the debt-slavery blackmail matrix.
* * *
NEW: Our two-volume published work, 7 Steps to Global Economic and Spiritual Transformation, is available at Volume I, Access to Tools, here, and at Volume II, Application of Tools, here.

If we’re even debating whether machines should hold launch authority, we’ve already crossed a psychological line. Replacing human judgment with probabilistic models changes the nature of responsibility itself. Who answers for a machine’s decision? The coder? The general? The algorithm? The deeper question is whether removing human hesitation from nuclear command makes us safer — or just faster at making irreversible mistakes.