The Pentagon Blacklisted Anthropic. The Treasury Is Telling Banks to Use Its AI.
By Don Ho, Esq. | April 14, 2026
In March 2026, the U.S. Department of Defense blacklisted Anthropic as a supply chain risk. Weeks later, the Treasury Department urged the six largest U.S. banks to adopt Anthropic’s Claude Mythos model for cybersecurity defense, the most extreme example of contradictory federal AI policy to date. The same administration that branded Anthropic a national security threat is now telling Wall Street to deploy its technology. Treasury Secretary Scott Bessent and Fed Chair Jerome Powell personally called executives at JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley this past week, telling them to use Mythos to probe their own systems for cybersecurity vulnerabilities. Bloomberg confirmed that five of the six largest U.S. banks are now running Mythos internally.
This is the same company the Pentagon designated a “supply chain risk” in March after CEO Dario Amodei refused Defense Secretary Pete Hegseth’s demand to remove two safety restrictions: no deployment for fully autonomous weapons, and no use in mass surveillance of American citizens. The label bars Anthropic from all military contracts and directs defense contractors to stop using its Claude models.
Two departments of the same executive branch, working at cross purposes on the same company, weeks apart. If you advise companies on AI governance, this is the case study you’ve been waiting for. It’s also the most extreme example yet of the AI regulatory patchwork we’ve been tracking: contradictory guidance from different agencies creating impossible compliance conditions.
The Mythos Model and Project Glasswing
Claude Mythos Preview launched April 7, 2026. Anthropic describes it as its most capable model to date for coding and autonomous operation. During internal testing, Mythos identified thousands of zero-day vulnerabilities (flaws unknown to the software’s developers) across every major operating system and web browser.
Anthropic chose not to release it publicly. Instead, it created Project Glasswing, a restricted-access program distributing the model to roughly 50 organizations, including AWS, Apple, Google, Microsoft, Nvidia, CrowdStrike, and JPMorgan Chase. Anthropic committed up to $100 million in usage credits and $4 million in direct donations to open-source security organizations as part of the initiative.
The security community has pushed back on the marketing. Tom’s Hardware noted that claims of “thousands” of severe zero-day discoveries relied on just 198 manual reviews, and many of the flagged vulnerabilities were in older software or were impractical to exploit. Critics described the restricted release as less about responsible governance and more about enterprise sales: create scarcity, generate fear, and let the customers line up.
Fair enough. But even if you discount the vulnerability numbers by half, the defensive logic is real. If Mythos can find holes in banking infrastructure, so can the next model from a less safety-conscious lab.
How We Got Here
The Pentagon dispute started in February 2026. Hegseth gave Amodei a Friday deadline to drop the safety restrictions or lose a $200 million defense contract. Amodei refused. Hours later, Hegseth declared Anthropic a supply chain risk on social media. President Trump separately ordered federal agencies to stop using Anthropic’s technology and called the company “radical left, woke.”
Anthropic filed two federal lawsuits on March 9, challenging the designation as unconstitutional retaliation for protected speech. The courts split. A federal judge in San Francisco issued a preliminary injunction, writing that “nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government.” A D.C. appeals court denied Anthropic’s separate request to halt the blacklisting while the case proceeds.
The net effect: Anthropic is excluded from DOD contracts but can continue working with other government agencies. Treasury and the Fed walked right through that gap.
What This Means for GCs and Compliance Teams
If you’re advising companies on AI procurement, the Anthropic situation exposes three problems you need to address right now.
Government AI policy is incoherent, and you can’t wait for it to resolve. The federal government has no unified position on which AI providers are acceptable. The Pentagon says Anthropic is a security risk. The Treasury says it’s essential for financial system defense. Both positions carry regulatory weight. If your company operates in both defense and financial services, you are navigating contradictory federal guidance with no clear precedent for how to reconcile it.
Supply chain risk designations are political weapons. The Anthropic designation did not follow the standard interagency review process. It was announced on social media by the Defense Secretary after a contract negotiation collapsed. This is the same procurement environment where the GSA declared AI can be used for “any lawful purpose” — one agency opening the door while another slams it shut. A federal judge called it “Orwellian.” Regardless of how the litigation resolves, the precedent is set: any AI vendor that disagrees with government contract terms can be branded a national security risk. If you rely on a single AI provider for critical operations, you now have a political risk to model, not just a technical one.
Safety guardrails are a litigation and procurement advantage. Anthropic’s refusal to remove its safety restrictions is the reason it lost the Pentagon contract. It is also the reason the Treasury is now recommending it. The company’s willingness to set boundaries on military use became a selling point for financial regulators who need to trust that the model won’t be deployed recklessly. For AI vendors, this is a strategic lesson. For buyers, it’s a diligence question: does your AI provider have clear, documented use restrictions? Because the absence of guardrails is not a feature.
The UK Is Already Reacting
The Financial Times reported that UK officials at the Bank of England, the Financial Conduct Authority, and HM Treasury are in discussions with the National Cyber Security Centre to assess vulnerabilities highlighted by Mythos. Representatives from major British banks, insurers, and exchanges are expected to receive briefings within two weeks.
This matters because it signals that Mythos’s cybersecurity capabilities are being taken seriously by regulators outside the U.S. political fight. If the model is genuinely finding zero-day vulnerabilities in global financial infrastructure, the policy question moves from “should banks use this” to “can banks afford not to.”
What to Do Now
If you need a framework for navigating this, the 5-layer AI compliance stack is built for exactly this kind of multi-dimensional risk.
Audit your AI vendor relationships for political risk. If you use Anthropic, OpenAI, or any frontier AI provider in regulated operations, map the current regulatory landscape for each. Understand which agencies approve and which agencies restrict. Document your reasoning for vendor selection.
Build redundancy into critical AI deployments. The Anthropic blacklisting happened in days. No warning. No transition period. If a single executive order or agency designation can cut off your access to a critical AI tool overnight, your continuity plan is incomplete. We saw the same dynamic play out when Anthropic killed OAuth access and third-party developers lost their integrations overnight: platform dependency is a business risk, not just a technical one.
Track the litigation. The Anthropic v. DOD cases are moving fast. The San Francisco injunction may be appealed. The D.C. case continues. The outcomes will shape how aggressively the government can use supply chain risk designations against AI companies, and by extension, how much vendor risk you’re carrying in your own AI stack.
The Mythos paradox is not just an Anthropic story. It’s a preview of how AI governance is going to work in the U.S. for the foreseeable future: messy, political, and contradictory. The White House AI framework pushing federal preemption only adds another layer of contradiction for companies trying to build a coherent compliance posture. Plan accordingly.
When the same AI company is blacklisted and recommended by different agencies, compliance isn’t straightforward. Book a diagnostic.