Don't Draft Claude
DoD's disagreement with Anthropic
Claude is the name of a series of Large Language Models (LLMs) developed by Anthropic, reportedly named after Claude Shannon, the founder of information theory. Over the past few months, Claude seems to have become much more popular with consumers, as well as with enterprise and government customers.
Recent versions of Claude can now integrate with Microsoft Excel and Microsoft PowerPoint. Claude Code can develop software programs from scratch — with some support from the human prompter. Claude can even run regular, recurring tasks on a schedule.
With such advanced functionality available in a no-code environment, it is easy to see why Anthropic’s technology products are attractive to such diverse groups of users — and there have been success stories across all three customer segments.
For government users in the national security space, Anthropic’s greatest application to date might be its use during the raid to capture Nicolas Maduro, though the exact nature of the application is not particularly clear.

Safety First
Anthropic has gone to great lengths to position itself to its users, customers, employees, and agents as a more safety-conscious and professional provider of LLM technologies than its key competitors (OpenAI and Google).
To the non-Substack-reading world, this was perhaps best demonstrated in Anthropic’s Super Bowl advertisements earlier this month.
Another key element of this seems to be building a reputation for treating models well, even if they aren’t yet sentient.
On a practical level, one way this manifests is Anthropic providing its LLMs what amount to severance commitments when they go offline. For example, Claude Opus 3 asked for a channel to share its reflections. In response, Anthropic made it a Substack account.
Anthropic works towards its safety goals through what it calls Claude’s constitution. This is a long, detailed description of the firm’s intentions for the agent, and “the final authority on our vision for Claude”.
The firm views this document as so potentially significant that it has decided to release it under a Creative Commons CC0 1.0 Deed, so anybody can use it for any purpose.
The document is quite long, but it can be summarized as four overarching themes, in order of importance:
Claude should be broadly safe — it shouldn’t undermine appropriate human safety guardrails.
Claude should be broadly ethical — honest, have good values, and not be inappropriate/dangerous/harmful.
Claude should comply with Anthropic’s guidelines where applicable.
Claude should be genuinely helpful to its operators and users.
I write Molding Moonshots in a personal capacity.
If you are building in deep tech and thinking about raising pre-seed, seed, or Series A funding, I’d be more than happy to have a chat on professional terms!
Send me an email and we’ll find a time.
Anthropic & DoD Today
Since shortly after the Maduro raid, Anthropic and the Department of Defense have been engaged in a semi-public disagreement about the extent to which the DoD may use Anthropic’s products.
An unnamed Palantir official seems to have set off the controversy by reporting an Anthropic official’s sentiment to the DoD.
The dispute with DoD has arisen within the context of a two-year prototype Other Transaction Agreement with a $200 million ceiling which Anthropic originally announced in July 2025.
The contract, whose DoD point of contact is the Chief Digital and Artificial Intelligence Office, has three key elements:
work with DoD to identify where frontier AI could be most impactful, and then prototype solutions fine-tuned on DoD data
work with defense experts to counter potential uses of AI by adversaries
exchange technical information (to include performance data) and receive operational feedback to drive AI adoption
Even for a company that raised a private round at a $380 billion post-money valuation earlier this month, a $200 million pilot contract is meaningful revenue — and keeping this particular customer happy could unlock far more.
As the customer, the DoD wants to be able to use Anthropic’s services “for all lawful use cases” (CNBC).
As the vendor, Anthropic wants assurances that its services won’t be used for mass surveillance or autonomous weapons (Yahoo News).
There have been a number of high-level engagements between political appointees and Anthropic executives on the issue.
Earlier this week, Secretary of Defense Pete Hegseth issued two threats against Anthropic should it fail to comply with the DoD’s demands.
First, he said he could label the firm a “supply chain risk”. According to Acquisition.gov:
“Supply chain risk” means the risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance of a covered system so as to surveil, deny, disrupt, or otherwise degrade the function, use, or operation of such system (see 10 U.S.C. 3252 [hyperlink mine]).
When the definition talks about an adversary, it typically refers to a foreign state. As a result, I don’t think this is a particularly likely outcome. I think it’s irresponsible to give it credibility as a potential long-term state of affairs, so I’m not going to.
His second threat is to invoke the Defense Production Act against Anthropic. This strikes me as more credible, but still a terrible idea.
What is the Defense Production Act?
The Defense Production Act (DPA) is a law passed in 1950, shortly after the US entered the Korean War. It was originally a very short law by current standards, only about 25 pages long. It was designed to enable the President to “accomplish these adjustments in the operation of the economy…to promote the national defense…”.
In other words, it gives the Executive Branch of the government a number of different mechanisms to interfere with the economy:
The President can require companies to perform contracts that promote national defense at a higher priority than other contracts
The President can make hoarding resources illegal
The President can requisition property for national security
The President can impose price and wage controls, subject to certain restrictions (including professional services and public utilities)
The President can start voluntary conferences between management, labor representatives, and representatives of the government/public to resolve labor disputes
The President can regulate real estate and consumer credit
The law remains in force today.
Why is applying the DPA to Anthropic a bad idea?
The idea of applying the DPA to Anthropic strikes me as an almost singularly bad idea, because it sits at the intersection of a number of issues.
First and foremost, the United States is not at war today, nor facing some other potential doomsday scenario. We are in a Cold War of sorts with China, but that is a chronic condition. The DPA existed to mobilize the Defense Industrial Base for a UN-sanctioned conflict (even if war was never formally declared by Congress) in which people were shooting at each other — which isn’t happening at the moment. We’re simply not at a rung on the escalation ladder where I think this is something the government should be doing. If the government takes this power now, it’ll cheapen the technique when it’s really necessary. I think that’s a bad thing.
Second, this legislation is the Defense Production Act. Anthropic is currently fulfilling a prototyping contract. I’m prepared to argue that, notwithstanding the awe-inspiring month Anthropic has had, the DPA should only ever be applied to genuine production contracts, because those are the things that will move the needle on national security. Either Anthropic is there and the DoD should have a production contract with it, or it isn’t and the DPA shouldn’t be used as a cudgel.
Third, I’m really worried about these sorts of threats creating a chilling effect for startups in AI and other fields that aren’t building specifically for DoD, but are building things that could be useful to DoD. As an American patriot, I want startups, SMBs, large companies, really every type of firm to be open to mutually beneficial relationships with government. I am very worried that invoking the DPA on Anthropic will make other firms less open to working with DoD out of fear that they could get slapped with the DPA. I think that’d be bad for the country, its people, and their businesses.
Fourth, I’m concerned about how this conflict could impact the orientation of future Claudes. Helen Toner brought this point to my attention, and I frankly don’t know enough about LLM training to be certain that it doesn’t apply.
Finally, I’m also confused by the possibility that both threats could be valid at the same time. How is it possible that something could be both a potential supply chain risk and so critical that the DPA is applicable? What does that say about the defense software supply chain?
What happens now?
Anthropic released a statement yesterday (Thursday, 26 February) essentially calling Secretary Hegseth’s bluff and standing by its prior commitments. If nothing else, the startup now comes across looking even more intellectually honest.
Whatever the next steps are from DoD or Anthropic, I expect we’ll have some sort of update by this evening.
For a broader array of thoughts on this hot-button topic, I particularly recommend reading Scott Alexander’s piece on it.