
Pentagon Partners with Seven AI Companies, Blocks One Over Security Concerns

The Pentagon signed agreements with seven major AI companies to use their technology on classified military networks, but notably excluded Anthropic over disputes about how the military could deploy its AI.

By Martin Holloway

The Pentagon announced on May 1, 2026, that it had reached deals with seven major AI companies to use their technology on the Defense Department's most secure computer networks. The companies are SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, and Amazon Web Services.

Anthropic, an AI company focused on safety, was left out. The Pentagon formally declared Anthropic a supply chain risk — essentially a security threat — earlier in 2026, which blocks the Pentagon and its contractors from using Anthropic's Claude AI tool.

Why Anthropic Was Blocked

According to Anthropic CEO Dario Amodei, the Pentagon said Anthropic poses a supply chain risk. The disagreement came down to rules: Anthropic wanted stricter limits on how the military could use Claude, while the other companies accepted looser restrictions.

This designation matters beyond just Pentagon contracts. Under federal rules, other government contractors may also have to stop using Claude when it's part of their military work. However, Amodei clarified that the Pentagon's ban applies specifically to Claude being used directly in military contracts, not to broader commercial use of the software.

What This Means for Military Networks

The seven approved companies will now have access to the Pentagon's most classified computer systems, those handling secret and top-secret information. This is a significant shift: instead of building its own AI systems, the Pentagon is partnering directly with the companies that make cutting-edge AI.

Why? AI systems built solely for defense cannot keep pace with the AI that commercial companies create for billions of users. The Pentagon's approach recognizes this gap and tries to close it by bringing in the best technology available.

SpaceX being included suggests the deals cover space-based military systems, not just general AI. NVIDIA's involvement points to the computer chips and computing power needed to run these AI systems.

The Anthropic Question

The broader context here involves a growing divide in the AI industry. Some AI companies, like Anthropic, prioritize safety and caution. Others, like OpenAI, have shifted toward accepting military uses. Anthropic's approach — emphasizing careful, responsible AI — appears to clash with what the military needs: flexible systems that can operate in real combat situations without heavy restrictions.

The Pentagon's choice to exclude Anthropic also favors large, established companies. Smaller AI startups cannot easily meet the security clearances and technical requirements needed for classified networks. This reinforces the position of companies like Google and Microsoft while limiting variety in what the Pentagon can use.

The Technical Reality

Getting commercial AI to work on the Pentagon's most secure networks is complex. These networks are isolated from the internet (called air-gapped), have strict rules about where data can live, and must record everything that happens. Consumer-facing AI services are built for the open internet and need significant reworking.
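To make the audit requirement concrete, here is a minimal sketch of what "record everything that happens" can look like in practice: every model request and response gets a timestamped log entry. This is an illustrative pattern, not any vendor's actual integration; the function names are hypothetical, and real classified systems would be far more elaborate. Note the design choice of logging hashes rather than raw text, so the audit trail itself does not accumulate sensitive content.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("audit")

def audited_query(model_fn, prompt: str, user: str) -> str:
    """Call a model function and record an audit entry for the exchange.

    model_fn is a hypothetical stand-in for whatever AI backend is in use.
    """
    response = model_fn(prompt)
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        # Hashes, not raw text: the log proves what was said without storing it.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    log.info(json.dumps(entry))
    return response

# Demonstration with a stand-in "model" that just uppercases its input.
print(audited_query(lambda p: p.upper(), "status report", user="analyst1"))
```

On an air-gapped network, the same wrapper idea applies, but the model and the log store would both live entirely inside the isolated enclave.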

Military AI also has a harder job than chatbots do. It needs to explain why it reaches a conclusion, deliver the same result every time under the same conditions, and have safe fallbacks if something goes wrong. Regular AI systems — ones that generate text or images — do not necessarily have these features.
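One way to picture the "same result every time, with a safe fallback" requirement is a wrapper that re-runs a model on identical input and hands off to a simpler backup system when outputs differ or the call fails. This is a sketch under assumed names (primary_fn and fallback_fn are hypothetical stand-ins), not how any of the approved systems actually work:

```python
def reliable_answer(primary_fn, fallback_fn, prompt, retries=2):
    """Return the primary model's output only if it is reproducible; else fall back."""
    try:
        first = primary_fn(prompt)
        # Determinism check: the same input must yield the same output every time.
        for _ in range(retries):
            if primary_fn(prompt) != first:
                return fallback_fn(prompt)
        return first
    except Exception:
        # Safe fallback if the primary system errors out entirely.
        return fallback_fn(prompt)

# Stand-in "models" for demonstration only.
stable = lambda p: p.upper()              # always gives the same answer
counter = iter(range(100))
flaky = lambda p: f"{p}-{next(counter)}"  # gives a different answer each call

print(reliable_answer(stable, lambda p: "FALLBACK", "status"))  # deterministic, kept
print(reliable_answer(flaky, lambda p: "FALLBACK", "status"))   # nondeterministic, rejected
```

In a real deployment the fallback would be a verified, rule-based system rather than another model, and the determinism check would be enforced by the decoding configuration rather than by repeated calls; the sketch only illustrates the shape of the requirement.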

What Happens Next

The Pentagon is not putting all its eggs in one basket. By signing deals with multiple companies instead of picking just one, the Pentagon gets backup options and keeps competition alive. This is different from older Pentagon contracts, which often had one big winner.

The timing matters too. These agreements came early in 2026, suggesting a coordinated push to speed up AI use across the military.

In my view, this moment marks a turning point for military AI. The Pentagon is now set up to use advanced AI across defense operations. The choice to exclude Anthropic signals that the military prioritizes operational capability over the more cautious safety approaches some companies push for. Whether this trade-off pays off depends on whether the approved companies can deliver AI systems that are both safe and reliable under pressure. History suggests the answer will be mixed: valuable and challenging in ways we cannot yet predict.