Key Takeaways
- Anthropic was labeled a supply chain risk after refusing to permit unrestricted military use of its AI systems
- U.S. agencies shifted toward OpenAI models even as reports surfaced of continued battlefield use of Anthropic tools
- The Iran conflict exposed the growing role of AI-driven surveillance, targeting, and cloud infrastructure in modern warfare
Artificial intelligence has shifted from a theoretical debate to a live operational tool inside the U.S. defense ecosystem, and the latest clash between Anthropic and the Department of Defense has pushed that shift into full public view. The timing has been striking. The dispute unfolded in parallel with active U.S.-Israeli operations against Iran, creating a situation in which emerging technologies were shaping events even as companies argued over how those technologies should be governed.
The conflict escalated when U.S. Defense Secretary Pete Hegseth ordered Anthropic CEO Dario Amodei to approve unrestricted military use of the company's AI systems or risk termination of federal contracts. Amodei refused. That alone would have been notable, given the general trajectory of the AI industry over the past few years. But the Department of Defense then designated Anthropic a supply chain risk to national security, a move the company immediately vowed to challenge in court.
Here is where things became messy. Within hours of the designation, The Wall Street Journal reported that U.S. Central Command had already used Anthropic technology in operations in Iran through existing integrations. The report added a confusing twist: how was a tool considered too risky for procurement still reliable enough for active missions? No answer has been offered, and the ambiguity has left B2B technology leaders wondering how procurement risk frameworks can keep pace with battlefield urgency.
The earlier report that the U.S. military allegedly relied on Claude during the operation to capture Venezuelan President Nicolas Maduro raised similar questions. That system was apparently accessed through Palantir, a long-standing defense contractor, according to anonymous sources cited by the Journal. Whether those claims are fully accurate or partly anecdotal, they show how intertwined these companies have become within defense analytics stacks. Untangling those relationships will not be simple.
On the opposite side of the strategic divide, OpenAI secured a deal to provide AI systems to classified Pentagon networks. CEO Sam Altman publicly emphasized guardrails such as bans on domestic mass surveillance and requirements for human responsibility in the use of force. Days later, however, The Guardian reported that Altman privately told employees the company ultimately cannot control how the Pentagon deploys its tools. That kind of contradiction is not unusual in fast-moving tech sectors, although it does highlight the tension between public assurance and operational reality.
Meanwhile, a grassroots-style boycott dubbed QuitGPT began spreading across social platforms. In another era, celebrity endorsements might have seemed irrelevant to defense contracting. Yet the sight of Katy Perry and others lending support to Claude showed how public sentiment shapes brand perception in ways that even national security debates cannot fully escape.
A different part of the story emerged from the Middle East itself. Iran's strike on an Amazon Web Services data center in the United Arab Emirates reflected a growing belief among states that cloud infrastructure is now a legitimate target in conflict. The link to Israeli intelligence activity stored on AWS servers, as reported by +972 Magazine and The Guardian, pushed this conversation into operational reality. For companies selling cloud or data tools to governments, the risks suddenly looked more physical.
Palantir again featured prominently, given its Maven Smart System platform, which the U.S. Department of Defense uses to identify and track objects across satellite, drone, and sensor networks. The company also has a strategic partnership with Israel's Defense Ministry. For B2B providers, Palantir's role illustrates the commercial opportunity and reputational exposure that come with such deep integration into military operations.
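Public reporting says little about Maven's internals, but the underlying pattern, fusing detections from multiple sensor feeds into persistent object tracks, is standard in the tracking literature. The sketch below illustrates that general pattern with a deliberately naive greedy nearest-neighbor association; it is not a description of Palantir's actual system, and every name, field, and the `gate` threshold here is hypothetical.

```python
from dataclasses import dataclass, field
from math import dist

@dataclass
class Detection:
    """One object sighting from a single sensor feed (hypothetical schema)."""
    sensor: str       # e.g. "satellite", "drone", "ground_radar"
    position: tuple   # (x, y) in a shared coordinate frame
    timestamp: float

@dataclass
class Track:
    """A fused object track built from detections across sensors."""
    track_id: int
    detections: list = field(default_factory=list)

    @property
    def last_position(self):
        return self.detections[-1].position

def associate(tracks, new_detections, gate=5.0):
    """Greedy nearest-neighbor association: attach each new detection
    to the closest existing track within `gate`, else open a new track."""
    next_id = max((t.track_id for t in tracks), default=0) + 1
    for det in sorted(new_detections, key=lambda d: d.timestamp):
        best = min(tracks, key=lambda t: dist(t.last_position, det.position),
                   default=None)
        if best and dist(best.last_position, det.position) <= gate:
            best.detections.append(det)
        else:
            tracks.append(Track(next_id, [det]))
            next_id += 1
    return tracks

# Two sightings of the same object from different sensors fuse into one track.
tracks = associate([], [Detection("satellite", (0.0, 0.0), 0.0),
                        Detection("drone", (1.0, 1.0), 1.0)])
print(len(tracks), len(tracks[0].detections))  # 1 track, 2 fused detections
```

Real systems replace the nearest-neighbor step with motion models and probabilistic data association, but the basic shape, many feeds in, persistent tracks out, is what gives platforms of this kind their operational value and their reputational weight.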
Then there was the long-running Israeli intelligence campaign allegedly involving widespread access to Tehran traffic cameras, reported by the Financial Times. It is not a new idea that surveillance can shape conflicts, although the level of detail described in the reporting underscored how far capabilities have evolved. One has to ask how international law will treat this type of state-level penetration, especially as AI models take on more of the analysis load.
Legal experts have been clear that current humanitarian law does not explicitly regulate artificial intelligence. Even defining autonomous weapons is contentious. The International Committee of the Red Cross uses a strict definition requiring zero human involvement in target selection or use of force. The U.S. Department of Defense uses a looser one that allows for human oversight and cancellation authority. On paper, those differences may seem small. In practice, they open very different pathways for deployment.
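The gap between the two definitions is easier to see when reduced to predicates. The sketch below is one illustrative reading of the definitions as characterized above, not official ICRC or DoD policy logic; the `SystemProfile` fields and both functions are hypothetical constructions.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical description of a weapon system's decision loop."""
    human_selects_targets: bool   # does a person pick targets?
    human_authorizes_force: bool  # does a person approve each engagement?
    human_can_cancel: bool        # can an operator abort before effect?

def autonomous_under_icrc(p: SystemProfile) -> bool:
    # Strict reading: autonomous only with zero human involvement in
    # target selection or the use of force, a standing veto included.
    return not (p.human_selects_targets or p.human_authorizes_force
                or p.human_can_cancel)

def autonomous_under_dod(p: SystemProfile) -> bool:
    # Looser reading: the machine selects and engages targets itself;
    # human oversight and cancellation authority do not disqualify it.
    return not (p.human_selects_targets or p.human_authorizes_force)

# The same supervised system falls on different sides of the line:
supervised = SystemProfile(human_selects_targets=False,
                           human_authorizes_force=False,
                           human_can_cancel=True)
print(autonomous_under_icrc(supervised))  # False: the veto counts as involvement
print(autonomous_under_dod(supervised))   # True: oversight alone doesn't disqualify
```

A system that one framework classifies as autonomous, and therefore subject to restriction, can be ordinary supervised automation under the other, which is exactly why the definitional fight matters for deployment.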
The most disturbing claim linked to the Iran conflict came through social media speculation that AI tools might have selected an elementary school as a bombing target. While unverified, it was enough to prompt questions from journalists. CENTCOM declined to answer, saying only that it had nothing to provide at this time. The silence did not stop the conversation. It had already spread.
For technology companies, the convergence of battlefield urgency, cloud infrastructure, model governance, and geopolitical escalation is forcing new decisions faster than many leaders expected. Some of these choices are ethical. Some are contractual. Some are matters of national security. And all of them are happening at once.