Anthropic
● Trustworthy: Refused Pentagon's surveillance terms at significant financial cost.
Key Facts
- Company: Anthropic
- Product: Claude
- Grade: Trustworthy
- Overall Score: 4.4 / 5
- Last Updated: 2026-03-27
- Military Status: Refused
Safety-first enterprise AI. Founded by ex-OpenAI researchers who left over safety disagreements. About 80% of revenue comes from enterprises. Uses a Constitutional AI approach to alignment.
Anthropic held a $200M Pentagon contract but refused to remove restrictions on mass domestic surveillance and autonomous weapons. In response, the Pentagon designated the company a 'supply chain risk' and Trump ordered federal agencies to stop using Claude. Anthropic filed lawsuits. Google's Jeff Dean and 30+ employees of rival companies filed an amicus brief in support; Microsoft, retired military leaders, and Catholic theologians also filed supporting briefs. On March 26, 2026, U.S. District Judge Rita Lin blocked the supply chain risk designation in a 43-page ruling, finding Anthropic likely to succeed on the merits. Lin called the government's actions 'classic illegal First Amendment retaliation' that appeared 'designed to punish Anthropic.' The ruling also blocked Trump's order for agencies to stop using Claude. The government has 7 days to appeal to the 9th Circuit.
Assessment by Pillar
- Does not train on user conversations by default across all tiers. No ads. Committed to remaining ad-free.
- Major investors are publicly disclosed, though heavy Big Tech dependency creates Compute Paradox concerns.
- Refused the Pentagon's 'any lawful use' demand at significant cost. Drew contractual red lines on surveillance and autonomous weapons.
- Models are closed-source, but Anthropic publishes extensive safety research, alignment methodology, and model cards.
- Full history deletion. No training on conversations. No ads. Enterprise data isolation options.
Key Findings
Your conversations are not used for training at any tier, including free. This is the strongest default data privacy posture among major AI providers.
The Pentagon refusal cost Anthropic its government contracts and earned a 'supply chain risk' designation. This demonstrated willingness to accept material financial harm to maintain stated principles.
Anthropic's heavy reliance on Amazon, Google, Microsoft, and Nvidia for funding and compute creates a structural dependency. If those relationships change, Anthropic's independence could be tested.
Models are closed-source. Published safety research is extensive, but the models themselves cannot be independently audited.
Frequently Asked Questions — Anthropic
Does Claude train on my conversations?
Anthropic does not train on user conversations at any tier, including free. This is the strongest default data privacy posture among major AI providers.
Why was Anthropic designated a supply chain risk?
Anthropic refused the Pentagon's demand to allow 'any lawful use' of Claude, including mass surveillance and autonomous weapons. The Pentagon designated it a supply chain risk in response. A federal judge blocked this designation on March 26, 2026.
Who funds Anthropic?
Anthropic has raised $67B+ from Amazon ($8B+), Google ($2B+), Microsoft (up to $5B), Nvidia (up to $10B), and others. Ontario Teachers' Pension Plan is also an investor.
Sources
- [1] $30B Series G at $380B valuation — CNBC, Feb 12, 2026
- [2] Total raised $67B+ from 90 investors — Tracxn, Mar 2026
- [3] $14B ARR; Claude Code >$2.5B run rate — Axios, Feb 12, 2026
- [4] Amazon invested $8B+; Google $2B+ — Wikipedia/Anthropic, Mar 2026
- [5] Ontario Teachers' Pension Plan participated — Anthropic press release, 2025
- [6] Refused Pentagon 'any lawful use' demand — NPR, Feb 27, 2026
- [7] Designated supply chain risk by Pentagon — NPR, Feb 27, 2026
- [8] Trump ordered all federal agencies to stop using Claude — NPR, Feb 27, 2026
- [9] Filed lawsuits in CA and DC courts — Axios, Mar 9, 2026
- [10] Google's Jeff Dean and 30+ rival employees filed amicus brief — Fortune, Mar 10, 2026
- [11] Microsoft, retired military leaders, Catholic theologians filed supporting briefs — AP, Mar 26, 2026
- [12] Judge Rita Lin blocked supply chain risk designation; called actions 'Orwellian' and 'designed to punish' — The Hill, Mar 26, 2026
- [13] Lin ruled Anthropic likely to succeed on merits; called Trump order 'classic First Amendment retaliation' — CNN, Mar 26, 2026
- [14] Ruling stayed 7 days for potential appeal — CBS News, Mar 26, 2026
- [15] ChatGPT uninstalls surged 295%; Claude hit #1 on App Store — TechCrunch, Mar 2, 2026
- [16] Public Benefit Corporation structure — Wikipedia, Mar 2026
- [17] Long-Term Benefit Trust governance — Wikipedia, Mar 2026
- [18] Super Bowl ads emphasizing ad-free commitment — Wikipedia, Feb 2026
- [19] Does not train on user conversations by default — Anthropic product documentation, Current
- [20] Previously partnered with Palantir and AWS for defense/intelligence — Wikipedia, Nov 2024
Disclaimer
Assessments reflect publicly available information and the published methodology of the Behind the AI Research Team. Grades represent analytical assessments derived from the published scoring framework, not statements of fact about internal company operations. If you believe any claim is inaccurate, contact corrections@behindtheai.org with the specific claim and your evidence.