Behind the AI

Who funds, controls, and profits from the AI tools you use every day? Independent assessments. No corporate sponsors. Plain language.

Refused Pentagon's surveillance terms at significant financial cost.

Key Facts

Company: Anthropic
Product: Claude
Grade: Trustworthy
Overall Score: 4.4 / 5
Last Updated: 2026-03-27
Military Status: Refused
Last Reviewed: 2026-03-27
Company Contacted: Not yet contacted
Method: Email to press@anthropic.com
Response: No response received
Data Confidence: High
Assessed by: Behind the AI Research Team
Founded: 2021
HQ: San Francisco, USA
Structure: Public Benefit Corporation
CEO: Dario Amodei
Valuation: $380B (Feb 2026)
Revenue: $14B ARR
Core Identity

Safety-first enterprise AI. Founded by ex-OpenAI researchers who left over safety disagreements. Roughly 80% of revenue comes from enterprise customers. Uses a Constitutional AI approach to alignment.

Military & Government

Anthropic held a $200M Pentagon contract but refused to remove its contractual restrictions on mass domestic surveillance and autonomous weapons. The Pentagon responded by designating the company a 'supply chain risk,' and President Trump ordered federal agencies to stop using Claude. Anthropic filed lawsuits challenging both actions. Google's Jeff Dean and 30+ employees at rival AI firms filed an amicus brief in support, as did Microsoft, retired military leaders, and Catholic theologians. On March 26, 2026, U.S. District Judge Rita Lin blocked the supply chain risk designation in a 43-page ruling, finding Anthropic is likely to succeed on the merits. Lin called the government's actions 'classic illegal First Amendment retaliation' that appeared 'designed to punish Anthropic,' and her ruling also blocked Trump's order for agencies to stop using Claude. The government has 7 days to appeal to the 9th Circuit.

Assessment by Pillar

Data Practices

Does not train on user conversations by default on any tier. No ads; committed to remaining ad-free.

Funding Transparency

Major investors are publicly disclosed. Heavy dependence on Big Tech for funding and compute creates 'Compute Paradox' concerns.

Military & Gov't

Refused Pentagon's 'any lawful use' demand at significant cost. Drew contractual red lines on surveillance and autonomous weapons.

Model Transparency

Models are closed-source, but Anthropic publishes extensive safety research, alignment methodology, and model cards.

User Rights

Full history deletion. No training on conversations. No ads. Enterprise data isolation options.

Key Findings

STRENGTH

Your conversations are not used for training at any tier, including free. This is the strongest default data privacy posture among major AI providers.

STRENGTH

The Pentagon refusal cost Anthropic its government contracts and earned a 'supply chain risk' designation. This demonstrated willingness to accept material financial harm to maintain stated principles.

CONTEXT

Anthropic's heavy reliance on Amazon, Google, Microsoft, and Nvidia for funding and compute creates a structural dependency. If those relationships change, Anthropic's independence could be tested.

CONTEXT

Models are closed-source. Published safety research is extensive, but the models themselves cannot be independently audited.

Who Funds Them

Amazon (AWS): Primary cloud; $8B+
Google: Cloud partner; $2B+
Microsoft: Big Tech; up to $5B
Nvidia: Chip monopoly; up to $10B
GIC (Singapore): Sovereign wealth; co-led Series G
Ontario Teachers' Pension Plan: Canadian pension fund; participated

Recent News

Feb 2026: Raised $30B at $380B valuation
Feb 2026: Refused Pentagon's demands; designated 'supply chain risk'
Mar 2026: Filed lawsuits against government designation
Mar 2026: Google's Jeff Dean and 30+ employees at rival AI firms filed an amicus brief in support
Mar 26, 2026: Federal judge blocked the Pentagon's supply chain risk designation in a 43-page ruling, calling the government's actions 'Orwellian' and 'designed to punish.' Ruling stayed 7 days for appeal.

Frequently Asked Questions — Anthropic

Does Claude train on my conversations?

Anthropic does not train on user conversations at any tier, including free. This is the strongest default data privacy posture among major AI providers.

Why was Anthropic designated a supply chain risk?

Anthropic refused the Pentagon's demand to allow 'any lawful use' of Claude, including mass surveillance and autonomous weapons. The Pentagon designated it a supply chain risk in response. A federal judge blocked this designation on March 26, 2026.

Who funds Anthropic?

Anthropic has raised $67B+ from Amazon ($8B+), Google ($2B+), Microsoft (up to $5B), Nvidia (up to $10B), and others. Ontario Teachers' Pension Plan is also an investor.

Sources

  1. $30B Series G at $380B valuation (CNBC, Feb 12, 2026)
  2. Total raised $67B+ from 90 investors (Tracxn, Mar 2026)
  3. $14B ARR; Claude Code >$2.5B run rate (Axios, Feb 12, 2026)
  4. Amazon invested $8B+; Google $2B+ (Wikipedia/Anthropic, Mar 2026)
  5. Ontario Teachers' Pension Plan participated (Anthropic press release, 2025)
  6. Refused Pentagon 'any lawful use' demand (NPR, Feb 27, 2026)
  7. Designated supply chain risk by Pentagon (NPR, Feb 27, 2026)
  8. Trump ordered all federal agencies to stop using Claude (NPR, Feb 27, 2026)
  9. Filed lawsuits in CA and DC courts (Axios, Mar 9, 2026)
  10. Google's Jeff Dean and 30+ rival employees filed amicus brief (Fortune, Mar 10, 2026)
  11. Microsoft, retired military leaders, Catholic theologians filed supporting briefs (AP, Mar 26, 2026)
  12. Judge Rita Lin blocked supply chain risk designation; called actions 'Orwellian' and 'designed to punish' (The Hill, Mar 26, 2026)
  13. Lin ruled Anthropic likely to succeed on merits; called Trump order 'classic First Amendment retaliation' (CNN, Mar 26, 2026)
  14. Ruling stayed 7 days for potential appeal (CBS News, Mar 26, 2026)
  15. ChatGPT uninstalls surged 295%; Claude hit #1 on App Store (TechCrunch, Mar 2, 2026)
  16. Public Benefit Corporation structure (Wikipedia, Mar 2026)
  17. Long-Term Benefit Trust governance (Wikipedia, Mar 2026)
  18. Super Bowl ads emphasizing ad-free commitment (Wikipedia, Feb 2026)
  19. Does not train on user conversations by default (Anthropic product documentation, current)
  20. Previously partnered with Palantir and AWS for defense/intelligence (Wikipedia, Nov 2024)

Disclaimer

Assessments reflect publicly available information and the published methodology of the Behind the AI Research Team. Grades represent analytical assessments derived from the published scoring framework, not statements of fact about internal company operations. If you believe any claim is inaccurate, contact corrections@behindtheai.org with the specific claim and your evidence.