2021
FEB 3, 2021
Anthropic founded
Registered in California by Dario Amodei, Daniela Amodei, and ~9 former OpenAI researchers who left over disagreements about balancing capability and safety. Initial funding: $124M.
2021
Incorporated as Public Benefit Corporation
Delaware PBC structure imposes a dual fiduciary obligation: increase shareholder value AND advance the stated mission of ensuring transformative AI helps people and society flourish.
2022 – 2023
2022 – 2023
Google invests ~$2B
Google becomes one of Anthropic's earliest major strategic backers. No board seats. No voting rights. Ownership capped at 15%.
SEP 2023
Amazon invests $1.25B; LTBT announced
Amazon makes initial investment. Separately, Anthropic announces the Long-Term Benefit Trust — five financially disinterested trustees holding special stock to elect board members over time. Full Trust Agreement: unpublished.
2024
MAR 2024
Amazon total reaches $4B
Additional $2.75B investment. No board seat. No voting rights. Capped below 33%.
AUG 12, 2024
Common Sense Media rates Claude "Minimal Risk"
CSM notes Anthropic "generally does not use your prompts and results to train its models." This becomes the basis for the "Use Data Responsibly: Minimal" rating.
The rating has not been updated since.
SAFETY FRAMING
NOV 7, 2024
Claude deployed on classified networks
Partnership with Palantir and AWS to deploy Claude at Impact Level 6 — one level below Top Secret. First AI model on Pentagon classified systems. Not widely reported at the time.
DEFENSE / GOVERNMENT
NOV 2024
Amazon total reaches $8B
Another $4B committed. Still no board seat. No voting rights. Largest single investor.
INVESTMENT
2025
JUL 2025
$200M DOD contract secured
Claude integrated into mission workflows on classified networks for defense and intelligence. Anthropic becomes the only AI model company deployed across Pentagon classified systems.
DEFENSE / GOVERNMENT
AUG 28, 2025
Training data policy reversed
Previously: consumer conversations not used for training, deleted within 30 days. New policy: training on by default, opt-out toggle pre-set to "On," data retention extended to 5 years. Consent flow designed with prominent "Accept" button and smaller toggle underneath. Enterprise/API users excluded.
This directly invalidates the basis of the CSM rating.
PRIVACY REVERSAL
NOV 2025
Microsoft ($5B) and Nvidia ($10B) enter
Anthropic commits to $30B in Azure compute. Valuation reaches ~$350B. Claude becomes the only frontier model available on all three major clouds. Total outside investment exceeds $16B. Total cloud commitments: $80B through 2029.
INVESTMENT / INFRASTRUCTURE
2026: JANUARY
JAN 2026
Hegseth memorandum
Defense Secretary issues AI strategy memo directing all DOD AI contracts to include "any lawful use" language within 180 days. Directly contradicts Anthropic's contract restrictions.
DEFENSE / GOVERNMENT
2026: FEBRUARY
FEB 9
Head of Safeguards Research resigns
Mrinank Sharma departs. Public statement: "Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions." Warning: "The world is in peril." 18 days before the presidential ban.
INTERNAL SAFETY
FEB 11
Zoë Hitzig leaves OpenAI
Publishes a NYT essay criticizing ChatGPT's ad implementation. Joins Anthropic weeks later as the founding hire of the Institute. The most symbolically loaded hire.
FEB 12
$30B Series G at $380B valuation
Largest AI funding round to date.
INVESTMENT
FEB 24 — THE TRIPLE MOVE
Three things happen on the same day
1. Hegseth ultimatum: Agree to unrestricted use by 5:01 PM Friday Feb 27 or face consequences.
2. RSP v3.0 released: Removes the hard commitment to pause model development at risk thresholds, replacing it with "nonbinding but publicly-declared goals." Chief Science Officer: "We felt that it wouldn't actually help anyone for us to stop training AI models." Safety commitments softened the same day the Pentagon demanded softer safety commitments.
3. Distillation attacks article published: Reveals DeepSeek, Moonshot, MiniMax ran 16M+ exchanges through 24K fraudulent accounts to steal Claude's capabilities. Frames Anthropic as defending American AI from Chinese theft. Positions Anthropic as essential to national security — on the same day it's told it's a liability to national security.
DEFENSE
SAFETY
STRATEGIC
FEB 27 — THE BAN
Presidential directive and supply chain risk designation
Trump directs all federal agencies to immediately cease using Anthropic's technology. Hegseth designates Anthropic a supply chain risk — first time this designation (traditionally for foreign adversaries) is applied to an American company.
See FASCSA comparative analysis.
Hours later: OpenAI announces Pentagon deal with the same three prohibitions — framed as voluntary commitments rather than contractual restrictions. Sam Altman later calls it "opportunistic and sloppy."
DEFENSE / GOVERNMENT
2026: MARCH
MAR 4
OpenAI employees "fuming"
CNN reports internal backlash at OpenAI over the Pentagon deal.
MAR 5 – 6
Formal designation; removal ordered
DOD officially notifies Anthropic. Internal memo orders removal from nuclear weapons, missile defense, and cyber warfare systems within 180 days. 35 former military officials call it a "dangerous precedent."
DEFENSE / GOVERNMENT
MAR 9 — ANTHROPIC SUES
Two federal lawsuits filed
California federal court (Judge Rita F. Lin) and D.C. Circuit. Claims: denied due process, First Amendment retaliation, president lacks authority.
See legal analysis.
LEGAL
MAR 10
Amicus briefs filed
37 engineers from OpenAI and Google file a joint brief supporting Anthropic (as individuals). Microsoft files a brief supporting Anthropic's request for a temporary restraining order.
LEGAL
MAR 11 — THE INSTITUTE
Anthropic Institute announced
Consolidates existing internal teams under Jack Clark ("Head of Public Benefit"). Announced between filing lawsuits and filing emergency motions.
No separate legal identity. No independent board. No published charter. No editorial independence guarantee. Internally funded.
Clark tells The Verge it was "planned since November."
See structural independence analysis.
GOVERNANCE
MAR 18 — THE STUDY
Institute publishes "What 81,000 People Want from AI"
First major Institute output. Claude interviewed Claude users about Claude. Claude classified the responses. Anthropic published through its own Institute. No external IRB, no independent review, no published classifier validation. Self-selected sample from tens of millions of accounts.
See methodology analysis.
GOVERNANCE
MAR 24 — UPCOMING
Preliminary injunction hearing
Judge Rita F. Lin, N.D. Cal. First judicial test of whether a supply chain risk designation can be imposed on an American company through the process used here.
LEGAL
THE PATTERN
Safety commitments softened the same day the Pentagon demanded softer safety commitments.
National security defense published the same day the company was told it was a national security threat.
The Institute launched between filing lawsuits and filing emergency motions.
The Institute's first publication was a self-conducted study showing people love AI — published during the week they need public support.
Every move has a plausible standalone explanation, which is itself part of the pattern.
Nothing happened in isolation. Nothing was early. Nothing was late.