▼ Regressing · Major · Design · 2.D.b · US · Apr 13, 2026
In Jane Doe v. OpenAI (TRO entered April 13, 2026), plaintiff had filed a formal Notice of Abuse on Nov 13, 2025; OpenAI called the report "extremely serious and troubling" but took no product action before further harm occurred.
Why: Abuse-reporting channel existed but failed to trigger duty-of-care response; design failure now in court record.
Tag: User reporting / abuse channels

▲ Advancing · Major · Laws · 2.L.a · US · Apr 13, 2026
SF Superior Court entered a limited TRO in Doe v. OpenAI (CGC-26-635725) on April 13 2026 after plaintiff alleged ChatGPT facilitated her stalker despite abuse reports; liability theory advances.
Why: First TRO against frontier lab for foreseeable chatbot harm; key tort-liability precedent for duty-of-care cases.
Tag: Liability for foreseeable AI harms

▲ Advancing · Laws · 2.L.c · US · Apr 9, 2026
Florida AG James Uthmeier opened a consumer-protection investigation into OpenAI on April 9 2026 focused on deceptive claims and harm to minors; joins Texas AG actions.
Why: State AGs picking up slack from federal retreat; multi-state enforcement pattern now emerging.
Tag: Consumer protection enforcement against deceptive AI claims

▲ Advancing · Design · 2.D.b · US · Apr 8, 2026
OpenAI published Child Safety Blueprint on April 8 2026 detailing reporting channels, provider coordination, and safety-by-design controls for AI-enabled child exploitation.
Why: Lab self-publishing reporting-channel design; voluntary but raises industry baseline for abuse channels.
Tag: User reporting / abuse channels

▲ Advancing · Major · Norms · 2.N.b · GLOBAL · Apr 7, 2026
Stanford HAI AI Index 2026 Public Opinion chapter documents declining global trust in AI companies and broad support for mandated AI disclosure across surveyed countries.
Why: Global consensus on AI accountability expectations; not a US-only phenomenon.
Tag: Public expectation of company accountability for AI harms

▲ Advancing · Major · Laws · 2.L.b · US · Mar 24, 2026
Governor Ferguson signed HB 2225 on March 24 2026, extending duty-of-care requirements to AI companion chatbots in Washington with private right of action; effective Jan 1 2027.
Why: Second state to adopt CA-style duty-of-care statute for AI chatbots within 90 days; pattern replicating.
Tag: Duty-of-care statutes applied to AI

▼ Regressing · Laws · 2.L.b · US · Mar 17, 2026
Colorado AI Policy Work Group released March 17 2026 draft to repeal-and-replace the Colorado AI Act, eliminating its "reasonable care" duty and narrowing scope to ADMT disclosure obligations.
Why: Working-group proposal strips explicit duty-of-care language from what was the leading state AI law; signals backsliding.
Tag: Duty-of-care statutes applied to AI

▲ Advancing · Major · Norms · 2.N.b · US · Mar 12, 2026
Pew Research Center survey found 50% of US adults are more concerned than excited about AI, up from 37% in 2021; majority want stricter accountability for AI harms.
Why: Public expectation of AI accountability has hardened sharply since 2021; creates political cover for duty-of-care regulation.
Tag: Public expectation of company accountability for AI harms

▼ Regressing · Major · Norms · 2.N.c · US · Feb 9, 2026
Mrinank Sharma, Head of Safeguards Research at Anthropic, resigned publicly Feb 9 2026 citing inability to fulfill safety duties; follows a wave of safety-team departures from frontier labs.
Why: Senior safety-lead departure signals lab-level duty-of-care norms are collapsing internally, not strengthening.
Tag: Researcher and whistleblower protections as norm

▲ Advancing · Design · 2.D.a · US · Feb 6, 2026
Anthropic published Claude Opus 4.6 system card in early February 2026 detailing ASL-3 safeguards, CBRN uplift evaluations, and pre-deployment red-teaming.
Why: Raises disclosure floor for safety-by-design; competitive pressure on other labs to match.
Tag: Safety-by-design practices

▼ Regressing · Major · Norms · 2.N.c · US · Jan 20, 2026
OpenAI fired VP of Product Policy Ryan Beiermeister in early January 2026 after she raised concerns about the adult-mode rollout and the strength of child-exploitation guardrails; she disputes the discrimination claim OpenAI cited.
Why: Frontier lab terminating a safety-policy VP after internal dissent corrodes the emerging norm of researcher protection.
Tag: Researcher and whistleblower protections as norm

▲ Advancing · Major · Laws · 2.L.d · US · Jan 5, 2026
MDL court ordered OpenAI to produce ~20 million ChatGPT conversation logs to consolidated class-action plaintiffs on January 5 2026; historic private-enforcement discovery order.
Why: Largest AI-related discovery order to date; enables private plaintiffs to pursue duty-of-care theories at scale.
Tag: Private right of action / class action enablement

▲ Advancing · Major · Laws · 2.L.b · US · Jan 1, 2026
California SB 243 took effect January 1 2026, imposing duty-of-care obligations on AI companion chatbot operators including crisis response, disclosure, and age verification.
Why: First US state duty-of-care statute specifically for AI chatbots, with private right of action; template law.
Tag: Duty-of-care statutes applied to AI

▲ Advancing · Norms · 2.N.a · US · Dec 22, 2025
NYC Bar Association issued binding ethics guidance on lawyer use of AI transcription and note-taking tools, covering confidentiality, client consent, and supervision duties.
Why: Major bar association codifies AI-specific duty of care for lawyers; extends professional standards into AI use.
Tag: Professional standards and codes of conduct

▼ Regressing · Major · Laws · 2.L.c · US · Dec 22, 2025
FTC formally set aside the Rytr consent order on December 22 2025, reversing one of the flagship Operation AI Comply cases and signaling softer deceptive-AI enforcement.
Why: Federal regulator retreating from deceptive-AI consent orders undermines consumer-protection duty-of-care layer.
Tag: Consumer protection enforcement against deceptive AI claims