▲ Advancing · Norms · 7.N.c · EUROPE · Apr 18, 2026
A new report reveals that a lobbying group representing major tech companies, including Microsoft, Amazon, Google, and Meta, successfully secured a provision in the EU to keep data center environmental impact data confidential.
Why: A new report scrutinizes tech giants' lobbying efforts, surfacing their influence in securing an EU provision to hide environmental data.
Topic: Concerns about democratic capture

▲ Advancing · Laws · 7.L.c · EUROPE · Apr 17, 2026
The European Commission has awarded a €180 million cloud computing contract to four European providers, aiming to bolster sovereign infrastructure.
Why: EU Commission awarded a €180M contract to four European cloud providers, advancing sovereign compute infrastructure.
Topic: Public-option AI and sovereign compute funding

▲ Advancing · Laws · 1.L.a · US · Apr 17, 2026
The CEO of Anthropic met with the White House chief of staff as the US government seeks access to evaluate the company's new Mythos model.
Why: The White House is actively seeking access to Anthropic's Mythos model, demonstrating executive pressure for frontier model evaluation.
Topic: Pre-deployment evaluation mandates

▲ Advancing · Laws · 3.L.b · EUROPE · Apr 16, 2026
A special panel convened to advise European Commission President Ursula von der Leyen on strategies for ensuring child online safety.
Why: EU Commission special panel convened to advise on strategies for child online safety, signaling ongoing policy development.
Topic: Protections for minors

▲ Advancing · Design · 1.D.a · US · Apr 16, 2026
Why: Detailed system card at release maintains frontier-lab transparency baseline.
Topic: Model cards / system cards published

▼ Regressing · Norms · 7.N.c · US · Apr 16, 2026
Political action committees representing the crypto and AI industries have raised $250 million ahead of the US midterm elections.
Why: AI and crypto PACs raising $250M for US midterms demonstrates massive industry financial influence aimed at shaping policy and elections.
Topic: Concerns about democratic capture

▲ Advancing · Norms · 5.N.a · US · Apr 16, 2026
A Brookings Institution article argues that U.S. states have the authority and responsibility to regulate the use of AI in the criminal justice system.
Why: Brookings article advocates for state-level regulation of AI in criminal justice, contributing to civil society pressure on the issue.
Topic: Public debate on AI surveillance and civil liberties

▲ Advancing · Laws · 7.L.b · EUROPE · Apr 16, 2026
The European Commission proposed measures under the Digital Markets Act to mandate Google's sharing of search engine data with third parties.
Why: The EU Commission proposed measures under the DMA requiring Google to share search engine data with third parties.
Topic: Interoperability and data portability mandates

▼ Regressing (Major) · Design · 2.D.b · US · Apr 13, 2026
In Jane Doe v. OpenAI (TRO entered April 13, 2026), the plaintiff had filed a formal Notice of Abuse on Nov 13, 2025; OpenAI called the report "extremely serious and troubling" but took no product action before further harm occurred.
Why: Abuse-reporting channel existed but failed to trigger duty-of-care response; design failure now in court record.
Topic: User reporting / abuse channels

▲ Advancing · Norms · 5.N.a · US · Apr 13, 2026
The Electronic Frontier Foundation published an updated guide to help journalists and advocates identify and report on surveillance technology deployed at the U.S.-Mexico border.
Why: EFF published an updated guide to help identify and report on border surveillance tech, maintaining civil society pressure on surveillance.
Topic: Public debate on AI surveillance and civil liberties

◐ Mixed · Norms · 1.N.a · GLOBAL · Apr 13, 2026
Why: Public trust declining despite transparency efforts; expectation rising but not met.
Topic: Public expectation of transparency on AI capabilities

▲ Advancing (Major) · Laws · 2.L.a · US · Apr 13, 2026
SF Superior Court entered a limited TRO in Doe v. OpenAI (CGC-26-635725) on April 13 2026 after plaintiff alleged ChatGPT facilitated her stalker despite abuse reports; liability theory advances.
Why: First TRO against frontier lab for foreseeable chatbot harm; key tort-liability precedent for duty-of-care cases.
Topic: Liability for foreseeable AI harms

◐ Mixed · Norms · 1.N.c · US · Apr 9, 2026
Why: State AG scrutiny signals non-disclosure is now treated as actionable; norm-wise neutral.
Topic: Incident disclosure as expected behavior

▼ Regressing · Laws · 7.L.b · US · Apr 9, 2026
A federal district court ruled that Perplexity's AI-enabled browser violated the Computer Fraud and Abuse Act by accessing Amazon's website to help users comparison shop. The EFF has filed an amicus brief supporting Perplexity's appeal to the Ninth Circuit, arguing the decision harms competition and innovation.
Why: A federal court ruled Perplexity's AI shopping agent violated the CFAA by accessing Amazon, legally protecting a walled garden.
Topic: Interoperability and data portability mandates

▲ Advancing · Laws · 2.L.c · US · Apr 9, 2026
Florida AG James Uthmeier opened a consumer-protection investigation into OpenAI on April 9 2026 focused on deceptive claims and harm to minors; joins Texas AG actions.
Why: State AGs picking up slack from federal retreat; multi-state enforcement pattern now emerging.
Topic: Consumer protection enforcement against deceptive AI claims

▲ Advancing · Design · 2.D.b · US · Apr 8, 2026
OpenAI published its Child Safety Blueprint on April 8 2026, detailing reporting channels, provider coordination, and safety-by-design controls against AI-enabled child exploitation.
Why: Lab self-publishing reporting-channel design; voluntary but raises industry baseline for abuse channels.
Topic: User reporting / abuse channels

▲ Advancing · Norms · 5.N.a · GLOBAL · Apr 8, 2026
The Electronic Frontier Foundation published a blog series reflecting on how the 2011 Arab uprisings inadvertently fueled a global boom in state surveillance, including the rise of AI-driven biometrics and facial recognition.
Why: EFF blog series highlights the rise of AI-driven surveillance and biometrics, sustaining civil society pressure on digital authoritarianism.
Topic: Public debate on AI surveillance and civil liberties

▲ Advancing · Norms · 1.N.b · US · Apr 7, 2026
Why: Lab publicly explains non-release decision; reinforces norm of transparent reasoning about evals.
Topic: Open publication of safety evaluations as industry norm

▲ Advancing (Major) · Laws · 5.L.c · EUROPE · Apr 7, 2026
The EU Parliament voted not to extend an interim derogation from e-Privacy rules, effectively making the voluntary mass-scanning of private chats by tech companies illegal in the EU.
Why: EU Parliament voted against prolonging an e-Privacy derogation, effectively outlawing voluntary algorithmic mass-scanning of private chats.
Topic: Data protection strengthening

▲ Advancing · Norms · 1.N.c · US · Apr 7, 2026
Why: Voluntary pre-release vulnerability disclosure reinforces incident-transparency norm at labs.
Topic: Incident disclosure as expected behavior

▲ Advancing (Major) · Norms · 2.N.b · GLOBAL · Apr 7, 2026
Stanford HAI AI Index 2026 Public Opinion chapter documents declining global trust in AI companies and broad support for mandated AI disclosure across surveyed countries.
Why: Global consensus on AI accountability expectations; not a US-only phenomenon.
Topic: Public expectation of company accountability for AI harms

▲ Advancing · Design · 1.D.a · US · Apr 7, 2026
Why: Preview-stage system card extends disclosure earlier in release pipeline.
Topic: Model cards / system cards published

▲ Advancing · Norms · 6.N.c · GLOBAL · Apr 5, 2026
UN opened worldwide public submission portal (deadline 30 Apr 2026) for input into the first Global Dialogue on AI Governance, to be held in 2026 in Geneva back-to-back with the ITU AI for Good Summit. Dialogue + Independent International Scientific Panel on AI established by UNGA Resolution A/RES/79/325 (adopted by consensus 26 Aug 2025). Signals formal public-expectation channel for cross-border AI governance.
Why: First UN-sanctioned open public channel on AI governance; institutionalizes public expectation of multilateral oversight.
Topic: Public expectation of cross-border governance

▲ Advancing · Norms · 5.N.a · US · Apr 3, 2026
Tech nonprofits, including the EFF and CDT, filed comments opposing a proposed GSA procurement rule that would require AI contractors to license their systems for "all lawful purposes," arguing it could enable mass surveillance.
Why: Civil society groups filed comments opposing a proposed GSA procurement rule that would force AI contractors to allow use for surveillance.
Topic: Public debate on AI surveillance and civil liberties

▲ Advancing · Norms · 5.N.a · GLOBAL · Apr 2, 2026
The Electronic Frontier Foundation publicly criticized Google and Amazon for failing to address human rights and surveillance risks associated with their Project Nimbus AI cloud contract with the Israeli government.
Why: EFF published a critique pressuring Google and Amazon over the human rights and surveillance risks of their AI cloud contract with Israel.
Topic: Public debate on AI surveillance and civil liberties

▲ Advancing · Norms · 5.N.a · GLOBAL · Apr 2, 2026
The Electronic Frontier Foundation (EFF) submitted a report to the UN OHCHR detailing how new digital regulations and surveillance technologies, including biometric monitoring, are being used to restrict the fundamental rights of human rights defenders globally.
Why: EFF submitted a report to the UN OHCHR highlighting how expanded state surveillance and biometric monitoring threaten human rights defenders.
Topic: Public debate on AI surveillance and civil liberties

▲ Advancing · Norms · 3.N.a · US · Apr 1, 2026
CBS Mornings, NBC, AP, the Oprah Podcast and others covered a new documentary featuring Tristan Harris and industry leaders on AI risks, including companionship and mental-health impact, during its April 2026 release cycle.
Why: Mainstream media pickup of well-being-first AI discourse beyond tech press; signal that framing is diffusing.
Topic: Public discourse on well-being metrics over engagement

▲ Advancing · Design · 7.D.c · GLOBAL · Apr 1, 2026
Anthropic opened applications for its next Fellows cohorts beginning May and July 2026. 4-month program, $3,850/week stipend + ~$15k/mo compute funding + mentorship for independent safety researchers. 40% of first-cohort fellows joined Anthropic full-time; 80%+ produced papers.
Why: Third-party access channel widens; structured external-researcher pipeline at frontier lab, though scale remains small (dozens).
Topic: Third-party access to closed models

▲ Advancing · Laws · 4.L.d · EUROPE · Mar 31, 2026
The UK Information Commissioner's Office (ICO) published guidance for jobseekers explaining their rights and protections when facing automated recruitment decisions.
Why: The UK ICO published guidance informing jobseekers of their rights regarding automated decision-making in recruitment.
Topic: Automated decision-making rights

▲ Advancing · Laws · 7.L.c · EUROPE · Mar 30, 2026
Mistral AI secured €722M ($830M) in debt financing from seven-bank consortium led by Bpifrance and BNP Paribas to build a 44MW data center near Paris with 13,800 Nvidia GB300 GPUs (Q2 2026 target). Largest AI-focused debt raise by a European technology company to date; targets 200MW capacity by end-2027.
Why: European champion scales outside US hyperscaler dependence; credible sovereign alternative emerging on the compute layer.
Topic: Public-option AI and sovereign compute funding

▲ Advancing · Norms · 3.N.a · GLOBAL · Mar 30, 2026
The Center for Humane Technology (CHT) announced a new program focused on what norms, legal protections, and rights are needed to preserve human well-being in the age of AI, amplifying well-being-over-engagement discourse.
Why: Sustained norm-entrepreneur pressure from the leading voice on well-being-over-engagement framing.
Topic: Public discourse on well-being metrics over engagement

▲ Advancing (Major) · Laws · 1.L.b · US · Mar 27, 2026
Why: Short-window mandatory reporting joins CA and EU as part of US state-level reporting regime.
Topic: Mandatory incident reporting

▲ Advancing (Major) · Laws · 1.L.a · US · Mar 27, 2026
Why: State legislature strengthens pre-deployment testing and publication thresholds for frontier AI.
Topic: Pre-deployment evaluation mandates

▲ Advancing · Norms · 5.N.a · US · Mar 26, 2026
Civil liberties advocates and journalists exposed that automated license plate readers are being used for traffic enforcement, contradicting vendor claims.
Why: EFF and 404 Media applied pressure on surveillance tech by exposing Flock Safety ALPRs' mission creep into traffic enforcement.
Topic: Public debate on AI surveillance and civil liberties

▲ Advancing · Norms · 5.N.a · US · Mar 25, 2026
The Electronic Frontier Foundation (EFF) published a newsletter and podcast episode raising privacy concerns about Meta Ray-Ban smartglasses and their surveillance capabilities.
Why: EFF newsletter and podcast highlight privacy risks of Meta Ray-Ban smartglasses, sustaining civil society pressure on AI surveillance.
Topic: Public debate on AI surveillance and civil liberties

▲ Advancing · Norms · 1.N.a · US · Mar 25, 2026
The Electronic Frontier Foundation filed a FOIA lawsuit against CMS seeking transparency on the AI algorithms used to evaluate Medicare prior authorization requests.
Why: EFF filed a FOIA lawsuit demanding transparency into the training data, evaluation, and safeguards of Medicare's AI authorization system.
Topic: Public expectation of transparency on AI capabilities

▲ Advancing (Major) · Laws · 2.L.b · US · Mar 24, 2026
Governor Ferguson signed HB 2225 on March 24 2026, extending duty-of-care requirements to AI companion chatbots in Washington with private right of action; effective Jan 1 2027.
Why: Second state to adopt CA-style duty-of-care statute for AI chatbots within 90 days; pattern replicating.
Topic: Duty-of-care statutes applied to AI

▼ Regressing (Major) · Laws · 5.L.c · EUROPE · Mar 18, 2026
On 18 March 2026, the Court of Rome annulled the Garante's Nov 2024 €15M fine — the only final GDPR enforcement decision against a GenAI provider. The fine had found unlawful training data processing, breach-notification failure and no age verification. Garante can appeal. Judgment no. 4153/2026, R.G. 4785/2025.
Why: Only final GenAI GDPR fine in Europe just collapsed on appeal. Signals hard limits on current data-protection tools against frontier labs.
Topic: Data protection strengthening

▼ Regressing · Laws · 5.L.b · US · Mar 17, 2026
CO SB 24-205, the first US comprehensive AI anti-discrimination law, was delayed from Feb 2026 to 30 June 2026 via SB 25B-004. In March 2026, Governor Polis's AI Policy Working Group proposed a replacement bill stripping many employer compliance duties.
Why: Industry lobbying is actively eroding the strongest US state AI anti-discrimination law before it takes effect. Net-negative on enforcement.
Topic: Algorithmic bias and discrimination protections

▲ Advancing · Laws · 4.L.a · US · Mar 17, 2026
HF 4369, introduced 17 March 2026, requires employers to notify workers 90 days before AI-driven layoffs and to fund retraining; it is the first US state-level AI displacement notification bill. Companion bill SF 4576 filed in the Senate.
Why: Concrete state-level protection template; shifts displacement from externality to regulated transition.
Topic: Worker displacement protections and transition funding

▼ Regressing · Laws · 2.L.b · US · Mar 17, 2026
Colorado AI Policy Work Group released March 17 2026 draft to repeal-and-replace the Colorado AI Act, eliminating its "reasonable care" duty and narrowing scope to ADMT disclosure obligations.
Why: Working-group proposal strips explicit duty-of-care language from what was the leading state AI law; signals backsliding.
Topic: Duty-of-care statutes applied to AI

▼ Regressing (Major) · Norms · 5.N.c · GLOBAL · Mar 12, 2026
The Institute of Development Studies, with the African Digital Rights Network, published 'Smart City Surveillance in Africa: Mapping Chinese AI Surveillance Across 11 Countries' on 12 Mar 2026. Documents $2B+ spent by Algeria, Egypt, Kenya, Mauritius, Mozambique, Nigeria, Rwanda, Senegal, Uganda, Zambia and Zimbabwe on facial recognition and ANPR, with deployments used against activists, opposition figures and journalists despite no demonstrated crime-reduction effect.
Why: Major documented algorithmic-surveillance discrimination against dissidents across 11 states. Recognition rising; power is not.
Topic: Recognition of algorithmic discrimination

▲ Advancing (Major) · Norms · 2.N.b · US · Mar 12, 2026
Pew Research Center survey found 50% of US adults are more concerned than excited about AI, up from 37% in 2021; majority want stricter accountability for AI harms.
Why: Public expectation of AI accountability has hardened sharply since 2021; creates political cover for duty-of-care regulation.
Topic: Public expectation of company accountability for AI harms

▲ Advancing (Major) · Laws · 6.L.a · EUROPE · Mar 11, 2026
The European Parliament approved the European Union's conclusion of the Council of Europe Framework Convention on Artificial Intelligence.
Why: EU Parliament approved the conclusion of the Council of Europe Framework Convention on AI, advancing binding multilateral treaties.
Topic: Multilateral treaties and conventions

▲ Advancing (Major) · Laws · 6.L.a · EUROPE · Mar 11, 2026
On 11 Mar 2026 the European Parliament gave consent (455 in favour, 101 against, 74 abstentions) to EU conclusion of the Council of Europe Framework Convention on AI and Human Rights (CETS 225), the first legally binding international AI treaty. Advances ratification; treaty enters into force after 5 states ratify.
Why: Key step toward entry-into-force of first binding multilateral AI treaty; EU accession anchors Convention.
Topic: Multilateral treaties and conventions

▲ Advancing · Laws · 3.L.b · EUROPE · Mar 9, 2026
The UK House of Commons proposed an amendment to the Children's Wellbeing and Schools Bill that would empower the Secretary of State to restrict under-18s' access to social media and addictive features.
Why: UK House of Commons advanced an amendment enabling the government to restrict minors' access to social media and addictive features.
Topic: Protections for minors

▲ Advancing · Norms · 5.N.a · US · Mar 6, 2026
The AI Now Institute published an article questioning the lack of safety guardrails for high-stakes decisions and surveillance following OpenAI's deal with the Pentagon.
Why: AI Now Institute critiques the lack of guardrails for surveillance and high-stakes decisions following OpenAI's deal with the Pentagon.
Topic: Public debate on AI surveillance and civil liberties

◐ Mixed · Laws · 3.L.b · US · Mar 5, 2026
Despite 75+ co-sponsors, KOSA has not received a markup from Sen. Cruz's committee as of Feb 2026. House advanced narrower KIDS Act (28-24, party line) in March 2026, weaker than Senate version.
Why: Federal minor-protection bill stalled; state laws (CA, NY, AU) advancing faster than US federal action.
Topic: Protections for minors

▼ Regressing · Laws · 3.L.c · US · Mar 3, 2026
Utah SB 194 — requiring default privacy, disabled autoplay/infinite scroll/push notifications for minors — remains stayed under NetChoice First Amendment injunction as of April 2026. Similar Virginia injunction appealed March 2026.
Why: NetChoice-led First Amendment litigation has blocked state-level design-mandate laws in UT, OH, CA, AR, MS, VA; slows design regulation.
Topic: Ad / recommendation system transparency

▲ Advancing · Norms · 5.N.a · US · Mar 3, 2026
On 3 Mar 2026, ~40-50 activists rallied outside OpenAI's SF HQ in a 'QuitGPT' protest against its Pentagon contract. The prior week, a larger ~500-person multi-lab march targeted DeepMind, OpenAI and Meta; ~200 protested Virginia data centers. Concerns: mass surveillance, autonomous weapons, environmental impact.
Why: Organized public protest against frontier-lab militarization; real mobilization, not just op-eds. Rare for AI-surveillance debate in US.
Topic: Public debate on AI surveillance and civil liberties

▼ Regressing (Major) · Design · 3.D.c · GLOBAL · Mar 1, 2026
ChatGPT, Gemini, Claude, Grok ship no session-length caps, no take-a-break reminders for adults, no anti-infinite-scroll UX defaults. SB 243's 3-hour minor reminder rule is the floor, not the market standard.
Why: Observation of negative space: well-being-by-default absent from adult AI UX. Rated regression because engagement loops intensifying.
Topic: Attention respect in UX

▲ Advancing (Major) · Norms · 3.N.c · US · Feb 27, 2026
Common Sense Media report "Talk, Trust, and Trade-Offs" (orig. Jul 2025) reaffirmed via Penn State re-coverage in Feb 2026 and ongoing APA Monitor, Stanford SSIR, and Brookings citations: the perils of AI companions outweigh their potential, and the report recommends no use by under-18s.
Why: Sustained expert-consensus stance that AI companions harm youth mental health; embedded in mainstream coverage.
Topic: Mental health implications in mainstream discourse

▲ Advancing · Norms · 5.N.a · US · Feb 27, 2026
The Leadership Conference on Civil and Human Rights publicly condemned the Department of Defense's campaign to pressure Anthropic into lifting restrictions on surveillance use of its AI, framing it as a 'tech-fueled domestic surveillance state.'
Why: Civil-society attention to surveillance repurposing of frontier AI; healthy debate signal, though substantive power remains with DoD.
Topic: Public debate on AI surveillance and civil liberties

▲ Advancing (Major) · Laws · 4.L.b · EUROPE · Feb 27, 2026
An Italian Public Prosecutor's Office on 27 Feb 2026 placed Deliveroo Italia under judicial administration, citing algorithmic worker-control violations; the first criminal-adjacent enforcement of algorithmic-management limits.
Why: Enforcement escalation beyond fines; algorithmic management now carries existential business risk in Italy.
Topic: Algorithmic management regulations

▲ Advancing · Norms · 6.N.a · GLOBAL · Feb 25, 2026
The OECD published a report analyzing trends in AI-related incidents and hazards as reported by the media.
Why: The OECD published a report analyzing trends in AI incidents and hazards, contributing to international consensus and risk assessment.
Topic: International scientific consensus on risks

▼ Regressing (Major) · Design · 6.D.a · GLOBAL · Feb 24, 2026
On 24 Feb 2026 Anthropic released RSP v3.0, removing its previous implication that it would pause training/deployment if risks exceeded acceptable levels. Some mitigations (e.g. RAND SL4) reframed as 'industry-wide recommendations' rather than unilateral commitments. GovAI and multiple safety commentators flagged this as material weakening of the flagship voluntary framework.
Why: Flagship voluntary commitment weakened by the lab that pioneered the framework; confirms collective-action limits of self-governance.
Topic: Voluntary industry safety commitments

▲ Advancing (Major) · Laws · 3.L.b · EUROPE · Feb 23, 2026
Ofcom announced a record fine on 23 Feb 2026 against 8579 LLC for failing to deploy highly-effective age assurance under OSA s.12; it has opened 90+ investigations and issued 6 fines since enforcement began in July 2025.
Why: Sustained portfolio-level enforcement of age-assurance duty; highest OSA fine to date.
Topic: Protections for minors

▲ Advancing (Major) · Norms · 6.N.c · GLOBAL · Feb 21, 2026
At the India AI Impact Summit (16-21 Feb 2026; declaration adopted 18-19 Feb, announced 21 Feb 2026), 92 countries and international organisations — including US, UK, China, Russia, EU, Switzerland — endorsed the New Delhi Declaration committing to international cooperation on safe, inclusive AI. Substantially broader than Paris 2025 signatory base; US re-engaged. (Initial 88 on 21 Feb, grew to 91 by 24 Feb, 92 as of 5 Mar 2026.)
Why: Largest AI summit declaration to date; notable US return post-Paris 2025. Multilateral coordination expectation partially restored.
Topic: Public expectation of cross-border governance

▲ Advancing · Design · 1.D.a · US · Feb 19, 2026
Why: Continued publication of full model card maintains cross-lab norm.
Topic: Model cards / system cards published

▲ Advancing · Norms · 1.N.b · GLOBAL · Feb 16, 2026
Why: Cross-lab eval sharing becoming operational norm among frontier developers.
Topic: Open publication of safety evaluations as industry norm

▲ Advancing · Norms · 4.N.c · US · Feb 16, 2026
February 2026 Economist/YouGov poll finds majority of Americans expect AI to reduce employment; skepticism toward AI replacement hardening in public opinion.
Why: Rising public skepticism creates political space for protective regulation; cultural pushback measurable.
Topic: Public skepticism toward AI replacement rhetoric

▲ Advancing · Norms · 6.N.b · GLOBAL · Feb 15, 2026
Future of Life Institute (UN SG's designated civil society co-champion for AI) released pre-summit recommendations calling for Global South participation in AI governance, national audit capacity, and commensurate safety guarantees from developers. Published 15 Feb 2026, days before the 16-21 Feb New Delhi summit.
Why: Major civil society coalition input to multilateral process; signals continued organized civil society engagement with summit track.
Topic: Multilateral civil society coalitions

▲ Advancing · Norms · 6.N.b · EUROPE · Feb 11, 2026
European coalition of civil society, trade unions, and academics (60+ organisations incl. Access Now, EDRi, Amnesty Tech, BEUC) signed open letter to MEPs and Commission urging rejection of AI Omnibus amendments that would delete Art. 49(2) high-risk transparency safeguard.
Why: Multilateral civil society coalition defending binding cross-border governance safeguards against industry-led rollback.
Topic: Multilateral civil society coalitions

▼ Regressing (Major) · Norms · 2.N.c · US · Feb 9, 2026
Mrinank Sharma, Head of Safeguards Research at Anthropic, resigned publicly Feb 9 2026 citing inability to fulfill safety duties; follows a wave of safety-team departures from frontier labs.
Why: Senior safety-lead departure signals lab-level duty-of-care norms are collapsing internally, not strengthening.
Topic: Researcher and whistleblower protections as norm

▲ Advancing · Design · 2.D.a · US · Feb 6, 2026
Anthropic published Claude Opus 4.6 system card in early February 2026 detailing ASL-3 safeguards, CBRN uplift evaluations, and pre-deployment red-teaming.
Why: Raises disclosure floor for safety-by-design; competitive pressure on other labs to match.
Topic: Safety-by-design practices

▲ Advancing (Major) · Norms · 3.N.b · EUROPE · Feb 6, 2026
The Commission published preliminary DSA findings against TikTok on 6 Feb 2026: infinite scroll, autoplay, push notifications and the recommender system are inadequately mitigated for mental-health risks. The Commission says TikTok must change the basic design of its service.
Why: First regulator to formally name infinite scroll + autoplay as addictive design requiring fundamental redesign under DSA.
Topic: Critique of dark-pattern and addictive design

▼ Regressing (Major) · Design · 3.D.c · EUROPE · Feb 6, 2026
Feb 2026: TikTok disputed Commission findings as "categorically false," defended screen-time tools that EU called "easy to dismiss." No commitment to disable infinite scroll or redesign recommender system.
Why: Largest attention-economy platform actively resists attention-respecting redesign; engagement-max still winning in product.
Topic: Attention respect in UX

▲ Advancing · Norms · 3.N.a · GLOBAL · Feb 4, 2026
Anthropic announced that its AI assistant Claude will remain ad-free, emphasizing a product philosophy centered on providing users a 'space to think' rather than maximizing engagement.
Why: Anthropic publicly commits to an ad-free model for Claude, framing the AI as a 'space to think' rather than an engagement-maximizing tool.
Topic: Public discourse on well-being metrics over engagement

▲ Advancing (Major) · Norms · 1.N.a · GLOBAL · Feb 3, 2026
Why: 30-country consensus report raises public baseline for frontier AI transparency expectations.
Topic: Public expectation of transparency on AI capabilities

▲ Advancing (Major) · Norms · 6.N.a · GLOBAL · Feb 3, 2026
Second annual International AI Safety Report chaired by Yoshua Bengio, backed by EU/OECD/UN and 30+ countries, released 3 Feb 2026 ahead of the India AI Impact Summit. 200-page report with 1,451 references. Confirms scientific consensus on escalating risks (cyber, bio, loss of control, eval-aware models). (Note: arxiv preprint 2602.21012 uploaded 24 Feb 2026; primary release was UK gov on 3 Feb 2026.)
Why: Most authoritative international scientific consensus document on AI risks; institutionalized annual cadence with 30+ country panel.
Topic: International scientific consensus on risks

▲ Advancing (Major) · Design · 1.D.b · GLOBAL · Feb 3, 2026
Why: 30-country report synthesizes public eval results; makes public-eval expectation multilateral.
Topic: Public evaluation results

▲ Advancing · Laws · 3.L.b · EUROPE · Feb 1, 2026
The European Commission launched a new Action Plan Against Cyberbullying aimed at protecting the mental health of children and teenagers online.
Why: The European Commission launched an Action Plan to protect the mental health of minors online, advancing youth digital protections.
Topic: Protections for minors

▼ Regressing (Major) · Norms · 5.N.a · EUROPE · Jan 26, 2026
UK Home Secretary Shabana Mahmood announced on 26 Jan 2026 a policing white paper expanding mobile LFR camera vans from 10 to 50 nationwide, plus a £115M National Centre for AI in Policing (Police.AI). Backlash from Big Brother Watch, Liberty and parliamentary critics framing it as an 'Orwellian panopticon'; the government has also launched a consultation on a bespoke legal framework.
Why: Strong public debate: gov framed as 'panopticon' + civil-society/court pushback. Net-negative for rights; debate itself is healthy.
Topic: Public debate on AI surveillance and civil liberties

▼ Regressing (Major) · Norms · 2.N.c · US · Jan 20, 2026
OpenAI fired VP Product Policy Ryan Beiermeister in early January 2026 after she raised concerns about adult-mode rollout and the strength of child-exploitation guardrails; she denies the cited discrimination claim.
Why: Frontier lab terminating a safety-policy VP after internal dissent corrodes the emerging norm of researcher protection.
Topic: Researcher and whistleblower protections as norm

▲ Advancing (Major) · Laws · 3.L.a · EUROPE · Jan 15, 2026
Commission 2026 work programme confirms a Digital Fairness Act legislative initiative for Q4 2026, targeting dark patterns, addictive design, unfair personalisation and influencer marketing. Consultation closed Oct 2025.
Why: Moves EU dark-patterns rules from sector-specific (DSA Art. 25) to a horizontal regime covering all B2C digital.
Topic: Restrictions on dark patterns and manipulative UX

◐ Mixed · Laws · 7.L.b · EUROPE · Jan 8, 2026
On Jan 8, 2026 the Commission published the summary of 450+ contributions to its DMA review consultation (ran Jul-Sep 2025, with dedicated AI questionnaire launched Aug 26, 2025). AI emerged as the most prevalent theme; respondents split on whether to add a standalone AI Core Platform Service category. Article 53 report due May 3, 2026.
Why: Consultation-stage move; norm-gathering for future enforcement rather than action.
Topic: Interoperability and data portability mandates

▲ Advancing (Major) · Norms · 3.N.c · US · Jan 7, 2026
Character.AI reached a settlement in principle on Jan 7 2026 covering lawsuits from families in Florida, Colorado, Texas and NY alleging its chatbots drove teens to suicide or self-harm. Landmark liability moment for AI-related mental-health harm.
Why: Liability settlement validates mental-health harm narrative in mainstream discourse; sets precedent for AI accountability.
Topic: Mental health implications in mainstream discourse

▲ Advancing (Major) · Laws · 2.L.d · US · Jan 5, 2026
MDL court ordered OpenAI to produce ~20 million ChatGPT conversation logs to consolidated class-action plaintiffs on January 5 2026; historic private-enforcement discovery order.
Why: Largest AI-related discovery order to date; enables private plaintiffs to pursue duty-of-care theories at scale.
Topic: Private right of action / class action enablement

▼ Regressing · Design · 4.D.a · US · Jan 1, 2026
In January 2026, Utah launched a pilot program that permits an autonomous AI agent to handle prescription renewals, removing humans from the loop in a high-stakes medical process.
Why: Utah's pilot program allows an autonomous AI agent to renew medical prescriptions, removing humans from a high-stakes decision loop.
Topic: Human-in-the-loop for consequential decisions

▲ Advancing (Major) · Laws · 2.L.b · US · Jan 1, 2026
California SB 243 took effect January 1 2026, imposing duty-of-care obligations on AI companion chatbot operators including crisis response, disclosure, and age verification.
Why: First US state duty-of-care statute specifically for AI chatbots, with private right of action; template law.
Topic: Duty-of-care statutes applied to AI

▲ Advancing (Major) · Design · 5.D.c · US · Jan 1, 2026
New CPPA rules approved 23 Sep 2025 took effect 1 Jan 2026 with phased deadlines: businesses using Automated Decision-Making Technology for 'significant decisions' (employment, housing, credit, education, healthcare) must conduct risk assessments and honor notice, access, appeal and opt-out rights. ADMT-specific notice/opt-out obligations broadly effective 1 Jan 2027; risk-assessment duties apply prospectively from 1 Jan 2026.
Why: First US state law forcing UX-level surfacing of algorithmic-decision rights (notice + appeal) on deployers. Sets the pattern.
Topic: User rights surfaced in UX

▲ Advancing · Laws · 5.L.c · US · Jan 1, 2026
Indiana Consumer Data Protection Act, Kentucky Consumer Data Protection Act and Rhode Island Data Transparency and Privacy Protection Act all took effect 1 Jan 2026, bringing the total US states with comprehensive privacy laws to 20. Rhode Island uses notably low thresholds (35K consumers).
WhyIncremental state-by-state privacy buildout continues. None are strong laws individually, but compound effect is real.Data protection strengthening ▲AdvancingMajorLaws · 5.L.bUSJan 1, 2026
TX HB 149 took effect 1 Jan 2026, prohibiting AI systems that intentionally discriminate against protected classes, restricting biometric capture for AI training, and requiring risk-governance documentation to be made available to the Attorney General.
WhySecond US state after Colorado with a cross-sector AI anti-discrimination law. Weaker than CO SB 24-205 but in force now.Algorithmic bias and discrimination protections ▲AdvancingLaws · 5.L.cUSJan 1, 2026
CA SB 361 took effect Jan 2026, requiring data brokers to disclose whether personal data is sold to generative AI developers, foreign actors, or governments. Opt-out requests must be processed within 45 days via CPPA's deletion mechanism.
WhyFirst state law explicitly flagging AI-training data flows in broker disclosures. Small but precedent-setting.Data protection strengthening ▲AdvancingMajorLaws · 1.L.bUSJan 1, 2026
WhyFirst US state law with fixed statutory window for frontier incident reporting.Mandatory incident reporting ▲AdvancingLaws · 7.L.dUSJan 1, 2026
Nevada AB73 became effective Jan 1, 2026, requiring "clear and conspicuous" disclosure when synthetic/AI-generated media is used in political communications. Unanimously passed; gives depicted candidates injunctive relief against undisclosed AI manipulation. Part of a broader wave of state deepfake-political laws.
WhyState-level democratic-integrity guardrails spreading; counterweight to federal inaction.Restrictions on political uses of AI ▲AdvancingMajorLaws · 1.L.aUSJan 1, 2026
WhyLargest US state mandates pre-deployment safety testing and disclosure for frontier models.Pre-deployment evaluation mandates ▲AdvancingMajorLaws · 3.L.bUSJan 1, 2026
First US law specifically regulating AI "companion chatbots" — effective 1 Jan 2026. Requires AI disclosure, self-harm safety protocols, and minor-specific safeguards including 3-hour break reminders and crisis referrals. Private right of action.
WhyFirst-in-nation law targeting AI-companion harms to minors; creates liability template other states will copy.Protections for minors ▼RegressingMajorLaws · 2.L.cUSDec 22, 2025
FTC formally set aside the Rytr consent order on December 22 2025, reversing one of the flagship Operation AI Comply cases and signaling softer deceptive-AI enforcement.
WhyFederal regulator retreating from deceptive-AI consent orders undermines consumer-protection duty-of-care layer.Consumer protection enforcement against deceptive AI claims ▲AdvancingNorms · 2.N.aUSDec 22, 2025
NYC Bar Association issued binding ethics guidance on lawyer use of AI transcription and note-taking tools, covering confidentiality, client consent, and supervision duties.
WhyMajor bar association codifies AI-specific duty of care for lawyers; extends professional standards into AI use.Professional standards and codes of conduct ▲AdvancingDesign · 6.D.cGLOBALDec 18, 2025
On 18 Dec 2025 the UK AI Security Institute released its first Frontier AI Trends Report, synthesizing two years of testing 30+ frontier models. Key findings: cyber apprentice-level task success rose from <9% (2023) to ~50% (2025); first expert-level cyber task completed in 2025; models outperform PhD-level experts on chem/bio knowledge; hour-long software tasks completed >40% of the time. Shared publicly to inform the International Network's evaluation science.
WhyPublic evidence base for internationally-aligned evaluation protocols; UK continues to anchor the network's technical output.Evaluation protocols aligned internationally ▲AdvancingMajorDesign · 1.D.bEUROPEDec 18, 2025
WhyIndependent eval data from state AISI reaches public; sets disclosure standard.Public evaluation results ▲AdvancingMajorNorms · 1.N.bEUROPEDec 18, 2025
WhyState AISI publishes cross-lab eval results; cements expectation that evals should be public.Open publication of safety evaluations as industry norm ▲AdvancingNorms · 2.N.aUSDec 15, 2025
ABA Task Force on Law and Artificial Intelligence published its Year Two report consolidating AI ethics guidance across state bars that issued AI opinions in 2025.
WhyProfessional-standards consolidation: most state bars now have AI-specific duty rules; norm is solidifying, not just emerging.Professional standards and codes of conduct ▼RegressingMajorLaws · 6.L.cUSDec 11, 2025
On 11 Dec 2025 Trump signed EO 'Ensuring a National Policy Framework for AI' creating an AI Litigation Task Force at DOJ (established by AG memo 9 Jan 2026) to challenge state AI laws and conditioning federal grants on states not enforcing them. Colorado AI Act explicitly named. Preempts subnational compute/licensing experiments that could have aligned with international norms.
WhyForecloses US sub-federal compute-governance that could align with international frameworks; concentrates at minimalist federal level.Compute governance and licensing ▲AdvancingDesign · 1.D.aUSDec 11, 2025
WhyOpenAI maintains per-release system card practice through late 2025.Model cards / system cards published ▲AdvancingMajorLaws · 3.L.bGLOBALDec 10, 2025
World-first Australian law effective 10 Dec 2025 requires Facebook, Instagram, TikTok, Snapchat, YouTube, Reddit, X, Threads, Twitch, Kick to take reasonable steps to remove under-16 accounts. Penalties up to A$49.5M.
WhyHardest minor-protection law enacted anywhere; precedent being watched globally, Slovenia/Spain/Denmark considering similar.Protections for minors ◐MixedDesign · 6.D.cGLOBALDec 10, 2025
At the 4-5 Dec 2025 San Diego meeting, the network (launched at Seoul Summit May 2024; formally established Nov 2024) was renamed 'International Network for Advanced AI Measurement, Evaluation and Science' — widely read as a concession to keep the US (CAISI) engaged. UK took the Network Coordinator role. Australia joined after 25 Nov 2025 AU AISI announcement ($29.9M, operating early 2026). Members: AU, CA, EU, FR, JP, KE, KR, SG, UK, US.
WhyEval-protocol alignment continues at working level but under diluted branding; Australia accession expands but US posture remains uncertain.Evaluation protocols aligned internationally ▲AdvancingMajorLaws · 7.L.cEUROPEDec 5, 2025
On Dec 5, 2025 the Council published conclusions on European Competitiveness in the Digital Decade, urging open standards, interoperability and reduced vendor lock-in in cloud, AI, cybersecurity and connectivity; asks Commission to develop common criteria for sovereign cloud services ahead of the forthcoming Cloud and AI Development Act (Commission proposal due Q1 2026).
WhyCouncil-level political backing for sovereign EU compute and public AI infrastructure ahead of CADA.Public-option AI and sovereign compute funding ▲AdvancingMajorLaws · 3.L.aEUROPEDec 5, 2025
First DSA non-compliance fine (5 Dec 2025) for deceptive "blue checkmark" design, inadequate ad transparency, and blocked researcher access to public data.
WhyFirst DSA enforcement treats deceptive verification UX as a dark pattern; establishes enforcement precedent.Restrictions on dark patterns and manipulative UX ▲AdvancingLaws · 4.L.aUSDec 3, 2025
S.3339 introduced 3 Dec 2025 by Sen. Jim Banks (R-IN) with cosponsors Hassan (D-NH), Hickenlooper (D-CO), and Husted (R-OH); establishes federal AI workforce transition fund and retraining grants. Bipartisan cosponsorship signals emerging consensus.
WhyFederal-level displacement funding mechanism; first bipartisan AI transition bill with serious cosponsors across party lines.Worker displacement protections and transition funding ▼RegressingLaws · 4.L.cUSDec 2, 2025
Dec 2025 audit report finds only 18 of 391 surveyed NYC employers complied with Local Law 144 automated employment decision tool bias-audit requirements; enforcement gap exposed.
WhyNotification/audit requirement exists on paper but is effectively unenforced; erodes compliance culture.Notification requirements before AI deployment in workplace ▲AdvancingDesign · 3.D.bGLOBALNov 25, 2025
Late Nov 2025: Character.AI blocked minors from its primary open-ended chat feature after cumulative teen-suicide lawsuits. Age-assurance functionality rolled out. Critics say the change came late and creates dependency-withdrawal risk.
WhyProduct change driven by litigation, not well-being mission; still a rare case of a platform removing its core engagement loop for minors.Well-being features shipped by default ▼RegressingNorms · 7.N.cUSNov 17, 2025
On Nov 17, 2025, House Majority Leader Scalise confirmed to Punchbowl News that House GOP leaders are exploring attaching AI preemption language to the FY26 National Defense Authorization Act. Congressional Progressive Caucus and states-rights Republicans lined up against.
WhyIndustry allies retry democratic capture via a different legislative vehicle after the 99-1 defeat.Concerns about democratic capture ▲AdvancingDesign · 5.D.bGLOBALNov 13, 2025
Anthropic published a new automated method for measuring political bias on 13 Nov 2025, reporting Claude Sonnet 4.5 at 94% even-handedness (Opus 4.1 at 95%; Gemini 2.5 Pro 97%, Grok 4 96%, GPT-5 89%, Llama 4 66%). Method open-sourced so other labs and researchers can reproduce.
WhyBias-eval tooling is becoming standard and comparable across frontier labs. Politics-specific, but method is transferable.Bias testing and fairness tooling in development ▲AdvancingMajorNorms · 4.N.bEUROPENov 11, 2025
Employment and Social Affairs Committee MEPs voted 11 Nov 2025 (41-6-4) to urge Commission to propose binding rules on algorithmic management across all sectors, extending Platform Work Directive protections to traditional employment. Full plenary endorsed the resolution 17 Dec 2025.
WhyWorker voice institutionalized at EU level; binding framework for surveillance/monitoring limits advances.Worker voice in AI deployment decisions ▲AdvancingNorms · 4.N.aGLOBALOct 28, 2025
Forrester Predictions 2026 report finds majority of enterprises that cut headcount for AI report buyer's remorse; productivity gains overestimated, rehiring underway.
WhyData-backed industry acknowledgment that the replacement narrative was oversold; shifts mainstream discourse.Public discourse on AI and labor ▲AdvancingMajorNorms · 3.N.cGLOBALOct 27, 2025
October 27 2025: OpenAI revealed 0.15% of 800M weekly ChatGPT users (~1.2M) show explicit indicators of suicidal planning or intent. Entered mainstream coverage via TechCrunch, ABC7, News9.
WhyPlatform-disclosed scale of AI-mental-health interaction moved the debate; validated concern is not niche.Mental health implications in mainstream discourse ▼RegressingDesign · 2.D.cUSOct 22, 2025
Meta cut ~600 roles from its AI org (FAIR, product AI, AI infrastructure) on October 22 2025 while protecting its TBD Lab superintelligence unit; post-deployment monitoring capacity reduced.
WhyLabs cutting teams responsible for post-deployment monitoring as deployment scales up.Post-deployment monitoring and rapid response ▲AdvancingDesign · 3.D.aGLOBALOct 14, 2025
Oct 14 2025: all under-18 Instagram accounts default to 13+ content filters (inspired by MPA PG-13 guidelines); parental approval required to loosen. Strong language, alcohol, risky stunts hidden. Limited Content mode available.
WhyDefault-on teen safety settings is a meaningful de-personalization step, though narrowly applied to minors only.Opt-out and de-personalization defaults ▲AdvancingDesign · 4.D.cUSOct 13, 2025
SB 942 (signed 2024) would have required generative AI providers serving >1M Californians to offer free AI-detection tools and mandatory provenance disclosures from 1 Jan 2026. AB 853, signed 13 Oct 2025, pushed the effective date to 2 Aug 2026 citing implementation readiness. Mixed signal: law still advances, but compliance delayed under industry pressure.
WhyFirst US state AI-content provenance regime still on track, but eight-month delay reveals industry pushback power.Transparent attribution of AI-generated work ▼RegressingLaws · 4.L.bUSOct 13, 2025
Gov. Newsom vetoed SB 7 on 13 Oct 2025; the bill would have required human review of algorithmic employment decisions and notice of automated monitoring. Significant regression in the largest US state economy.
WhyHigh-profile veto signals industry lobbying prevailing over worker protection in the US; precedent for other states.Algorithmic management regulations ▼RegressingMajorLaws · 2.L.aEUROPEOct 6, 2025
European Commission formally withdrew the AI Liability Directive proposal; the withdrawal was published in the Official Journal on October 6 2025, after the February 2025 Work Programme signalled the intent.
WhyEU abandoning harmonized AI liability framework is the single largest duty-of-care regression of 2025.Liability for foreseeable AI harms ▲AdvancingMajorLaws · 5.L.aEUROPEOct 6, 2025
UK Upper Tribunal set aside the First-tier Tribunal's decision that had blocked the ICO's £7.5M Clearview fine, ruling Clearview's face-scraping is 'behavioural monitoring' under UK GDPR regardless of foreign law-enforcement clients. Case remitted to FTT for substantive reconsideration; Clearview subsequently granted appeal to Court of Appeal Dec 2025.
WhyCloses a major jurisdictional loophole for offshore face-recognition vendors. Live appeal; not yet final, but directionally strong.Biometric and facial recognition limits ▲AdvancingDesign · 3.D.bGLOBALSep 29, 2025
OpenAI launched ChatGPT parental controls on Sept 29 2025: parents can link teen accounts, set quiet hours, disable voice/memory/image gen, and opt out of training. OpenAI notifies parents on detected self-harm signals. Teen Safety Blueprint published Nov 2025.
WhyMeaningful teen-safety feature ship, but reactive to Raine lawsuit and SB 243 — not a voluntary well-being priority.Well-being features shipped by default ▲AdvancingMajorDesign · 7.D.aGLOBALSep 29, 2025
DeepSeek released V3.2-Exp on Sep 29, 2025 under MIT license, debuting DeepSeek Sparse Attention (DSA) for long-context efficiency and cutting API prices 50%+ immediately. Full V3.2 shipped Dec 1, 2025 (with V3.2-Speciale reasoning variant).
WhyCost collapse continues; compute-access moat at the frontier eroding through architectural innovation, not just scale.Open-source model releases ▲AdvancingMajorLaws · 7.L.aUSSep 12, 2025
In September 2025 Microsoft and OpenAI signed a non-binding MOU to restructure their nearly $13B relationship, enabling OpenAI to transition toward a for-profit entity with freedom to partner with rival cloud providers. The restructuring was driven by threat of a formal FTC merger challenge treating multi-billion-dollar exclusive licensing as an undisclosed merger.
WhyFirst concrete antitrust-driven unwinding of a cloud-AI quasi-merger. Sets precedent that exclusive frontier-lab/cloud partnerships can be unwound.Antitrust action against AI market concentration ▲AdvancingMajorDesign · 2.D.cUSSep 11, 2025
FTC issued 6(b) orders September 11 2025 to Alphabet, Character Technologies, Instagram, Meta, OpenAI, Snap, and xAI demanding data on how they measure, test, and monitor chatbot harms to minors.
WhyFirst broad federal demand for post-deployment monitoring data across frontier labs; enforcement-intelligence baseline.Post-deployment monitoring and rapid response ▲AdvancingMajorDesign · 7.D.aGLOBALAug 5, 2025
First OpenAI open-weight release since GPT-2: gpt-oss-120b and gpt-oss-20b, 117B and 21B total-parameter MoE reasoning models (5.1B and 3.6B active), Apache 2.0 license, near-parity with o4-mini / o3-mini on core benchmarks. They run on a single 80GB GPU and 16GB consumer hardware respectively.
WhyFrontier-lab capitulation to open-weights norm post-DeepSeek R1; design-layer power diffusion accelerating.Open-source model releases ▲AdvancingMajorLaws · 1.L.aEUROPEAug 2, 2025
WhyGPAI providers legally required to document evals and disclose model info across EU.Pre-deployment evaluation mandates ▲AdvancingMajorLaws · 1.L.bEUROPEAug 2, 2025
WhyEU-wide legal duty to report serious incidents binds GPAI providers under Article 55(1)(c).Mandatory incident reporting ▲AdvancingMajorLaws · 6.L.dEUROPEAug 2, 2025
On 2 Aug 2025 the EU AI Act's GPAI rules took effect, codifying a 10^25 FLOP training-compute threshold for 'systemic-risk' GPAI models subject to safety, security, and transparency obligations (10^23 FLOP threshold for baseline GPAI). First binding international compute-based safety threshold in force. Full enforcement from 2 Aug 2026.
WhyOnly binding legal regime with codified compute threshold for frontier models; reference point for international threshold convergence.Safety thresholds codified internationally ▲AdvancingMajorLaws · 1.L.cEUROPEAug 2, 2025
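The two codified thresholds above partition GPAI models into tiers by cumulative training compute. A minimal sketch of that tiering logic follows; the function and constant names are illustrative (not from the Act), and the sketch ignores the Act's qualitative designation routes, which can also trigger systemic-risk status.

```python
# Illustrative sketch of the EU AI Act's compute-based GPAI tiers:
# 10^23 FLOP presumption of GPAI status, 10^25 FLOP presumption of
# systemic risk. Names are hypothetical; thresholds are from the item above.

BASELINE_GPAI_FLOP = 1e23      # baseline GPAI presumption
SYSTEMIC_RISK_FLOP = 1e25      # systemic-risk presumption

def classify_gpai(training_flop: float) -> str:
    """Classify a model tier from cumulative training compute in FLOP."""
    if training_flop >= SYSTEMIC_RISK_FLOP:
        return "GPAI with systemic risk"
    if training_flop >= BASELINE_GPAI_FLOP:
        return "GPAI"
    return "below GPAI presumption"

# A frontier run of ~5x10^25 FLOP lands in the systemic-risk tier.
print(classify_gpai(5e25))
```

Note the thresholds are rebuttable presumptions under the Act, so a real compliance check would combine this arithmetic with the Commission's designation decisions.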
WhyThird-party conformity assessment legally required for high-risk AI systems; phased 2025-2026.Third-party audit requirements ▲AdvancingLaws · 1.L.dEUROPEAug 1, 2025
WhySignatories commit to adversarial-testing disclosure; voluntary but near-binding for EU access.Red-team disclosure rules ▲AdvancingNorms · 4.N.bUSJul 10, 2025
SAG-AFTRA members ratified the Interactive Media Agreement on 10 July 2025 after an 11-month strike; the contract mandates informed consent and compensation for digital replicas and AI voice/performance use.
WhyCollective bargaining forces AI deployment terms on employers; template for other sectors.Worker voice in AI deployment decisions ▲AdvancingMajorDesign · 6.D.aEUROPEJul 10, 2025
The EU GPAI Code of Practice (Transparency, Copyright, Safety & Security chapters) was published on 10 Jul 2025 and endorsed as an 'adequate voluntary tool' by the Commission and AI Board. Signatories include Amazon, Anthropic, Google, Microsoft, OpenAI, Mistral, Cohere, IBM, Aleph Alpha, and others. Meta and Chinese providers declined. xAI signed only the Safety & Security chapter.
WhyBroadest voluntary industry commitment aligned to a binding legal regime; partial coverage (Meta/China absent) limits scope.Voluntary industry safety commitments ▲AdvancingMajorNorms · 7.N.cUSJul 1, 2025
Senate voted 99-1 during vote-a-rama to adopt the Blackburn/Cantwell (bipartisan) amendment striking the 10-year (later 5-year) federal preemption of state AI laws from the One Big Beautiful Bill Act. Only Sen. Tillis (R-NC) voted no.
WhyNear-unanimous defeat of industry-backed preemption; preserves plural regulatory venues against capture and protects ~1,000 pending state AI bills.Concerns about democratic capture ▲AdvancingMajorDesign · 5.D.aGLOBALJun 9, 2025
Apple's Private Cloud Compute, extending iPhone-grade cryptographic privacy guarantees (stateless compute, verifiable transparency, non-targetability, no Apple access) to server-side inference, went into production deployment across iOS 26 and Foundation Models framework in 2025.
WhyFirst consumer-scale cryptographically-attested privacy boundary for AI inference. Raises the bar for the whole stack.Privacy-preserving design defaults ▼RegressingMajorDesign · 6.D.bGLOBALJun 3, 2025
UK AISI renamed 'AI Security Institute' on 13 Feb 2025 (announced at Munich Security Conference by Peter Kyle), shifting emphasis from safety to security/national defense. US counterpart renamed 'Center for AI Standards and Innovation' (CAISI) on 3 Jun 2025 with explicit mandate to serve industry as NIST-housed primary point of contact. Signals reframing of voluntary evals posture in both lead jurisdictions.
WhyBoth lead government evaluators rebranded away from 'safety' toward security/innovation; chills dangerous-capability info sharing.Information sharing on dangerous capabilities ▲AdvancingDesign · 5.D.bGLOBALJun 1, 2025
By 2025-2026, Anthropic, OpenAI, Google DeepMind and Meta all report BBQ (Bias Benchmark for QA) scores in every major model card, with proprietary supplemental evaluations (first-person fairness at OpenAI, paired prompts at Anthropic, FACET/HolisticBias at Meta). NAACL 2025 (FLEX) and follow-ups note BBQ scores can mask adversarial failure modes.
WhyDe-facto norm of reporting structured bias benchmarks is stable. Coverage is US-centric and English-only; real but incomplete.Bias testing and fairness tooling in development ▼RegressingNorms · 4.N.aGLOBALMay 28, 2025
Dario Amodei warns of 10-20% unemployment within 1-5 years from AI; frames displacement as near-certainty rather than choice. Prominent voice pushing replacement narrative.
WhyLeading lab CEO normalizing mass displacement framing counteracts skepticism; elite discourse trending negative.Public discourse on AI and labor ▼RegressingMajorLaws · 6.L.bUSMay 13, 2025
On 13 May 2025 BIS announced rescission of the January 2025 AI Diffusion Rule — which would have imposed a worldwide tiered licensing framework on advanced computing ICs and model weights (including a 10^26 FLOP closed-weight threshold), with compliance set for 15 May 2025. Tiered country framework dropped; replacement rule forthcoming.
WhyUS withdrew its most ambitious frontier-compute export framework; tiered licensing + model-weight controls removed before effect.Export controls on frontier compute and models ◐MixedLaws · 6.L.bUSMay 13, 2025
Alongside AI Diffusion Rule rescission (13 May 2025), BIS issued three guidance documents: on GP10 applied to PRC-origin advanced ICs (Huawei Ascend), on controls that may apply to ICs used to train Chinese AI models, and on preventing supply-chain diversion. Narrower, adversary-focused replacement posture compared to the withdrawn diffusion rule.
WhyResidual compute-export enforcement remains but narrower and adversary-scoped; no multilateral framework replaces the tiered diffusion rule.Export controls on frontier compute and models ▲AdvancingMajorNorms · 4.N.aGLOBALMay 8, 2025
CEO Siemiatkowski publicly reversed 2024 AI-replacement strategy after customer experience deteriorated; company rehiring human agents in hybrid model. Became dominant cautionary tale in AI/labor discourse.
WhyHigh-profile reversal shifts discourse from inevitability to skepticism about wholesale replacement.Public discourse on AI and labor ▲AdvancingMajorLaws · 7.L.bEUROPEApr 23, 2025
First DMA non-compliance fines. Apple fined €500M for App Store anti-steering (Article 5(4)); Meta fined €200M for "consent or pay" model (Article 5(2)). Both given 60 days to comply or face periodic penalty payments.
WhyDMA teeth demonstrated; interoperability/anti-gatekeeper regime now credibly applies to the AI platform layer as gatekeepers integrate AI.Interoperability and data portability mandates ▲AdvancingDesign · 5.D.cGLOBALApr 10, 2025
OpenAI shipped a Data Controls settings surface in 2025: view/edit/delete individual memories, Temporary Chats (auto-delete 30d, excluded from training), export all conversations, and an 'Improve the model for everyone' opt-out toggle. Visible in UI, user-invocable; business/enterprise plans excluded from training by default.
WhyUser-visible rights controls are now standard in ChatGPT UI. Gaps remain (default-on training for consumer plans, incomplete memory export).User rights surfaced in UX ▲AdvancingDesign · 5.D.aEUROPEApr 10, 2025
On 10 Apr 2025 OpenAI rolled out ChatGPT Memory to Plus/Pro/Team/Enterprise users globally, explicitly excluding the EU, EEA, UK, Switzerland, Norway, Iceland and Liechtenstein pending AI Act / GDPR compliance review. Users in covered regions retain view/edit/delete/export controls.
WhyRegulatory geography actually changed default data-collection behavior in the EU. Small but concrete.Privacy-preserving design defaults ▼RegressingNorms · 7.N.bEUROPEApr 5, 2025
Meta released Llama 4 Scout and Maverick on Apr 5, 2025 but the Llama 4 Community License Agreement denies rights to individuals domiciled in, or companies headquartered in, the EU — a continuation of the Llama 3.2-Vision carve-out applied to all multimodal Llama models.
WhyGeographic carve-outs fragment the "open" norm; license restrictions tighten in response to EU AI Act compliance uncertaintyOpen-source vs closed-source debate ▲AdvancingLaws · 6.L.aEUROPEMar 27, 2025
Federal Councillor Albert Rösti signed the Council of Europe Framework Convention on AI on behalf of Switzerland in Strasbourg on 27 Mar 2025. Federal Council confirmed intent to ratify (12 Feb 2025); legislative consultation draft due end 2026.
WhyExpands signatory base of the only binding international AI instrument; adds non-EU European adopter.Multilateral treaties and conventions ▲AdvancingMajorNorms · 1.N.cGLOBALFeb 28, 2025
WhyOECD framework makes incident disclosure a tracked international norm across member states.Incident disclosure as expected behavior ▲AdvancingMajorLaws · 4.L.dEUROPEFeb 27, 2025
Court of Justice of the EU ruled on 27 Feb 2025 (C-203/22) that data subjects have a right to meaningful explanation of the logic in automated decisions affecting them; strengthens worker rights against opaque scoring.
WhyTop EU court hardens automated decision-making rights; directly applicable to workplace algorithms.Automated decision-making rights ▲AdvancingNorms · 4.N.cUSFeb 25, 2025
Pew Research finds majority of workers personally concerned about AI replacing or reshaping their role; only 36% optimistic. Concern concentrated in knowledge work.
WhyWorker-level skepticism distinct from general public; directly relevant to consent and deployment legitimacy.Public skepticism toward AI replacement rhetoric ▲AdvancingLaws · 5.L.dUSFeb 19, 2025
At a Feb 2025 Committee on Public Safety hearing, NYPD Deputy Commissioner Michael Gerber confirmed the department does not use and has no plans to use AI for predictive policing, following years of heat-list and Strategic Subjects List retirements in Chicago and LA.
WhyNormative shift: largest US department on record rejecting predictive policing. Signals rather than binds.Restrictions on predictive policing and algorithmic sentencing ▲AdvancingDesign · 6.D.bGLOBALFeb 12, 2025
Singapore, Japan, UK-led joint testing exercise of Mistral Large and Gemma 2 (27B) across ten languages (11-12 Feb 2025, Paris) — 130,000+ cyber prompts, 6,000+ newly translated multilingual prompts, 40 agentic cyber tasks. Demonstrated technical information-sharing on dangerous capability evaluation between AISIs. Report published Mar 2025.
WhyConcrete multilateral info-sharing on cyber/safety evals across jurisdictions; rare working-level evidence of AISI network delivering.Information sharing on dangerous capabilities ▼RegressingMajorNorms · 6.N.cGLOBALFeb 11, 2025
At the Paris AI Action Summit (10-11 Feb 2025), 61 countries including China, India, Japan, Canada signed the 'Statement on Inclusive and Sustainable AI.' The US and UK declined, with VP Vance warning against 'excessive regulation.' Marked retreat from prior Bletchley/Seoul consensus.
WhyUS + UK publicly broke from multilateral AI-governance consensus; erodes public expectation of coordinated cross-border norms.Public expectation of cross-border governance ▲AdvancingMajorLaws · 5.L.aEUROPEFeb 2, 2025
From 2 Feb 2025, the EU AI Act prohibits untargeted scraping to build facial recognition databases, emotion recognition at work/school, certain biometric categorization, and (with exceptions) real-time remote biometric identification by law enforcement. Fines up to €35M or 7% of turnover.
WhyLargest binding biometric prohibition regime to date. Partial bans with law-enforcement exceptions, but Art.5 fines are substantial.Biometric and facial recognition limits ▼RegressingMajorLaws · 1.L.aUSJan 20, 2025
WhyUS federal pre-deployment testing guidance removed; shifts regulatory burden to states and EU.Pre-deployment evaluation mandates ▲AdvancingMajorNorms · 7.N.bGLOBALJan 20, 2025
DeepSeek released R1 on Jan 20, 2025 under MIT license at a fraction of frontier cost; Altman later acknowledged OpenAI was "on the wrong side of history" on open weights. Triggered $589B Nvidia single-day market-cap loss on Jan 27.
WhyOpen-weights momentum shifts industry norm; forces closed-model incumbents to defend secrecy publicly and cut prices within weeks.Open-source vs closed-source debate ◐MixedMajorNorms · 7.N.aUSJan 17, 2025
FTC published 6(b) staff report on Microsoft-OpenAI, Amazon-Anthropic, Google-Anthropic partnerships. Flags concentration and competition concerns; Chair Ferguson dissented from the section identifying power-concentration implications, signalling enforcement restraint under Trump administration.
WhyReport names the concentration problem at the cloud-AI frontier but the incoming FTC chair publicly pulled back from its policy conclusions.Antitrust and competition discourse applied to AI ▲AdvancingMajorLaws · 5.L.dUSJul 1, 2024
New Hampshire HB 1688, signed by Gov. Sununu on 12 July 2024 and effective retroactively from 1 July 2024, prohibits state agency use of real-time and remote biometric identification (including facial recognition) for surveillance in public spaces except by law enforcement with a warrant. Among the clearest state-level warrant requirements on live police biometric surveillance to date.
WhyFirst clear warrant-based limit on live police biometric surveillance at state level. Meaningful even with law-enforcement exception.Restrictions on predictive policing and algorithmic sentencing ▲AdvancingDesign · 6.D.cGLOBALMay 1, 2024
The Frontier Model Forum published a technical report detailing emerging industry practices for conducting frontier capability assessments.
WhyFrontier Model Forum's report on capability assessment practices demonstrates cross-lab convergence on evaluation methodologies.Evaluation protocols aligned internationally ▲AdvancingNorms · 5.N.cUSFeb 3, 2020
Inc. magazine included the documentary 'Coded Bias,' which explores algorithmic discrimination, in its list of must-watch movies for business leaders.
WhyInc. magazine featured 'Coded Bias' as a must-watch for leaders, showing mainstream recognition of algorithmic discrimination.Recognition of algorithmic discrimination ▲AdvancingNorms · 5.N.bUSDec 19, 2019
A federal study confirmed that many facial recognition systems exhibit significant racial bias, raising mainstream concerns about their expanding use.
WhyA US federal study confirmed widespread racial bias in facial recognition systems, driving mainstream attention to algorithmic harms.Attention to algorithmic harms ▲AdvancingNorms · 5.N.aUSMay 22, 2019
The US House Committee on Oversight and Reform held a hearing titled 'Facial Recognition Technology (Part 1): Its Impact on Our Civil Rights and Liberties' to debate the surveillance risks of the technology.
WhyUS House Oversight Committee held a hearing focusing on the civil rights and liberties impacts of facial recognition technology.Public debate on AI surveillance and civil liberties ▲AdvancingNorms · 5.N.bUSApr 3, 2019
A Turing Award-winning AI researcher publicly criticized Amazon's facial recognition technology, adding expert weight to ongoing concerns about algorithmic bias and accuracy.
WhyProminent AI researchers, including a Turing Award winner, publicly criticized Amazon's facial recognition tech over algorithmic bias.Attention to algorithmic harms ▲AdvancingNorms · 5.N.cUSFeb 7, 2019
TIME magazine published a prominent article highlighting the pervasive issues of gender and racial bias in artificial intelligence systems, bringing mainstream attention to algorithmic discrimination.
WhyTIME magazine published an article explicitly acknowledging and bringing mainstream attention to systematic gender and racial bias in AI.Recognition of algorithmic discrimination ▲AdvancingDesign · 7.D.aGLOBALNov 2, 2018
Google released the open-source code and pre-trained weights for BERT, a state-of-the-art natural language processing model.
WhyGoogle open-sourced BERT, a state-of-the-art NLP model, providing open-weights access to a frontier-class system.Open-source model releases ▲AdvancingMajorNorms · 5.N.bGLOBALFeb 4, 2018
The Gender Shades project, led by Joy Buolamwini and Timnit Gebru, revealed that commercial facial recognition systems from major tech companies exhibited significant performance disparities, performing worst on darker-skinned females.
WhyLandmark academic investigation exposed severe gender and skin-type bias in commercial facial recognition systems.Attention to algorithmic harms