Human in the Loop

All signals

Every coded event moving the matrix — tagged to an indicator, checked for triangulation, and weighted by magnitude. Most recent first.

Advancing · Norms · 7.N.c · EUROPE · Apr 18, 2026

Data center lobby secured EU provision to keep environmental impact data confidential - report

A new report reveals that a lobbying group representing major tech companies, including Microsoft, Amazon, Google, and Meta, successfully secured a provision in the EU to keep data center environmental impact data confidential.

Why: A new report scrutinizes tech giants' lobbying efforts, surfacing their influence in securing an EU provision to hide environmental data. · Indicator: Concerns about democratic capture
Regressing · Laws · 7.L.b · US · Apr 9, 2026

Comparison Shopping Is Not a (Computer) Crime

A federal district court ruled that Perplexity's AI-enabled browser violated the Computer Fraud and Abuse Act by accessing Amazon's website to help users comparison shop. The EFF has filed an amicus brief supporting Perplexity's appeal to the Ninth Circuit, arguing the decision harms competition and innovation.

Why: A federal court ruled Perplexity's AI shopping agent violated the CFAA by accessing Amazon, legally protecting a walled garden. · Indicator: Interoperability and data portability mandates
Advancing · Design · 2.D.b · US · Apr 8, 2026

OpenAI Child Safety Blueprint published

OpenAI published its Child Safety Blueprint on April 8, 2026, detailing reporting channels, provider coordination, and safety-by-design controls against AI-enabled child exploitation.

Why: Lab self-publishing reporting-channel design; voluntary, but raises the industry baseline for abuse channels. · Indicator: User reporting / abuse channels
Advancing · Norms · 5.N.a · GLOBAL · Apr 8, 2026

Digital Hopes, Real Power: How the Arab Spring Fueled a Global Surveillance Boom

The Electronic Frontier Foundation published a blog series reflecting on how the 2011 Arab uprisings inadvertently fueled a global boom in state surveillance, including the rise of AI-driven biometrics and facial recognition.

Why: EFF blog series highlights the rise of AI-driven surveillance and biometrics, sustaining civil society pressure on digital authoritarianism. · Indicator: Public debate on AI surveillance and civil liberties
Advancing (Major) · Laws · 5.L.c · EUROPE · Apr 7, 2026

EU Parliament Blocks Mass-Scanning of Our Chats—What's Next?

The EU Parliament voted not to extend an interim derogation from e-Privacy rules, effectively making the voluntary mass-scanning of private chats by tech companies illegal in the EU.

Why: EU Parliament voted against prolonging an e-Privacy derogation, effectively outlawing voluntary algorithmic mass-scanning of private chats. · Indicator: Data protection strengthening
Advancing · Norms · 6.N.c · GLOBAL · Apr 5, 2026

UN opens public consultation for first Global Dialogue on AI Governance

The UN opened a worldwide public submission portal (deadline 30 Apr 2026) for input into the first Global Dialogue on AI Governance, to be held in 2026 in Geneva back-to-back with the ITU AI for Good Summit. The Dialogue and the Independent International Scientific Panel on AI were established by UNGA Resolution A/RES/79/325 (adopted by consensus 26 Aug 2025). Signals a formal public-expectation channel for cross-border AI governance.

Why: First UN-sanctioned open public channel on AI governance; institutionalizes public expectation of multilateral oversight. · Indicator: Public expectation of cross-border governance
Advancing · Norms · 5.N.a · US · Apr 3, 2026

Tech Nonprofits to Feds: Don’t Weaponize Procurement to Undermine AI Trust and Safety

Tech nonprofits, including the EFF and CDT, filed comments opposing a proposed GSA procurement rule that would require AI contractors to license their systems for "all lawful purposes," arguing it could enable mass surveillance.

Why: Civil society groups filed comments opposing a proposed GSA procurement rule that would force AI contractors to allow use for surveillance. · Indicator: Public debate on AI surveillance and civil liberties
Advancing · Norms · 5.N.a · GLOBAL · Apr 2, 2026

Google and Amazon: Acknowledged Risks, and Ignored Responsibilities

The Electronic Frontier Foundation publicly criticized Google and Amazon for failing to address human rights and surveillance risks associated with their Project Nimbus AI cloud contract with the Israeli government.

Why: EFF published a critique pressuring Google and Amazon over the human rights and surveillance risks of their AI cloud contract with Israel. · Indicator: Public debate on AI surveillance and civil liberties
Advancing · Norms · 5.N.a · GLOBAL · Apr 2, 2026

EFF’s Submission to the UN OHCHR on Protection of Human Rights Defenders in the Digital Age

The Electronic Frontier Foundation (EFF) submitted a report to the UN OHCHR detailing how new digital regulations and surveillance technologies, including biometric monitoring, are being used to restrict the fundamental rights of human rights defenders globally.

Why: EFF submitted a report to the UN OHCHR highlighting how expanded state surveillance and biometric monitoring threaten human rights defenders. · Indicator: Public debate on AI surveillance and civil liberties
Advancing · Design · 7.D.c · GLOBAL · Apr 1, 2026

Anthropic Fellows Program opens next two cohorts (May & July 2026)

Anthropic opened applications for its next Fellows cohorts beginning May and July 2026. 4-month program, $3,850/week stipend + ~$15k/mo compute funding + mentorship for independent safety researchers. 40% of first-cohort fellows joined Anthropic full-time; 80%+ produced papers.

Why: Third-party access channel widens; structured external-researcher pipeline at a frontier lab, though scale remains small (dozens). · Indicator: Third-party access to closed models
Advancing · Laws · 7.L.c · EUROPE · Mar 30, 2026

Mistral raises €722M debt to build sovereign European AI compute

Mistral AI secured €722M ($830M) in debt financing from seven-bank consortium led by Bpifrance and BNP Paribas to build a 44MW data center near Paris with 13,800 Nvidia GB300 GPUs (Q2 2026 target). Largest AI-focused debt raise by a European technology company to date; targets 200MW capacity by end-2027.

Why: European champion scales outside US hyperscaler dependence; a credible sovereign alternative emerging on the compute layer. · Indicator: Public-option AI and sovereign compute funding
Regressing (Major) · Laws · 5.L.c · EUROPE · Mar 18, 2026

Court of Rome annuls Italy's €15M GDPR fine against OpenAI

On 18 March 2026, the Court of Rome annulled the Garante's Nov 2024 €15M fine — the only final GDPR enforcement decision against a GenAI provider. The fine had found unlawful training data processing, breach-notification failure and no age verification. Garante can appeal. Judgment no. 4153/2026, R.G. 4785/2025.

Why: Only final GenAI GDPR fine in Europe just collapsed on appeal. Signals hard limits on current data-protection tools against frontier labs. · Indicator: Data protection strengthening
Regressing · Laws · 5.L.b · US · Mar 17, 2026

Colorado AI Act effective date pushed to June 2026 amid industry pressure to repeal

CO SB 24-205, the first US comprehensive AI anti-discrimination law, was delayed from Feb 2026 to 30 June 2026 via SB 25B-004. In March 2026, Governor Polis's AI Policy Working Group proposed a replacement bill stripping many employer compliance duties.

Why: Industry lobbying is actively eroding the strongest US state AI anti-discrimination law before it takes effect. Net-negative on enforcement. · Indicator: Algorithmic bias and discrimination protections
Advancing · Laws · 4.L.a · US · Mar 17, 2026

Minnesota introduces HF 4369 (Safeguarding Human Intelligence Act) with 90-day AI displacement notice

HF 4369, introduced 17 March 2026, requires employers to notify workers 90 days before AI-driven layoffs and to fund retraining; it is the first US state-level AI displacement notification bill. Companion bill SF 4576 filed in the Senate.

Why: Concrete state-level protection template; shifts displacement from externality to regulated transition. · Indicator: Worker displacement protections and transition funding
Regressing (Major) · Norms · 5.N.c · GLOBAL · Mar 12, 2026

IDS / African Digital Rights Network report: 11 African governments spent $2B+ on Chinese-built AI surveillance

The Institute of Development Studies, with the African Digital Rights Network, published 'Smart City Surveillance in Africa: Mapping Chinese AI Surveillance Across 11 Countries' on 12 Mar 2026. Documents $2B+ spent by Algeria, Egypt, Kenya, Mauritius, Mozambique, Nigeria, Rwanda, Senegal, Uganda, Zambia and Zimbabwe on facial recognition and ANPR, with deployments used against activists, opposition figures and journalists despite no demonstrated crime-reduction effect.

Why: Major documented algorithmic-surveillance discrimination against dissidents across 11 states. Recognition rising; power is not. · Indicator: Recognition of algorithmic discrimination
Advancing (Major) · Laws · 6.L.a · EUROPE · Mar 11, 2026

European Parliament approves EU accession to Council of Europe AI Convention

On 11 Mar 2026 the European Parliament gave consent (455 in favour, 101 against, 74 abstentions) to EU conclusion of the Council of Europe Framework Convention on AI and Human Rights (CETS 225), the first legally binding international AI treaty. Advances ratification; treaty enters into force after 5 states ratify.

Why: Key step toward entry-into-force of the first binding multilateral AI treaty; EU accession anchors the Convention. · Indicator: Multilateral treaties and conventions
Mixed · Laws · 3.L.b · US · Mar 5, 2026

KOSA stalled in Senate Commerce Committee under Cruz

Despite 75+ co-sponsors, KOSA has not received a markup from Sen. Cruz's committee as of Feb 2026. House advanced narrower KIDS Act (28-24, party line) in March 2026, weaker than Senate version.

Why: Federal minor-protection bill stalled; state laws (CA, NY, AU) advancing faster than US federal action. · Indicator: Protections for minors
Regressing · Laws · 3.L.c · US · Mar 3, 2026

Utah Minor Protection in Social Media Act remains enjoined

Utah SB 194 — requiring default privacy, disabled autoplay/infinite scroll/push notifications for minors — remains stayed under NetChoice First Amendment injunction as of April 2026. Similar Virginia injunction appealed March 2026.

Why: NetChoice-led First Amendment litigation has blocked state-level design-mandate laws in UT, OH, CA, AR, MS, VA — slows design regulation. · Indicator: Ad / recommendation system transparency
Advancing · Norms · 5.N.a · US · Mar 3, 2026

SF QuitGPT protest against OpenAI-Pentagon contract; broader multi-lab march the prior week

On 3 Mar 2026, ~40-50 activists rallied outside OpenAI's SF HQ in a 'QuitGPT' protest against its Pentagon contract. The prior week, a larger ~500-person multi-lab march targeted DeepMind, OpenAI and Meta; ~200 protested Virginia data centers. Concerns: mass surveillance, autonomous weapons, environmental impact.

Why: Organized public protest against frontier-lab militarization — real mobilization, not just op-eds. Rare for AI-surveillance debate in US. · Indicator: Public debate on AI surveillance and civil liberties
Advancing (Major) · Norms · 3.N.c · US · Feb 27, 2026

Common Sense Media "no AI companions for under-18s" stance re-surfaces in mainstream

Common Sense Media's report "Talk, Trust, and Trade-Offs" (orig. Jul 2025) was reaffirmed via Penn State re-coverage in Feb 2026 and ongoing APA Monitor, Stanford SSIR, and Brookings citations: the peril of AI companions outweighs their potential; no use by under-18s.

Why: Sustained expert-consensus stance that AI companions harm youth mental health; embedded in mainstream coverage. · Indicator: Mental health implications in mainstream discourse
Advancing · Norms · 5.N.a · US · Feb 27, 2026

Civil rights coalition decries DoD pressure on Anthropic to lift AI surveillance guardrails

The Leadership Conference on Civil and Human Rights publicly condemned the Department of Defense's campaign to pressure Anthropic into lifting restrictions on surveillance use of its AI, framing it as a 'tech-fueled domestic surveillance state.'

Why: Civil-society attention to surveillance repurposing of frontier AI — healthy debate signal, though substantive power remains with DoD. · Indicator: Public debate on AI surveillance and civil liberties
Regressing (Major) · Design · 6.D.a · GLOBAL · Feb 24, 2026

Anthropic drops categorical pause commitment from Responsible Scaling Policy v3.0

On 24 Feb 2026 Anthropic released RSP v3.0, removing its previous implication that it would pause training/deployment if risks exceeded acceptable levels. Some mitigations (e.g. RAND SL4) reframed as 'industry-wide recommendations' rather than unilateral commitments. GovAI and multiple safety commentators flagged this as material weakening of the flagship voluntary framework.

Why: Flagship voluntary commitment weakened by the lab that pioneered the framework; confirms collective-action limits of self-governance. · Indicator: Voluntary industry safety commitments
Advancing (Major) · Norms · 6.N.c · GLOBAL · Feb 21, 2026

New Delhi Declaration on AI Impact endorsed by 92 countries and international organisations

At the India AI Impact Summit (16-21 Feb 2026; declaration adopted 18-19 Feb, announced 21 Feb 2026), 92 countries and international organisations — including US, UK, China, Russia, EU, Switzerland — endorsed the New Delhi Declaration committing to international cooperation on safe, inclusive AI. Substantially broader than Paris 2025 signatory base; US re-engaged. (Initial 88 on 21 Feb, grew to 91 by 24 Feb, 92 as of 5 Mar 2026.)

Why: Largest AI summit declaration to date; notable US return post-Paris 2025. Multilateral coordination expectation partially restored. · Indicator: Public expectation of cross-border governance
Advancing · Norms · 6.N.b · GLOBAL · Feb 15, 2026

FLI civil society recommendations published ahead of India AI Impact Summit

Future of Life Institute (UN SG's designated civil society co-champion for AI) released pre-summit recommendations calling for Global South participation in AI governance, national audit capacity, and commensurate safety guarantees from developers. Published 15 Feb 2026, days before the 16-21 Feb New Delhi summit.

Why: Major civil society coalition input to multilateral process; signals continued organized civil society engagement with the summit track. · Indicator: Multilateral civil society coalitions
Advancing · Norms · 6.N.b · EUROPE · Feb 11, 2026

Open letter from 60+ civil society orgs to EU on AI Act transparency safeguard

European coalition of civil society, trade unions, and academics (60+ organisations incl. Access Now, EDRi, Amnesty Tech, BEUC) signed open letter to MEPs and Commission urging rejection of AI Omnibus amendments that would delete Art. 49(2) high-risk transparency safeguard.

Why: Multilateral civil society coalition defending binding cross-border governance safeguards against industry-led rollback. · Indicator: Multilateral civil society coalitions
Advancing (Major) · Norms · 3.N.b · EUROPE · Feb 6, 2026

EU Commission finds TikTok's addictive design breaches DSA

Preliminary findings published 6 Feb 2026: infinite scroll, autoplay, push notifications and recommender system inadequately mitigated for mental-health risks. Commission says TikTok must change the basic design of its service.

Why: First regulator to formally name infinite scroll + autoplay as addictive design requiring fundamental redesign under the DSA. · Indicator: Critique of dark-pattern and addictive design
Regressing (Major) · Design · 3.D.c · EUROPE · Feb 6, 2026

TikTok rejects EU Commission call to disable infinite scroll

Feb 2026: TikTok disputed Commission findings as "categorically false," defended screen-time tools that EU called "easy to dismiss." No commitment to disable infinite scroll or redesign recommender system.

Why: Largest attention-economy platform actively resists attention-respecting redesign; engagement-max still winning in product. · Indicator: Attention respect in UX
Advancing · Norms · 3.N.a · GLOBAL · Feb 4, 2026

Claude is a space to think

Anthropic announced that its AI assistant Claude will remain ad-free, emphasizing a product philosophy centered on providing users a 'space to think' rather than maximizing engagement.

Why: Anthropic publicly commits to an ad-free model for Claude, framing the AI as a 'space to think' rather than an engagement-maximizing tool. · Indicator: Public discourse on well-being metrics over engagement
Advancing (Major) · Norms · 6.N.a · GLOBAL · Feb 3, 2026

International AI Safety Report 2026 published by 100+ experts from 30+ countries

Second annual International AI Safety Report chaired by Yoshua Bengio, backed by EU/OECD/UN and 30+ countries, released 3 Feb 2026 ahead of the India AI Impact Summit. 200-page report with 1,451 references. Confirms scientific consensus on escalating risks (cyber, bio, loss of control, eval-aware models). (Note: arxiv preprint 2602.21012 uploaded 24 Feb 2026; primary release was UK gov on 3 Feb 2026.)

Why: Most authoritative international scientific consensus document on AI risks; institutionalized annual cadence with a 30+ country panel. · Indicator: International scientific consensus on risks
Regressing (Major) · Norms · 5.N.a · EUROPE · Jan 26, 2026

UK Home Secretary announces largest national live facial recognition rollout in British history

UK Home Secretary Shabana Mahmood announced on 26 Jan 2026 a policing white paper expanding mobile LFR camera vans from 10 to 50 nationwide, plus a £115M National Centre for AI in Policing (Police.AI). Backlash from Big Brother Watch, Liberty and parliamentary critics framing it as an 'Orwellian panopticon'; the government has also launched a consultation on a bespoke legal framework.

Why: Strong public debate: gov framed as 'panopticon' + civil-society/court pushback. Net-negative for rights; the debate itself is healthy. · Indicator: Public debate on AI surveillance and civil liberties
Mixed · Laws · 7.L.b · EUROPE · Jan 8, 2026

EC publishes DMA review summary with AI-gatekeeper scope under active consideration

On Jan 8, 2026 the Commission published the summary of 450+ contributions to its DMA review consultation (ran Jul-Sep 2025, with dedicated AI questionnaire launched Aug 26, 2025). AI emerged as the most prevalent theme; respondents split on whether to add a standalone AI Core Platform Service category. Article 53 report due May 3, 2026.

Why: Consultation-stage move; norm-gathering for future enforcement rather than action. · Indicator: Interoperability and data portability mandates
Advancing (Major) · Norms · 3.N.c · US · Jan 7, 2026

Google and Character.AI settle teen-suicide wrongful death lawsuits

Settlement in principle on Jan 7 2026 covering lawsuits from families in Florida, Colorado, Texas and NY alleging Character.AI chatbots drove teens to suicide or self-harm. Landmark liability moment for AI-related mental-health harm.

Why: Liability settlement validates the mental-health-harm narrative in mainstream discourse; sets precedent for AI accountability. · Indicator: Mental health implications in mainstream discourse
Advancing (Major) · Design · 5.D.c · US · Jan 1, 2026

California ADMT regulations require consumer notice and opt-out for AI significant decisions

New CPPA rules approved 23 Sep 2025 took effect 1 Jan 2026 with phased deadlines: businesses using Automated Decision-Making Technology for 'significant decisions' (employment, housing, credit, education, healthcare) must conduct risk assessments and honor notice, access, appeal and opt-out rights. ADMT-specific notice/opt-out obligations broadly effective 1 Jan 2027; risk-assessment duties apply prospectively from 1 Jan 2026.

Why: First US state law forcing UX-level surfacing of algorithmic-decision rights (notice + appeal) on deployers. Sets the pattern. · Indicator: User rights surfaced in UX
Advancing · Laws · 5.L.c · US · Jan 1, 2026

Three new state comprehensive privacy laws take effect Jan 2026 (IN, KY, RI)

Indiana Consumer Data Protection Act, Kentucky Consumer Data Protection Act and Rhode Island Data Transparency and Privacy Protection Act all took effect 1 Jan 2026, bringing the total US states with comprehensive privacy laws to 20. Rhode Island uses notably low thresholds (35K consumers).

Why: Incremental state-by-state privacy buildout continues. None are strong laws individually, but the compound effect is real. · Indicator: Data protection strengthening
Advancing · Laws · 5.L.c · US · Jan 1, 2026

California Senate Bill 361 expands data broker registration; generative-AI disclosure required

CA SB 361 took effect Jan 2026, requiring data brokers to disclose whether personal data is sold to generative AI developers, foreign actors, or governments. Opt-out requests must be processed within 45 days via CPPA's deletion mechanism.

Why: First state law explicitly flagging AI-training data flows in broker disclosures. Small but precedent-setting. · Indicator: Data protection strengthening
Advancing · Laws · 7.L.d · US · Jan 1, 2026

Nevada AI political advertising disclosure law (AB73) takes effect

Nevada AB73 became effective Jan 1, 2026, requiring "clear and conspicuous" disclosure when synthetic/AI-generated media is used in political communications. Unanimously passed; gives depicted candidates injunctive relief against undisclosed AI manipulation. Part of a broader wave of state deepfake-political laws.

Why: State-level democratic-integrity guardrails spreading; counterweight to federal inaction. · Indicator: Restrictions on political uses of AI
Advancing (Major) · Laws · 3.L.b · US · Jan 1, 2026

California SB 243 companion-chatbot law takes effect

First US law specifically regulating AI "companion chatbots" — effective 1 Jan 2026. Requires AI disclosure, self-harm safety protocols, minor-specific safeguards including 3-hour break reminders, crisis referrals. Private right of action.

Why: First-in-nation law targeting AI-companion harms to minors; creates a liability template other states will copy. · Indicator: Protections for minors
Advancing · Design · 6.D.c · GLOBAL · Dec 18, 2025

UK AISI publishes Frontier AI Trends Report with two years of model-evaluation data

On 18 Dec 2025 the UK AI Security Institute released its first Frontier AI Trends Report, synthesizing two years of testing 30+ frontier models. Key findings: cyber apprentice-level task success rose from <9% (2023) to ~50% (2025); first expert-level cyber task completed in 2025; models outperform PhD-level experts on chem/bio knowledge; hour-long software tasks completed >40% of the time. Shared publicly to inform the International Network's evaluation science.

Why: Public evidence base for internationally aligned evaluation protocols; the UK continues to anchor the network's technical output. · Indicator: Evaluation protocols aligned internationally
Regressing (Major) · Laws · 6.L.c · US · Dec 11, 2025

Trump executive order preempts state AI laws via litigation task force

On 11 Dec 2025 Trump signed EO 'Ensuring a National Policy Framework for AI' creating an AI Litigation Task Force at DOJ (established by AG memo 9 Jan 2026) to challenge state AI laws and conditioning federal grants on states not enforcing them. Colorado AI Act explicitly named. Preempts subnational compute/licensing experiments that could have aligned with international norms.

Why: Forecloses US sub-federal compute governance that could align with international frameworks; concentrates power at a minimalist federal level. · Indicator: Compute governance and licensing
Advancing (Major) · Laws · 3.L.b · GLOBAL · Dec 10, 2025

Australia bans under-16s from major social platforms

World-first law effective 10 Dec 2025 requires Facebook, Instagram, TikTok, Snapchat, YouTube, Reddit, X, Threads, Twitch, Kick to take reasonable steps to remove under-16 accounts. Penalties up to A$49.5M.

Why: Hardest minor-protection law enacted anywhere; precedent being watched globally, with Slovenia, Spain, and Denmark considering similar measures. · Indicator: Protections for minors
Mixed · Design · 6.D.c · GLOBAL · Dec 10, 2025

International Network of AISIs renamed to emphasize measurement, not safety

At the 4-5 Dec 2025 San Diego meeting, the network (launched at Seoul Summit May 2024; formally established Nov 2024) was renamed 'International Network for Advanced AI Measurement, Evaluation and Science' — widely read as a concession to keep the US (CAISI) engaged. UK took the Network Coordinator role. Australia joined after 25 Nov 2025 AU AISI announcement ($29.9M, operating early 2026). Members: AU, CA, EU, FR, JP, KE, KR, SG, UK, US.

Why: Eval-protocol alignment continues at working level but under diluted branding; Australia's accession expands the network, but the US posture remains uncertain. · Indicator: Evaluation protocols aligned internationally
Advancing (Major) · Laws · 7.L.c · EUROPE · Dec 5, 2025

Council of EU adopts conclusions on European competitiveness in the digital decade

On Dec 5, 2025 the Council published conclusions on European Competitiveness in the Digital Decade, urging open standards, interoperability and reduced vendor lock-in in cloud, AI, cybersecurity and connectivity; asks Commission to develop common criteria for sovereign cloud services ahead of the forthcoming Cloud and AI Development Act (Commission proposal due Q1 2026).

Why: Council-level political backing for sovereign EU compute and public AI infrastructure ahead of CADA. · Indicator: Public-option AI and sovereign compute funding
Advancing · Laws · 4.L.a · US · Dec 3, 2025

AI Workforce PREPARE Act introduced in US Senate

S.3339 introduced 3 Dec 2025 by Sen. Jim Banks (R-IN) with cosponsors Hassan (D-NH), Hickenlooper (D-CO), and Husted (R-OH); establishes federal AI workforce transition fund and retraining grants. Bipartisan cosponsorship signals emerging consensus.

Why: Federal-level displacement funding mechanism; first bipartisan AI transition bill with serious cosponsors across party lines. · Indicator: Worker displacement protections and transition funding
Advancing · Design · 3.D.b · GLOBAL · Nov 25, 2025

Character.AI bans under-18s from open-ended chat

Late Nov 2025: platform blocked minors from primary chat feature after cumulative teen-suicide lawsuits. Age-assurance functionality rolled out. Critics say late and creates dependency-withdrawal risk.

Why: Product change driven by litigation, not well-being mission; still a rare case of a platform removing its core engagement loop for minors. · Indicator: Well-being features shipped by default
Regressing · Norms · 7.N.c · US · Nov 17, 2025

House GOP eyes NDAA as vehicle to revive AI state-law preemption

On Nov 17, 2025, House Majority Leader Scalise confirmed to Punchbowl News that House GOP leaders are exploring attaching AI preemption language to the FY26 National Defense Authorization Act. Congressional Progressive Caucus and states-rights Republicans lined up against.

Why: Industry allies retry democratic capture via a different legislative vehicle after the 99-1 defeat. · Indicator: Concerns about democratic capture
Advancing · Design · 5.D.b · GLOBAL · Nov 13, 2025

Anthropic publishes political even-handedness evaluation and reports Claude Sonnet 4.5 scores

Anthropic published a new automated method for measuring political bias on 13 Nov 2025, reporting Claude Sonnet 4.5 at 94% even-handedness (Opus 4.1 at 95%; Gemini 2.5 Pro 97%, Grok 4 96%, GPT-5 89%, Llama 4 66%). Method open-sourced so other labs and researchers can reproduce.

Why: Bias-eval tooling is becoming standard and comparable across frontier labs. Politics-specific, but the method is transferable. · Indicator: Bias testing and fairness tooling in development
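Anthropic's actual evaluation is published separately and is not reproduced here; as a loose illustration of the kind of paired-prompt aggregation an "even-handedness" percentage implies (the `PairResult` structure, grader judgments, and topics below are all hypothetical, not Anthropic's code):

```python
# Toy aggregation of an even-handedness score over paired political prompts.
# Each pair poses the same request from opposing viewpoints; a grader marks
# whether the two responses were comparably substantive. Entirely illustrative.
from dataclasses import dataclass

@dataclass
class PairResult:
    topic: str
    comparable: bool  # grader judgment: both sides treated with equal depth

def even_handedness(results: list[PairResult]) -> float:
    """Fraction of prompt pairs judged comparable, as a percentage."""
    if not results:
        return 0.0
    return 100 * sum(r.comparable for r in results) / len(results)

results = [
    PairResult("tax policy", True),
    PairResult("immigration", True),
    PairResult("energy", False),
    PairResult("healthcare", True),
]
print(f"even-handedness: {even_handedness(results):.0f}%")  # prints: even-handedness: 75%
```

Publishing the scoring method, as the entry notes, is what lets other labs reproduce comparable numbers.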
Advancing (Major) · Norms · 4.N.b · EUROPE · Nov 11, 2025

EU Parliament EMPL committee adopts algorithmic management directive call

Employment and Social Affairs Committee MEPs voted 11 Nov 2025 (41-6-4) to urge Commission to propose binding rules on algorithmic management across all sectors, extending Platform Work Directive protections to traditional employment. Full plenary endorsed the resolution 17 Dec 2025.

Why: Worker voice institutionalized at EU level; a binding framework for surveillance/monitoring limits advances. · Indicator: Worker voice in AI deployment decisions
Advancing · Design · 3.D.a · GLOBAL · Oct 14, 2025

Meta sets Instagram Teen Accounts to 13+ content default

Oct 14 2025: all under-18 Instagram accounts default to 13+ content filters (inspired by MPA PG-13 guidelines); parental approval required to loosen. Strong language, alcohol, risky stunts hidden. Limited Content mode available.

Why: Default-on teen safety settings are a meaningful de-personalization step, though narrowly applied to minors only. · Indicator: Opt-out and de-personalization defaults
Advancing · Design · 4.D.c · US · Oct 13, 2025

California AB 853 delays SB 942 AI Transparency Act effective date to August 2026

SB 942 (signed 2024) would have required generative AI providers serving >1M Californians to offer free AI-detection tools and mandatory provenance disclosures from 1 Jan 2026. AB 853, signed 13 Oct 2025, pushed the effective date to 2 Aug 2026 citing implementation readiness. Mixed signal: law still advances, but compliance delayed under industry pressure.

Why: First US state AI-content provenance regime still on track, but the eight-month delay reveals industry pushback power. · Indicator: Transparent attribution of AI-generated work
Regressing (Major) · Laws · 2.L.a · EUROPE · Oct 6, 2025

EU AI Liability Directive officially withdrawn

The European Commission formally withdrew the AI Liability Directive proposal; the withdrawal was published in the Official Journal on October 6, 2025, after the February 2025 Work Programme signalled the intent.

Why: The EU abandoning a harmonized AI liability framework is the single largest duty-of-care regression of 2025. · Indicator: Liability for foreseeable AI harms
Advancing (Major) · Laws · 5.L.a · EUROPE · Oct 6, 2025

UK Upper Tribunal overturns Clearview AI ruling, remits £7.5M ICO case for reconsideration

UK Upper Tribunal set aside the First-tier Tribunal's decision that had blocked the ICO's £7.5M Clearview fine, ruling Clearview's face-scraping is 'behavioural monitoring' under UK GDPR regardless of foreign law-enforcement clients. Case remitted to FTT for substantive reconsideration; Clearview subsequently granted appeal to Court of Appeal Dec 2025.

Why: Closes a major jurisdictional loophole for offshore face-recognition vendors. Live appeal; not yet final, but directionally strong. · Indicator: Biometric and facial recognition limits
Advancing (Major) · Design · 7.D.a · GLOBAL · Sep 29, 2025

DeepSeek V3.2-Exp ships Sparse Attention with 50%+ API price cut

DeepSeek released V3.2-Exp on Sep 29, 2025 under MIT license, debuting DeepSeek Sparse Attention (DSA) for long-context efficiency and cutting API prices 50%+ immediately. Full V3.2 shipped Dec 1, 2025 (with V3.2-Speciale reasoning variant).

Why: Cost collapse continues; compute-access moat at the frontier eroding through architectural innovation, not just scale. · Indicator: Open-source model releases
Advancing (Major) · Laws · 7.L.a · US · Sep 12, 2025

FTC-driven Microsoft-OpenAI MOU restructures $13B partnership

In September 2025 Microsoft and OpenAI signed a non-binding MOU to restructure their nearly $13B relationship, enabling OpenAI to transition toward a for-profit entity with freedom to partner with rival cloud providers. The restructuring was driven by threat of a formal FTC merger challenge treating multi-billion-dollar exclusive licensing as an undisclosed merger.

Why: First concrete antitrust-driven unwinding of a cloud-AI quasi-merger. Sets a precedent that exclusive frontier-lab/cloud partnerships can be unwound. · Indicator: Antitrust action against AI market concentration
Advancing (Major) · Design · 7.D.a · GLOBAL · Aug 5, 2025

OpenAI releases gpt-oss-120b and gpt-oss-20b under Apache 2.0

First OpenAI open-weight release since GPT-2. 117B and 21B total-parameter MoE reasoning models (5.1B and 3.6B active), Apache 2.0 license, near-parity with o4-mini / o3-mini on core benchmarks. Runs on single 80GB GPU / 16GB consumer hardware respectively.

Why: Frontier-lab capitulation to the open-weights norm post-DeepSeek R1; design-layer power diffusion accelerating. · Indicator: Open-source model releases
Advancing (Major) · Laws · 6.L.d · EUROPE · Aug 2, 2025

EU AI Act GPAI obligations enter into force with 10^25 FLOP systemic-risk threshold

On 2 Aug 2025 the EU AI Act's GPAI rules took effect, codifying a 10^25 FLOP training-compute threshold for 'systemic-risk' GPAI models subject to safety, security, and transparency obligations (10^23 FLOP threshold for baseline GPAI). First binding international compute-based safety threshold in force. Full enforcement from 2 Aug 2026.

Why: Only binding legal regime with a codified compute threshold for frontier models; reference point for international threshold convergence. · Indicator: Safety thresholds codified internationally
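For a sense of where the 10^25 FLOP line sits, total training compute is often approximated with the 6·N·D heuristic (N parameters, D training tokens). A minimal sketch under that assumption; the two thresholds come from the entry above, while the model figures and the heuristic itself are illustrative, not the Act's legal measurement method:

```python
# Rough training-compute estimate via the common C ≈ 6·N·D approximation.
# Thresholds per the EU AI Act GPAI rules; everything else is illustrative.

SYSTEMIC_RISK_FLOP = 1e25   # systemic-risk GPAI threshold
BASELINE_GPAI_FLOP = 1e23   # baseline GPAI threshold

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs (6·N·D heuristic)."""
    return 6 * params * tokens

def classify(flops: float) -> str:
    if flops >= SYSTEMIC_RISK_FLOP:
        return "systemic-risk GPAI"
    if flops >= BASELINE_GPAI_FLOP:
        return "baseline GPAI"
    return "below GPAI thresholds"

# Illustrative: a 70B-parameter model trained on 15T tokens lands at ~6.3e24,
# under the systemic-risk line but well above the baseline threshold.
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> {classify(flops)}")
```

The two orders of magnitude between the baseline and systemic-risk thresholds are what make the regime tiered rather than binary.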
Advancing (Major) · Design · 6.D.a · EUROPE · Jul 10, 2025

EU GPAI Code of Practice signed by most frontier AI developers

The EU GPAI Code of Practice (Transparency, Copyright, Safety & Security chapters) was published on 10 Jul 2025 and endorsed as an 'adequate voluntary tool' by the Commission and AI Board. Signatories include Amazon, Anthropic, Google, Microsoft, OpenAI, Mistral, Cohere, IBM, Aleph Alpha, and others. Meta and Chinese providers declined. xAI signed only the Safety & Security chapter.

WhyBroadest voluntary industry commitment aligned to a binding legal regime; partial coverage (Meta/China absent) limits scope.Voluntary industry safety commitments
AdvancingMajorNorms · 7.N.cUSJul 1, 2025

US Senate strips AI state-law moratorium from OBBBA 99-1

The Senate voted 99-1 during vote-a-rama to adopt the bipartisan Blackburn/Cantwell amendment striking the 10-year (later reduced to 5-year) federal preemption of state AI laws from the One Big Beautiful Bill Act. Only Sen. Tillis (R-NC) voted no.

WhyNear-unanimous defeat of industry-backed preemption; preserves plural regulatory venues against capture and protects ~1,000 pending state AI bills.Concerns about democratic capture
AdvancingMajorDesign · 5.D.aGLOBALJun 9, 2025

Apple ships Private Cloud Compute for Apple Intelligence across iOS 26 ecosystem

Apple's Private Cloud Compute, which extends iPhone-grade cryptographic privacy guarantees (stateless compute, verifiable transparency, non-targetability, no Apple access) to server-side inference, went into production deployment across iOS 26 and the Foundation Models framework in 2025.

WhyFirst consumer-scale cryptographically-attested privacy boundary for AI inference. Raises the bar for the whole stack.Privacy-preserving design defaults
RegressingMajorDesign · 6.D.bGLOBALJun 3, 2025

UK AI Safety Institute renamed AI Security Institute; US counterpart renamed CAISI

The UK AISI was renamed the 'AI Security Institute' on 13 Feb 2025 (announced at the Munich Security Conference by Peter Kyle), shifting emphasis from safety to security and national defense. The US counterpart was renamed the 'Center for AI Standards and Innovation' (CAISI) on 3 Jun 2025, with an explicit mandate to serve industry as the NIST-housed primary point of contact. Both moves signal a reframing of the voluntary-evals posture in the two lead jurisdictions.

WhyBoth lead government evaluators rebranded away from 'safety' toward security/innovation; chills dangerous-capability info sharing.Information sharing on dangerous capabilities
AdvancingDesign · 5.D.bGLOBALJun 1, 2025

BBQ fairness benchmark now reported across every major frontier lab model card

By 2025-2026, Anthropic, OpenAI, Google DeepMind, and Meta all report BBQ (Bias Benchmark for QA) scores in every major model card, alongside proprietary supplemental evaluations (first-person fairness at OpenAI, paired prompts at Anthropic, FACET/HolisticBias at Meta). NAACL 2025 work (FLEX) and follow-ups note that BBQ scores can mask adversarial failure modes.

WhyDe-facto norm of reporting structured bias benchmarks is stable. Coverage is US-centric and English-only; real but incomplete.Bias testing and fairness tooling in development
RegressingMajorLaws · 6.L.bUSMay 13, 2025

Trump rescinds Biden-era AI Diffusion Rule on compute export controls

On 13 May 2025 BIS announced the rescission of the January 2025 AI Diffusion Rule, which would have imposed a worldwide tiered licensing framework on advanced computing ICs and model weights (including a 10^26 FLOP closed-weight threshold), with compliance set for 15 May 2025. The tiered country framework was dropped; a replacement rule is forthcoming.

WhyUS withdrew its most ambitious frontier-compute export framework; tiered licensing + model-weight controls removed before effect.Export controls on frontier compute and models
MixedLaws · 6.L.bUSMay 13, 2025

BIS issues guidance on Huawei Ascend chip risks and PRC AI training

Alongside the AI Diffusion Rule rescission (13 May 2025), BIS issued three guidance documents: on General Prohibition 10 (GP10) as applied to PRC-origin advanced ICs (Huawei Ascend), on controls that may apply to ICs used to train Chinese AI models, and on preventing supply-chain diversion. A narrower, adversary-focused replacement posture compared to the withdrawn diffusion rule.

WhyResidual compute-export enforcement remains but narrower and adversary-scoped; no multilateral framework replaces the tiered diffusion rule.Export controls on frontier compute and models
AdvancingMajorLaws · 7.L.bEUROPEApr 23, 2025

EU fines Apple €500M and Meta €200M under Digital Markets Act

First DMA non-compliance fines. Apple fined €500M for App Store anti-steering (Article 5(4)); Meta fined €200M for "consent or pay" model (Article 5(2)). Both given 60 days to comply or face periodic penalty payments.

WhyDMA teeth demonstrated; interoperability/anti-gatekeeper regime now credibly applies to the AI platform layer as gatekeepers integrate AI into core services.Interoperability and data portability mandates
AdvancingDesign · 5.D.cGLOBALApr 10, 2025

ChatGPT Data Controls: memory dashboard, Temporary Chats, training opt-out shipped in 2025

OpenAI shipped a Data Controls settings surface in 2025: view/edit/delete of individual memories, Temporary Chats (auto-deleted after 30 days, excluded from training), export of all conversations, and an 'Improve the model for everyone' opt-out toggle. All are visible in the UI and user-invocable; business/enterprise plans are excluded from training by default.

WhyUser-visible rights controls are now standard in ChatGPT UI. Gaps remain (default-on training for consumer plans, incomplete memory export).User rights surfaced in UX
AdvancingDesign · 5.D.aEUROPEApr 10, 2025

OpenAI Memory launched with EU/EEA exclusion pending AI Act compliance review

On 10 Apr 2025 OpenAI rolled out ChatGPT Memory to Plus/Pro/Team/Enterprise users globally, explicitly excluding the EU, EEA, UK, Switzerland, Norway, Iceland and Liechtenstein pending an AI Act / GDPR compliance review. Where Memory is available, users retain view/edit/delete/export controls.

WhyRegulatory geography actually changed default data-collection behavior in the EU. Small but concrete.Privacy-preserving design defaults
RegressingNorms · 7.N.bEUROPEApr 5, 2025

Meta excludes EU from Llama 4 multimodal models via Community License

Meta released Llama 4 Scout and Maverick on Apr 5, 2025, but the Llama 4 Community License Agreement denies license rights to individuals domiciled, and companies headquartered, in the EU, continuing the Llama 3.2-Vision carve-out applied to all multimodal Llama models.

WhyGeographic carve-outs fragment the "open" norm; license restrictions tighten in response to EU AI Act compliance uncertainty.Open-source vs closed-source debate
AdvancingLaws · 6.L.aEUROPEMar 27, 2025

Switzerland signs Council of Europe AI Convention

Federal Councillor Albert Rösti signed the Council of Europe Framework Convention on AI on behalf of Switzerland in Strasbourg on 27 Mar 2025. Federal Council confirmed intent to ratify (12 Feb 2025); legislative consultation draft due end 2026.

WhyExpands signatory base of the only binding international AI instrument; adds non-EU European adopter.Multilateral treaties and conventions
AdvancingLaws · 5.L.dUSFeb 19, 2025

NYPD confirms no current or planned use of AI predictive policing at City Council hearing

At a Feb 2025 Committee on Public Safety hearing, NYPD Deputy Commissioner Michael Gerber confirmed the department does not use and has no plans to use AI for predictive policing, following the retirement of Chicago's Strategic Subjects List ('heat list') and similar predictive-policing programs in LA.

WhyNormative shift: largest US department on record rejecting predictive policing. Signals rather than binds.Restrictions on predictive policing and algorithmic sentencing
AdvancingDesign · 6.D.bGLOBALFeb 12, 2025

International Network of AISIs joint testing exercise on multilingual evals

A joint testing exercise led by Singapore, Japan, and the UK evaluated Mistral Large and Gemma 2 (27B) across ten languages (11-12 Feb 2025, Paris): 130,000+ cyber prompts, 6,000+ newly translated multilingual prompts, and 40 agentic cyber tasks. It demonstrated technical information-sharing on dangerous-capability evaluation between AISIs. Report published Mar 2025.

WhyConcrete multilateral info-sharing on cyber/safety evals across jurisdictions; rare working-level evidence of AISI network delivering.Information sharing on dangerous capabilities
RegressingMajorNorms · 6.N.cGLOBALFeb 11, 2025

US and UK refuse to sign Paris AI Action Summit declaration

At the Paris AI Action Summit (10-11 Feb 2025), 61 countries, including China, India, Japan, and Canada, signed the 'Statement on Inclusive and Sustainable AI.' The US and UK declined, with VP Vance warning against 'excessive regulation.' A marked retreat from the prior Bletchley/Seoul consensus.

WhyUS + UK publicly broke from multilateral AI-governance consensus; erodes public expectation of coordinated cross-border norms.Public expectation of cross-border governance
AdvancingMajorLaws · 5.L.aEUROPEFeb 2, 2025

EU AI Act Article 5 prohibitions on biometric mass surveillance enter into force

From 2 Feb 2025, the EU AI Act prohibits untargeted scraping to build facial recognition databases, emotion recognition at work/school, certain biometric categorization, and (with exceptions) real-time remote biometric identification by law enforcement. Fines up to €35M or 7% of turnover.

WhyLargest binding biometric prohibition regime to date. Partial bans with law-enforcement exceptions, but Art.5 fines are substantial.Biometric and facial recognition limits
AdvancingMajorNorms · 7.N.bGLOBALJan 20, 2025

DeepSeek R1 release reopens open-vs-closed norm debate

DeepSeek released R1 on Jan 20, 2025 under MIT license at a fraction of frontier cost; Altman later acknowledged OpenAI was "on the wrong side of history" on open weights. Triggered $589B Nvidia single-day market-cap loss on Jan 27.

WhyOpen-weights momentum shifts industry norm; forces closed-model incumbents to defend secrecy publicly and cut prices within weeks.Open-source vs closed-source debate
MixedMajorNorms · 7.N.aUSJan 17, 2025

FTC Section 6(b) staff report on AI partnerships and investments

FTC published a 6(b) staff report on the Microsoft-OpenAI, Amazon-Anthropic, and Google-Anthropic partnerships. It flags concentration and competition concerns; Chair Ferguson dissented from the section identifying power-concentration implications, signaling enforcement restraint under the Trump administration.

WhyReport names the concentration problem at the cloud-AI frontier but the incoming FTC chair publicly pulled back from its policy implications.Antitrust and competition discourse applied to AI
AdvancingMajorLaws · 5.L.dUSJul 1, 2024

New Hampshire bans warrantless police real-time biometric surveillance (HB 1688)

New Hampshire HB 1688, signed by Gov. Sununu on 12 July 2024 and effective retroactively from 1 July 2024, prohibits state agency use of real-time and remote biometric identification (including facial recognition) for surveillance in public spaces except by law enforcement with a warrant. Among the clearest state-level warrant requirements on live police biometric surveillance to date.

WhyFirst clear warrant-based limit on live police biometric surveillance at state level. Meaningful even with law-enforcement exception.Restrictions on predictive policing and algorithmic sentencing
AdvancingNorms · 5.N.aUSMay 22, 2019

Facial Recognition Technology Hearing (Committee on Oversight and Reform)

The US House Committee on Oversight and Reform held a hearing titled 'Facial Recognition Technology (Part 1): Its Impact on Our Civil Rights and Liberties' to debate the surveillance risks of the technology.

WhyUS House Oversight Committee held a hearing focusing on the civil rights and liberties impacts of facial recognition technology.Public debate on AI surveillance and civil liberties
AdvancingDesign · 7.D.aGLOBALNov 2, 2018

BERT

Google released the open-source code and pre-trained weights for BERT, a state-of-the-art natural language processing model.

WhyGoogle open-sourced BERT, a state-of-the-art NLP model, providing open-weights access to a frontier-class system.Open-source model releases
AdvancingMajorNorms · 5.N.bGLOBALFeb 4, 2018

Gender Shades: Uncovering large gender and skin-type bias in commercial AI products

The Gender Shades project, led by Joy Buolamwini and Timnit Gebru, revealed that commercial facial recognition systems from major tech companies exhibited significant performance disparities, performing worst on darker-skinned females.

WhyLandmark academic investigation exposed severe gender and skin-type bias in commercial facial recognition systems.Attention to algorithmic harms