Humane in the Loop
06

AI should have internationally agreed-upon limits

Current path

AI development races ahead of governance. Frontier labs operate with voluntary and inconsistent safety commitments. Dangerous capabilities emerge before international agreement is possible.

Better future

AI development operates within internationally agreed limits. Safety thresholds are shared across jurisdictions. Export controls and compute governance align across allies. Dangerous capabilities are met with coordination, not a race.

Drift across the three domains

Norms

Advancing · 6 signals
CHT recommends
  • Build international scientific consensus on AI risks (IPCC-style).
  • Foster multilateral civil society coordination on AI governance demands.
Indicators we track
  • 6.N.a International scientific consensus on risks
  • 6.N.b Multilateral civil society coalitions
  • 6.N.c Public expectation of cross-border governance

Laws

Mixed · 3 signals
CHT recommends
  • Advance binding multilateral instruments (e.g., Council of Europe AI Convention).
  • Coordinate export controls on frontier compute and models.
  • Establish compute governance and licensing regimes.
  • Codify shared safety thresholds (bioweapon uplift, cyber offense, autonomy).
Indicators we track
  • 6.L.a Multilateral treaties and conventions
  • 6.L.b Export controls on frontier compute and models
  • 6.L.c Compute governance and licensing
  • 6.L.d Safety thresholds codified internationally

Design

Mixed · 3 signals
CHT recommends
  • Publish voluntary frontier-safety commitments that are specific and testable.
  • Share dangerous-capability evaluations with peers and regulators.
  • Align evaluation protocols internationally.
Indicators we track
  • 6.D.a Voluntary industry safety commitments
  • 6.D.b Information sharing on dangerous capabilities
  • 6.D.c Evaluation protocols aligned internationally

Recent signals

Advancing · Norms · 6.N.c · GLOBAL · Apr 5, 2026

UN opens public consultation for first Global Dialogue on AI Governance

The UN opened a worldwide public submission portal (deadline 30 Apr 2026) for input into the first Global Dialogue on AI Governance, to be held in Geneva in 2026 back-to-back with the ITU AI for Good Summit. Both the Dialogue and the Independent International Scientific Panel on AI were established by UNGA Resolution A/RES/79/325, adopted by consensus on 26 Aug 2025. This opens a formal public-expectation channel for cross-border AI governance.

Why: First UN-sanctioned open public channel on AI governance; institutionalizes public expectation of multilateral oversight.
Indicator: Public expectation of cross-border governance
Advancing · Major · Laws · 6.L.a · EUROPE · Mar 11, 2026

European Parliament approves EU accession to Council of Europe AI Convention

On 11 Mar 2026 the European Parliament gave its consent (455 in favour, 101 against, 74 abstentions) to EU conclusion of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS 225), the first legally binding international AI treaty. This advances ratification; the treaty enters into force after five states ratify.

Why: Key step toward entry into force of the first binding multilateral AI treaty; EU accession anchors the Convention.
Indicator: Multilateral treaties and conventions
Regressing · Major · Design · 6.D.a · GLOBAL · Feb 24, 2026

Anthropic drops categorical pause commitment from Responsible Scaling Policy v3.0

On 24 Feb 2026 Anthropic released RSP v3.0, removing the previous implication that it would pause training or deployment if risks exceeded acceptable levels. Some mitigations (e.g. RAND SL4) were reframed as 'industry-wide recommendations' rather than unilateral commitments. GovAI and multiple safety commentators flagged this as a material weakening of the flagship voluntary framework.

Why: Flagship voluntary commitment weakened by the lab that pioneered the framework; confirms collective-action limits of self-governance.
Indicator: Voluntary industry safety commitments
Advancing · Major · Norms · 6.N.c · GLOBAL · Feb 21, 2026

New Delhi Declaration on AI Impact endorsed by 92 countries and international organisations

At the India AI Impact Summit (16-21 Feb 2026; declaration adopted 18-19 Feb, announced 21 Feb 2026), 92 countries and international organisations, including the US, UK, China, Russia, the EU, and Switzerland, endorsed the New Delhi Declaration committing to international cooperation on safe, inclusive AI. The signatory base is substantially broader than Paris 2025's, and the US re-engaged. (Initially 88 signatories on 21 Feb, growing to 91 by 24 Feb and 92 as of 5 Mar 2026.)

Why: Largest AI summit declaration to date, with a notable US return after Paris 2025; the expectation of multilateral coordination is partially restored.
Indicator: Public expectation of cross-border governance
Advancing · Norms · 6.N.b · GLOBAL · Feb 15, 2026

FLI civil society recommendations published ahead of India AI Impact Summit

The Future of Life Institute (the UN Secretary-General's designated civil society co-champion for AI) released pre-summit recommendations calling for Global South participation in AI governance, national audit capacity, and commensurate safety guarantees from developers. Published 15 Feb 2026, days before the 16-21 Feb New Delhi summit.

Why: Major civil society coalition input to a multilateral process; signals continued organized civil society engagement with the summit track.
Indicator: Multilateral civil society coalitions
Advancing · Norms · 6.N.b · EUROPE · Feb 11, 2026

Open letter from 60+ civil society orgs to EU on AI Act transparency safeguard

A European coalition of civil society organisations, trade unions, and academics (60+ signatories, including Access Now, EDRi, Amnesty Tech, and BEUC) signed an open letter to MEPs and the Commission urging rejection of AI Omnibus amendments that would delete the Art. 49(2) high-risk transparency safeguard.

Why: Multilateral civil society coalition defending binding cross-border governance safeguards against industry-led rollback.
Indicator: Multilateral civil society coalitions
Advancing · Major · Norms · 6.N.a · GLOBAL · Feb 3, 2026

International AI Safety Report 2026 published by 100+ experts from 30+ countries

The second annual International AI Safety Report, chaired by Yoshua Bengio and backed by the EU, OECD, UN, and 30+ countries, was released on 3 Feb 2026 ahead of the India AI Impact Summit. The 200-page report, with 1,451 references, confirms scientific consensus on escalating risks (cyber, bio, loss of control, evaluation-aware models). (Note: the arXiv preprint 2602.21012 was uploaded 24 Feb 2026; the primary release was by the UK government on 3 Feb 2026.)

Why: Most authoritative international scientific consensus document on AI risks; institutionalized annual cadence with a 30+ country panel.
Indicator: International scientific consensus on risks
Advancing · Design · 6.D.c · GLOBAL · Dec 18, 2025

UK AISI publishes Frontier AI Trends Report with two years of model-evaluation data

On 18 Dec 2025 the UK AI Security Institute released its first Frontier AI Trends Report, synthesizing two years of testing 30+ frontier models. Key findings: success on apprentice-level cyber tasks rose from under 9% (2023) to roughly 50% (2025); the first expert-level cyber task was completed in 2025; models outperform PhD-level experts on chemistry and biology knowledge; and hour-long software tasks are completed more than 40% of the time. The report was shared publicly to inform the International Network's evaluation science.

Why: Public evidence base for internationally aligned evaluation protocols; UK continues to anchor the network's technical output.
Indicator: Evaluation protocols aligned internationally
Regressing · Major · Laws · 6.L.c · US · Dec 11, 2025

Trump executive order preempts state AI laws via litigation task force

On 11 Dec 2025 Trump signed the executive order 'Ensuring a National Policy Framework for AI', creating an AI Litigation Task Force at the DOJ (established by AG memo on 9 Jan 2026) to challenge state AI laws, and conditioning federal grants on states not enforcing them. The Colorado AI Act is explicitly named. This preempts subnational compute and licensing experiments that could have aligned with international norms.

Why: Forecloses US sub-federal compute governance that could align with international frameworks; concentrates authority at a minimalist federal level.
Indicator: Compute governance and licensing
Mixed · Design · 6.D.c · GLOBAL · Dec 10, 2025

International Network of AISIs renamed to emphasize measurement, not safety

At the 4-5 Dec 2025 San Diego meeting, the network (launched at the Seoul Summit in May 2024; formally established Nov 2024) was renamed the 'International Network for Advanced AI Measurement, Evaluation and Science', widely read as a concession to keep the US (CAISI) engaged. The UK took the Network Coordinator role. Australia joined after its 25 Nov 2025 AISI announcement ($29.9M, operating from early 2026). Members: AU, CA, EU, FR, JP, KE, KR, SG, UK, US.

Why: Evaluation-protocol alignment continues at the working level, but under diluted branding; Australia's accession expands the network while the US posture remains uncertain.
Indicator: Evaluation protocols aligned internationally
Advancing · Major · Laws · 6.L.d · EUROPE · Aug 2, 2025

EU AI Act GPAI obligations enter into force with 10^25 FLOP systemic-risk threshold

On 2 Aug 2025 the EU AI Act's GPAI rules took effect, codifying a 10^25 FLOP training-compute threshold above which 'systemic-risk' GPAI models become subject to safety, security, and transparency obligations (a 10^23 FLOP threshold marks baseline GPAI). This is the first binding compute-based safety threshold in force internationally; an illustrative sketch of the tiering follows this entry. Full enforcement begins 2 Aug 2026.

Why: Only binding legal regime with a codified compute threshold for frontier models; reference point for international threshold convergence.
Indicator: Safety thresholds codified internationally
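To make the compute tiering concrete, here is a minimal illustrative sketch in Python of how the Act's presumption thresholds sort models into tiers. Only the 10^23 and 10^25 FLOP figures come from the entry above; the function name, tier labels, and obligation summaries are shorthand for illustration, not drawn from any official implementation.

# Illustrative sketch: map a model's training compute to its presumptive
# EU AI Act GPAI tier. Thresholds per the Act; names and labels are ours.

GPAI_THRESHOLD_FLOP = 1e23           # presumption of a general-purpose AI model
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # presumption of systemic-risk GPAI

def classify_gpai_tier(training_compute_flop: float) -> str:
    """Return the presumptive regulatory tier for a given training compute."""
    if training_compute_flop >= SYSTEMIC_RISK_THRESHOLD_FLOP:
        return "systemic-risk GPAI (safety, security, and transparency obligations)"
    if training_compute_flop >= GPAI_THRESHOLD_FLOP:
        return "baseline GPAI (transparency and documentation obligations)"
    return "below the GPAI compute presumption"

# A hypothetical model trained with ~5 x 10^25 FLOP crosses the systemic-risk
# line; a ~1 x 10^24 FLOP model stays in the baseline tier.
print(classify_gpai_tier(5e25))  # systemic-risk GPAI (...)
print(classify_gpai_tier(1e24))  # baseline GPAI (...)

Note that the Act treats these thresholds as rebuttable presumptions rather than hard switches, so a real compliance determination would not reduce to a single comparison.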
Advancing · Major · Design · 6.D.a · EUROPE · Jul 10, 2025

EU GPAI Code of Practice signed by most frontier AI developers

The EU GPAI Code of Practice (Transparency, Copyright, Safety & Security chapters) was published on 10 Jul 2025 and endorsed as an 'adequate voluntary tool' by the Commission and AI Board. Signatories include Amazon, Anthropic, Google, Microsoft, OpenAI, Mistral, Cohere, IBM, Aleph Alpha, and others. Meta and Chinese providers declined. xAI signed only the Safety & Security chapter.

Why: Broadest voluntary industry commitment aligned to a binding legal regime; partial coverage (Meta and Chinese providers absent) limits scope.
Indicator: Voluntary industry safety commitments
Regressing · Major · Design · 6.D.b · GLOBAL · Jun 3, 2025

UK AI Safety Institute renamed AI Security Institute; US counterpart renamed CAISI

The UK AISI was renamed the 'AI Security Institute' on 13 Feb 2025 (announced at the Munich Security Conference by Peter Kyle), shifting emphasis from safety to security and national defense. The US counterpart was renamed the 'Center for AI Standards and Innovation' (CAISI) on 3 Jun 2025, with an explicit mandate to serve industry as the NIST-housed primary point of contact. Together these signal a reframing of the voluntary-evaluations posture in both lead jurisdictions.

Why: Both lead government evaluators rebranded away from 'safety' toward security/innovation; chills dangerous-capability information sharing.
Indicator: Information sharing on dangerous capabilities