Seven principles. One scoreboard.
In early 2026, the Center for Humane Technology published seven principles for how AI should be built and governed so it serves humanity instead of extracting from it. The roadmap is comprehensive. What's missing is a running scoreboard — how far have we actually come?
That's what this is. A public, event-driven tracker that answers one question every day in one number, then explains that number principle by principle, bill by bill, event by event. A charter is only a charter if someone is keeping score.
Humane in the Loop is built and maintained by me, David Felsmann. I do not receive funding from, and am not affiliated with, the Center for Humane Technology.
What we track, principle by principle
AI should be built safely and transparently.
Incident reporting, model evaluations, red-team requirements, and audit regimes. Progress looks like disclosure rules that actually bind.
AI companies owe a duty of care to the public.
Liability frameworks, duty-of-care statutes, and private rights of action. Progress looks like legal accountability when harm is foreseeable.
AI design should center human well-being.
Child and vulnerable-user protections, deceptive-pattern bans, and mental-health safeguards on companion products. Progress looks like product design held accountable for reasonably foreseeable harm.
AI should not automate away meaningful work and human dignity.
Workforce transition policy, collective-bargaining protections around algorithmic management, and displacement adjustment. Progress looks like a negotiated transition, not a silent one.
AI innovation should not come at the expense of our rights and freedom.
Surveillance limits, due-process rules for algorithmic decision-making, biometric constraints, and civil-rights enforcement in automated systems. Progress looks like freedoms that survive the default.
AI should have internationally agreed-upon limits.
UN instruments, Council of Europe treaties, cross-border compute and export controls, and multilateral incident-reporting agreements. Progress looks like shared floors, not national drift.
AI power should be balanced in society.
Antitrust action, compute access, public-option AI, and market-structure interventions. Progress looks like more than three labs holding the future.
Open by design
The bill corpus, weights, principle mappings, and status history are all public. The scoring rubric is documented on the methodology page. Underlying legislative data is licensed CC BY 4.0, courtesy of LegiScan and Congress.gov. Disagree with a weight, spot a missing bill, or think a principle is miscounted? Send a pull request. If you don't know how that works, ask an AI agent, or email david[at]tenone.eu.
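To make the scoring idea concrete, here is a minimal sketch of how a daily number could be rolled up from a bill corpus with principle mappings, impact weights, and status history. Every name, weight, and bill in this sketch is invented for illustration; the tracker's actual rubric lives on the methodology page and in the public repo.

```python
from dataclasses import dataclass

# Hypothetical status weights: how far a bill has moved toward binding law.
STATUS_WEIGHT = {
    "introduced": 0.1,
    "passed_committee": 0.4,
    "passed_chamber": 0.7,
    "enacted": 1.0,
}

@dataclass
class Bill:
    name: str
    principle: str   # which of the seven principles it advances
    impact: float    # 0..1: how much of the principle it would cover if enacted
    status: str      # key into STATUS_WEIGHT

def principle_score(bills: list[Bill], principle: str) -> float:
    """Progress toward one principle: impact-weighted status, capped at 1.0."""
    relevant = [b for b in bills if b.principle == principle]
    score = sum(b.impact * STATUS_WEIGHT[b.status] for b in relevant)
    return min(score, 1.0)

def roadmap_score(bills: list[Bill], principles: list[str]) -> float:
    """The single daily number: mean progress across all tracked principles."""
    return sum(principle_score(bills, p) for p in principles) / len(principles)

# Illustrative corpus (not real bills):
bills = [
    Bill("Example Transparency Act", "safety", 0.5, "enacted"),
    Bill("Example Liability Act", "duty_of_care", 0.8, "introduced"),
]
print(roadmap_score(bills, ["safety", "duty_of_care"]))  # 0.29
```

The design choice worth noting: because each bill carries its own impact weight and each status its own multiplier, a single enacted statute can move a principle more than a dozen stalled introductions, which is exactly the distinction a scoreboard needs to make.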
Get the weekly roadmap update
One email each Friday summarizing what moved, what stalled, and what to watch next week. No spam, ever.
Subscribe on Substack →