Humane in the Loop
05

AI innovation should not come at the expense of our rights and freedoms

Current path

AI systems enable new scales of surveillance, discrimination, and manipulation. Biometric capture is expanding. Algorithmic decisions shape access to housing, credit, and liberty without meaningful review.

Better future

AI innovation operates within a strong rights-and-freedoms framework. Surveillance is constrained. Bias is measured and mitigated. Data protection is robust. People retain meaningful control over how AI affects their lives.

Drift across the three domains

Norms

Advancing · 13 signals
CHT recommends
  • Treat algorithmic surveillance as a civil liberties concern, not an infrastructure question.
  • Recognize systemic algorithmic discrimination as a legitimate policy target.
Indicators we track
  • 5.N.a · Public debate on AI surveillance and civil liberties
  • 5.N.b · Attention to algorithmic harms
  • 5.N.c · Recognition of algorithmic discrimination

Laws

Advancing · 6 signals
CHT recommends
  • Limit biometric and facial recognition deployment, especially by law enforcement.
  • Extend anti-discrimination law to cover algorithmic decision-making.
  • Strengthen data protection (access, deletion, purpose limitation, portability).
  • Restrict predictive policing and algorithmic sentencing without oversight.
Indicators we track
  • 5.L.a · Biometric and facial recognition limits
  • 5.L.b · Algorithmic bias and discrimination protections
  • 5.L.c · Data protection strengthening
  • 5.L.d · Restrictions on predictive policing and algorithmic sentencing

Design

Advancing · 2 signals
CHT recommends
  • Default to data minimization and on-device processing where feasible.
  • Build fairness testing into the development lifecycle.
  • Surface user rights (access, correction, deletion) as first-class UI elements.
Indicators we track
  • 5.D.a · Privacy-preserving design defaults
  • 5.D.b · Bias testing and fairness tooling in development
  • 5.D.c · User rights surfaced in UX

Recent signals

Advancing · Norms · 5.N.a · GLOBAL · Apr 8, 2026

Digital Hopes, Real Power: How the Arab Spring Fueled a Global Surveillance Boom

The Electronic Frontier Foundation published a blog series reflecting on how the 2011 Arab uprisings inadvertently fueled a global boom in state surveillance, including the rise of AI-driven biometrics and facial recognition.

Why: EFF blog series highlights the rise of AI-driven surveillance and biometrics, sustaining civil society pressure on digital authoritarianism.
Indicator: Public debate on AI surveillance and civil liberties
Advancing (Major) · Laws · 5.L.c · EUROPE · Apr 7, 2026

EU Parliament Blocks Mass-Scanning of Our Chats—What's Next?

The EU Parliament voted not to extend an interim derogation from e-Privacy rules, effectively making the voluntary mass-scanning of private chats by tech companies illegal in the EU.

Why: EU Parliament voted against prolonging an e-Privacy derogation, effectively outlawing voluntary algorithmic mass-scanning of private chats.
Indicator: Data protection strengthening
Advancing · Norms · 5.N.a · US · Apr 3, 2026

Tech Nonprofits to Feds: Don’t Weaponize Procurement to Undermine AI Trust and Safety

Tech nonprofits, including the EFF and CDT, filed comments opposing a proposed GSA procurement rule that would require AI contractors to license their systems for "all lawful purposes," arguing it could enable mass surveillance.

Why: Civil society groups filed comments opposing a proposed GSA procurement rule that would force AI contractors to allow use for surveillance.
Indicator: Public debate on AI surveillance and civil liberties
Advancing · Norms · 5.N.a · GLOBAL · Apr 2, 2026

Google and Amazon: Acknowledged Risks, and Ignored Responsibilities

The Electronic Frontier Foundation publicly criticized Google and Amazon for failing to address human rights and surveillance risks associated with their Project Nimbus AI cloud contract with the Israeli government.

Why: EFF published a critique pressuring Google and Amazon over the human rights and surveillance risks of their AI cloud contract with Israel.
Indicator: Public debate on AI surveillance and civil liberties
Advancing · Norms · 5.N.a · GLOBAL · Apr 2, 2026

EFF’s Submission to the UN OHCHR on Protection of Human Rights Defenders in the Digital Age

The Electronic Frontier Foundation (EFF) submitted a report to the UN OHCHR detailing how new digital regulations and surveillance technologies, including biometric monitoring, are being used to restrict the fundamental rights of human rights defenders globally.

Why: EFF submitted a report to the UN OHCHR highlighting how expanded state surveillance and biometric monitoring threaten human rights defenders.
Indicator: Public debate on AI surveillance and civil liberties
Regressing (Major) · Laws · 5.L.c · EUROPE · Mar 18, 2026

Court of Rome annuls Italy's €15M GDPR fine against OpenAI

On 18 March 2026, the Court of Rome annulled the Garante's November 2024 €15M fine, the only final GDPR enforcement decision against a GenAI provider. The fine had found unlawful processing of training data, a breach-notification failure, and a lack of age verification. The Garante can appeal. Judgment no. 4153/2026, R.G. 4785/2025.

Why: The only final GenAI GDPR fine in Europe just collapsed on appeal. Signals hard limits on current data-protection tools against frontier labs.
Indicator: Data protection strengthening
Regressing · Laws · 5.L.b · US · Mar 17, 2026

Colorado AI Act effective date pushed to June 2026 amid industry pressure to repeal

CO SB 24-205, the first comprehensive US state AI anti-discrimination law, was delayed from February 2026 to 30 June 2026 via SB 25B-004. In March 2026, Governor Polis's AI Policy Working Group proposed a replacement bill stripping many employer compliance duties.

Why: Industry lobbying is actively eroding the strongest US state AI anti-discrimination law before it takes effect. Net-negative on enforcement.
Indicator: Algorithmic bias and discrimination protections
Regressing (Major) · Norms · 5.N.c · GLOBAL · Mar 12, 2026

IDS / African Digital Rights Network report: 11 African governments spent $2B+ on Chinese-built AI surveillance

The Institute of Development Studies, with the African Digital Rights Network, published 'Smart City Surveillance in Africa: Mapping Chinese AI Surveillance Across 11 Countries' on 12 Mar 2026. The report documents $2B+ spent by Algeria, Egypt, Kenya, Mauritius, Mozambique, Nigeria, Rwanda, Senegal, Uganda, Zambia, and Zimbabwe on facial recognition and automatic number-plate recognition (ANPR), with deployments used against activists, opposition figures, and journalists despite no demonstrated crime-reduction effect.

Why: Major documented algorithmic-surveillance discrimination against dissidents across 11 states. Recognition is rising; power is not.
Indicator: Recognition of algorithmic discrimination
Advancing · Norms · 5.N.a · US · Mar 3, 2026

SF QuitGPT protest against OpenAI-Pentagon contract; broader multi-lab march the prior week

On 3 Mar 2026, ~40–50 activists rallied outside OpenAI's SF headquarters in a 'QuitGPT' protest against its Pentagon contract. The prior week, a larger multi-lab march of ~500 people targeted DeepMind, OpenAI, and Meta, and ~200 people protested data centers in Virginia. Concerns: mass surveillance, autonomous weapons, environmental impact.

Why: Organized public protest against frontier-lab militarization, real mobilization rather than just op-eds. Rare for the AI-surveillance debate in the US.
Indicator: Public debate on AI surveillance and civil liberties
Advancing · Norms · 5.N.a · US · Feb 27, 2026

Civil rights coalition decries DoD pressure on Anthropic to lift AI surveillance guardrails

The Leadership Conference on Civil and Human Rights publicly condemned the Department of Defense's campaign to pressure Anthropic into lifting restrictions on surveillance uses of its AI, warning that it risks a 'tech-fueled domestic surveillance state.'

Why: Civil-society attention to the surveillance repurposing of frontier AI, a healthy debate signal, though substantive power remains with the DoD.
Indicator: Public debate on AI surveillance and civil liberties