Humane in the Loop
01

AI should be built safely and transparently

Current path

AI systems are deployed without meaningful transparency about capabilities, training, or failure modes. Incident disclosure is voluntary and uneven. Public understanding lags deployment.

Better future

AI development is visible to the public and to independent scrutiny. Capabilities, evaluations, and failures are disclosed by default. Transparency is a precondition of trust, not an optional gesture.

Drift across the three domains

Norms

Advancing: 8 signals
CHT recommends
  • Treat transparency as a default professional norm, not a competitive liability.
  • Normalize proactive disclosure of incidents and near-misses.
Indicators we track
  • 1.N.a Public expectation of transparency on AI capabilities
  • 1.N.b Open publication of safety evaluations as industry norm
  • 1.N.c Incident disclosure as expected behavior

Laws

Advancing: 5 signals
CHT recommends
  • Mandate pre-deployment evaluations for frontier systems.
  • Require incident reporting to regulators within defined windows.
  • Enable third-party audits with right-of-access.
Indicators we track
  • 1.L.a Pre-deployment evaluation mandates
  • 1.L.b Mandatory incident reporting
  • 1.L.c Third-party audit requirements
  • 1.L.d Red-team disclosure rules

Design

Advancing: 6 signals
CHT recommends
  • Publish model cards with substantive technical content, not marketing.
  • Disclose dangerous-capability evaluations publicly.
  • Surface model identity and limits at point of use.
Indicators we track
  • 1.D.a Model cards / system cards published
  • 1.D.b Public evaluation results
  • 1.D.c Capability disclosure at point of interaction

Recent signals