▲AdvancingNorms · 5.N.a · US · Apr 21, 2026
The Electronic Frontier Foundation (EFF) published a critique of Palantir's human rights policy, arguing that the company's continued provision of surveillance tools to ICE contradicts its stated commitments.
Why: EFF and other civil society groups are applying sustained pressure on Palantir regarding its surveillance tools used by ICE.
Public debate on AI surveillance and civil liberties

▲AdvancingNorms · 5.N.b · GLOBAL · Apr 21, 2026
A new academic paper analyzes AI-enabled female sex robots, arguing that their design perpetuates male-centric bias and epistemic injustice, and proposes feminist design directions for more equitable human-robot interaction.
Why: Academic paper investigates male-centric bias and epistemic injustice in AI sex robots, highlighting discriminatory design outcomes.
Attention to algorithmic harms

▲AdvancingNorms · 4.N.a · GLOBAL · Apr 20, 2026
A new research paper models how digital labor platforms, including those used for AI data annotation, suppress wages, and demonstrates how targeted collective action by workers can counter that suppression.
Why: Researchers published a model showing how gig platforms suppress wages and how targeted collective action can increase worker power.
Public discourse on AI and labor

▲AdvancingNorms · 3.N.b · GLOBAL · Apr 20, 2026
Researchers published a paper exploring how accessibility guidelines such as WCAG can be used to combat specific deceptive UI patterns.
Why: Academics published research naming specific dark patterns (e.g., Auto-Play) and proposing accessibility guidelines to combat them.
Critique of dark-pattern and addictive design

▲AdvancingNorms · 3.N.b · GLOBAL · Apr 20, 2026
Researchers proposed a framework for evaluating 'anthropomorphic deception' in AI systems and robots, aiming to prevent exploitative design practices that mislead users.
Why: Academics published a framework critiquing anthropomorphic deception in AI, naming it as a specific design pattern that can be exploitative.
Critique of dark-pattern and addictive design