▲ Advancing Norms · 5.N.a · US · Apr 21, 2026
The Electronic Frontier Foundation (EFF) published a critique of Palantir's human rights policy, arguing that the company's continued provision of surveillance tools to ICE contradicts its stated commitments.
Why: EFF and other civil society groups are applying sustained pressure on Palantir regarding its surveillance tools used by ICE. · Public debate on AI surveillance and civil liberties

▲ Advancing Norms · 5.N.b · GLOBAL · Apr 21, 2026
A new academic paper analyzes AI-enabled female sex robots, arguing their design perpetuates male-centric bias and epistemic injustice, and proposes feminist design directions for equitable human-robot interaction.
Why: Academic paper investigates male-centric bias and epistemic injustice in AI sex robots, highlighting discriminatory design outcomes. · Attention to algorithmic harms

▲ Advancing Design · 5.D.b · GLOBAL · Apr 21, 2026
Researchers introduced FairLogue, a new toolkit for intersectional fairness auditing, and demonstrated its use by evaluating disparities in clinical machine learning models.
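FairLogue's actual API is not shown in this entry, so the following is only a minimal, hypothetical sketch of what an intersectional fairness audit computes: instead of checking disparity one protected attribute at a time, it compares a model's positive-prediction rate across every intersection of attributes (the data and function names here are illustrative assumptions, not FairLogue's interface).

```python
from itertools import product

# Toy records: (sex, age_group, model_prediction) -- illustrative data only.
records = [
    ("F", "young", 1), ("F", "young", 0), ("F", "old", 0), ("F", "old", 0),
    ("M", "young", 1), ("M", "young", 1), ("M", "old", 1), ("M", "old", 0),
]

def subgroup_rates(records):
    """Positive-prediction rate for every (sex, age_group) intersection."""
    sexes = {r[0] for r in records}
    ages = {r[1] for r in records}
    rates = {}
    for sex, age in product(sexes, ages):
        preds = [r[2] for r in records if r[0] == sex and r[1] == age]
        if preds:  # skip empty intersections
            rates[(sex, age)] = sum(preds) / len(preds)
    return rates

rates = subgroup_rates(records)
# Max gap across intersectional subgroups -- one simple disparity measure.
gap = max(rates.values()) - min(rates.values())
```

The point of auditing intersections is visible even in this toy example: each attribute alone shows a moderate gap, but the ("F", "old") vs ("M", "young") intersection shows the largest disparity.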
Why: Academics introduced FairLogue, a new toolkit for intersectional fairness auditing, and published evaluation results on clinical ML models. · Bias testing and fairness tooling in development

▲ Advancing Norms · 5.N.b · US · Apr 21, 2026
A new study reveals that LLMs used by federal agencies to summarize public comments exhibit bias based on the commenter's stated occupation.
Why: Academic research exposes socioeconomic bias in LLMs summarizing public comments, highlighting algorithmic harms. · Attention to algorithmic harms

▲ Advancing Norms · 5.N.b · US · Apr 21, 2026
Researchers published an algorithmic audit of TikTok, revealing that the platform's personalization algorithm ranks comments differently based on users' political leanings.
Why: Academic audit investigates TikTok's comment personalization, revealing algorithmic divergence based on users' political leanings. · Attention to algorithmic harms