SAGE Publications
This SAGE Publications research examines how communities, researchers, and activists are fighting back against algorithmic harms through different types of "knowledge projects" - systematic efforts to document, expose, and challenge biased AI systems. Building on landmark investigations like ProPublica's Machine Bias series, which revealed racial bias in criminal justice risk assessment tools, this work maps out the diverse ways people are creating counter-narratives to the tech industry's claims of algorithmic neutrality. Rather than just documenting problems, it examines how affected communities are generating their own forms of evidence and expertise to challenge harmful AI deployments.
The research situates itself within the wave of algorithmic accountability work that emerged after ProPublica's 2016 Machine Bias investigation exposed how the COMPAS risk assessment tool was nearly twice as likely to falsely flag Black defendants as future criminals. But rather than treating such revelations as isolated journalistic victories, this work examines how they have spawned entire ecosystems of resistance - from community-led auditing projects to academic research programs that center affected communities' experiences.
Unlike typical algorithmic bias research, which focuses on technical detection methods, this work treats resistance as a form of knowledge production. It takes seriously the expertise of people experiencing algorithmic harms, rather than treating them simply as subjects to be studied. The research also moves beyond individual bias incidents to examine how communities are building sustained capacity to challenge algorithmic systems - creating what the authors call "infrastructures of refusal."
Published
2022
Jurisdiction
Global
Category
Incident and accountability
Access
Public access