This comprehensive Wikipedia article serves as an essential primer on algorithmic bias, diving deep into how AI systems can perpetuate and amplify discrimination across various sectors. What sets this resource apart is its extensive documentation of real-world cases, including the infamous COMPAS criminal risk assessment software that demonstrated racial bias in predicting recidivism rates. The article expertly weaves together technical explanations, historical context, and concrete examples to illustrate how biased training data and flawed algorithmic design can lead to unfair outcomes in hiring, lending, criminal justice, and beyond.
Algorithmic bias didn't emerge overnight—it's the result of decades of technological advancement without adequate consideration for fairness and equity. The article traces how early AI systems, trained on historical data that reflected societal prejudices, began reproducing and sometimes magnifying these biases at scale. From Amazon's biased hiring algorithm that discriminated against women to facial recognition systems that performed poorly on darker-skinned individuals, the examples paint a clear picture of how bias creeps into supposedly "objective" automated systems.
The COMPAS case study is particularly illuminating, showing how a tool designed to assist judges in making fair sentencing decisions was found, in ProPublica's 2016 analysis, to incorrectly flag Black defendants as future criminals at nearly twice the rate of white defendants. This real-world impact makes the technical concepts tangible and urgent.
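To make that core metric concrete, here is a minimal sketch of how a false-positive-rate disparity like the one ProPublica reported can be computed. The records below are purely illustrative, not the actual COMPAS data:

```python
from collections import defaultdict

# Hypothetical records: (group, flagged_high_risk, reoffended).
# Illustrative only -- not drawn from the real COMPAS dataset.
records = [
    ("Black", True,  False), ("Black", True,  True),
    ("Black", True,  False), ("Black", False, False),
    ("white", True,  False), ("white", False, False),
    ("white", False, True),  ("white", False, False),
]

false_pos = defaultdict(int)       # flagged high-risk but did not reoffend
non_reoffenders = defaultdict(int)  # denominator for the false positive rate

for group, flagged, reoffended in records:
    if not reoffended:
        non_reoffenders[group] += 1
        if flagged:
            false_pos[group] += 1

for group in sorted(non_reoffenders):
    fpr = false_pos[group] / non_reoffenders[group]
    print(f"{group}: false positive rate = {fpr:.2f}")
```

On this toy data the rates come out 0.67 versus 0.33, echoing the roughly two-to-one disparity at the center of the ProPublica findings.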
The article breaks down the various pathways through which bias enters algorithmic systems, following the widely cited Friedman and Nissenbaum taxonomy:

- Pre-existing bias, imported from the social values and historical data that shape a system before it is ever built
- Technical bias, arising from design constraints such as limited training data, flawed feature choices, or hardware limitations
- Emergent bias, appearing after deployment when a system encounters new users, new contexts, or feedback loops it was never designed for
Understanding these mechanisms is crucial for anyone working with AI systems, as each requires different mitigation strategies and intervention points.
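As one concrete intervention point, a pre-deployment audit can quantify outcome disparities before a system ships. The sketch below uses hypothetical groups and decisions (not drawn from the article) to compute the disparate impact ratio, which the four-fifths rule of thumb flags as problematic when it falls below 0.8:

```python
# Minimal audit sketch: disparate impact ratio over model decisions.
# Group labels and outcomes below are hypothetical.
def selection_rates(decisions):
    """decisions: list of (group, was_selected) pairs."""
    totals, selected = {}, {}
    for group, chosen in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

decisions = (
    [("men", True)] * 60 + [("men", False)] * 40
    + [("women", True)] * 35 + [("women", False)] * 65
)
print(f"disparate impact ratio: {disparate_impact(decisions):.2f}")
# -> 0.58, well below the 0.8 four-fifths threshold
```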
Beyond COMPAS, the resource provides a rich collection of documented bias incidents across industries:

- Amazon's experimental recruiting tool, scrapped after it systematically penalized résumés associated with women
- Facial recognition systems with markedly higher error rates on darker-skinned individuals
- Credit and lending models that reproduce historical disparities in access to capital
- Predictive policing tools that concentrate enforcement in already heavily policed neighborhoods
These examples serve as both cautionary tales and learning opportunities, showing the far-reaching implications of algorithmic decision-making.
As a Wikipedia article, this resource reflects the current state of documented knowledge but has inherent limitations. The examples skew toward well-publicized cases in English-speaking countries, potentially missing bias patterns in other regions or less-covered sectors. The article also focuses heavily on binary classifications of bias (present/absent) rather than exploring more nuanced gradations of unfairness.
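In practice, the binary framing the article leans on can be replaced with continuous measures. Here is a minimal sketch (with hypothetical numbers) of one such graded metric, the demographic parity gap:

```python
# Sketch of a graded view of unfairness: report the demographic parity
# gap as a continuous score instead of a biased/unbiased verdict.
# All rates below are hypothetical.
def parity_gap(rate_a: float, rate_b: float) -> float:
    """Absolute difference in positive-outcome rates; 0.0 is parity."""
    return abs(rate_a - rate_b)

audits = {
    "hiring screen":  parity_gap(0.42, 0.31),  # sizeable gap
    "loan approvals": parity_gap(0.55, 0.52),  # near parity
}
for system, gap in sorted(audits.items(), key=lambda kv: -kv[1]):
    print(f"{system}: demographic parity gap = {gap:.2f}")
```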
Additionally, the rapidly evolving nature of AI means some technical mitigation strategies mentioned may become outdated, and new forms of bias continue to emerge as AI systems become more sophisticated.
Published: 2024
Jurisdiction: Global
Category: Datasets and benchmarks
Access: Public access