This comprehensive Wikipedia article serves as an essential primer on algorithmic bias, examining how AI systems can perpetuate and amplify discrimination across sectors. What sets this resource apart is its extensive documentation of real-world cases, including the COMPAS recidivism risk assessment tool, which a 2016 ProPublica investigation found to produce racially disparate error rates. The article weaves together technical explanations, historical context, and concrete examples to show how biased training data and flawed algorithmic design lead to unfair outcomes in hiring, lending, criminal justice, and beyond.
Algorithmic bias didn't emerge overnight—it's the result of decades of technological advancement without adequate consideration for fairness and equity. The article traces how early AI systems, trained on historical data that reflected societal prejudices, began reproducing and sometimes magnifying these biases at scale. From Amazon's biased hiring algorithm that discriminated against women to facial recognition systems that performed poorly on darker-skinned individuals, the examples paint a clear picture of how bias creeps into supposedly "objective" automated systems.
The COMPAS case study is particularly illuminating: a tool designed to help judges assess defendants' risk of reoffending was found to incorrectly flag Black defendants as future criminals at nearly twice the rate of white defendants. This real-world impact makes the technical concepts tangible and urgent.
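That disparity is, concretely, a gap in false positive rates between groups. The Python sketch below shows the calculation on a synthetic table; the column names and values are hypothetical stand-ins, not COMPAS data.

```python
import pandas as pd

# Synthetic stand-in for a risk-assessment audit: each row is a defendant
# with a group label, the tool's binary "high risk" flag, and whether the
# person actually reoffended within two years. All values are illustrative.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "high_risk":  [1,   1,   0,   0,   1,   0,   0,   0],
    "reoffended": [0,   1,   0,   1,   0,   0,   1,   0],
})

def false_positive_rate(sub: pd.DataFrame) -> float:
    """Share of people who did NOT reoffend but were still flagged high risk."""
    negatives = sub[sub["reoffended"] == 0]
    return (negatives["high_risk"] == 1).mean()

for group, sub in df.groupby("group"):
    print(f"group {group}: FPR = {false_positive_rate(sub):.2f}")
```

Note that a model can have similar overall accuracy across groups while its false positive rates diverge, which is exactly the pattern at the center of the COMPAS debate.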
The article breaks down the pathways through which bias enters algorithmic systems, following the widely cited taxonomy of pre-existing, technical, and emergent bias:

- Pre-existing bias: social and institutional prejudices that enter the system through its training data or its designers' assumptions.
- Technical bias: limitations of the tools themselves, such as unrepresentative samples, proxy features, or design constraints that skew outputs.
- Emergent bias: mismatches that appear after deployment, when a system encounters users, contexts, or feedback loops it was not designed for.
Understanding these mechanisms is crucial for anyone working with AI systems, as each requires different mitigation strategies and intervention points.
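As one illustration of an intervention point, the sketch below applies reweighing (Kamiran and Calders, 2012), a standard pre-processing mitigation: each (group, label) cell in the training data is weighted by P(group) * P(label) / P(group, label), making the protected attribute and the label statistically independent under the weights. The data and column names here are synthetic stand-ins.

```python
import pandas as pd

# Toy training set with a protected attribute and a historical label.
df = pd.DataFrame({
    "group": ["A"] * 6 + ["B"] * 4,
    "label": [1, 1, 1, 1, 0, 0, 1, 0, 0, 0],
})

# Marginal and joint frequencies needed for the reweighing formula.
p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / len(df)

# Weight for each row: P(group) * P(label) / P(group, label).
df["weight"] = [
    p_group[g] * p_label[y] / p_joint[(g, y)]
    for g, y in zip(df["group"], df["label"])
]

print(df.groupby(["group", "label"])["weight"].first())
```

The resulting weights can then be passed to most learners (for example, via a `sample_weight` argument in scikit-learn estimators) so that training no longer reproduces the group-label correlation in the raw data.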
Beyond COMPAS, the resource provides a rich collection of documented bias incidents across industries:

- Hiring: Amazon's experimental recruiting tool, scrapped after it was found to penalize resumes that mentioned the word "women's".
- Facial recognition: commercial systems with markedly higher error rates for darker-skinned women, as documented in the Gender Shades study.
- Healthcare: a widely used care-management algorithm that underestimated Black patients' needs because it used past healthcare spending as a proxy for illness.
- Credit and lending: regulatory scrutiny of credit-limit disparities and discriminatory targeting of financial products.
- Advertising and search: ad-delivery systems that showed employment and housing ads unevenly across demographic groups.
These examples serve as both cautionary tales and learning opportunities, showing the wide-reaching implications of algorithmic decision-making.
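For the hiring and lending cases in particular, a common first-pass audit is the four-fifths (80%) rule from US employment guidelines: flag any group whose selection rate falls below 80% of the most-selected group's rate. A minimal sketch with hypothetical counts:

```python
# Adverse-impact check in the style of the four-fifths rule. The counts
# below are hypothetical hiring-funnel numbers, not real data.
selected = {"A": 45, "B": 18}   # applicants selected, per group
applied  = {"A": 100, "B": 60}  # total applicants, per group

rates = {g: selected[g] / applied[g] for g in applied}
best = max(rates.values())

for g, rate in rates.items():
    ratio = rate / best
    status = "review" if ratio < 0.8 else "ok"
    print(f"group {g}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {status}")
```

A ratio below 0.8 is not proof of discrimination, but it is the conventional threshold for triggering a closer look.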
As a Wikipedia article, this resource reflects the current state of documented knowledge but has inherent limitations. The examples skew toward well-publicized cases in English-speaking countries, potentially missing bias patterns in other regions or less-covered sectors. The article also focuses heavily on binary classifications of bias (present/absent) rather than exploring more nuanced gradations of unfairness.
Additionally, the rapidly evolving nature of AI means some technical mitigation strategies mentioned may become outdated, and new forms of bias continue to emerge as AI systems become more sophisticated.
Published: 2024
Jurisdiction: Global
Category: Datasets and benchmarks
Access: Public access