Algorithmic Bias

Wikipedia

Summary

This comprehensive Wikipedia article serves as an essential primer on algorithmic bias, examining how AI systems can perpetuate and amplify discrimination across sectors. What sets the resource apart is its extensive documentation of real-world cases, including the widely cited COMPAS recidivism risk assessment software, whose scores were found to be racially biased. The article weaves together technical explanations, historical context, and concrete examples to show how biased training data and flawed algorithmic design lead to unfair outcomes in hiring, lending, criminal justice, and beyond.

The Story Behind the Problem

Algorithmic bias did not emerge overnight; it is the product of decades of technological advancement without adequate attention to fairness and equity. The article traces how early AI systems, trained on historical data that reflected societal prejudices, began reproducing and sometimes magnifying those biases at scale. From Amazon's experimental hiring tool that penalized resumes associated with women to facial recognition systems that performed poorly on darker-skinned individuals, the examples show how bias creeps into supposedly "objective" automated systems.

The COMPAS case study is particularly illuminating: a tool designed to help judges make fairer risk assessments was found, in ProPublica's 2016 analysis, to falsely flag Black defendants as likely to reoffend at nearly twice the rate of white defendants. This real-world impact makes the technical concepts tangible and urgent.
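
The disparity ProPublica reported is, at its core, a gap in false positive rates between groups. As a minimal sketch of that metric, the following Python snippet computes the false positive rate per group; the arrays and group labels are synthetic illustrations, not the actual COMPAS records:

    # Minimal sketch of the false-positive-rate gap at the heart of the
    # COMPAS findings; all data below is synthetic.
    import numpy as np

    def false_positive_rate(y_true, y_pred):
        """Share of actual negatives (did not reoffend) flagged as high risk."""
        negatives = y_true == 0
        return np.mean(y_pred[negatives] == 1)

    # Hypothetical labels: 1 = reoffended / flagged high risk, 0 = neither.
    y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
    y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 1, 0, 0])
    group  = np.array(["A"] * 5 + ["B"] * 5)

    for g in np.unique(group):
        mask = group == g
        print(f"group {g}: FPR = {false_positive_rate(y_true[mask], y_pred[mask]):.2f}")

On real data, the same loop surfaces the kind of asymmetry ProPublica reported: comparable overall accuracy, but errors that fall far more heavily on one group.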

Core Mechanisms of Bias

The article breaks down the various pathways through which bias enters algorithmic systems:

  • Historical bias embedded in training datasets that reflect past discriminatory practices
  • Representation bias where certain groups are underrepresented or misrepresented in data
  • Measurement bias arising from differences in how data is collected across different populations
  • Evaluation bias when inappropriate benchmarks or metrics are used to assess system performance
  • Aggregation bias from assuming one model fits all subgroups within a population

Understanding these mechanisms is crucial for anyone working with AI systems, as each requires different mitigation strategies and intervention points.
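To make the last point concrete, here is a hedged sketch (in Python, with synthetic data) of one common intervention point: disaggregated evaluation. A single aggregate metric can look acceptable while hiding an aggregation or evaluation problem that only appears when performance is broken out by subgroup:

    # Disaggregated evaluation: an acceptable overall score can mask a
    # subgroup failure (aggregation/evaluation bias). All data is synthetic.
    import numpy as np

    def accuracy(y_true, y_pred):
        return np.mean(y_true == y_pred)

    # Hypothetical predictions from a single one-size-fits-all model.
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0])
    y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0])
    group  = np.array(["A"] * 6 + ["B"] * 6)

    print(f"overall accuracy: {accuracy(y_true, y_pred):.2f}")  # 0.75
    for g in np.unique(group):
        m = group == g
        print(f"group {g} accuracy: {accuracy(y_true[m], y_pred[m]):.2f}")

Historical, representation, and measurement bias call for different checks upstream (dataset audits, collection protocols), but per-subgroup reporting like this is often the first test that reveals something is wrong.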

Documented Case Studies and Examples

Beyond COMPAS, the resource provides a rich collection of documented bias incidents across industries:

  • Healthcare algorithms that underestimate the medical needs of Black patients
  • Resume screening tools that discriminate against applicants from certain universities or backgrounds
  • Credit scoring systems that perpetuate lending discrimination
  • Search engines that return biased results for professional roles and identity-related queries

These examples serve as both cautionary tales and learning opportunities, showing the wide-reaching implications of algorithmic decision-making.
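
One way practitioners turn case studies like the hiring and lending examples into a routine check is the selection-rate comparison behind the US "four-fifths rule," under which a group's selection rate below 80% of the most-favored group's is treated as evidence of disparate impact. The sketch below applies that ratio to hypothetical screening outcomes; the data, group labels, and threshold handling are illustrative only:

    # Disparate-impact screen in the spirit of the US "four-fifths rule":
    # flag any group whose selection rate falls below 80% of the highest.
    # Outcomes and groups below are hypothetical.
    import numpy as np

    selected = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0])  # 1 = advanced
    group    = np.array(["A"] * 6 + ["B"] * 6)

    rates = {g: float(np.mean(selected[group == g])) for g in np.unique(group)}
    best = max(rates.values())
    for g, rate in rates.items():
        ratio = rate / best
        note = "  <- below 0.8" if ratio < 0.8 else ""
        print(f"group {g}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{note}")

A ratio test like this is a screen, not a verdict: it flags disparities worth investigating, while the legally relevant analysis depends on jurisdiction and context.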

Who This Resource Is For

AI practitioners and data scientists who need to understand bias sources and mitigation strategies before deploying models. The technical depth provides practical insights for model development and testing.

Policy makers and regulators working on AI governance frameworks who require concrete examples of bias harms to inform legislation and regulatory approaches.

Legal professionals and civil rights advocates building cases around algorithmic discrimination or advising clients on AI-related legal risks.

Business leaders and product managers making decisions about AI adoption who need to understand reputational and legal risks associated with biased systems.

Researchers and academics studying algorithmic fairness who need a comprehensive overview of the field and its key documented cases.

Journalists and educators seeking authoritative, well-sourced examples to communicate about AI bias to broader audiences.

Limitations and Blind Spots

As a Wikipedia article, this resource reflects the current state of documented knowledge but has inherent limitations. The examples skew toward well-publicized cases in English-speaking countries, potentially missing bias patterns in other regions or less-covered sectors. The article also focuses heavily on binary classifications of bias (present/absent) rather than exploring more nuanced gradations of unfairness.

Additionally, the rapidly evolving nature of AI means some technical mitigation strategies mentioned may become outdated, and new forms of bias continue to emerge as AI systems become more sophisticated.

Tags

algorithmic bias, fairness, AI ethics, discrimination, COMPAS, criminal justice

At a glance

Published: 2024
Jurisdiction: Global
Category: Datasets and benchmarks
Access: Public access
