How Bias Works
AI bias is structural, not a patchable bug. It enters at every stage of development and cannot be solved with better code alone.
Based on: Karen Hao, “This Is How AI Bias Really Happens—and Why It's So Hard to Fix,” MIT Technology Review, February 2019
Three Stages Where Bias Enters
Framing the problem: bias enters when you choose what to optimize, before any data is collected.
Collecting the data: training data reflects historical discrimination, so the model learns past inequities as ground truth.
Preparing the data: feature selection introduces hidden bias when chosen attributes act as proxies for protected ones.
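The stages above can be sketched in a few lines of toy code (all data, names, and numbers here are hypothetical, not from the article): even a model that never sees the protected attribute reproduces disparity through a correlated proxy feature.

```python
# Minimal sketch (hypothetical data): dropping a protected attribute
# does not remove bias when a proxy feature stands in for it.
import random

random.seed(0)

rows = []
for _ in range(10_000):
    group = random.choice(["A", "B"])           # protected attribute
    # Feature preparation: the proxy correlates ~90% with group.
    proxy = 1 if (group == "B") == (random.random() < 0.9) else 0
    rows.append((group, proxy))

# A "group-blind" rule that scores only on the proxy feature ...
def select(proxy):
    return 1 if proxy == 0 else 0   # e.g., a rule learned from biased history

# ... still selects the two groups at very different rates.
for g in ("A", "B"):
    picks = [select(p) for grp, p in rows if grp == g]
    print(g, round(sum(picks) / len(picks), 2))
```

Removing the `group` column from the training data would change nothing here, which is why "we don't use race/gender as a feature" is not, by itself, evidence of fairness.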
Why Tech Alone Cannot Fix This
You cannot test for biases you haven't imagined. Systems fail in ways their creators never considered, often affecting communities the creators don't belong to.
Debiasing one metric often worsens another. Equalizing false positive rates across groups may increase false negative rates for the most vulnerable. There is no free lunch in fairness.
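This trade-off can be shown with toy numbers (all scores below are hypothetical): equalizing false positive rates across groups by raising one group's decision threshold increases that group's false negative rate.

```python
# Toy sketch (hypothetical scores): equalizing false positive rates
# across groups can raise the false negative rate for one of them.

def rates(neg_scores, pos_scores, thresh):
    """Return (FPR, FNR): negatives wrongly flagged, positives missed."""
    fpr = sum(s >= thresh for s in neg_scores) / len(neg_scores)
    fnr = sum(s < thresh for s in pos_scores) / len(pos_scores)
    return fpr, fnr

# Group A: scores separate fairly cleanly around the shared threshold.
a_neg, a_pos = [0.1, 0.2, 0.3, 0.4, 0.6], [0.5, 0.7, 0.8, 0.9, 1.0]
# Group B: distribution shifted upward (e.g., by proxy features).
b_neg, b_pos = [0.3, 0.4, 0.5, 0.6, 0.7], [0.5, 0.6, 0.7, 0.8, 0.9]

print(rates(a_neg, a_pos, 0.55))  # A at shared threshold: FPR 0.2, FNR 0.2
print(rates(b_neg, b_pos, 0.55))  # B at shared threshold: FPR 0.4, FNR 0.2
# Raise B's threshold until its FPR matches A's ...
print(rates(b_neg, b_pos, 0.65))  # FPR now 0.2, but FNR doubled to 0.4
```

The "fix" succeeded on the metric being watched (false positives) while doubling the missed positives for group B, the failure mode the paragraph above describes.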
Models cannot understand why a correlation is harmful. A model that learns "arrests predict future arrests" doesn't know that arrest rates reflect policing patterns, not crime rates. Context requires human judgment.
Different fairness definitions are mathematically incompatible. Unless groups have identical base rates or the classifier is perfect, no system can simultaneously satisfy demographic parity, equalized odds, and predictive parity. Every "fair" system makes a choice about which fairness to prioritize.
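One form of this incompatibility can be checked arithmetically. A known identity (Chouldechova, 2017) ties the false positive rate to prevalence p, positive predictive value (PPV), and false negative rate: FPR = p/(1-p) · (1-PPV)/PPV · (1-FNR). The sketch below (hypothetical rates) shows that if two groups have different base rates, holding PPV and FNR equal forces their FPRs apart:

```python
# If base rates differ, predictive parity (equal PPV) plus equal FNR
# forces unequal FPR: FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR).

def implied_fpr(p, ppv, fnr):
    """FPR forced by base rate p once PPV and FNR are fixed."""
    return p / (1 - p) * (1 - ppv) / ppv * (1 - fnr)

ppv, fnr = 0.8, 0.2                      # equalized across groups
fpr_a = implied_fpr(0.3, ppv, fnr)       # group A: 30% base rate
fpr_b = implied_fpr(0.1, ppv, fnr)       # group B: 10% base rate
print(round(fpr_a, 3), round(fpr_b, 3))  # ~0.086 vs ~0.022: unequal
```

No threshold tuning can escape this: as long as the base rates differ, any classifier that equalizes two of the quantities leaves the third unequal, so the choice of which to equalize is a policy decision, not an engineering one.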
Organizations Working on This
Algorithmic Justice League: Founded by Joy Buolamwini. Raises awareness about AI harms and advocates for equitable technology.
Distributed AI Research Institute (Civil Rights): Founded by Timnit Gebru. Community-rooted AI research challenging power concentration in AI.
Electronic Frontier Foundation (Civil Rights): Defending digital privacy, free speech, and innovation through impact litigation and advocacy.
Access Now (Civil Rights): Defending digital rights of people and communities at risk globally.
Civil Rights: Algorithmic Accountability Toolkit and investigations into tech-enabled human rights abuses.
Pulitzer Center (Journalism): Funding and supporting accountability journalism on artificial intelligence.
The Markup (Journalism): Nonprofit newsroom investigating how powerful institutions use technology to reshape society.
MIT Technology Review (Journalism): Investigative reporting on AI, including Karen Hao's foundational AI accountability series.
Stanford Center for Research on Foundation Models (Academic): Foundation Model Transparency Index scoring AI companies on 100 transparency indicators.
Berkman Klein Center for Internet & Society (Academic): Research on internet and society, including AI governance and accountability.
Disclaimer
Assessments reflect publicly available information and the published methodology of the Behind the AI Research Team. Grades represent analytical assessments derived from the published scoring framework, not statements of fact about internal company operations. If you believe any claim is inaccurate, contact corrections@behindtheai.org with the specific claim and your evidence.