Behind the AI

Who funds, controls, and profits from the AI tools you use every day? Independent assessments. No corporate sponsors. Plain language.

How Bias Works

AI bias is structural, not a patchable bug. It enters at every stage of development and cannot be solved with better code alone.

Based on: Karen Hao, “This Is How AI Bias Really Happens—and Why It's So Hard to Fix,” MIT Technology Review, February 2019

Three Stages Where Bias Enters

1. Problem Framing

Bias enters when you choose what to optimize.

2. Data Collection

Training data either underrepresents some groups or encodes the historical discrimination already present in past decisions.

3. Data Preparation

Choosing which attributes the model sees, and which it ignores, can encode bias in ways that are hard to detect, as the sketch after this list illustrates.
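
These stages are easier to see in code. The sketch below is a minimal illustration using entirely synthetic data: the group labels, zip codes, and approval rates are hypothetical, and the "model" is nothing more than a historical approval rate learned per zip code. It shows how dropping the protected attribute during data preparation does not remove bias when a correlated proxy feature remains in the training data.

    import random

    random.seed(0)

    def make_applicant():
        """One synthetic applicant with a historically biased approval label."""
        group = random.choice(["A", "B"])                  # protected attribute
        # Proxy feature: zip code is strongly correlated with group membership.
        if group == "A":
            zip_code = "90001" if random.random() < 0.9 else "90002"
        else:
            zip_code = "90002" if random.random() < 0.9 else "90001"
        # Historical label: past decision-makers approved group B far less often.
        approved = random.random() < (0.7 if group == "A" else 0.3)
        return {"group": group, "zip": zip_code, "approved": approved}

    history = [make_applicant() for _ in range(10_000)]

    # Data preparation drops the protected attribute; only the zip code is kept.
    # The "model" is simply the historical approval rate for each zip code.
    rate_by_zip = {}
    for z in ("90001", "90002"):
        rows = [a for a in history if a["zip"] == z]
        rate_by_zip[z] = sum(a["approved"] for a in rows) / len(rows)

    def model_approves(applicant):
        # Approve anyone from a zip code that was approved more often than not.
        return rate_by_zip[applicant["zip"]] > 0.5

    # The model never sees the group label, yet its decisions still split by group.
    new_applicants = [make_applicant() for _ in range(10_000)]
    for g in ("A", "B"):
        rows = [a for a in new_applicants if a["group"] == g]
        rate = sum(model_approves(a) for a in rows) / len(rows)
        print(f"Group {g}: approval rate {rate:.0%}")

Run as written, this prints an approval rate near 90% for group A and near 10% for group B, even though the group label was removed before training.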

Why Tech Alone Cannot Fix This

1. Unknown unknowns

You cannot test for biases you haven't imagined. Systems fail in ways their creators never considered, often affecting communities the creators don't belong to.

2. Imperfect processes

Debiasing one metric often worsens another. Equalizing false positive rates across groups may increase false negative rates for the most vulnerable. There is no free lunch in fairness.

3. No social context

Models cannot understand why a correlation is harmful. A model that learns "arrests predict future arrests" doesn't know that arrest rates reflect policing patterns, not crime rates. Context requires human judgment.

4. No mathematical definition

Different fairness definitions are mathematically incompatible: unless base rates are identical across groups or the predictor is perfect, you cannot simultaneously satisfy demographic parity, equalized odds, and predictive parity (see the sketch after this list). Every "fair" system therefore makes a choice about which fairness to prioritize.
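
The trade-offs in items 2 and 4 reduce to arithmetic on confusion matrices, and a short sketch makes them concrete. The base rates and error rates below are hypothetical numbers chosen only for illustration: once two groups have different base rates, equalizing false positive and false negative rates (equalized odds) forces precision, and with it predictive parity, to diverge. This is the impossibility result usually attributed to Chouldechova (2016) and Kleinberg, Mullainathan, and Raghavan (2016).

    def precision(base_rate, fpr, fnr):
        """PPV = P(actually positive | predicted positive) for one group."""
        true_pos = (1 - fnr) * base_rate       # correctly flagged positives
        false_pos = fpr * (1 - base_rate)      # incorrectly flagged negatives
        return true_pos / (true_pos + false_pos)

    # Same error rates for both groups, so equalized odds holds by construction.
    FPR, FNR = 0.20, 0.30

    # Hypothetical base rates: the outcome is more common in group A than B.
    base_rates = {"A": 0.50, "B": 0.20}

    for name, base_rate in base_rates.items():
        ppv = precision(base_rate, FPR, FNR)
        print(f"Group {name}: base rate {base_rate:.0%}, precision {ppv:.0%}")

With these numbers, a positive prediction is correct about 78% of the time for group A but only about 47% of the time for group B; forcing those precisions to match would require unequal error rates instead, which is exactly the trade-off described in item 2.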

Organizations Working on This

Disclaimer

Assessments reflect publicly available information and the published methodology of the Behind the AI Research Team. Grades represent analytical assessments derived from the published scoring framework, not statements of fact about internal company operations. If you believe any claim is inaccurate, contact corrections@behindtheai.org with the specific claim and your evidence.