Who is Dr. Joy Buolamwini?
The poet of code who held a mirror up to machine learning and forced Big Tech to see what it didn't want to.
Dr Joy Buolamwini is the founder of the Algorithmic Justice League and one of the most influential voices in AI ethics today. A computer scientist, Rhodes Scholar, Fulbright Fellow, and MIT graduate, she combines rigorous research with powerful storytelling to ask a deceptively simple question: What happens when the coded gaze doesn't see you?
Her research at MIT's Media Lab revealed that commercial facial recognition systems, built and deployed by some of the world's most powerful companies, were far less accurate when analysing darker-skinned and female faces. While these systems boasted high accuracy overall, Dr Buolamwini found error rates of up to 34.7% for darker-skinned women, compared with less than 1% for lighter-skinned men.
But Dr Buolamwini didn't stop at exposing the problem. She built a solution. Her Pilot Parliaments Benchmark created a more representative dataset for testing AI systems, directly addressing the stark imbalances in existing benchmark datasets, where roughly three-quarters of faces were male and four-fifths were lighter-skinned. Her benchmark has become a widely used standard for evaluating fairness in facial analysis.
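The core methodological point here is that a single aggregate accuracy number can hide large disparities between subgroups. A minimal sketch of that idea, using illustrative (not real) numbers that merely echo the pattern Dr Buolamwini's Gender Shades study reported, might look like this:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute per-subgroup error rates from (group, correct) records.

    Disaggregating by subgroup is the point: a model that is nearly
    perfect on the majority group can still fail badly on
    under-represented groups while aggregate accuracy looks fine.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical records, shaped to illustrate the disparity pattern:
records = (
    [("lighter-skinned male", True)] * 99
    + [("lighter-skinned male", False)] * 1
    + [("darker-skinned female", True)] * 65
    + [("darker-skinned female", False)] * 35
)

rates = error_rates_by_group(records)
aggregate_accuracy = sum(c for _, c in records) / len(records)
# Aggregate accuracy is 82%, yet the per-group error rates are 1% vs 35%.
```

The same disaggregation logic, applied to real predictions over a balanced dataset like the Pilot Parliaments Benchmark, is what turns a reassuring headline number into an auditable fairness claim.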
By exposing the racial and gender biases embedded in algorithmic systems, Dr Buolamwini sparked a global reckoning. IBM, Microsoft, and Amazon halted or re-evaluated their facial recognition programmes. The US Congress and the European Commission cited her work in discussions on AI regulation. TIME Magazine named her one of the 100 Most Influential People in AI in 2023.
Her TED Talk, "How I'm Fighting Bias in Algorithms," has been viewed over 1.6 million times, and her book, Unmasking AI, is quickly becoming essential reading for anyone working at the intersection of technology, ethics, and equity.
Dr Buolamwini doesn't reject innovation; she reframes it. She reminds us that AI is not neutral, and that data is not destiny.
In a world rushing to automate, she asks the question every fintech leader, policymaker, and designer should be asking: Are we encoding inequality, or designing for justice?