For her Gender Shades project, MIT researcher Joy Buolamwini fed more than 1,000 faces of varying genders and skin tones into three commercial AI-powered facial analysis systems from Microsoft, IBM, and Face++ to see how accurately each one classified different kinds of faces.
All three systems performed well overall, but they classified male faces more reliably than female faces and lighter-skinned subjects more accurately than darker-skinned subjects. For instance, 93.6% of the gender misclassification errors made by Microsoft’s system involved darker-skinned people.
Buolamwini’s message near the end of the video is worth heeding:
We have entered the age of automation overconfident yet underprepared. If we fail to make ethical and inclusive artificial intelligence, we risk losing gains made in civil rights and gender equity under the guise of machine neutrality.