A recruiting tool that rejects female applicants for technical jobs; a re-offending prediction system that is far more likely to flag Black defendants as a risk than their white counterparts; facial recognition technology with a 99% accuracy rate on white male faces, but only 65% on the faces of Black women; and a hiring system that automatically throws out the CVs of women over 55 and men over 60. All of these are real-world examples of the bias built into certain AI systems.
These biases are present not because the software itself is inherently sexist, racist or ageist. Rather, AI systems learn from data that often encodes historical and societal biases, which they can then replicate at scale. When algorithms are trained on non-representative datasets, how can they reflect the needs of all populations and groups?
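To make the mechanism concrete, here is a minimal sketch (not drawn from the article) using synthetic "historical hiring" data in which past decisions penalized one group. The feature names, numbers and the use of a simple logistic regression are illustrative assumptions; the point is only that a model trained on skewed labels reproduces the skew, even though it has no concept of gender.

```python
# Illustrative sketch with synthetic data: the bias lives in the
# historical labels, and the trained model faithfully replicates it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Features: years of experience (same distribution for both groups)
# and a group indicator (0 or 1), e.g. a proxy for gender.
experience = rng.normal(10, 3, n)
group = rng.integers(0, 2, n)

# Historical hiring decisions depended on experience, but past
# reviewers also penalized group 1 -- that penalty is the bias.
logits = 0.5 * (experience - 10) - 1.5 * group
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(np.column_stack([experience, group]), hired)

# Two identical candidates who differ only in group membership.
candidates = np.array([[12.0, 0], [12.0, 1]])
print(model.predict_proba(candidates)[:, 1])
# The group-1 candidate receives a markedly lower predicted "hire"
# probability, purely because the training data encoded that disparity.
```

Nothing in the code singles out a group maliciously; the model simply optimizes against biased labels, which is exactly how non-representative training data turns into biased predictions.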