Humans can show bias, but mathematical algorithms supposedly do not. So why did machine learning initially redline some minority neighborhoods when Amazon first rolled out same-day delivery in Boston? Why has Google been found to show ads for lower-paying jobs to women?
Where does that bias come from and how can businesses correct for it?
Richard Berk, Professor of Statistics and Criminology in Penn's School of Arts and Sciences, addressed these questions in front of a packed room of MBA students. Titled “Machine-Learned ‘Bias,’” the lunchtime event was offered during One Wharton Week and co-sponsored by the Wharton Analytics Club.
Prof. Berk’s work focuses primarily on criminal justice applications. He gave the example that men commit most violent crimes, so an algorithm trained to forecast risk will assign higher risk to men. If men are therefore more likely to be sentenced to prison, is that result biased, or is it accurate and fair? The answer, according to Berk, depends on how you define justice.
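To make that mechanism concrete, here is a minimal sketch (not from Berk's talk) of how a risk model trained on data in which one group has a higher base rate of the outcome ends up assigning that group higher average risk scores. The group labels, the base rates, and the use of scikit-learn's LogisticRegression are all illustrative assumptions.

```python
# Toy illustration: a model fit to data where one group has a higher base rate
# of the outcome will reproduce that difference in its predicted risk scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data: group A has a 30% outcome rate, group B has 10%.
group = rng.integers(0, 2, n)                      # 1 = group A, 0 = group B
base_rate = np.where(group == 1, 0.30, 0.10)
outcome = rng.random(n) < base_rate                # simulated outcomes

model = LogisticRegression().fit(group.reshape(-1, 1), outcome)
scores = model.predict_proba(group.reshape(-1, 1))[:, 1]

print("Mean predicted risk, group A:", scores[group == 1].mean().round(3))
print("Mean predicted risk, group B:", scores[group == 0].mean().round(3))
```

Running the sketch shows the model simply reflecting the difference in base rates present in the training data; whether acting on those scores is biased or fair is a question about which definition of justice one adopts, not about the arithmetic.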