Ethics and Bias in AI

24.Mar.17
by Jon Follett

Episode Summary

On The Digital Life this week, we discuss ethics and bias in AI, with guest Tomer Perry, research associate at the Edmond J. Safra Center for Ethics at Harvard University. What do we mean by bias when it comes to AI? And how do we avoid including biases we’re not even aware of?

If AI software for processing and analyzing data begins making decisions about core elements of our society, we'll need to address these issues.

For instance, risk assessments used in the correctional system have been shown to incorporate bias against minorities.

When it comes to self-driving cars, people want to be protected, but they also want the vehicle, in principle, to "do the right thing" when encountering situations where the lives of both the driver and others, like pedestrians, are at risk. How should we deal with this? What are the ground rules for ethics and morality in AI, and where do they come from? Join us as we discuss.

Inside the Artificial Intelligence Revolution: A Special Report, Pt. 1
Atlas, The Next Generation
Stanford One Hundred Year Study on Artificial Intelligence (AI100)
Barack Obama, Neural Nets, Self-Driving Cars, and the Future of the World
How can we address real concerns over artificial intelligence?
Moral Machine


Subscribe to The Digital Life on iTunes and never miss an episode.

Topics: Podcast