Can we trust an algorithm to make morally responsible driving decisions?
Autonomous vehicles are quickly emerging as the next innovation that will change society in radical ways. Champions of this new technology say that driverless cars, which are programmed to obey the law and avoid collisions, will be safer than human-controlled vehicles. But how do we program these vehicles to act ethically? Should we trust computer programmers to determine the most ethical response to every possible scenario the vehicle might encounter? And who should be held responsible for the bad (potentially lethal) decisions these cars make? Our hosts take the wheel with Harvard psychologist Joshua Greene, author of "Our Driverless Dilemma: When Should Your Car Be Willing to Kill You?" Sunday 12/08 at 11 am and Tuesday 12/10 at 12 noon.
Recorded live at Stanford with support from the Symbolic Systems Program and the McCoy Center for Ethics in Society.