© 2021 KALW
KALW Public Media / 91.7 FM Bay Area

Could Robots Be Persons?

What would it mean to assign legal or moral responsibility to algorithms in human form?

As we approach the advent of autonomous robots, we must decide how to determine culpability for their actions. Some propose creating a new legal category of “electronic personhood” for any sufficiently advanced robot that can learn and make decisions by itself. But do we really want to assign artificial intelligence legal—or moral—rights and responsibilities? Would it be ethical to produce and sell something with the status of a person in the first place? Does designing machines that look and act like humans lead us to misplace our empathy? Or should we be kind to robots lest we become unkind to our fellow human beings? Josh and Ray do the robot with Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance and author of "The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation." Sunday, January 9 at 11 am.

This is the third and final episode in Philosophy Talk's series The Human and the Machine.

Devon Strolovitch studied medieval Judeo-Portuguese manuscripts and earned a PhD in Linguistics from Cornell University before coming to KALW. He is the Senior Producer of Philosophy Talk, and since 2007 has hosted Fog City Blues, the weekly digest of Blues in the Bay Area and beyond.