Could Robots Be Persons?
What would it mean to assign legal or moral responsibility to algorithms in human form?
As we approach the advent of autonomous robots, we must decide how we will determine culpability for their actions. Some propose creating a new legal category of “electronic personhood” for any sufficiently advanced robot that can learn and make decisions by itself. But do we really want to assign artificial intelligence legal—or moral—rights and responsibilities? Would it be ethical to produce and sell something with the status of a person in the first place? Does designing machines that look and act like humans lead us to misplace our empathy? Or should we be kind to robots lest we become unkind to our fellow human beings? Josh and Ray do the robot with Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance, and author of “The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation.” Sunday, January 9 at 11 am.
This is the third and final episode in Philosophy Talk's series The Human and the Machine.