Can algorithms be truly objective decision-makers, or do they necessarily pick up the biases of their human programmers?
Recent years have seen machine learning algorithms come to surround us, in our homes and in our back pockets. They're increasingly used in everything from recommending movies to guiding sentencing in criminal courts, in part because they're perceived as unbiased and fair. But can algorithms really be objective when they're created by biased human programmers? Are biased algorithms inherently immoral? And is there a way to resist immoral algorithms? Josh and Ken run code with Angèle Christin of Stanford University, author of "Algorithms in Practice: Comparing Web Journalism and Criminal Justice."
Join the conversation LIVE this Sunday 8/12 at 11 am by calling 1-800-525-9917, or catch the re-broadcast Tuesday 8/14 at 12 noon.