JUANA SUMMERS, HOST:
This week, the State of Florida launched a criminal investigation into OpenAI over accusations that ChatGPT advised the alleged gunman in a mass shooting at Florida State University last year. That investigation comes amid growing scrutiny of the potential harms AI chatbots can pose and questions over how to hold the companies that make them accountable. NPR's Shannon Bond joins us now. Hey.
SHANNON BOND, BYLINE: Hey, Juana.
SUMMERS: So, Shannon, start by just telling us more about this investigation into that shooting in Florida.
BOND: Yeah. This happened on FSU's Tallahassee campus just over a year ago. Two people died and five were wounded, and the alleged shooter is facing multiple charges. Now, Florida Attorney General James Uthmeier says records show this person sought advice from ChatGPT before the shooting. He asked about what type of gun, what type of ammunition to use. He asked when to go to campus to encounter the most people. And Uthmeier had already started a civil inquiry over this shooting. Now he is looking at whether OpenAI should be held criminally liable.
SUMMERS: What do we know about how OpenAI handles these types of situations?
BOND: Yeah. That's a question a lot of people have, including the Florida attorney general. In a statement to NPR, an OpenAI spokesperson called this shooting at FSU a tragedy but says the company is not responsible. She said, in this case, ChatGPT gave the kind of factual responses to questions that could be found anywhere online. And the spokesperson says after the shooting, the company identified an account connected to the suspect and shared that information with law enforcement.
Now, more generally, OpenAI has said it has a system that flags conversations involving potential harm for review by human staffers, and in some cases the company will then alert law enforcement. We don't know if this account was internally flagged before the shooting. But Juana, this is just the latest incident raising questions about how AI companies handle these kinds of risks.
SUMMERS: Right. Well, just how big of an issue is this for AI companies?
BOND: Well, they're facing multiple lawsuits alleging chatbots have contributed to a variety of harms, and that includes a lawsuit against OpenAI over a different mass shooting that happened in British Columbia in February. And that alleged shooter also discussed gun violence with ChatGPT. Now, according to Canadian court filings, OpenAI's internal systems flagged these chats. Staffers were alarmed enough to recommend alerting law enforcement, but company leaders decided not to, and ChatGPT banned the user. Now, OpenAI has said it's overhauling its protocol for referring accounts to law enforcement after this.
I spoke with Janet Haven, who's executive director of Data & Society, an independent research institute. And she says, you know, this question of addressing safety - it really has to start before the debate over when to notify authorities. She says this is about the way that AI chatbots interact with users in the first place.
JANET HAVEN: Chatbots are designed to be agreeable and validating, and that can become really dangerous when users are spiraling or contemplating harmful actions.
BOND: And she says any policy or regulation thinking about safety needs to look at those design choices made at the very beginning, as well.
SUMMERS: And what would it look like to focus on design choices?
BOND: Well, you know, people often say these, you know, AI tools, chatbots are essentially black boxes. There's just a lot we don't know about what is happening. And so one thing advocates are pushing for is more transparency into how AI tools are designed, how they're tested and then also calling for auditing that will allow us to understand how they perform once they're released to the public.
I spoke with Alondra Nelson. She worked on AI policy in the Biden administration, and she's now at the Institute for Advanced Study. And she says that kind of transparency would be a really key step in safety and in understanding how to even begin to craft regulations. And she says it should not be left up to the companies to decide what the public knows about these tools. And also, it shouldn't be for us to only learn about these really tragic cases - right? - like, when there's an investigation or a lawsuit.
SUMMERS: Yeah. That's NPR's Shannon Bond. Thanks.
BOND: Thank you.

Transcript provided by NPR, Copyright NPR.
NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.