This story aired in the August 14, 2025 episode of Crosscurrents.
Today, we bring you the second story from our series The Homework Machine.
In part one, we heard about the rapid rise of AI, and the varied ways teachers are adopting, or resisting, it.
DEVON O'NEIL: It was those two, like super crazy post COVID years. So I come back and it's like, like those movies where like the caveman, like Defrosts or whatever, and they're like, what is this? I would have these really well written paragraphs. Or snippets that are looked to be very well researched and all this, but not at all on topic, the grammar was off. Even the most brilliant 14-year-old still talks like a 14-year-old and still writes like a 14-year-old.
Next we hear how reporter Jesse Dukes has been working with researchers at MIT, speaking to teachers and students all across California and around the country.
He learned that the arrival of ChatGPT and other AI tools has led to more cheating in schools. But, it’s not always easy to define what cheating is… or to know what to do about it.
Story Transcript:
REPORTER: This past school year, Emilia was a senior. She lives in Northern California and she went to a school that she liked. She loves her folk dancing groups and she loves to learn. And like other high schoolers I’ve spoken to, she has noticed some of her classmates using AI to do their schoolwork.
EMILIA: Yeah, just like, they don't care about doing the work. Like, they just want to find an easy way to pass a class.
REPORTER: Emilia thinks using AI on schoolwork is a bad idea.
EMILIA: I mean, you're not learning. You're just learning how to copy and paste. Why would schools be invented then? Or, like, why would we come to school? It's just like, why, you know?
REPORTER: But, there was this one time. Last fall, Emilia’s English teacher, who was new to the school, didn’t seem to think Emilia was a very good writer. She kept getting poor grades on writing assignments, which surprised her, because Emilia thinks she's a strong writer.
EMILIA: I actually took a college class with a professor at UC Berkeley. I always had A’s in that class, and the professor even liked so many of my writings. I was just so confused, you know? I was just like, “How am I getting A’s in my college classes and with you, I'm getting F's and C's?”
REPORTER: Emilia couldn't make sense of what was happening. It seemed like no matter what she did, she couldn't get a higher grade, and it started to feel personal, like this teacher just didn't like her or her writing.
So one day, when the teacher gave them an extra credit assignment in class, Emilia thought, “I'm gonna give this AI thing a try.” She put the prompt into ChatGPT, got a result, edited it a little, and submitted it to her teacher. Pretty quickly, she got a private message from the teacher over Google Chat.
EMILIA: And she was like, “Hey, can you come check with me?” I think I can even look it up on my phone. Oh, right here it says, “Looking at the version history, I see no evidence that this is your own writing. No extra credit. See me if you wish to discuss.” Like, my whole jaw dropped and I was like, “Ah, not even AI can save me.”
REPORTER: Emilia was busted. And she was frustrated with herself. She knew she made a mistake, but she also knew she wasn't normally the kind of student to skip out on an assignment. But now, she felt that this teacher was going to have this false impression of her.
EMILIA: She will now think that I use AI for the rest of the semester.
REPORTER: Emilia says she fell prey to temptation and crossed a line. And she’s not alone. Nowadays, nearly every student in high school or middle school has access to ChatGPT, or similar powerful AI, on laptops or phones. Generative AI can do a reasonable job completing many writing assignments.
Many teachers and students think that’s led to more cheating. Here’s my colleague Holly McDede talking to a student named Samantha Babst at Independence High School in San Francisco.
HOLLY MCDEDE: And how good do you think teachers are at detecting when ChatGPT is being used?
SAMANTHA BABST: Oh, I don't know, really. Hmm. Well, I've heard a lot of students talk about using it and no one says that they've gotten caught. Yeah, I'm not gonna out the teacher, but there was a certain comment I heard about someone's class where they were like, “Oh, that guy's so easy, you just use ChatGPT for all his stuff.” And I was like, “Ehhh, that is a bad look, I think.” But people still use it. People love it. That's just how it is.
REPORTER: One student with a gray area story is Kaitleen Evangelista, who just finished her sophomore year at Abraham Lincoln High School in San Francisco.
JESSE DUKES: Do you think the teachers realize — we don’t have any teachers with us — do you think the teachers realize how much that’s happening?
KAITLEEN EVANGELISTA: I don't think so. Like, we had this one essay on the Great Gatsby. All I saw was people typing it and [asking] “Oh, can I just print it out?” The teacher's like, “Yeah, sure.” And then all I see is them putting in a prompt, like, “Make an essay for this.”
REPORTER: Kaitleen means they’re prompting ChatGPT for the essay, and she says she has not done that. But that doesn’t mean she hasn’t been suspected. Earlier this year, she was writing an essay about the Avatar movies.
KAITLEEN: You know Avatar? The blue people?
JESSE: Yeah.
KAITLEEN: Yeah. So we basically had to write an entire essay about it and I would use Grammarly to help me fix some of my grammar mistakes, because I have a lot of grammar mistakes.
REPORTER: Grammarly is actually enabled automatically on Kaitleen’s school-issued laptop, and she accepted some of its suggestions. Kaitleen’s teacher ran her essay through AI detection software.
KAITLEEN: Apparently, when she used the AI finder, it was 99%.
REPORTER: AI detection software is a complicated and controversial subject. Some programs clearly work better than others, but sometimes Grammarly, which is itself a kind of AI, will be flagged as AI cheating, and most teachers don’t think using Grammarly is the same as cheating.
So in this case, Kaitleen’s teacher did what is recommended by many experts, which is to use the AI checker’s findings to start a conversation.
KAITLEEN: And then I tried explaining my story and she was like, “Oh, okay, well this is just a warning in case you did use AI and I'm gonna tell the class this too, but I want you to rewrite your essay.”
REPORTER: It’s not like Kaitleen got in big trouble. She was still able to get a good grade on the assignment, but I asked her how this felt.
KAITLEEN: I felt kind of offended at first. Like, I put my time and effort into trying to write a good essay and I just get told “You're probably using AI.” But I also use that as a learning experience.
REPORTER: She tries to be understanding of what her teacher is up against.
KAITLEEN: But I was like, maybe the teacher's been having people in her class like using AI. So I mean, it would make sense that she would say that. I wouldn't, I wouldn't be harsh on her about that.
REPORTER: Kaitleen’s story highlights the dilemma that teachers and students are facing. Many teachers are on high alert about AI-powered cheating, and that creates a risk of false accusations. There is a large body of historical evidence showing that Black and Latino teens are more likely to be accused of, and punished for, discipline infractions in schools. In fact, a recent nationally representative survey found that 20% of Black teens report being falsely flagged for using AI, compared to 7% of white teens and 10% of Latino teens. So, there’s a real risk of unfair false accusations.
On the other hand, letting students get away with using AI to complete work dishonestly isn’t doing them any favors. Jessa Kirk is a former high school teacher and dean who now works in the Academic Integrity department at UC Santa Cruz. She thinks that many schools are not doing enough to define what cheating with AI really means, and then discourage it.
JESSA KIRK: [These are] schools that, for whatever reason, don't have a policy in place or don't have a clear policy. And then students are using generative AI unchecked, in part because they don't know it's wrong, it hasn't been explained to them. No one is calling them on it, and then they get to the next level of education and are completely unprepared. I know so many students see it as, “Okay, I'm just cutting a corner in this one place,” or “I don't really care about this assignment. I don't see the point of it. I'm gonna use it,” and don't realize that they are robbing themselves of a chance to actually learn.
REPORTER: Kirk thinks some students at some schools might fall through the cracks.
JESSA KIRK: “Their grades are good. We're not gonna worry about that kid. Maybe they're using Gen AI a little bit more than they should, but they're getting A's, let's just move them forward. Whatever.”
JESSE DUKES: “We got them into college.”
JESSA KIRK: Yes. “We got them into college. Our job is done.” And then we in higher education will see these students and meet them, perhaps, when they're reported for misconduct in college, when it feels a lot more serious. Maybe it is a lot more serious. Then they're left wondering, “Do I even belong in college?”
REPORTER: Jessa Kirk says she recognizes that many teachers and schools feel overwhelmed by everything. They’re still recovering from the challenges of the pandemic, teacher burnout, budget cuts, chronic absenteeism. But she also thinks schools need to be having conversations with students and teachers about AI. What kinds of use constitute cheating? What is OK? How can students learn the difference? She’s been reading through what many schools call academic integrity policies.
JESSA KIRK: I have one here from a local school: “Cheating undermines the trust relationship between the student and the teacher and will be considered cause for disciplinary action.” So sure, that's true. I really don't have any idea of what that means. I don't know how they're defining cheating. I don't know what “cause for disciplinary action” means exactly. I don’t know what that’s going to look like. So I see a school wanting to address this but seemingly hesitant about wanting to address what that’s going to look like.
REPORTER: Many teachers I’ve spoken to agree that their AI policies are vague and need to be updated. In a 2024 national survey from RAND, only a quarter of teachers said their school had provided any updated policy guidance around AI, and only five percent said the guidance was helpful.
Many teachers, school leaders, and students have different ideas about what counts as cheating in the age of AI.
SARA FALLS: It's so amorphous at this point.
REPORTER: This is Sara Falls, an English teacher at Abraham Lincoln High School in San Francisco.
SARA FALLS: It's very clear cut when a kid copies and pastes from the internet. It's very clear cut when a kid copies somebody's assignment, you know, word for word.
REPORTER: But lots of the ways students use AI are not clear cut. Falls was the English department chair in 2024–2025, and also led an effort to define responsible AI use at Lincoln High School, for teachers and students. And to understand the approach she took, it helps to understand how she approaches teaching.
Sound of Sara Falls teaching a class.
SARA FALLS: I really love the classes I teach. They're sort of my dream classes.
Sound of Sara Falls teaching a class.
SARA FALLS: I love running the school newspaper. It's kids who are writing and have a real, authentic audience, and so have a reason to be deeply invested in their work. Even if they struggle with it, they have, you know, a motivation to make it the best they can, and usually they learn a lot about themselves in the process.
REPORTER: When I visited Sara Falls’ student newspaper class, the students were putting together an issue. They’d done all the writing, designed features, they were doing art. They’d even sold ads. While the teacher occasionally offered hands-on help, students mostly edited one another.
Sound of students commenting on each other’s work
SARA FALLS: So the hope is to be a cheerleader while also holding high expectations. I have high expectations for my kids. You know, I've been here long enough that I know what they're capable of and I want to push them. And for some kids that's really scary because they've made it to their senior year, sometimes, without being pushed very much.
Sound of Sara Falls giving feedback to students
SARA FALLS: I have my students journal, for instance. Not every day, but often. We start class with just, I’d say, 10 minutes of non-stop writing. I say, “Even if I have to write, ‘I don’t know what to write next, I don’t know what to write next.’ If you keep your hand moving, and you get into the flow… you’re going to start to create your own fluency and trust your own mind more.” And so I have to remind them that it's okay for it to be hard. If it were easy, it would mean that you already knew everything, and what a waste of time that would be. You know that the whole point of learning is to struggle and maybe fail and then figure out how to learn from that.
REPORTER: So initially, Sara Falls thought her school needed a zero tolerance approach to students using AI.
SARA FALLS: And I was talking to my department about how we articulate to students about why and what the issues of this are, and how we are going to detect it. And my staff was sort of like, “Well, I use it for this,” or, “Well, what about for this?” And it hadn't even really occurred to me that there were useful ways to use it. And so I quickly realized that a zero tolerance approach is maybe not realistic, certainly not how people are thinking about it. And maybe then it becomes I would be setting myself up for failure if I decide, oh, we're never gonna use it, students should never use it.
REPORTER: So, Sara Falls reached out to the school district to see if they had any guidance around AI. She says it took a while to get a response. There’d been budget cuts and personnel changes at the district. But finally, a consultant responded with a document about AI for teachers.
SARA FALLS: And what she shared with me was how teachers can use AI to create rubrics and how teachers can create, use AI to lesson plan. I am a little concerned that that's the teacher-facing, like, “Look at this cool new thing. Look at all the things you can do with it.”
REPORTER: She thought there needed to be some articulation of why students should not reach for AI to complete an assignment. She started working on a “Responsible AI” policy that would help clarify some of those expectations. When it’s okay to use AI, when you shouldn’t.
The policy wasn’t really about punishments and sanctions. The school has a separate academic integrity policy. This was designed to express the school’s values around learning, and give the students a framework to address some of the AI gray areas.
At first she worked with other English teachers. When I spoke with her in February, she was preparing to share it with other department heads in the school. The first section lists some of the concerns around AI.
SARA FALLS: It gets things wrong all the time. It uses copyrighted works and is creative theft.
REPORTER: Then the document has positive statements of what students should do. What values they wanted the students to center.
SARA FALLS: This whole first section, the primary Do for ethical AI use, is to improve as a writer. And so trying to name, “What does it look like to be a good writer?” So, you know. “Practice. Go through the writing process. Get comfortable writing by hand and learn to type. Be willing to ask questions and seek help from your teachers. Read.” That's like the primary do. It's not about AI at all. It's just about developing as a writer.
REPORTER: Sara Falls is busy. This past school year she taught four different classes, was the English department chair, and has her own personal life. She would have preferred that the school or district have a curriculum or technology specialist who took part in determining the responsible AI guidance — maybe with input from teachers like her.
But since that wasn’t happening, or at least it wasn’t happening in a way that made her comfortable, she volunteered to take it on. And she said ultimately, it wasn’t about being totally anti-AI, it was about figuring out how to live in a world with AI, while still centering what her school values in education.
SARA FALLS: I mean, I think it's possible that AI will help us cure cancer. I think it's possible that AI will help us contact life in other galaxies. Like, I love the idea of that, I really do. I think some of this is super fascinating and we need really, really smart people who are going to kind of push the technology of this, and we're not going to get really, really smart people if, at the high school level, we're just letting AI do the work for us. The students still need to learn how to be thinkers, creatives, how to be innovators.
REPORTER: The RAND study I mentioned earlier shows that Sara Falls’ experience is typical. A minority of teachers say they’ve gotten any guidance or training around generative AI, and teachers in higher socioeconomic school districts are much more likely to say they had gotten guidance or training. Teachers in urban districts with lower socioeconomic status are far more likely to report that they’ve been left to figure this AI thing out on their own.
A teacher in Los Angeles told me that he, with a few colleagues, had been “voluntold” by his principal to rewrite the academic integrity policy — on their own time, after school, without any help or resources.
Experts like UC Santa Cruz’s Jessa Kirk say that too many schools haven’t tackled the gray areas around AI. Schools without clear updated policies are more likely to have unchecked cheating. Their students are more likely to be confused about what counts as cheating.
And students at those schools are more likely to be falsely accused. Nearly three years after the arrival of the “Homework Machine,” plenty of teachers and students are telling us their schools haven’t caught up.
Jesse Dukes’ reporting was funded by the Kapor Foundation. We had additional reporting from Holly McDede and Marnette Federis. You can hear more of Jesse’s reporting on this topic on the MIT Podcast The Homework Machine.