© 2024 KALW 91.7 FM Bay Area

AI, Art, & Jobs: How Can We Adapt in the New Age of Automation?

"It's a tool that I see that can change for the better or for the worse. And it can only be used for the better if regular people and working people and artists get to say on how those technologies are implemented, how they're being used and for what purpose."

Rivka Louissaint

What is the effect of AI on creative fields? In the wake of new technologies, how are visual artists feeling about how AI can or should be used? Listen to tbh to hear Santa Clara High School senior Theodore Nguyen tell his story.

Story Transcript

Earlier this year, I came across an ad that sparked troubling thoughts. It featured images created with artificial intelligence. That got me thinking about how it may affect the future of work for me and my generation.

The music you’re hearing comes from that Coke ad. It went viral around March.

In my mind, it was a huge development. For the first time in corporate history, a major consumer brand had partnered with an AI company to generate a massive marketing campaign using artificial intelligence.

Here’s Coke’s CEO James Quincey discussing the partnership in a video release.

James Quincey Clip: OpenAI is a unique opportunity to participate and experiment with that next generation technology. It’s going to be incredibly important, incredibly disruptive, in communication, in knowledge work, and many other things.

What I wonder though, is how that disruption is going to shape my future job prospects, as well as my friends’.

A debate about this topic took place online during the summer. People were talking about AI’s impact on artists. I actually have a lot of friends who are artists, so I decided to look into how AI is transforming the world of illustrative art.

But first, what is AI, anyway? Well, it depends on who you ask, and it turns out that even the experts can’t really agree on what “artificial intelligence” is. Here’s one definition.

B CAVELLO: Broadly speaking, AI is a collection of technologies. It's the name that we use to describe a bunch of different technologies that emulate human intelligence.

That’s B Cavello, the director of Emerging Technologies at the Aspen Institute. They admit that their definition of AI is slippery.

B CAVELLO: Sometimes AI can be something like the auto focus on your camera or, you know, a spam filter in your email. AI is becoming much more ubiquitous in our lives, but it's not often obvious where it exists.

What’s worrying artists, authors, and actors though, is generative AI.

B CAVELLO: So it's a type of artificial intelligence, um, that is, uh, a bunch of different technologies that can be used to create new content.

For the first half of 2023, everyone was talking about OpenAI. It’s a software company in San Francisco that created DALL-E. That’s an AI system that can create realistic images from text prompts. It also created ChatGPT, an advanced chatbot that has taken the world by storm.

It’s as if a company used my voice to train its software’s speaking capabilities.

Robotic Explanation: OpenAI named the system after the Spanish surrealist artist Salvador Dalí and the Pixar movie robot character WALL-E.

The company trained the DALL-E algorithm to create original pictures by feeding it a database of 250 million images and the descriptions of them from the internet.

DALL-E is just one of several popular image generators, alongside Midjourney, Stable Diffusion, and Artbreeder.

For its part, the Coca-Cola Company worked with OpenAI’s software and a consulting company called Bain to create its massive marketing campaign.

A central element of the campaign was a museum-style ad. It showed a lot of art pieces and sculptures lobbing a bottle of Coke to each other until it reached a bored art student in need of inspiration.

Diving a little deeper, I found a “making of” YouTube video that seemed to show Coke marketers using green screens and computer-generated effects to create the backgrounds featured in the ad.

For all of Coke’s hype about needing to embrace new technologies, this development worried me.

Many humans were still involved in this ad campaign. But, it appears that a lot of CEOs want to use AI to automate as many jobs as humanly possible.

So I’m concerned that there won’t be enough jobs for everybody in the future if this trend continues.

After all, machines can be cheaper than humans, and are often more reliable and consistent. They never seek or need parental leave, or ask for raises.

Was I overreacting? I talked to my artist friends to see what they thought about this whole thing.


My name is Lyubaa Filimonova. I am almost 16 years old. I'm gonna be a junior in Valley Christian Schools. I do a lot of traditional art. I have been going into digital art lately, and I do a lot of music production.

Lyubaa is one of my friends in San Jose. Her father is a software engineer, and her mother is a graphic designer. She sounded optimistic, but a little concerned.


I don't think that AI art will bring people out of business.

Lyubaa believes that audiences will still want to see the artistry that goes into works like paintings, and that digital technology has actually helped traditional artists gain more exposure.

LYUBAA FILIMONOVA: It is, first of all, hard to succeed as an independent small artist because of how many there are … but it does have been a lot easier recently with like social media or public art galleries. But I, like I said, like with the traditional art, people still would love to see original pieces because for a lot of people, sometimes they like seeing the brush strokes, they like studying it.

She thinks AI can help artists improve their skills.

LYUBAA FILIMONOVA: I do believe that AI is a tool to help learn from. Like I said, AI shouldn't be a, like, start and stop, end goal.

As an artist, she thinks image generators like DALL-E can help as reference points, and spark off creative processes.

LYUBAA FILIMONOVA: It's like using the search bar, except you're getting answers faster and a little like, you know, closer to what you actually want to see.

Again, DALL-E is an image generator. Users can create relatively sophisticated images by entering text prompts. For example, if you want to create a futuristic sci-fi robot, you could type in “create a futuristic sci-fi robot in the style of a famous comic book author,” and the software will create an image matching that description.

Some artists use image generators to brainstorm and create new artistic concepts.

My friend Lyubaa cautions, though, that too much reliance on generators dulls the creative mind.

LYUBAA FILIMONOVA: You need to unlock your fullest potential and relying on AI to do that for you is not very helpful.

Lyubaa sees the value of using AI in art, but she still thinks that tech companies should pay artists for using their works.

She’s learned online that people have lost their jobs to AI.

But she doesn’t see or want the future of art creation to be either AI or humans.


You want to be better learning from AI rather than learning to make a better AI.

So, people shouldn't like think, oh, there's an AI crisis, time to get rid of everybody cause now they can be replaced.

They need to better think of how can we better like, like, you know, support both AI and real people.

Both Lyubaa and my friend and classmate Brianna Bell at Santa Clara High do believe, however, that the technology companies have exploited artists’ work online.


I'm not a very big fan of it cuz it's, it usually involves a lot of stolen art. Like it can't really generate art without already having stuff, but usually the stuff that it has in its forms is not stuff that people have asked for. They just kind of like slap it in there and like, let's generate new stuff.

But it's not new. It's usually stolen.

That was my friend Brianna talking about generative AI.

She’s expressing a concern that’s on many people’s minds. In early July, for example, comedian Sarah Silverman and a group of novelists sued OpenAI for allegedly pirating their books.

And a group of artists sued generative AI companies earlier this year. They allege that Stability AI, Midjourney, and DeviantArt violated their copyrights and are unfairly competing with them.

“The AI companies will replace the very artists whose stolen works power these AI products with whom they are competing,” the lawyers for the plaintiffs said.

Generative AI can synthesize styles and come up with new ideas from text prompts. These models are not pirating work in the sense that they’re not selling exact replicas of artists’ work.

And that’s what makes all this so murky, observes Rivka Louissaint. She is a multimedia artist at the University of San Francisco who works primarily in the offline world.


We're always being inspired by each other. I say “copy” because it is a machine and it is copying. I think even, even when artists are “copying each other,” they'll say like, “I was inspired by so-and-so artist.”

I feel like that's also the part where the AI can't tell you “I was inspired by this artist.” You might command it to make a piece in a specific artist style. But how does the artist get the benefit from that production?

Louissaint is not worried about AI shaping her career, but rather who gets to use and control the technology.


It's a tool that I see that can change for the better or for the worse. And it can only be used for the better if regular people and working people and artists get to say on how those technologies are implemented, how they're being used and for what purpose.

Here’s one way to think about what’s happening.

ERIC GOLDMAN: Copyright law doesn't protect an artist's style.

That’s Santa Clara University law professor Eric Goldman. He says that the generative AI companies’ activities are legal.

Say, for example, a person wanted to create a story using Stan Lee’s Spider-Man. They wouldn’t be able to outright use Spider-Man as a character in their story without paying licensing fees, as the character is an explicit expression of Stan Lee’s work.

What Goldman is saying is that current US copyright law allows OpenAI to train its algorithms to learn how Stan Lee draws, and then serve up other kinds of artwork using his sketching techniques.

Like my friend Lyubaa, Goldman thinks AI will generate new opportunities for those who can adapt to the tech.

He may be right. A job listing for a generative AI director at Netflix was recently reported to pay 650,000 dollars.


There will be a whole new class of innovators and creative types who will figure out how to use generative AI models to do something that they could never have done before, um, that simply wasn't technically possible before.

I hope my friends do something really cool and amazing with the new tools. But I also want them to be able to make a living.

And I think that boils down to giving individuals some level of control over how tech companies use their work.

This could involve some form of licensing or contract system, the Aspen Institute’s B Cavello suggests. But it’s not clear how such a licensing system might work.

B CAVELLO: There can be affirmative positive licensing that says, I actively allow this work to be used in these ways, or here are the ways that I allow this work to be used.

B isn’t alone in thinking along these lines.

A couple of high-profile technologists argued in POLITICO, a D.C. news organization, that creators of all kinds should receive periodic paychecks from AI companies that use their work or data.

The thinking is that the AI companies are like oil companies in Alaska extracting and monetizing public resources. They should compensate the public just as the extractive companies pay Alaskans every year.

All these ideas and more are likely to be discussed on Capitol Hill this Fall. Senate Majority Leader Chuck Schumer is convening meetings to come up with ideas on how best to regulate the technology.

This will give us a chance to pose our own questions to create a future we want to live in.

I asked Cavello what kinds of questions we should ask. They’ve been involved in working on federal legislation called the Algorithmic Accountability Act.

Here’s what they suggested.


What level of transparency is there? What do we, as members of the public, or what do we as workers, get to understand about this technology, and how it works in our lives?

THEO: AI is obviously made of algorithms. But we should ask these companies how those algorithms are used in our day-to-day lives, and how they work, especially if something goes wrong, Cavello says.


When it comes to companies who are building these tools, we may want to ask them, what limitations on use do they have? Is there anything that they do not allow their users to do using their tools? Are they evaluating these systems for the impacts that they have on people?

Remember my thoughts about the ad, where I worried about companies automating humans out of the creative process?

It’s important to ask all these questions now, because we should be able to collectively determine how that scenario plays out in the future.

We face many choices.

Erik Brynjolfsson is the Stanford Digital Economy Lab’s director and an economist who is frequently consulted by policymakers about AI’s impact on labor markets.

Earlier this year, in a YouTube discussion hosted by the Brookings Institution, he argued that CEOs should focus on how AI can augment human capabilities and make us more productive, and valuable, as workers.


“Far too many people are, they ask me, what’s going to happen, what’s the technology going to do? And what they should be asking, what do we want to do with the technology? What are our values? So we need to think more about our values now than we ever have in history.”

AI is a powerful and expensive tool that has the potential to widen inequality in all fields beyond art. As Brynjolfsson says:


We need to think very carefully about what kind of world we want to live in. Do we want a world with widely shared prosperity? Do we want a world where everybody has a stake, where everybody has some bargaining power, where everyone has a role to play?

I hope so. I hope that tech companies will address some of the artists’ concerns and give them the ability to opt out. In this new era, Congress should start making workable adjustments to copyright law to better adapt to this new tech.

I hope artists will either be able to adapt or accept these new technologies in their creative processes, instead of worrying about their art being stolen.

I have the same wish for the rest of the workers throughout the economy. As many others have said, we need to improve and adapt, or risk being removed entirely.

As for me, I’m going to keep my eye on new career opportunities. AI has created a new niche, with subsectors like responsible tech and jobs in AI ethics. For sure, it will carve out its own job category in the near future.

While we probably shouldn’t lose sleep every night over a robot taking all the jobs, we should take the time to think about who gets to set all the rules. We do need to figure out how to harness the technology properly.

But it should be people like my friends who are artists – and my generation who are entering the workplace – who get to have a say over how the tech companies design their products.

Tech companies should give artists some control over how their work is used. They should be required to license creators’ work.

And I hope that we can find a clear definition of AI so we can create some coherent policies that will spread the wealth and balance of power.

If we try to automate everything in our lives, we might come around full circle and work even harder to try and solve the problems that we created for ourselves.

Subscribe to tbh on NPR One, Apple Podcasts, Spotify, or your favorite podcast player.

This story was made to be heard. Click the play button above to listen.