
When bestselling thriller author Douglas Preston started using OpenAI's new chatbot, ChatGPT, he was surprised by how deeply GPT seemed to know the books he had written. When prompted, it provided detailed plot summaries and descriptions of even minor characters. He believes it could only have done this after reading his books.
Large language models — the AI that programs like ChatGPT rely on — don't come fully formed. They first have to be trained on large amounts of text. Douglas Preston and 16 other writers, including George R.R. Martin, Jodi Picoult, and Jonathan Franzen, were convinced that their novels had been used to train GPT without their permission. So in September, they sued OpenAI for copyright infringement.
It seems to be happening a lot lately: one big tech company or another "moves fast and breaks things," probing the boundaries of what it can and can't do without permission. On today's show, we try to figure out what OpenAI is allegedly doing by training its AI on a ton of copyrighted material. Is that a good thing? Is it a bad thing? Is it legal?
This episode was hosted by Keith Romer and Erika Beras, and produced by Willa Rubin and Sam Yellowhorse Kesler. Kenny Malone edited and Sierra Juarez fact-checked. Robert Rodriguez engineered. Alex Goldmark is the executive producer for Planet Money.
Subscribe to Planet Money+ to help support Planet Money and get bonus episodes, in Apple Podcasts or at plus.npr.org/planetmoney.
These links are always free: Apple Podcasts, Spotify, Google Podcasts, NPR One, or wherever you get your podcasts.
Find more Planet Money: Facebook / Instagram / TikTok / our weekly newsletter.
Music: Elias Music, "Elevated"; Universal Music Productions, "Don't Cross the Line" and "This is Not Goodbye."

