NotebookLM, Google’s note-taking tool, has an AI-generated voice feature that sounds surprisingly close to a real human voice. One former radio host argues it isn’t just close but an outright ripoff of his voice, and has taken that claim to court.
Former NPR radio host David Greene filed his case in California on January 23, and his lawyer argues, “This case arises from Google’s deliberate acts of theft. Google used Mr. Greene’s voice without authorization and then used those stolen copies to develop, train, and refine its AI broadcasting product, NotebookLM.”
According to The Washington Post, Greene said, “I was, like, completely freaked out… It’s this eerie moment where you feel like you’re listening to yourself.”
Citing Greene’s work on NPR shows as well as the political radio show Left, Right & Center, the filing argues:
“Google sought to replicate Mr. Greene’s distinctive voice—a voice made iconic over decades of decorated radio and public commentary—to create synthetic audio products that mimic his delivery, cadence, and persona.”
David Greene’s voice.
The filing argues that podcast and radio hosts are typically paid for the use of their voices, since a recognisable voice lends credence and legitimacy to a show, and that Google can effectively bypass that payment with a similar-sounding synthetic voice.
This is something I noticed with the tool back in 2024. The AI podcast hosts announce facts with confidence, which is fair and probably good when they actually are facts. But because you can plug in any source you like, that same confident delivery can be applied to pretty much any idea or theory.
Greene’s representative argues that “Failure to pay the negotiated and agreed-upon price for such professional services, is a violation of multiple statutes and common law”, and that “Defendant Google is attempting to disrupt the podcast industry.”
The case itself compares a clip of David Greene talking about Trump’s Big Beautiful Bill to an AI-generated summary of Greene’s analysis (alongside a few others). You can listen to both clips below and judge the similarity for yourself.
An excerpt from David Greene’s coverage of the Big Beautiful Bill
NotebookLM’s analysis of David Greene’s clip
Notably, there doesn’t appear to be firm proof that Greene’s voice was used in Google’s training data. Greene’s team notes “an independent forensic software company specializing in voice recognition” analysed the two voices, and “the tests indicated a confidence rating of 53-60% (on a -100% to 100% scale) that Mr. Greene’s voice was used to train the software driving NotebookLM.”
In conversation with The Washington Post, Adam Eisgrau, the Chamber of Progress’s Senior Director of AI, Creativity and Copyright Policy, says, “If a California jury finds that the voice of NotebookLM is fully Mr. Greene’s, he may win. If they find that it’s got attributes he also possesses, but is fundamentally an archetypal anchorperson’s tone and delivery it learned from a large dataset, he may not.”
Google has responded to the suit via The Washington Post, calling the allegations “baseless”.
José Castañeda, a Google spokesperson, said “the sound of the male voice in NotebookLM’s Audio Overviews is based on a paid professional actor Google hired.”
This mirrors a similar case from 2024, when Scarlett Johansson noticed that GPT-4o’s “Sky” voice was “eerily similar” to her own. OpenAI soon paused its use of the voice, but the situation looked particularly strange because, as Johansson pointed out, OpenAI had previously approached her about lending her likeness, and she had declined.
As Greene’s case was only recently filed, and there is no firm proof that his voice is in the training data, we don’t yet have a clear indication of how it will go. Either way, the outcome will feed into a larger precedent over which data AI companies can and can’t use, and how different their output needs to be from the material that trained it.
