I fed Google’s new NotebookLM summarization feature my article about the potential dangers of AI scraping and it’s as creepy and self-aware as you would think

Yes, I know it’s a bit hypocritical to be critical of AI, specifically generative AI, and then use it like some sort of sick party trick. However, I’m a journalist and this is like doing science, kinda. NotebookLM, Google’s AI summarization system, has a new feature that lets you guide its audio summaries, focusing them on certain topics and sources, and it’s both quite smart and sort of haunting.

Announced today, the latest addition is live on Google Labs, the search company’s site for AI tools, where users can test it out for themselves. I wanted to give it a piece of writing that is somewhat nuanced yet that I know quite well, and it’s hard to find a better choice than something you’ve written yourself.

I handed it a piece I had written earlier today, which is critical of opt-out policies when it comes to AI data scraping, and listened as two hosts summarized it for the purpose of note-taking. Apart from calling opt-out the “Opt O U T” model, it kinda nails it.

The two AI hosts manage to get to my basic opinion in a roundabout way, and they sound like they’re earnestly and level-headedly criticizing the thing that made them exist in the first place (data scraping). They then go on to argue that users should be more proactive about their data use and that all hope isn’t lost in the AI data war.

For the second interpretation of the same article, I asked the AI hosts to focus a bit more on Elon Musk and his controversies, just to see how far outside of my article they would go.

Apart from a little ire at Musk’s name, the summary continues to focus on the same basic point, and even makes human-like slips in its speech patterns, like saying X, then calling it Twitter. It fits in “ums” and “ahs” every now and then, which is surprisingly lifelike.

We noticed many of these same things when testing out the podcast function earlier this month, but the Notebook function is a step above, as you can ask it follow-up questions about the article. I asked it for the basic arguments in my piece and it gave a succinct four-point answer, going over a few rationales for being critical of data scraping, and specifically the problems with opt-out policies.

When I ran it a second time, I caught a few similarities, like the male host calling AI companies sneaky in both versions. The female host also says some variation of “The future is shaped by the now” twice on the second attempt.

However, the confidence with which the hosts speak feels worrisome to me. There’s a feedback loop here, where, at a moment’s notice, you can have a professional-sounding host telling you “the truth” through a source you’ve shared. In the case of my article, my argument, whether you agree with it or not, is relatively straightforward.

It gets some small bits of information wrong, like saying company owners have to opt out, when it’s actually users, but it’s mostly on the money. How does something like this prepare a potential reader for something deeper and more philosophical?

And, as a result, what makes writers keep writing when their work can be summarized by two very friendly voices who can position the information in whatever way the reader desires? Fundamentally, understanding the words in front of us requires much greater skill than reading an AI’s summary, and language is so multifaceted that we shouldn’t trust an AI to get it right.

Like I said at the start, this feels like a spooky party trick, but language feels so much bigger than anything an LLM, however large, can really understand.
