The online podcast recording platform Riverside has come out with its own version of a year-end review, à la Spotify’s “Wrapped.” The recap, called “Rewind,” creates three custom videos for podcasters.
Instead of sharing stats like how many minutes you’ve recorded or how many episodes you’ve done, Riverside created a 15-second laugh collage showing a quick succession of clips of my co-host and me cracking each other up. The next video is similar, except it’s a supercut of us saying “um” over and over.
Riverside then scans the AI-generated transcripts of your recordings to find the word you said most often (we’re assuming it filters out common words like “and” or “the”).
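Riverside hasn’t said how the feature works under the hood, but the basic mechanics are presumably just a word-frequency count over the transcript with common words filtered out. Here’s a minimal Python sketch of that idea; the stopword list is an assumption for illustration, not Riverside’s actual implementation:

```python
from collections import Counter
import re

# Hypothetical stopword list; Riverside hasn't published what it filters out.
STOPWORDS = {"the", "and", "a", "an", "to", "of", "i", "you", "it", "that", "in", "um"}

def top_word(transcript: str) -> str:
    """Return the most frequently spoken non-stopword in a transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    word, _ = counts.most_common(1)[0]
    return word

# A transcript heavy on "book" surfaces "book" as the top word.
print(top_word("We love this book. The book club read the book twice."))
```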
It’s a bit ironic, but on my internet culture podcast, my co-host and I said “book” more often than any other word (this was probably skewed by our subscription-only “book club” recordings… or the fact that my co-host has a book coming out, which we plug non-stop).
Another show on our podcast network, Spirits, said “Amanda” more often than any other word (not because its hosts are obsessed with me, but because that show also has a host named Amanda).
In the podcast network’s Slack, we traded our Rewind videos. There’s something inherently funny about a video of people saying “um” over and over again. But we also know what these videos represent: our creative tools are becoming increasingly saturated with AI features, many of which we don’t want or need. Riverside Rewind points to the uselessness of these very tools: good for a quick laugh, but with no real substance.
While I loved Riverside’s AI recap, it arrives at a time when my peers in the industry are losing opportunities to create, edit, and produce podcasts to the very AI tools that made our Rewind videos. And while AI lets us automate some tasks, like editing out “umms” and dead air, making a podcast isn’t all that mechanical.
AI can quickly create a transcript of my podcast, which is important for accessibility, automating a task that used to be incredibly time-consuming and tedious. But AI can’t make editorial choices about how to shape audio or video to tell a story effectively. Unlike the human editors I work with, AI can’t tell when a tangential conversation in a podcast is funny and when it’s boring and should be cut.
And even as personalized AI audio tools like Google’s NotebookLM rise in popularity, AI’s ability to serve as a creation tool has suffered high-profile failures recently.
Last week, the Washington Post began rolling out personalized AI-generated podcasts about the day’s news.
You can understand why this would seem like a “good” idea to profit-hungry executives: instead of paying a team to do the labor-intensive work of researching, recording, editing, and distributing a daily show, you could automate it. Except you can’t.
The AI-generated podcasts turned out to include made-up quotes and factual errors, which is existentially dangerous for a news organization. According to Semafor, the Post’s internal testing found that between 68% and 84% of the AI podcasts failed to meet publication standards. This points to a fundamental misunderstanding of how LLMs work: you can’t train an LLM to distinguish fact from fiction, because it’s designed to produce the most statistically likely response to a prompt, which isn’t always the most truthful one, especially in breaking news.
Riverside did a great job creating a fun end-of-year product, but it’s also a reminder: artificial intelligence is seeping into every industry, including podcasting. And in this moment of the AI boom, as companies pile into the new technology, we need to be able to discern when AI serves us and when it’s just a useless gimmick.
