Reddit’s would-be competitor Digg just shut down because it couldn’t handle the bots overrunning its site. On Wednesday, Reddit said it was taking on the challenge itself.
The company will begin flagging automated accounts that provide services to users, similar to how “good bots” are labeled on X, and will now require accounts suspected of being bots to verify they’re human.
Reddit emphasizes that this isn’t a site-wide verification requirement; it will only kick in if something suggests an account isn’t human, such as its activity on the site or other technical signals. If the account can’t pass the check, it may be banned, Reddit said.
To detect potential bots, Reddit uses specialized tools that look at account-level signals and other factors—like how quickly the account tries to write or post content. However, using AI to compose posts or comments is not against its policies (although community moderators can set their own rules).
To verify that an account is human, Reddit will lean on third-party tools such as passkeys from Apple, Google, and YubiKey; biometric services like Face ID; or even Sam Altman’s World ID. In some countries, government IDs may be used. Reddit notes that this last option may be required in places like the UK, Australia, and some US states due to local age verification regulations, but it’s not the company’s preferred method.
“If we must verify that an account is human, we will do so in a privacy-first way,” wrote Reddit co-founder and CEO Steve Huffman in Wednesday’s announcement. “Our goal is to confirm that there is a person behind the account, not who it is. The goal is to increase the transparency of what’s on Reddit while maintaining the anonymity that makes Reddit unique. You don’t have to sacrifice one for the other.”
The changes aim to address the growing problem of bots on social media platforms and the wider web, where they are often used to influence politics, spread misinformation, inflate popularity, covertly sell products, generate fake ad clicks, and more. According to Cloudflare, bot traffic will surpass human traffic by 2027 once bots like web crawlers and AI agents are included in the mix.
Reddit, in particular, has become a popular target for bots that try to manipulate narratives, astroturf for companies or their products, repost links, spam, drive traffic, conduct research, and more. And because Reddit content is used for AI training thanks to lucrative deals with AI model providers, bots are suspected of even posting questions on the site to generate more training data, particularly in areas where AI lacks information.
Reddit’s other co-founder, Alexis Ohanian, has also weighed in on a related idea known as the “dead internet theory”: the notion that bots outnumber humans online and that the vast majority of content, interactions, and activity on the web is automated or AI-generated rather than created by humans. In the age of AI agents, the theory is becoming reality.
The company announced last year that it would begin requiring human verification in response to the growing number of bots and the need to meet “evolving regulatory requirements.” But Reddit notes that today’s solutions, which Huffman recently discussed on the TBPN podcast, aren’t ideal.
“The best long-term solutions will be decentralized, personalized, private, and ideally require no identity at all,” Huffman wrote in today’s announcement.
Alongside these changes, Reddit said it will continue to crack down on bots and spam, taking down an average of 100,000 accounts a day, and will keep relying on user reports of suspected bots, with improved tools still to come. Developers running so-called good bots can learn more about labeling them with the new “APP” tag in the r/redditdev community.
