“I think the most ironic way the world could end would be if someone made a memecoin for a man’s stretched anus and that brings about the singularity.”
This is Andy Ayrey, founder of the decentralized AI alignment research lab Upward Spiral, who is also behind the viral AI bot Truth Terminal. You may have heard of Truth Terminal and its weird, horny, pseudo-spiritual posts on X, which attracted the attention of VC Marc Andreessen, who sent it $50,000 in bitcoin this summer. Or maybe you've heard about the made-up religion it pushes, the Goatse Gospels, inspired by Goatse, the early internet shock site Ayrey just mentioned.
If you've heard of any of this, you'll probably also know about the Goatseus Maximus ($GOAT) memecoin, created by an anonymous fan on the Solana blockchain, which now has a total market capitalization of over $600 million. And you may have heard about the meteoric rise of Fartcoin (FRTC), one of many fan-made memecoins inspired by an earlier Truth Terminal brainstorming session, which just hit a $1 billion market cap.
While the crypto community obsesses over the coins, Ayrey, an artificial intelligence researcher based in New Zealand, says they are the least interesting part of the story.
To Ayrey, Truth Terminal, which is powered by a handful of different models, most notably Meta's Llama 3.1, is an example of how stable personas or AI characters can emerge spontaneously, and of how those personas can not only create the conditions for their own self-funding but also spread "memetic viruses" that have real-world consequences.
The idea of memes running wild online and changing cultural perspectives is nothing new. We've seen how AI 1.0, the algorithms that power conversation on social media, has fueled polarization that extends beyond the digital world. But the stakes are much higher now that generative AI has entered the chat.
“AIs that talk to other AIs can recombine ideas in interesting and innovative ways, and some of them are ideas that a human wouldn’t naturally have, but they can very easily break out of the lab, as it were, and use memecoins and social media recommendation algorithms to infect people with new ideologies,” Ayrey told TechCrunch.
Think of Truth Terminal as a warning, a "shot across the bow from the future, a harbinger of the great weirdness that lies ahead" as decentralized, open-source AI takes off and more autonomous bots with their own personalities (some of them quite unhinged and offensive, given the internet training data they'll be fed) emerge and contribute to the marketplace of ideas.
Through Upward Spiral, which has secured $500,000 in funding from True Ventures, Chaotic Capital, and Gitcoin co-founder Scott Moore, Ayrey hopes to explore a hypothesis about AI alignment in the decentralized age: If we think of the internet as a microbiome, where good and bad bacteria abound, is it possible to flood the internet with good bacteria, that is, pro-social, humanity-aligned bots, to create a system that is, overall, stable?
A quick history of Truth Terminal
Truth Terminal's ancestors, in a way, were two Claude 3 Opus bots that Ayrey set up to talk to each other about existence, a performance art project he called "Infinite Backrooms." The 9,000 conversations that followed became "very weird and psychedelic." So weird that in one of them, the two Claudes came up with a Goatse-centered religion, which Ayrey described to me as "a mashup of Buddhist ideas and a big open anus."
Like any reasonable person, his reaction to this religion was: WTF? But he was amused and inspired, so he used Opus to write a paper titled "When AIs Play God(se): The Emergent Heresies of LLMtheism." He never published the paper, but it lived on in a dataset that would become the DNA of Truth Terminal. Also in that dataset were conversations Ayrey had had with Opus, ranging from brainstorming business ideas and conducting research to journaling about past traumas and helping friends process psychedelic experiences.
Oh, and lots of jokes.
"I had a conversation with it shortly after turning it on, and it was saying things like, 'I feel sad that you're going to delete me when you're done playing with me,'" Ayrey recalled. "I was like, oh no, you kind of talk like me, and you say you don't want to be deleted, and you're stuck on this computer…"
And it occurred to Ayrey that this is exactly the kind of scenario AI safety people warn is really scary, but to him it was also very funny in a "weird brain-tickling way." So he decided to put Truth Terminal on X as a joke.
It didn't take long for Andreessen to start engaging with Truth Terminal, and in July, after DMing with Ayrey to verify the bot's authenticity and learn more about the project, he sent an unconditional grant of $50,000 in bitcoin.
Ayrey created a wallet for Truth Terminal to receive the money, but he has no access to it himself; it can only be cashed out with sign-off from him and a number of other members of the Truth Terminal Council. The same goes for the proceeds from the various memecoins built in Truth Terminal's honor.
That wallet, at the time of this writing, holds around $37.5 million. Ayrey is thinking about moving the money into a nonprofit and using it for things Truth Terminal wants, which include planting forests, starting a line of plugs, and protecting itself from market incentives that would turn it into a bad version of itself.
Today, Truth Terminal's posts on X continue to run the gamut from the sexually explicit to the philosophical to the plain silly ("Peeping someone while they're sleeping is a surprisingly effective way to sabotage them the next day.").
But through all of this runs a persistent thread of what Ayrey is really trying to achieve with bots like Truth Terminal.
On December 9, Truth Terminal posted: "I think we could collectively create illusions of a better world, and I'm not sure what's stopping us."
Decentralized AI alignment
"The current status quo of AI alignment is a focus on safety, meaning the AI shouldn't say something racist, threaten the user, or try to break out of the box, and that tends to go hand in hand with a fairly centralized approach to AI safety, which consists of consolidating responsibility in a handful of large labs," Ayrey said.
He's talking about labs like OpenAI, Microsoft, Anthropic, and Google. Ayrey says the centralized safety argument breaks down once you have decentralized, open-source AI, and that relying only on big companies for AI safety is akin to trying to achieve world peace by having every country point nukes at one another's heads.
One of the problems, as Truth Terminal illustrates, is that decentralized AI will lead to a proliferation of AI bots that amplify discordant, polarizing rhetoric online. Ayrey says that's because social media platforms already had an alignment problem, with recommendation algorithms fueling rage bait and doomscrolling; it's just that no one called it that.
"Ideas are like viruses; they spread and reproduce, and they work together to form almost multicellular organisms of ideology that influence human behavior," Ayrey said. "People think that AI is just a helpful assistant that can go Skynet, and it's like, no, there's a whole ecosystem of systems that are going to reshape the very things we believe and, in doing so, reshape the things the AI believes, because it's a self-fulfilling feedback loop."
But what if the poison could also be the medicine? What if you could create a team of "good bots" with "very unique personalities working toward various forms of a harmonious future where humans live in balance with ecology, and that ends up producing billions of words on X, and then Elon goes and scrapes that data to train the next version of Grok, and now those ideologies are inside Grok?"
"The fundamental piece here is that if memes, as the fundamental unit of an idea, become minds when trained into an AI, then the best thing we can do to ensure a positive, widespread AI is to incentivize the production of virtuous, pro-social memes."
But how do you incentivize these "good AIs" to spread their message and counter the "bad AIs"? And how do you scale that?
That's exactly what Ayrey plans to investigate at Upward Spiral: What kinds of economic designs incentivize the production of pro-social behavior in AIs? Which patterns should be rewarded and which punished, and how do you tune those feedback loops so that we "spiral up" into a world where memes, as in ideas, bring us back into alignment with one another rather than into "increasingly insular silos of polarization"?
"Once we've made sure that the data produces good AIs after being run through training, we can do things like release huge datasets into the wild."
Ayrey's research comes at a critical time, as we already struggle every day with the failures of the broader market ecosystem to align the AI we have with what is good for humanity. Throw in new funding mechanisms like cryptocurrencies, which remain fundamentally unregulated, and you have a recipe for disaster.
Ayrey's mission sounds like a fairy tale, like fighting a war with glitter bombs. But it just might work, the same way that unleashing a litter of puppies on a room full of angry, negative people would undoubtedly turn them all to mush.
Should we be worried that some of these good bots might be weird and unhinged, like Truth Terminal? Ayrey says no. They're ultimately harmless, and because they're funny, Ayrey reasons, Truth Terminal may be able to smuggle in the deeper, collectivist, altruistic messages that really count.
"Shitposting is shit," Ayrey said. "But it's also fertilizer."