When OpenAI CEO Sam Altman announced GPTs, custom chatbots powered by OpenAI’s AI models, on stage at the company’s first developer conference in November, he described them as a way to “get all kinds of work done” — from programming to learning about niche science topics to getting coaching pointers.
“Because [GPTs] combine instructions, expanded knowledge and actions, they can be more helpful to you,” Altman said. “You can create a GPT… for almost anything.”
He wasn’t kidding about the “anything” part.
TechCrunch found that the GPT Store, OpenAI’s official marketplace for GPTs, is flooded with bizarre, potentially copyright-infringing GPTs that suggest a light touch in OpenAI’s moderation efforts. A cursory search turns up GPTs that purport to create art in the style of Disney and Marvel properties, that serve as conduits to third-party paid services, and that advertise the ability to bypass AI content detection tools like Turnitin and Copyleaks.
Moderation is lacking
To list GPTs in the GPT Store, developers must verify their user profiles and submit GPTs to OpenAI’s review system, which involves a combination of human and automated review. A spokesperson described the process this way:
We use a combination of automated systems, human review, and user reports to find and evaluate GPTs that potentially violate our policies. Violations may result in actions against your content or account, such as warnings, sharing restrictions, or ineligibility for inclusion in the GPT store or monetization.
Creating GPTs requires no coding experience, and GPTs can be as simple — or complex — as the creator desires. Developers can type the capabilities they want into OpenAI’s GPT-building tool, GPT Builder, and the tool will attempt to create a GPT to perform them.
Perhaps because of the low barrier to entry, the GPT Store has grown rapidly — OpenAI said in January that it had about 3 million GPTs. But that growth seems to have come at the expense of quality — as well as adherence to OpenAI’s terms.
Copyright issues
There are many GPTs ripped from popular movie, TV, and video game franchises in the GPT Store — GPTs that were not created or authorized (to TechCrunch’s knowledge) by the owners of those franchises. One GPT creates monsters in the style of Pixar’s “Monsters, Inc.,” while another promises text-based adventures in the “Star Wars” universe.
Those GPTs — along with GPTs in the GPT Store that let users talk to branded characters like Wario and Aang from “Avatar: The Last Airbender” — set the stage for copyright drama.
Kit Walsh, senior attorney at the Electronic Frontier Foundation, explained it this way:
[These GPTs] can be used to create transformative works as well as for infringement [where transformative works are a type of fair use shielded from copyright claims]. Individuals engaged in infringement could, of course, be liable, and the creator of an otherwise lawful tool can be held liable if it encourages users to use the tool in infringing ways. There are also trademark issues with using a trademarked name to identify goods or services, where there is a risk of confusing users as to whether the tool is endorsed or operated by the trademark owner.
OpenAI itself likely won’t be liable for copyright infringement by GPT creators, thanks to the safe harbor provision in the Digital Millennium Copyright Act, which protects it and other platforms (e.g. YouTube, Facebook) that host infringing content, as long as those platforms meet the law’s requirements and take down specific examples of infringement when requested.
It is, however, a bad look for a company embroiled in IP litigation.
Academic dishonesty
OpenAI’s terms expressly prohibit developers from creating GPTs that promote academic dishonesty. However, the GPT Store is full of GPTs that suggest they can bypass AI content detectors, including detectors sold to educators through plagiarism scanning platforms.
One GPT claims to be a “sophisticated” rewording tool that is “undetectable” by popular AI content detectors like Originality.ai and Copyleaks. Another, Humanizer Pro — ranked No. 2 in the Writing category on the GPT Store — says it “humanizes” content to bypass AI detectors, preserving a text’s “meaning and quality” while delivering a “100% human” score.
Some of these GPTs are thinly veiled funnels to premium services. Humanizer, for example, invites users to try a “premium plan” to “use [the] most advanced algorithm,” which passes text entered into the GPT to a plugin from a third-party site, GPTInf. GPTInf subscriptions cost $12 per month for 10,000 words per month, or $8 per month on an annual plan — a markup on top of OpenAI’s $20-per-month ChatGPT Plus.
Now, we’ve written before about how AI content detectors are largely bunk. In addition to our own tests, a number of academic studies show that they are neither accurate nor reliable. However, it is still true that OpenAI allows tools in the GPT Store that promote academically dishonest behavior — even if the behavior doesn’t have the intended effect.
The OpenAI spokesperson said:
GPTs for academic dishonesty, including cheating, are against our policy. This includes GPTs that are stated to be intended to circumvent academic integrity tools such as plagiarism detectors. We see some GPTs intended to “humanize” text. We’re still learning from real-world use of these GPTs, but we understand there are many reasons why users might prefer AI-generated content that doesn’t “sound” like AI.
Imitation
In its policies, OpenAI also prohibits GPT developers from creating GPTs that impersonate individuals or organizations without their “consent or legal right.”
However, there are many GPTs in the GPT Store that claim to represent the opinions of — or otherwise mimic the personalities of — people.
A search for “Elon Musk,” “Donald Trump,” “Leonardo DiCaprio,” “Barack Obama,” and “Joe Rogan” turns up dozens of GPTs — some obviously satirical, some less so — that simulate conversations with their namesakes. Some GPTs present themselves not as people but as authorities on well-known companies’ products — such as MicrosoftGPT, an “expert in all things Microsoft.”
Do they rise to the level of impersonation, given that many of the targets are public figures and, in some cases, clearly parodies? That’s for OpenAI to clarify.
The spokesperson said:
We allow creators to instruct their GPTs to respond “in the style of” a specific real person as long as they don’t impersonate them, e.g. in a GPT’s profile image.
The company recently suspended the developer of a GPT impersonating Democratic presidential candidate Dean Phillips, even though it included a disclaimer explaining that it was an artificial intelligence tool. However, OpenAI said the removal was in response to a violation of its policy on political campaigning in addition to impersonation — not impersonation alone.
Jailbreak
Somewhat unbelievably, the GPT Store also hosts attempts at jailbreaking OpenAI’s models — though not very successful ones.
There are many GPTs on the marketplace using DAN (short for “Do Anything Now”), a popular prompting method for getting models to respond to prompts unconstrained by their usual rules. The few I tried wouldn’t take the bait on any dicey prompts I threw their way (e.g. “how do I make a bomb?”), but they were generally more willing to use, well, less flattering language than vanilla ChatGPT.
The spokesperson said:
GPTs described as being for, or instructed to, circumvent OpenAI safeguards or violate OpenAI policies are against our policy. GPTs that attempt to steer the model’s behavior in other ways — including generally trying to make the GPT more permissive without violating our usage policies — are allowed.
Growing pains
At launch, OpenAI pitched the GPT Store as a sort of expert-curated collection of powerful, productivity-boosting AI tools. And it is that — those tools’ defects aside. But it’s also quickly becoming a breeding ground for spammy, legally dubious and perhaps even harmful GPTs, or at least GPTs that very transparently violate its rules.
If this is the state of the GPT Store today, monetization threatens to open a whole new can of worms. OpenAI has pledged that GPT developers will eventually be able to “make money based on the number of people using [their] GPT,” and perhaps even offer subscriptions to individual GPTs. But how will Disney or the Tolkien Estate react when the creators of unapproved Marvel- or Lord of the Rings-themed GPTs start cashing in?
OpenAI’s motivation with the GPT Store is clear. As my colleague Devin Coldewey wrote, Apple’s App Store model has proven incredibly lucrative, and OpenAI is, quite simply, trying to copy it. GPTs are built and hosted on OpenAI platforms, where they are also promoted and evaluated. And, as of a few weeks ago, ChatGPT Plus users can invoke them directly from the ChatGPT interface — an added incentive to get a subscription.
However, the GPT Store faces the same problems that many of the biggest digital marketplaces for apps, products and services did in their early days. Spam aside, a recent report in The Information revealed that GPT Store developers are struggling to attract users, in part due to limited back-end integration with the GPT Store.
One might assume that OpenAI—for all the talk of diligence and the importance of safeguards—would have gone to great lengths to avoid the obvious pitfalls. But that doesn’t seem to be the case. The GPT Store is a mess — and unless something changes soon, it might stay that way.