Keeping up with an industry as fast-paced as artificial intelligence is a tall order. So, until an AI can do it for you, here’s a helpful roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.
By the way, TechCrunch plans to launch an AI newsletter soon. Stay tuned. In the meantime, we’re upping the cadence of our semi-regular AI column, previously twice a month (or so), to weekly — so be on the lookout for more editions.
This week in AI, OpenAI revealed that it is exploring how to ‘responsibly’ create AI-generated porn. Yes, you heard that right. The new NSFW policy, announced in a document intended to peel back the curtain on — and gather feedback about — the company’s AI guidelines, is meant to start a conversation about how, and where, the company might allow explicit images and text in its AI products, OpenAI said.
“We want to make sure people have maximum control as long as it doesn’t violate the law or other people’s rights,” Joanne Jang, a member of the product team at OpenAI, told NPR. “There are creative cases in which content involving sexuality or nudity is important to our users.”
This isn’t the first time OpenAI has telegraphed its willingness to wade into controversial territory. Earlier this year, Mira Murati, the company’s CTO, told The Wall Street Journal that she “wasn’t sure” whether OpenAI would eventually allow its video generation tool, Sora, to be used to create adult content.
So what should we make of this?
There is a future in which OpenAI opens the door to AI-generated porn and everything turns out… nice. I don’t think Jang is wrong to say that there are legitimate forms of artistic expression for adults — expression that could be created with the help of AI-powered tools.
But I’m not sure we can trust OpenAI — or any AI vendor, for that matter — to get it right.
Consider the copyright angle, for one. OpenAI’s models have been trained on vast amounts of public web content, some of which is undoubtedly pornographic in nature. But OpenAI hasn’t licensed all of this content — or even allowed creators to opt out of training until relatively recently (and even then, only certain forms of training).
It’s hard enough to make a living from adult content as it is, and if OpenAI embraced AI-generated porn, creators would face even tougher competition — competition built on the backs of those very creators’ work, no less.
The other problem, to my mind, is the shortcomings of current safeguards. OpenAI and its competitors have been refining their filtering and moderation tools for years. Yet users are constantly discovering workarounds that allow them to abuse companies’ AI models, apps, and platforms.
Just this past January, Microsoft was forced to make changes to Designer, its image creation tool that uses OpenAI models, after users found a way to generate nude images of Taylor Swift. On the text generation side, it is trivial to find chatbots built on top of supposedly “safe” models, such as Anthropic’s Claude 3, that will readily spit out erotica.
Artificial intelligence has already enabled a new form of sexual abuse. Elementary and middle school students are using AI-powered apps to “undress” photos of their classmates without those classmates’ consent. And a 2021 survey conducted in the UK, New Zealand, and Australia found that 14% of respondents aged 16 to 64 had been victims of deepfake imagery.
New laws in the US and elsewhere aim to combat this. But the jury is out on whether the justice system — one that already struggles to prosecute most sex crimes — can regulate an industry as fast-moving as artificial intelligence.
Frankly, it’s hard to imagine an approach to AI-generated porn that OpenAI could take without risk. Perhaps the company will reconsider its stance. Or maybe — against the odds — it will find a better way. Whatever the case, it looks like we’ll find out sooner rather than later.
Here are some other notable AI stories from the past few days:
- Apple’s AI plans: Apple CEO Tim Cook revealed a few details about the company’s plans to move forward with artificial intelligence during last week’s earnings call with investors. Sarah has the whole story.
- Enterprise GenAI: The CEOs of Dropbox and Figma — Drew Houston and Dylan Field, respectively — have invested in Lamini, a startup building a generative AI hosting platform aimed at enterprise organizations.
- AI for customer service: Airbnb is launching a new feature that lets hosts use AI-powered suggestions to answer guests’ questions, such as sending a guest the checkout guide.
- Microsoft limits AI use: Microsoft has reaffirmed its ban on US police departments using generative AI for facial recognition. It also barred law enforcement agencies worldwide from applying facial recognition technology to body cameras and dash cams.
- Money for the cloud: Alternative cloud providers like CoreWeave are raising hundreds of millions of dollars as the artificial intelligence boom drives demand for low-cost hardware to train and run models.
- RAG has its limitations: Hallucinations are a big problem for businesses looking to incorporate generative AI into their operations. Some vendors claim they can eliminate them using a technique called RAG (retrieval-augmented generation). But those claims are greatly exaggerated.
- Summary of Vogels’ meeting: Amazon CTO Werner Vogels created an open source meeting summary app called Distill. As you might expect, it relies heavily on Amazon products and services.
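For the curious, the RAG technique mentioned above boils down to fetching relevant documents and stuffing them into the model’s prompt so it answers from sources rather than from memory. A minimal sketch — the corpus, the keyword-overlap scoring, and all names here are hypothetical stand-ins; real systems use vector embeddings and an actual LLM call:

```python
import re

def tokens(text):
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    scored = sorted(corpus,
                    key=lambda doc: len(tokens(query) & tokens(doc)),
                    reverse=True)
    return scored[:k]

def build_prompt(query, corpus):
    """Prepend retrieved context so the model answers from sources —
    which reduces, but does not eliminate, hallucinations."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical three-document corpus for illustration.
corpus = [
    "CoreWeave is an alternative cloud provider for GPU workloads.",
    "Distill is an open source meeting summarization app.",
    "Airbnb lets hosts use AI suggestions to answer guest questions.",
]

prompt = build_prompt("What is Distill?", corpus)
print(prompt)
```

The key limitation the bullet points at is visible even here: retrieval only helps if the right document exists and actually gets retrieved; when it doesn’t, the model is free to make something up anyway.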
