Keeping up with an industry as fast-paced as artificial intelligence is a tall order. So, until an AI can do it for you, here’s a helpful roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.
By the way, TechCrunch plans to launch an AI newsletter on June 5th. Stay tuned. In the meantime, we’re increasing the pace of our semi-regular AI column, previously twice a month (or so), to weekly — so be on the lookout for more editions.
This week in AI, OpenAI launched discounted plans for nonprofit and education customers and pulled back the curtain on its latest efforts to prevent bad actors from abusing its AI tools. Not much to criticize there — at least not in this writer’s opinion. But I will note that the flurry of announcements seemed timed to counter the company’s recent bad press.
Let’s start with Scarlett Johansson. OpenAI removed one of the voices used by its AI-powered chatbot ChatGPT after users pointed out that it sounded oddly similar to Johansson’s. Johansson later released a statement saying she had hired legal counsel to inquire about the voice and get precise details about how it was developed — and that she had refused repeated pleas from OpenAI to license her voice for ChatGPT.
Now, a report in The Washington Post suggests that OpenAI did not actually seek to clone Johansson’s voice and that any similarities were coincidental. But why, then, did OpenAI CEO Sam Altman contact Johansson and urge her to reconsider two days before a splashy demo featuring the soundalike voice? It’s suspect, to say the least.
Then there are OpenAI’s trust and safety issues.
As we reported earlier this month, OpenAI’s since-disbanded Superalignment team, responsible for developing ways to govern and steer “superintelligent” AI systems, was promised 20% of the company’s computing resources — but only ever (and rarely) received a fraction of that. That (among other reasons) led to the resignation of the team’s two co-leads, Jan Leike and Ilya Sutskever, OpenAI’s former chief scientist.
Nearly a dozen safety experts have left OpenAI in the past year; several, including Leike, have publicly voiced concerns that the company is prioritizing commercial projects over safety and transparency efforts. In response to the criticism, OpenAI formed a new committee to oversee safety and security decisions related to the company’s projects and operations. But it staffed the committee with company insiders — including Altman — rather than outside observers. This as OpenAI reportedly considers ditching its nonprofit structure in favor of a traditional for-profit model.
Such incidents make it harder to trust OpenAI, a company whose power and influence are growing daily (see: its deals with news publishers). Few companies, if any, are worthy of trust. But OpenAI’s market-disrupting technologies make the lapses all the more troubling.
It doesn’t help that Altman himself isn’t exactly a beacon of truth.
When news of OpenAI’s aggressive tactics toward former employees broke — tactics that involved threatening employees with the loss of their vested equity, or blocking stock sales, if they didn’t sign restrictive non-disclosure agreements — Altman apologized and claimed he was unaware of the policies. But, according to Vox, Altman’s signature is on the incorporation documents that established the policies.
And if former OpenAI board member Helen Toner — one of the board members who tried to oust Altman from his position late last year — is to be believed, Altman withheld information, misrepresented things that were happening at OpenAI and in some cases outright lied to the board. Toner says the board learned of ChatGPT’s launch through Twitter, not from Altman; that Altman gave misleading information about OpenAI’s formal safety practices; and that Altman, unhappy with an academic paper Toner authored that cast a critical light on OpenAI, tried to manipulate board members into pushing Toner off the board.
None of this bodes well.
Here are some other notable AI stories from the past few days:
- Voice cloning made easy: A new report from the Center for Countering Digital Hate finds that AI-powered voice cloning services make faking a politician’s statement fairly trivial.
- Google’s AI Overviews struggle: AI Overviews, the AI-generated search results that Google began rolling out more widely earlier this month in Google Search, need some work. The company admits this – but claims that it’s iterating quickly. (We’ll see.)
- Paul Graham on Altman: In a series of posts on X, Paul Graham, the co-founder of startup accelerator Y Combinator, dismissed claims that Altman was pressured to step down as Y Combinator’s chairman in 2019 because of potential conflicts of interest. (Y Combinator has a small stake in OpenAI.)
- xAI Raises $6B: Elon Musk’s artificial intelligence startup xAI has raised $6 billion in funding as Musk raises capital to aggressively compete with rivals like OpenAI, Microsoft and Alphabet.
- Perplexity’s new AI feature: With its new Perplexity Pages feature, AI startup Perplexity aims to help users create reports, articles or guides in a more visually appealing format, Ivan reports.
- AI models’ favorite numbers: Devin writes about the numbers different AI models choose when they’re asked to pick a random number. As it turns out, they have favorites — a reflection of the data each was trained on.
- Mistral releases Codestral: Mistral, the Microsoft-backed French AI startup valued at $6 billion, has released its first generative AI model for coding, called Codestral. But it can’t be used commercially, thanks to Mistral’s rather restrictive license.
- Chatbots and privacy: Natasha writes about the European Union’s ChatGPT working group and how it offers a first look at untangling the AI chatbot’s privacy compliance.
- ElevenLabs sound generator: Voice cloning startup ElevenLabs has introduced a new tool, first announced in February, that lets users generate sound effects through prompts.
- Interconnects for AI chips: Tech giants including Microsoft, Google and Intel – but not Arm, Nvidia or AWS – have formed an industry group, the UALink Promoter Group, to help develop next-generation AI chip interconnect components.