AI systems and large language models must be trained on massive amounts of data to be accurate, but they should not be trained on data they have no right to use. OpenAI’s licensing agreements with The Atlantic and Vox last week show that both sides of the table are interested in striking these AI training content licensing deals.
Human Native AI is a London-based startup building a marketplace to broker such deals between the many companies building LLM projects and those willing to license data to them.
The goal is to help AI companies find data to train their models while ensuring that rights holders opt in and are compensated. Rights holders upload their content at no charge and connect with AI companies to strike revenue-share or subscription deals. Human Native AI also helps rights holders prepare and price their content, and monitors for copyright infringements. Human Native AI takes a cut of each deal and charges AI companies for its transaction and monitoring services.
James Smith, CEO and co-founder, told TechCrunch that the idea for Human Native AI grew out of his previous experience working at Google’s DeepMind, which also ran into the problem of not having enough good data to properly train its systems. He then saw other AI companies facing the same problem.
“We feel like we’re in the Napster era of generative AI,” Smith said. “Can we get to a better place? Can we make content acquisition easier? Can we give creators some level of control and compensation? I kept thinking, why isn’t there a marketplace for this?”
He brought the idea to his friend Jack Galilee, an engineer at GRAIL, on a walk in the park with their respective children, as Smith had done with many other potential startup ideas. But unlike with previous pitches, Galilee said they had to build this one.
The company launched in April and is currently in beta. Smith said demand from both sides has been really encouraging, and the startup has already signed a handful of partnerships that will be announced in the near future. Human Native AI announced a £2.8 million seed round led by LocalGlobe and Mercuri, two U.K. micro VCs, this week. Smith said the company plans to use the funding to build out its team.
“I’ve been CEO of a company for two months and I’ve been able to have meetings with CEOs of 160-year-old publishing companies,” Smith said. “This suggests that there is a lot of demand on the publishing side. Likewise, every conversation with a large AI company goes exactly the same way.”
While it’s still early days, what Human Native AI is building looks like a missing piece of infrastructure in the growing AI industry. The big AI players need lots of data to train on, and giving rights holders an easier way to work with them, while keeping control over how their content is used, seems like an approach that could leave both sides of the table happy.
“Sony Music just sent letters to 700 AI companies asking them to cease and desist,” Smith said. “That’s the scale of the market and of the potential customers that may be acquiring data. The number of publishers and rights holders could be in the thousands, if not tens of thousands. We believe that’s why this infrastructure is needed.”
I also think this could be even more beneficial for smaller AI companies that don’t necessarily have the resources to ink a deal with Vox or The Atlantic in order to access training data. Smith said they’re hoping for that too, noting that all of the notable licensing deals so far have involved the biggest AI players. He hopes Human Native AI can help level the playing field.
“One of the biggest challenges with content licensing is that you have a big upfront cost and you’re massively limited in who you can work with,” Smith said. “How do we increase the number of buyers for your content and lower the barriers to entry? We think that’s really exciting.”
The other interesting piece here is the future potential of the data collected by Human Native AI. Smith said that in the future they will be able to provide rights holders with more clarity on how to price their content based on a history of deal data on the platform.
It’s also a smart time for Human Native AI to launch. Smith said that with the EU AI Act evolving and potential AI regulation in the U.S. on the way, it will only become more pressing for AI companies to source their data ethically, and to have the evidence to prove it.
“We’re optimistic about the future of AI and what it’s going to do, but we have to make sure that as an industry we’re being responsible and not decimating the industries that have gotten us to this point,” Smith said. “That wouldn’t be good for human society. We need to make sure we find the right ways to allow people to participate. We are AI optimists on the side of humans.”