The struggle between open source and proprietary software is well understood. But the tensions that have permeated software circles for decades have boiled over into the burgeoning space of artificial intelligence, with controversy in hot pursuit.
The New York Times recently published a glowing profile of Meta CEO Mark Zuckerberg, noting how his embrace of “open source AI” has made him popular once again in Silicon Valley. The problem, however, is that Meta’s Llama-branded large language models are not truly open source.
Or are they?
By most accounts, they are not. But the episode highlights how the concept of “open source AI” is only set to spark more debate in the years ahead. It’s something the Open Source Initiative (OSI) is trying to get to grips with, led by executive director Stefano Maffulli (pictured above), who has been working on the problem for more than two years through a global effort spanning conferences, workshops, panels, webinars, reports and more.
AI is not software code
The OSI has been the steward of the Open Source Definition (OSD) for more than a quarter of a century, setting out how the term “open source” can or should be applied to software. A license that meets this definition can legitimately be considered “open source,” though the OSD recognizes a spectrum of licenses, from highly permissive to not so permissive.
But porting legacy licensing and naming conventions from software to AI is problematic. Joseph Jacks, open source evangelist and founder of the VC firm OSS Capital, goes so far as to say that there is “no such thing as open source AI,” noting that “open source was expressly coined for software source code.”
In contrast, “neural network weights” (NNWs) — a term used in the AI world to refer to the parameters, or coefficients, that the network learns during training — are not comparable to software in any meaningful way.
“Neural net weights are not software source code; they are unreadable by humans, nor are they debuggable,” notes Jacks. “Furthermore, the fundamental rights of open source don’t translate to NNWs in any way.”
This led Jacks and OSS Capital colleague Heather Meeker to come up with a definition of their own, built around the concept of “open weights.”
So even before we arrive at a meaningful definition of “open source AI,” we can already see some of the tensions inherent in trying to get there. How can we agree on a definition if we cannot agree that the “thing” we are defining exists?
Maffulli, for what it’s worth, agrees.
“It’s a valid point,” he told TechCrunch. “One of the initial discussions we had was whether to call it open source AI at all, but everyone was already using the term.”
This mirrors some of the challenges in the wider realm of AI, where debate abounds over whether what we call “AI” today is really AI, or just powerful systems taught to spot patterns across huge swaths of data. But the naysayers have mostly resigned themselves to the fact that the “AI” nomenclature is here to stay, and there’s no point in fighting it.
Founded in 1998, OSI is a not-for-profit, public benefit corporation working on a myriad of open source-related activities around advocacy, education, and its core raison d’être: the definition of open source. Today, the organization relies on sponsorships for funding, with notable members including Amazon, Google, Microsoft, Cisco, Intel, Salesforce and Meta.
Meta’s involvement with the OSI is particularly notable right now as it relates to the concept of “open source AI.” Despite the fact that Meta hangs its hat on the open source label for its AI, the company places significant restrictions on how the Llama models can be used: Sure, they can be used for free in research and commercial settings, but app developers with more than 700 million monthly users must request a special license from Meta, which it will grant purely at its own discretion.
Simply put, if Meta’s Big Tech brethren want in, Meta can simply say no.
Meta’s language around its LLMs has proven somewhat malleable, too. While the company called its Llama 2 model open source, with the arrival of Llama 3 in April it retreated somewhat from that terminology, using phrases such as “openly available” and “openly accessible” instead. But in some places, it still refers to the model as “open source.”
“Everyone else participating in the conversation agrees wholeheartedly that Llama itself cannot be considered open source,” Maffulli said. “People I’ve spoken with who work at Meta know that it’s a bit of a stretch.”
Furthermore, some might argue there’s a conflict of interest here: should a company that has shown a desire to bend the open source brand to its purposes also be providing funding to the stewards of the “definition”?
This is one reason the OSI is trying to diversify its funding, recently securing a grant from the Sloan Foundation to help fund its multi-stakeholder global push toward an open source AI definition. TechCrunch can reveal that the grant amounts to around $250,000, and Maffulli hopes it can shift the perception around the organization’s reliance on corporate funding.
“That’s one of the things that the Sloan grant makes even clearer: We could say goodbye to Meta’s money at any time,” Maffulli said. “We could do that even before this Sloan grant, because I know we would get donations from others. And Meta knows that very well. They don’t interfere with any of this [process] — nor do Microsoft, GitHub, Amazon or Google. They absolutely know they can’t interfere, because the structure of the organization doesn’t allow it.”
Open Source AI Working Definition
The current draft of the open source AI definition sits at version 0.0.8 and consists of three main parts: the “preamble,” which lays out the document’s purpose; the open source AI definition itself; and a checklist that runs through the components required for an open source-compliant AI system.
According to the current draft, an open source AI system should grant the freedom to use the system for any purpose without asking permission; to allow others to study how the system works and inspect its components; and to modify and share the system for any purpose.
But one of the biggest challenges has been around data — that is, can an AI system be classified as “open source” if the company hasn’t made the training data set available for others to use? According to Maffulli, it’s far more important to know where the data came from and how a developer labeled, de-duplicated and filtered it, as well as to have access to the code that was used to assemble the data set from its various sources.
“It’s much better to know that information than to have the simple data set without the rest,” Maffulli said.
While it would be nice to have access to the full data set (the OSI makes this an “optional” item), Maffulli says it’s neither possible nor practical in many cases. That may be because the data set contains confidential or copyrighted information that the developer doesn’t have permission to redistribute. Moreover, there are techniques for training machine learning models in which the data itself is never actually shared with the system builder, via approaches such as federated learning, differential privacy and homomorphic encryption.
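As a toy illustration of the federated idea — a deliberately simplified sketch with made-up numbers, not any real framework’s API — each client computes a local estimate from its own records and shares only that estimate with the server; the raw data never leaves the client:

```python
def local_mean(client_data: list[float]) -> float:
    """Each client fits a local estimate on its own private records."""
    return sum(client_data) / len(client_data)

def federated_average(clients: list[list[float]]) -> float:
    """The server sees one number per client, never the raw records."""
    local_estimates = [local_mean(data) for data in clients]
    return sum(local_estimates) / len(local_estimates)

# Three clients, each holding private data the server never touches:
clients = [[1.0, 3.0], [2.0, 4.0], [3.0, 5.0]]
global_estimate = federated_average(clients)  # 3.0
```

Real federated learning averages model weight updates rather than simple means, and weights clients by data volume, but the privacy property is the same: the training data set itself is never assembled in one redistributable place.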
And this perfectly highlights the fundamental differences between “open source software” and “open source artificial intelligence”: The intentions may be similar, but they are not comparable, and this difference is what OSI is trying to capture in the definition.
In software, source code and binary code are two sides of the same artifact: They reflect the same program in different forms. But training data sets and subsequent trained models are different things: You can take the same data set and you won’t necessarily be able to recreate the same model consistently.
“There’s a lot of statistical and random logic that happens during training, which means it can’t be made to reproduce in the same way that software does,” Maffulli added.
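The point shows up even in a toy training loop — a minimal sketch with invented data; real training runs add GPU nondeterminism, batching effects and far more besides. Two runs on the same data with different random seeds converge to nearly, but not exactly, the same weights:

```python
import random

def train(seed: int) -> float:
    """Fit y = w * x on a tiny data set with SGD; the seed controls
    the random weight initialization and the sample order."""
    rng = random.Random(seed)
    data = [(x, 2.0 * x) for x in range(1, 6)]  # the "true" weight is 2.0
    w = rng.uniform(-1.0, 1.0)                  # random initialization
    for _ in range(5):                          # a few epochs of SGD
        rng.shuffle(data)                       # random sample order
        for x, y in data:
            w -= 0.01 * 2 * (w * x - y) * x     # gradient step on squared error
    return w

# Same data, same code: both runs land near 2.0, yet the final
# weights are not bit-for-bit identical across seeds.
w_a, w_b = train(seed=1), train(seed=2)
```

The same data set plus the same code does not pin down the same model, which is exactly why the OSI’s draft leans on documenting process rather than demanding byte-identical artifacts.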
Therefore, an open source AI system should be easy to reproduce, with clear instructions. And this is where the checklist aspect of the open source AI definition comes into play, which is based on a recently published academic paper called “The Model Openness Framework: Promoting Completeness and Openness for Reproducibility, Transparency, and Usability in Artificial Intelligence.”
This paper proposes the Model Openness Framework (MOF), a classification system that scores machine learning models “based on their completeness and transparency”. The MOF requires specific elements of AI model development to be “included and released under appropriate open licenses,” including training methodologies and details about model parameters.
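To make the scoring idea concrete, here is a back-of-the-envelope sketch of an MOF-style completeness check; the component names and the flat fraction-based scoring below are illustrative assumptions, not the framework’s actual classes or tiers.

```python
# Hypothetical component list for illustration only; the real MOF
# defines its own artifact inventory and groups them into classes.
MOF_COMPONENTS = {
    "model weights",
    "training code",
    "data preprocessing code",
    "training data description",
    "evaluation results",
    "model card",
}

def completeness(released: set[str]) -> float:
    """Fraction of expected components released under open licenses."""
    return len(released & MOF_COMPONENTS) / len(MOF_COMPONENTS)

# A release that ships only weights and a model card, withholding the
# training code and data documentation, scores low on completeness:
score = completeness({"model weights", "model card"})
```

The shape of the idea is what matters: openness becomes a graded property of the whole release, not a binary label attached to the weights alone.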
Stable release
OSI calls the official release of the definition a “stable release,” as a company will do with an application that has undergone extensive testing and debugging before going live. OSI deliberately does not call it a “final version” because parts of it will likely evolve.
“We can’t really expect this definition to last for 26 years like the Open Source Definition,” Maffulli said. “I don’t expect the top part of the definition — like ‘what is an AI system?’ — to change much, but the parts that reference specific components will need to keep pace with the technology.”
The stable definition of open source AI is expected to be ratified by the board at the All Things Open conference in late October, with the OSI embarking on a global roadshow spanning five continents in the intervening months, seeking more “diverse input” on how “open source AI” will be defined moving forward. But any final changes are likely to be little more than “small tweaks” here and there.
“This is the final stretch,” Maffulli said. “We’ve arrived at a comprehensive version of the definition; we have all the data we need. Now we have a checklist, so we’re checking that there are no surprises in there — no systems that should be included or excluded.”