Meta has released the latest entry in its Llama series of open AI models: Llama 3. Or, more accurately, the company has debuted two models in its new Llama 3 family, with the rest to come at an unspecified future date.
Meta describes the new models — Llama 3 8B, which contains 8 billion parameters, and Llama 3 70B, which contains 70 billion parameters — as a “big leap” in performance compared to the previous-generation Llama models, Llama 2 8B and Llama 2 70B. (Parameters essentially define an AI model’s skill at a problem, like analyzing and generating text; models with higher parameter counts are, generally speaking, more capable than models with lower parameter counts.) In fact, Meta says that, for their respective parameter counts, Llama 3 8B and Llama 3 70B — trained on two custom-built 24,000-GPU clusters — are among the best-performing AI models available today.
That’s quite a claim. So how does Meta support it? Well, the company points to the Llama 3 models’ scores on popular AI benchmarks like MMLU (which attempts to measure knowledge), ARC (which attempts to measure skill acquisition), and DROP (which tests a model’s reasoning over chunks of text). As we’ve written before, the usefulness — and validity — of these benchmarks is up for debate. But for better or worse, they remain one of the few standardized ways that AI players like Meta evaluate their models.
Llama 3 8B outperforms other open models such as Mistral’s Mistral 7B and Google’s Gemma 7B, both of which contain 7 billion parameters, on at least nine benchmarks: MMLU, ARC, DROP, GPQA (a set of biology-, physics- and chemistry-related questions), HumanEval (a code generation test), GSM-8K (math word problems), MATH (another math benchmark), AGIEval (a set of problem-solving tests), and BIG-Bench Hard (a commonsense reasoning evaluation).
Now, Mistral 7B and Gemma 7B aren’t exactly on the cutting edge (Mistral 7B was released last September), and on some of the benchmarks Meta reports, Llama 3 8B scores only a few percentage points higher than either. But Meta also claims that its larger Llama 3 model, Llama 3 70B, is competitive with flagship generative AI models, including Gemini 1.5 Pro, the latest in Google’s Gemini series.
Llama 3 70B beats Gemini 1.5 Pro on MMLU, HumanEval and GSM-8K, and, while it doesn’t rival Anthropic’s most performant model, Claude 3 Opus, Llama 3 70B scores better than the second-weakest model in the Claude 3 series, Claude 3 Sonnet, on five benchmarks (MMLU, GPQA, HumanEval, GSM-8K and MATH).
For what it’s worth, Meta also developed its own test set covering use cases ranging from coding and creative writing to reasoning to summarization, and — surprise! — Llama 3 70B came out on top against Mistral’s Mistral Medium model, OpenAI’s GPT-3.5, and Claude Sonnet. Meta says it barred its modeling teams from accessing the set to maintain objectivity, but obviously, given that Meta devised the test itself, the results should be taken with a grain of salt.
More qualitatively, Meta says users of the new Llama models should expect more “steerability,” a lower likelihood of refusing to answer questions, and higher accuracy on trivia questions, on questions pertaining to history and STEM fields like engineering and science, and on general coding recommendations. This is due in part to a much larger dataset: a collection of 15 trillion tokens, or a staggering ~750,000,000,000 words — seven times the size of the Llama 2 training set. (In the AI field, “tokens” refers to subdivided bits of raw data, like the syllables “fan,” “tas” and “tic” in the word “fantastic.”)
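To make the “fantastic” example concrete, here is a toy greedy subword tokenizer in Python. It is purely illustrative: real tokenizers, Llama’s included, use vocabularies of tens of thousands of pieces learned from data (typically via byte-pair encoding), and the tiny hand-picked vocabulary below is a hypothetical stand-in.

```python
def tokenize(text, vocab):
    """Greedily split `text` into the longest pieces found in `vocab`."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible piece starting at position i first.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # No vocabulary piece matched: fall back to a single character.
            tokens.append(text[i])
            i += 1
    return tokens

# Hypothetical three-piece vocabulary, echoing the article's example.
vocab = {"fan", "tas", "tic"}
print(tokenize("fantastic", vocab))  # ['fan', 'tas', 'tic']
```

If the vocabulary instead contained the whole word “fantastic,” the text would become a single token — which is why token counts, like the 15 trillion figure above, depend on the tokenizer and don’t map one-to-one onto words.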
Where did this data come from? Good question. Meta wouldn’t say, disclosing only that it drew from “publicly available sources,” that the set included four times more code than the Llama 2 training dataset, and that 5% of the set is non-English data (spanning ~30 languages) to improve performance in languages other than English. Meta also said it used synthetic data (i.e. AI-generated data) to create longer documents for the Llama 3 models to train on, a somewhat controversial approach due to the potential performance drawbacks.
“While the models we’re releasing today are only fine-tuned for English outputs, the increased data diversity helps the models better recognize nuances and patterns, and perform strongly across a variety of tasks,” Meta writes in a blog post shared with TechCrunch.
Many AI makers see training data as a competitive advantage and thus keep it, and information pertaining to it, close to the chest. But training data details are also a potential source of intellectual property lawsuits, another disincentive to disclose much. A recent report revealed that Meta, in its quest to keep pace with AI rivals, at one point used copyrighted e-books for AI training despite warnings from the company’s own lawyers. Meta and OpenAI are the subject of an ongoing lawsuit brought by authors, including comedian Sarah Silverman, over the vendors’ alleged unauthorized use of copyrighted data for training.
So what about toxicity and bias, two other common problems with generative AI models (including Llama 2)? Does Llama 3 improve in those areas? Yes, claims Meta.
Meta says it has developed new data-filtering pipelines to boost the quality of its model training data and has updated its pair of generative AI safety suites, Llama Guard and CyberSecEval, to attempt to prevent the misuse of, and unwanted text generations from, Llama 3 models and others. The company is also releasing a new tool, Code Shield, designed to detect code from generative AI models that might introduce security vulnerabilities.
However, filtering isn’t foolproof — and tools like Llama Guard, CyberSecEval, and Code Shield only go so far. (See: Llama 2’s tendency to make up answers to questions and leak private health and financial information.) We’ll have to wait and see how the Llama 3 models perform in the wild, including testing by academics on alternative benchmarks.
Meta says the Llama 3 models — which are available to download now, and which power Meta’s Meta AI assistant on Facebook, Instagram, WhatsApp, Messenger and the web — will soon be hosted in managed form across a wide range of cloud platforms, including AWS, Databricks, Google Cloud, Hugging Face, Kaggle, IBM’s WatsonX, Microsoft Azure, Nvidia’s NIM and Snowflake. In the future, versions of the models optimized for hardware from AMD, AWS, Dell, Intel, Nvidia and Qualcomm will also be made available.
The Llama 3 models may be widely available. But you’ll notice we’re using “open” to describe them, as opposed to “open source.” That’s because, despite Meta’s claims, the Llama family of models isn’t as no-strings-attached as one might think. Yes, the models are available for both research and commercial applications. However, Meta forbids developers from using Llama models to train other generative models, and app developers with more than 700 million monthly users must request a special license from Meta, which the company will — or won’t — grant at its discretion.
More capable Llama models are on the horizon.
Meta says it’s currently training Llama 3 models with more than 400 billion parameters — models able to “converse in multiple languages,” take in more data, and understand images and other modalities as well as text, which would bring the Llama 3 series in line with open releases like Hugging Face’s Idefics2.
“Our goal in the near future is to make Llama 3 multilingual and multimodal, have longer context, and continue to improve overall performance across core [large language model] capabilities such as reasoning and coding,” Meta writes in a blog post. “More to come.”
Indeed.