Hallucinations, the lies that generative AI models essentially tell, are a big problem for businesses looking to integrate the technology into their operations.
Because models have no real intelligence and are simply predicting words, images, speech, music and other data probabilistically, they sometimes get it wrong. Very wrong. In a recent article in the Wall Street Journal, a source recounts an example in which Microsoft’s generative AI invented meeting attendees and implied that conference calls were about topics that were never actually discussed on the call.
As I wrote a while back, hallucinations may be an intractable problem with today’s transformer-based model architectures. But a number of generative AI vendors suggest they can be more or less eliminated through a technical approach called retrieval augmented generation, or RAG.
Here’s how one vendor, Squirro, pitches it:
At the core of the offering is the concept of Retrieval Augmented LLMs or Retrieval Augmented Generation (RAG) built into the solution… [our generative AI] is unique in its promise of zero hallucinations. Every piece of information it generates is traceable to a source, ensuring reliability.
Here’s a similar pitch from SiftHub:
Using RAG technology and enhanced large language models with industry-specific knowledge training, SiftHub enables companies to create personalized responses with zero hallucinations. This guarantees increased transparency and reduced risk and inspires complete confidence in using AI for all their needs.
RAG was pioneered by data scientist Patrick Lewis, a researcher at Meta and University College London and lead author of the 2020 paper that coined the term. Applied to a model, RAG retrieves documents possibly relevant to a question — for example, a Wikipedia page about the Super Bowl — using what is essentially a keyword search, and then asks the model to generate an answer given this additional context.
“When you interact with a generative AI model like ChatGPT or Llama and ask a question, the default is for the model to answer from its ‘parametric memory’ — that is, from the knowledge stored in its parameters as a result of training on huge amounts of data from the web,” explained David Wadden, a researcher at AI2, the AI-focused research arm of the nonprofit Allen Institute. “But, just as you’re likely to give more accurate answers if you have a reference [like a book or a file] in front of you, the same is true in some cases for models.”
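To make the mechanics concrete, here is a minimal sketch of that retrieve-then-generate loop. It is illustrative only: the toy corpus, the naive keyword-overlap scoring and the generate() stub are placeholders I've assumed for demonstration, not how any particular vendor or the Lewis et al. system implements retrieval.

```python
# Illustrative RAG sketch: keyword retrieval plus grounded generation.
# CORPUS, the scoring function and generate() are assumptions for demo purposes.

CORPUS = {
    "super_bowl_lviii.txt": "The Kansas City Chiefs won Super Bowl LVIII in February 2024.",
    "company_handbook.txt": "Employees accrue 20 days of paid leave per year.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def generate(prompt: str) -> str:
    """Stand-in for a call to a generative model (e.g. ChatGPT or Llama)."""
    raise NotImplementedError("Plug in a model API here.")

def answer(question: str) -> str:
    # Instead of relying only on parametric memory, the model is prompted
    # to ground its answer in the retrieved context.
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)
```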
RAG is undeniably useful: it allows things a model generates to be attributed to specific retrieved documents so their accuracy can be verified (and, as an added bonus, helps avoid potential copyright infringement). RAG also lets businesses that don’t want their documents used to train a model (say, companies in highly regulated industries like healthcare and law) have models draw on those documents in a more secure and temporary way.
But RAG certainly can’t stop a model from hallucinating. And it has limitations that many vendors ignore.
Wadden says RAG is most effective in “knowledge-intensive” scenarios where a user wants to use a model to address an “information need” — for example, to find out who won the Super Bowl last year. In these scenarios, the document that answers the question is likely to contain many of the same keywords as the question (e.g. “Super Bowl,” “last year”), making it relatively easy to find via keyword search.
Things get trickier with “reasoning-intensive” tasks like coding and math, where it’s harder to specify in a keyword-based search query the concepts needed to answer a request — much less identify which documents may be relevant.
Even with basic questions, models can be “distracted” by irrelevant content in documents, particularly long documents where the answer is not obvious. Or they may – for as yet unknown reasons – simply ignore the contents of retrieved documents, choosing instead to rely on their parametric memory.
RAG is also expensive in terms of the hardware required to implement it at scale.
This is because retrieved documents, whether from the web, an internal database, or somewhere else, must be stored in memory—at least temporarily—so that the model can refer to them. Another expense is the compute required to handle the increased context a model must process before generating its response. For a technology already notorious for the amount of computation and electricity it requires even for basic functions, this is a serious consideration.
That’s not to say that RAG can’t be improved. Wadden noted several ongoing efforts to train models to make better use of the documents they retrieve.
Some of these efforts include models that can “decide” when to make use of the documents, or models that can choose not to perform the retrieval in the first place if they deem it unnecessary. Others focus on ways to more efficiently index massive document datasets and improve search through better document representations—representations that go beyond keywords.
“We’re pretty good at retrieving documents based on keywords, but not so good at retrieving documents based on more abstract concepts, such as a proof technique needed to solve a math problem,” Wadden said. “Research is needed to create document representations and search techniques that can identify relevant documents for more abstract generation tasks. I think that’s mostly an open question at this point.”
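One direction the representation work points toward is encoding documents as dense vectors (embeddings) so retrieval can match meaning rather than shared keywords. The sketch below is a hedged illustration of that idea, not a description of any specific system; embed() stands in for whichever embedding model is used, and the similarity measure is a plain cosine score.

```python
import math

def embed(text: str) -> list[float]:
    """Stand-in for an embedding model that maps text to a vector."""
    raise NotImplementedError("Plug in an embedding model here.")

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve_by_meaning(query: str, docs: list[str], k: int = 3) -> list[str]:
    # Rank documents by semantic similarity to the query, so a request like
    # "how do I prove this bound?" can surface a relevant proof technique
    # even if the document never repeats the query's exact keywords.
    q_vec = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(embed(d), q_vec), reverse=True)
    return ranked[:k]
```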
So RAG can help reduce a model’s hallucinations — but it’s not the answer to all of AI’s hallucination problems. Beware of any vendor who tries to claim otherwise.