Artificial general intelligence (AGI), often referred to as “strong artificial intelligence,” “full artificial intelligence,” “human-level artificial intelligence” or “general intelligent action,” represents a major future leap in the field of artificial intelligence. Unlike narrow AI, which is tailored for specific tasks such as spotting product defects, summarizing the news, or building a website for you, AGI would be able to perform a wide range of cognitive tasks at or above human levels. Addressing the press this week at Nvidia’s annual GTC developer conference, CEO Jensen Huang seemed visibly bored with discussing the topic, not least because, he says, he is frequently misquoted on it.
The frequency of the question makes sense: The idea raises existential questions about humanity’s role in, and control of, a future where machines can think, learn, and outperform humans in nearly every field. At the core of this concern lies the unpredictability of AGI’s decision-making processes and goals, which may not align with human values or priorities (a concept that has been explored in depth in science fiction since at least the 1940s). There is concern that once AGI reaches a certain level of autonomy and capability, it may become impossible to contain or control, leading to scenarios where its actions cannot be predicted or reversed.
When the sensationalist press asks for a timeline, it is often trying to bait AI professionals into putting a date on the end of humanity, or at least of the current status quo. Needless to say, AI CEOs aren’t always keen to engage with the subject.
Huang, however, did spend some time telling the press what he does think about the matter. Predicting when we’ll see a passable AGI depends on how you define AGI, Huang argues, and he draws some parallels: Even with the complications of time zones, you know when the new year arrives and 2025 begins. If you drive to the San Jose Convention Center (where this year’s GTC conference is being held), you generally know you’ve arrived when you can see the huge GTC banners. The critical point is that we can agree on how to measure that you’ve arrived, whether temporally or geospatially, at where you hoped to go.
“If we define AGI as something very specific, a set of tests where a software program can do very well (or maybe 8% better than most people), I think we’ll get there within 5 years,” Huang explains. He suggests those tests could be legal tests, logic tests, financial tests, or perhaps the ability to pass a preliminary examination. But unless the questioner can be very specific about what AGI means in the context of the question, he isn’t willing to make a prediction. Fair enough.
AI hallucination is solvable
In Tuesday’s Q&A session, Huang was asked what to do about AI hallucinations, the tendency of some AIs to give answers that sound plausible but aren’t grounded in fact. He appeared visibly frustrated by the question and suggested that hallucinations are easily solvable: just make sure the answers are well researched.
“Add a rule: For every answer, you have to look up the answer,” Huang says, referring to this practice as “retrieval-augmented generation” and describing an approach similar to basic media literacy: Consider the source and the context. Compare the facts contained in the source with known truths, and if the answer is inaccurate, even partially, discard the entire source and move on to the next one. “The AI shouldn’t just answer; it should first do research to determine which of the answers are best.”
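To make that look-it-up-first idea concrete, here is a minimal sketch of such a check, not Nvidia’s implementation: the sources, the “known truths,” and the helper names are all illustrative assumptions.

```python
# Minimal sketch: retrieve candidate sources, discard any source whose facts
# contradict known truths, and answer only from what survives the check.
from dataclasses import dataclass


@dataclass
class Source:
    name: str
    facts: dict  # claim -> value asserted by this source (illustrative)


# Hypothetical reference facts the checker compares against.
KNOWN_TRUTHS = {"capital_of_france": "Paris"}


def is_reliable(source: Source) -> bool:
    """Discard the entire source if any overlapping fact is inaccurate."""
    return all(
        KNOWN_TRUTHS.get(claim, value) == value
        for claim, value in source.facts.items()
    )


def answer(question_key: str, sources: list) -> str:
    """Answer only from sources that pass the fact check; otherwise abstain."""
    for source in sources:
        if is_reliable(source) and question_key in source.facts:
            return source.facts[question_key]
    return "I don't know the answer to your question."


if __name__ == "__main__":
    sources = [
        Source("blog_post", {"capital_of_france": "Lyon"}),      # inaccurate, discarded
        Source("encyclopedia", {"capital_of_france": "Paris"}),  # passes the check
    ]
    print(answer("capital_of_france", sources))        # -> "Paris"
    print(answer("super_bowl_winner_2025", sources))   # -> abstains
```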
For mission-critical answers, such as health advice, Nvidia’s CEO suggests that checking multiple resources and known sources of truth may be the way forward. Of course, this means that the model generating an answer must have the option of saying, “I don’t know the answer to your question,” or “I can’t come to a consensus on what the correct answer to this question is,” or even something like, “Hey, the Super Bowl hasn’t happened yet, so I don’t know who won.”
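Continuing the sketch above, one way to read that multi-source suggestion is as a simple consensus rule with explicit abstention; the agreement threshold and wording here are assumptions for illustration only.

```python
# Sketch: only return an answer when enough independent sources agree;
# otherwise say so explicitly instead of guessing.
from collections import Counter


def consensus_answer(candidates: list, min_agreement: int = 2) -> str:
    """Return the most common candidate answer if enough sources agree."""
    if not candidates:
        return "I don't know the answer to your question."
    top_answer, count = Counter(candidates).most_common(1)[0]
    if count < min_agreement:
        return "I can't come to a consensus on what the correct answer to this question is."
    return top_answer


if __name__ == "__main__":
    print(consensus_answer(["Paris", "Paris", "Lyon"]))  # -> "Paris"
    print(consensus_answer(["Paris", "Lyon"]))           # -> abstains
```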