News & Analysis

DeepMind Defines Artificial General Intelligence

And it goes well beyond a mere definition, as Google’s AI team creates a whole new taxonomy

Back in 2020, when OpenAI hadn’t yet nailed ChatGPT, let alone released it to the world at large, its employees would vote on when artificial general intelligence, or AGI, would finally arrive and change the world as we know it. It was a fun way to debate superintelligence, but now Google’s DeepMind has not only created a new definition for AGI but also a whole new taxonomy.

True to form, Google has stepped in just as everyone has exhausted their views on what AGI means and most have agreed to disagree. The DeepMind team began with a review of existing definitions of AGI and drew out their common features. In doing so, they also outlined five levels of the technology, running from low to high, with ChatGPT listed under the first one.

Google’s DeepMind researchers put their findings together in a paper that dropped online without any fanfare sometime last week. Taking a peek at it, MIT Technology Review notes that the five levels of AGI are emerging, competent, expert, virtuoso and superhuman. While emerging covers chatbots such as ChatGPT and Bard, superhuman describes systems that can outperform humans.
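For readers who like to see the ladder laid out explicitly, here is a minimal sketch in Python encoding the five performance levels as reported. The level names follow the taxonomy described above; the numeric ordering and the example systems listed are only illustrative, not a formal benchmark from the paper.

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """The five AGI performance levels, ordered from low to high."""
    EMERGING = 1     # roughly where today's chatbots sit
    COMPETENT = 2
    EXPERT = 3
    VIRTUOSO = 4
    SUPERHUMAN = 5   # the top rung: outperforms humans

# Systems the article names as examples of the lowest level.
EXAMPLES = {
    "ChatGPT": AGILevel.EMERGING,
    "Bard": AGILevel.EMERGING,
}

if __name__ == "__main__":
    for system, level in EXAMPLES.items():
        print(f"{system}: level {level.value} ({level.name.lower()})")
```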

The four levels before AGI turns superhuman

The paper notes that, at the moment, nothing beyond emerging AGI exists, and predicts that at the superhuman level AGI would perform tasks better than humans, including some we cannot do at all, such as decoding others’ thoughts, predicting the future and talking to animals. Some researchers working on AGI feel that DeepMind has defined parameters against which future AI-led innovations can be mapped.

 

And does there need to be such a map? Well, the researchers say DeepMind’s classification will help humanity decide collectively on what could be the final frontier in the development of artificial general intelligence. Of course, others believe that classifications do little to halt human endeavor or greed in this matter, though most agree a 15-year horizon is needed.

The duo behind Google DeepMind’s paper are Shane Legg, one of the company’s co-founders and now its chief AGI scientist, and Meredith Ringel Morris, principal scientist for human and AI interaction at DeepMind. MIT Technology Review’s Will Douglas Heaven got some sound bites out of these top scientists.

DeepMind’s effort is to create a taxonomy 

According to Legg, who came up with the term AGI some 20 years ago, there are too many debates about what the term actually means, and so a need to sharpen up its meaning. He recalls that when the term was floated as a possible title for a book on AI, defining it in detail was thought unnecessary, as AGI was considered a field of study, not an artifact.

The idea at the time was to differentiate existing AI, such as IBM’s chess-playing Deep Blue, from the hypothetical systems that were still very much in the realm of imagination. According to Legg, human intelligence is far broader than that of Deep Blue, which could do one task very well but hardly anything beyond it.

However, the advent of emerging AI means that companies and scientists are bandying terms about in public statements about their missions for the future. To have such discussions, there needs to be clarity on what they mean when they use those terms. The DeepMind paper stipulates that AGI needs to be both general-purpose and high-achieving, which automatically rules today’s AI-led chatbots out of contention, whatever the brouhaha over their training.

So, what exactly would constitute AGI?

Researchers say that AGI must have both depth and breadth to qualify as such, qualities that are currently missing in existing AI models. So AGI needs to perform several kinds of tasks and also learn from them, carry out periodic self-assessments and seek assistance when stuck. What it does matters more than how it does it, says Morris.
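To make the "depth and breadth" idea concrete, here is a toy sketch of how one might classify a system from per-task scores. The domain count and performance thresholds are arbitrary placeholders for illustration, not values taken from the DeepMind paper.

```python
from statistics import median

def classify_system(task_scores: dict[str, float],
                    min_domains: int = 5,
                    competence_threshold: float = 0.5) -> str:
    """Toy illustration of requiring both breadth and depth.

    task_scores maps task domains to a normalised score, where 0.0 means
    an unskilled human and 1.0 the best human performance. All thresholds
    here are illustrative placeholders.
    """
    if len(task_scores) < min_domains:
        return "narrow AI"            # fails the breadth (generality) test
    if median(task_scores.values()) < competence_threshold:
        return "emerging AGI"         # broad, but not yet high-achieving
    return "competent AGI or above"   # broad and high-achieving

# Example: broad coverage but middling scores lands at the emerging level.
print(classify_system({"coding": 0.4, "law": 0.3, "maths": 0.5,
                       "writing": 0.45, "translation": 0.35}))
```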

Another reason for classifying the various levels of AGI is to help build measurements of what existing AI-led models can actually do. For example, what does it prove if a large language model passes dozens of high school tests? Is that a sign of intelligence or of rote learning? And as the models get more complex, assessment will only get harder.

So, if and when AGI moves up the ladder created by the DeepMind researchers, its capabilities will need to be evaluated continuously, not via a handful of such tests. Legg and Morris also note that AGI does not imply autonomy, as in theory it would be possible to build super-smart machines that remain fully controlled by humans.

Of course, in spite of DeepMind’s attempt to classify AGI and its hierarchy, it is unlikely that there will be an answer to why AGI is needed in the first place, or whether it would ultimately prove a false promise of utopia propagated by geeks. After all, AGI remains unscoped, unlike other engineering projects that come with well-defined milestones and goals.