
What constitutes artificial general intelligence is a question that Google DeepMind hopes to answer

Artificial general intelligence, or AGI, is one of the hottest topics in the tech industry right now. It is also one of the most contentious, in large part because there is no universally accepted definition of the term. Now a group of researchers from Google DeepMind has published a study that offers not a single new definition but a whole taxonomy of them, aiming to bring some order to the debate.
Broadly speaking, AGI usually refers to AI that is on par with or surpasses humans at a wide variety of tasks. But the specifics (what counts as human-level performance, which tasks are included, and how many of them) are often glossed over in favor of the shorthand that AGI is simply better AI.

Before coming up with their own definition, the team at Google DeepMind set out what they saw as the key common threads running through existing definitions of AGI.

The group also lays out five ascending tiers of artificial general intelligence: emerging (which includes state-of-the-art chatbots like Bard and ChatGPT), competent, expert, virtuoso, and superhuman (performing a broad variety of tasks better than any human, including things humans can't do at all, like reading minds, predicting the future, and communicating with animals). Notably, they point out that no level beyond emerging AGI has yet been achieved.
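To make the tiering concrete, here is a minimal illustrative sketch in Python (my own, not from the paper) that encodes the levels as a data structure. The percentile thresholds loosely follow the paper's framing of each level as outperforming a given share of skilled adults; treat the exact numbers, and the `classify` helper, as assumptions for illustration.

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """The five performance tiers described in the DeepMind taxonomy."""
    EMERGING = 1    # roughly on par with an unskilled human (today's chatbots)
    COMPETENT = 2   # outperforms at least 50% of skilled adults
    EXPERT = 3      # outperforms at least 90% of skilled adults
    VIRTUOSO = 4    # outperforms at least 99% of skilled adults
    SUPERHUMAN = 5  # outperforms all humans

def classify(percentile_outperformed: float) -> AGILevel:
    """Map a score (share of skilled adults outperformed across a broad
    task suite) to a tier. Thresholds are illustrative assumptions."""
    if percentile_outperformed >= 100:
        return AGILevel.SUPERHUMAN
    if percentile_outperformed >= 99:
        return AGILevel.VIRTUOSO
    if percentile_outperformed >= 90:
        return AGILevel.EXPERT
    if percentile_outperformed >= 50:
        return AGILevel.COMPETENT
    return AGILevel.EMERGING

# Example: a system beating 55% of skilled adults would rank as "competent"
print(classify(55).name)  # COMPETENT
```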

“This provides some much-needed clarity on the topic,” says Julian Togelius, an AI researcher at New York University who was not involved in the work. “Too many people toss around the term AGI without much consideration of its meaning.”

The researchers posted their study online last week with little fanfare. In an exclusive interview, Shane Legg, one of DeepMind’s co-founders and now billed as the company’s chief AGI scientist, and Meredith Ringel Morris, Google DeepMind’s principal scientist for human and AI interaction, gave me the rundown on how they arrived at these definitions and what they hoped to accomplish.

Improved clarity

Legg coined the phrase around 20 years ago. “I see so many discussions where people seem to be using the term to mean different things, and that leads to all sorts of confusion,” he says. “Now that AGI is getting significant attention (even the prime minister of the United Kingdom is discussing it), we should clarify what we mean.”

It wasn’t always this way. Talk of artificial general intelligence was once laughed off as nebulous at best and magical thinking at worst. But riding high on the success of generative models, chatter about AGI has recently taken flight.

Legg suggested the phrase to Ben Goertzel, who used it as the title of his 2007 book about AI’s future developments. The term’s hand-waviness was part of the point: Legg says he had no precise definition and didn’t feel he needed one. “Actually, my perspective was that of a scientific discipline, not an artifact.”
At the time, he was trying to differentiate between existing AI programs like Deep Blue, which could excel at only one activity, and the hypothetical AI that he and others envisioned would excel at a wide variety of them. Human intelligence, unlike Deep Blue’s, “is a very broad thing,” Legg says.

As time went on, however, AGI came to be seen as a property that real computer programs might one day possess. These days, it’s par for the course for leading AI firms like OpenAI and Google DeepMind to make strong public claims about their goal of building such programs.

And when you make those kinds of claims, Legg says, it is crucial to tell people exactly what you mean.

For instance, according to the DeepMind team, an AGI can’t simply be good at one thing; it needs to be able to generalize and excel in a variety of contexts. “This method of dividing breadth and depth is extremely helpful,” Togelius remarks. “It demonstrates why previous highly competent AI systems do not meet the criteria for AGI.”

They also argue that, beyond handling a variety of tasks, an AGI must be capable of learning new tasks, evaluating its own performance, and requesting help when it needs it. And they claim that what an AGI can do matters more than how it does it.

Morris thinks it is still important to consider how an AGI functions. The trouble is that our current understanding of the inner workings of state-of-the-art models, including large language models, is too limited to center the definition on that aspect.

According to Morris, “it may be important to revisit our definition of AGI” as we learn more about those underlying mechanisms. For now, we should zero in on what is currently measurable in a way that scientists can agree upon.

Taking stock
Measuring the success of current models is already contentious: scientists are still arguing about what it really means for a large language model to pass hundreds of high-school exams and more. Is it a sign of intelligence? Or of memorization?

Evaluating the performance of future, more powerful models will be harder still. If AGI is ever built, the researchers recommend testing its capabilities continuously, rather than through a few one-off experiments.

The group also stresses that AGI is not the same thing as autonomy. It is frequently assumed that “people would want a system to operate completely autonomously,” Morris says. But there are exceptions to that rule: in theory, we could create extremely intelligent machines that respond only to our commands.

In defining what AGI is, the researchers sidestep the question of why we should build it at all. A number of computer scientists have voiced disapproval of the project, including Timnit Gebru, founder of the Distributed AI Research Institute. Speaking in April about what she sees as the misleading and perhaps harmful utopian promise of AGI, Gebru said the hypothetical technology “sounds like an unscoped system with the apparent goal of trying to do everything for everyone under any environment.”

A well-defined scope is typical of engineering projects. The goal of creating an artificial general intelligence is not: even by Google DeepMind’s definitions, an AGI could be indefinitely general and indefinitely intelligent. “Don’t try to construct a deity,” Gebru warned.

Few will listen to such counsel in the mad dash to construct larger and more advanced systems. Either way, it is good to finally have some definition for a long-muddled idea. “Just having silly conversations is kind of uninteresting,” Legg says. Once we get past these definitional concerns, there is a lot of wonderful material to dig into.
