The Future of AI: General Artificial Intelligence

To gain a true understanding of artificial intelligence, researchers should turn their attention to developing a fundamental, underlying AGI technology that replicates human understanding of the environment.

Industry giants like Google, Microsoft, and Facebook, research labs like Elon Musk’s OpenAI, and even platforms like SingularityNET are betting on artificial general intelligence (AGI) — the ability of an intelligent agent to understand or learn any intellectual task that a human being can — as the future of artificial intelligence technology.

Somewhat surprisingly, however, none of these companies has focused on developing a fundamental, low-level AGI technology that replicates human contextual understanding. This may explain why their research relies entirely on models of varying degrees of specificity, all built on today’s artificial intelligence algorithms.

Unfortunately, this reliance means that, at best, today’s AI can only appear intelligent. No matter how impressive their abilities, these systems still follow a predetermined script with many variables. As a result, even large, highly complex programs such as GPT-3 or Watson can only mimic comprehension. They don’t understand that words and images represent physical things that exist and interact in the physical universe. The concept of time, or the idea of a cause having an effect, is completely foreign to them.

This is not to take away from the capabilities of today’s AI. Google, for example, can search through vast amounts of information incredibly quickly to deliver the results users want (at least most of the time). Personal assistants like Siri can make restaurant reservations, look up and read email, and give directions in real time. This list is constantly expanding and improving.

But no matter how complex these programs are, they still look for inputs and respond with specific outputs that depend entirely on their core datasets. If in doubt, ask a customer service bot an “unplanned” question — it will likely generate a meaningless response, or no response at all.
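The failure mode described above can be sketched with a toy example (a hypothetical lookup-based bot, not any real product): every response comes from a fixed dataset of planned inputs, so anything outside that dataset falls through to a canned fallback.

```python
# Toy illustration (hypothetical): a "chatbot" whose responses come
# entirely from a fixed dataset of planned questions.
RESPONSES = {
    "what are your hours?": "We are open 9am-5pm, Monday to Friday.",
    "how do i reset my password?": "Use the 'Forgot password' link on the login page.",
}

def reply(question: str) -> str:
    # Planned questions get a scripted answer; unplanned ones fall
    # through to a fallback that carries no understanding at all.
    return RESPONSES.get(question.strip().lower(),
                         "Sorry, I didn't understand that.")

print(reply("What are your hours?"))        # scripted answer
print(reply("Why do stacked blocks fall?")) # unplanned: generic fallback
```

Real chatbots use far more sophisticated pattern matching, but the structural point is the same: the output space is bounded by the training data, with no model of the world behind it.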

The bottom line is that Google, Siri, and every other current example of AI lack the real, commonsense understanding needed to progress toward artificial general intelligence. The reason goes back to the dominant assumption of most AI development over the past 50 years: that if the hard problems of intelligence could be solved, the easy ones would follow. This assumption runs into Moravec’s paradox, which observes that it is comparatively easy to make computers perform at an adult level on intelligence tests, but difficult to give them the perception and motor skills of a one-year-old.

AI researchers also wrongly assume that if enough narrow AI applications are built, they will eventually grow together into general intelligence. Unlike children, who effortlessly integrate vision, language, and other senses, narrowly defined AI applications cannot store information in a general way that would allow it to be shared and used by other AI applications.

Finally, researchers have mistakenly believed that if a sufficiently large machine learning system could be built with enough computing power, it would spontaneously exhibit general intelligence. This, too, has proven wrong. Just as an expert system cannot accumulate enough cases and examples to overcome its underlying lack of understanding of a domain, an AI system — no matter how large — cannot handle “unplanned” requests.

General Artificial Intelligence Fundamentals
To gain true AI understanding, researchers should turn their attention to developing a fundamental, underlying AGI technology that replicates human understanding of context. Consider, for example, the situational awareness and understanding that a 3-year-old demonstrates while playing with blocks. A 3-year-old understands that blocks exist in a three-dimensional world, have physical properties such as weight, shape, and color, and will fall if stacked too high. The child also understands cause and effect and the passage of time: blocks cannot be knocked down until they have first been stacked.

A 3-year-old will also become a 4-year-old, then a 5-year-old, then a 10-year-old, and so on. In short, a 3-year-old is born with the capacity to grow into a fully functioning, generally intelligent adult. No such growth is possible for today’s AI. No matter how sophisticated it is, today’s AI remains completely unaware of its own presence in its environment, and does not know that actions taken now will affect future actions.

While it is unrealistic to think that an AI system that has never experienced anything other than its own training data can understand the real world, adding mobile sensory pods to AI could allow artificial entities to learn from real-world environments and develop a basic understanding of physical objects, cause and effect, and the passage of time. Like the 3-year-old, an artificial entity equipped with a sensory pod could directly learn how to stack blocks, move objects, perform a series of actions over time, and learn from the consequences of those actions.

Through sight, hearing, touch, manipulators, etc., artificial entities can learn to understand in ways that are simply not possible with text-only or image-only systems. As mentioned earlier, no matter how large and varied their datasets are, such systems simply cannot understand and learn. It is even possible to remove the sensory pods once the entity acquires this ability to understand and learn.

While at this point we cannot quantify how much data is needed to represent true understanding, we can surmise that a substantial proportion of the brain must be devoted to it. After all, humans interpret everything in the context of everything they have previously experienced and learned; as adults, we interpret everything in terms of what we learned in the first few years of life. With this in mind, it seems likely that true artificial general intelligence will emerge only if the AI community recognizes this fact and takes the steps needed to build that foundational level of understanding.