One of the pioneers of artificial intelligence, the economist Herbert Simon, predicted in the 1950s that “in the visible future, the range of problems that machines can handle will match that of the human mind.”

At the time, it did not seem such a naive forecast: computers had already been made to play checkers and learn from their own mistakes. But Simon died in 2001 without ever witnessing the technology that had seemed so close.

The Paradox of Moravec: easy is difficult

We might think that if AI has already been able to outdo us in very complex fields (such as playing Go) or to display skills we have never had (such as determining a person’s sex from a photograph of the interior of their eye), it should be easy for it to copy our most ordinary abilities, the small day-to-day actions we usually carry out unconsciously.

However, these skills (tying a shoelace, moving nimbly on two legs, walking down the street without bumping into anything while thinking about something else entirely, etc.) are not simple merely because they feel like an intrinsic part of who we are: as any physiotherapist could remind us, the ability to walk is not easy to teach even to humans.

No, these and other skills are nothing less than the complex result of programming written and optimized by natural evolution over millions of years.

Hence, an AI can be asked to solve abstract problems and will carry out the task with an acceptable computational effort … yet we must spend huge amounts of resources to get it to perform apparently simple tasks. How is this possible?

Hans Moravec, an Austrian robotics researcher, formulated (with the collaboration of other notable names in the discipline, such as Rodney Brooks and Marvin Minsky) the paradox that now bears his name:

“It is relatively easy to get computers to show abilities similar to those of an adult in an intelligence test or when playing checkers, and very difficult to get them to acquire the perceptual and motor skills of a one-year-old baby.”

Or, put another way: trust a machine to play chess … but when the game is over, ask a human to put the pieces back in their box and store them away.

Moravec’s argument in formulating his paradox is simple: when we develop artificial intelligence, we are merely reverse-engineering our own intelligence. And the effort needed to copy each human ability is proportional to how long ago it appeared in our evolutionary family tree.

In the case of sensorimotor knowledge, our ancestors still had scales when they began to develop it. But let us hand the floor back to Moravec: “Abstract thinking, however, is a new trick, perhaps less than 100,000 years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems that way when we do it.”

Or, as the psychologist Steven Pinker sums it up, “the main lesson of thirty-five years of research in Artificial Intelligence is that the hard problems are easy and the easy problems are hard.”

If we stop to think about it for a moment, it is fascinating that the ability to reason, the one we assume separates us radically from the rest of the Animal Kingdom, is not only the ‘easiest’ thing to reproduce artificially, but turns out to be just “a new trick that we have not yet mastered.”

The ‘Kamprad test’

A year ago, a group of researchers from Nanyang Technological University (Singapore) announced that they had gotten a pair of industrial robots to assemble “most” of a piece of IKEA furniture. They called this test ‘the Kamprad Test’ (in honor of Ingvar Kamprad, the founder of IKEA, who had passed away just a few months earlier).

The result was discouraging: the machines spent 11 long minutes scanning their surroundings and planning their movements, even though they had been given a helping hand in the form of precise instructions tailored to the task they were about to perform, with some of the pieces grouped together to make handling easier.

They then needed another 9 minutes to complete the mission entrusted to them, during which they made several mistakes such as dropping pins, misaligning pieces, and so on. Despite this, the researchers took the results as good news: they knew the real complexity of the task they had given the machines.

After all, current AI systems are, fundamentally, pattern-recognition engines that we train by feeding them thousands or millions of examples in the hope that, from these, they can infer rules that generalize to the real world. But that does not mean they understand that world.
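As a rough illustration of what “training on examples” looks like in practice, here is a minimal sketch (the choice of scikit-learn and of the handwritten-digits task are assumptions made purely for illustration, not something taken from Moravec or the Kamprad test): the model is shown labelled examples and infers a decision rule, but at no point does it acquire any notion of what the data represent.

```python
# Minimal sketch of "pattern recognition from examples" (illustrative only;
# scikit-learn and the digits dataset are assumed choices for the example).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # ~1,800 labelled example images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small neural network infers a rule from the labelled examples.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("accuracy on unseen examples:", model.score(X_test, y_test))
# The model maps pixel patterns to labels; it has no notion of what a digit "is".
```

Whatever accuracy such a model reaches on unseen examples, the “knowledge” it has acquired is only a statistical mapping from patterns to labels.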

Your retina is more powerful than that CPU

Moravec explained in his article “Robots, Re-Evolving Mind” (2000) that the mobile industrial robots manufactured in the late 1980s failed commercially because

“They were guided by occasional, specially designed navigational markers (such as barcodes detected by laser), and by pre-existing features (such as walls, corners, and doors). The hard-hat labor of laying guide wires was replaced by programming carefully tuned for each route segment.”

Of course, the situation has changed a great deal since the 1980s: we now have powerful cameras that allow any robot (or vehicle) to be equipped with artificial vision, and Moravec himself points out that this is the way to go.

However, he also reminds us that we know the vertebrate retina well enough to use it as a “Rosetta Stone” capable of establishing a measure of comparison between nerve tissue and computing capacity:

“In addition to the light detectors, the retina contains edge- and motion-detection circuits, packed into a small area about two centimeters across and half a millimeter thick, which simultaneously reports on over one million image regions about ten times per second through the optic nerve.”

“In robotic vision, similar detections require the execution of several hundred computer instructions, which means that the retina’s 10 million detections per second would involve more than 1,000 MIPS [million instructions per second].”

“In 1999, PCs were on a par with the nervous systems of insects, but they did not reach the level of the human retina, nor that of the brain of a goldfish (0.1 grams). And they were a million times too weak to do the work of a human brain.”
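The back-of-the-envelope arithmetic behind that estimate can be made explicit with a small sketch (the figure of 100 instructions per detection is an assumption at the lower end of the “several hundred” Moravec mentions; treat all the numbers as orders of magnitude):

```python
# Rough reconstruction of Moravec's retina-to-MIPS estimate.
# All figures are order-of-magnitude assumptions taken from the quoted passage.
image_regions = 1_000_000           # ~one million image regions reported by the retina
reports_per_second = 10             # ~ten reports per second over the optic nerve
instructions_per_detection = 100    # assumed lower end of "several hundred" instructions

detections_per_second = image_regions * reports_per_second
instructions_per_second = detections_per_second * instructions_per_detection

print(f"{detections_per_second:,} detections per second")         # 10,000,000
print(f"~{instructions_per_second / 1e6:,.0f} MIPS to match it")   # ~1,000 MIPS or more
```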

Moravec accompanied his article with an illustrative graph that maps technological evolution (up to 2030) onto its biological equivalents in terms of processing capacity:

Graph by Hans Moravec, from the 1990s

But be careful: sometimes the hard part is … difficult

No relevant researcher or theorist in the field of artificial intelligence disagrees with Moravec’s point of view: the paradox he points out is plain to see, and the evolutionary explanation he offers for it enjoys broad support.

But … is it true that the skills to be learned divide neatly into the evolutionarily recent (not innate, and easy to replicate) and those developed millions of years ago (innate, and difficult to replicate)?

Perhaps that holds when we contrast only psychomotor skills and calculation, but … where does creativity fit into that division?

That is the doubt raised by the Israeli psychophysiologist Vadim Rotemberg, author of one of the few criticisms of the Moravec Paradox to be found in academic circles (or, rather, of the evolutionary explanation its author gave for it, which Rotemberg considers to suffer from “restrictions and weaknesses”).

The key to his criticism is that, while creativity is one of the last skills to appear in biological evolution (and the area of the brain responsible for it is the last to mature), “it is very difficult – and, for now, even impossible – to find an algorithm capable of processing and computing creativity.”

“I suppose that the explanation of these contradictions, as well as of Moravec’s paradox, is related to the different functions [and different thinking strategies] of the left and right hemispheres of the human brain.”

Thus, while the formal logical thinking of the left hemisphere organizes information into “a strictly ordered, unambiguous monosemantic context […] Such a thinking strategy makes it possible to construct a pragmatically convenient but simplified model of reality.”

In contrast, the function of the right hemisphere is to “simultaneously capture an infinite number of real connections and shape an integral but ambiguous polysemantic context.” This hemisphere plays a key role in creativity … but it is also “especially related to the limbic system, which controls bodily functions.”