

Index – Science – Artificial intelligence thinking is becoming more specific

Nature published a summary of research suggesting that artificial intelligence and the human brain may be drawing closer together. Since the 1980s, researchers have debated whether artificial intelligence could function like the neural networks of our brains. Now a system appears to have been developed and trained in a way that brings it close to doing so.

It appears that an aspect of human intelligence until now atypical of machines can be acquired through practice, said Brenden Lake, an assistant professor of psychology and data science at New York University and co-author of the study.

AI networks mimic the structure of the brain: information-processing nodes are interconnected, and data is processed in hierarchical layers. Until now, however, AI systems have not behaved like brains, because they lacked the human ability to combine familiar concepts in new ways.

This systematic compositionality is one of the great strengths of our brain, but it seems we are slowly teaching it to machines as well.

In the new study, Lake and co-author Marco Baroni of Pompeu Fabra University in Barcelona tested AI models and human volunteers on a made-up language. Each invented word corresponds either to a colored dot or to a function that changes the order of the dots, so a sequence of words determines a sequence of colored dots. Both the AI and the humans had to infer the grammatical regularities behind the nonsense expressions, that is, which dots corresponded to which words.
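The task described above, in which some words denote dots and others denote functions over dot sequences, can be sketched as a toy interpreter. The vocabulary ("dax", "wif", "fep", "kiki") and the rules below are invented for illustration only and are not the study's actual grammar:

```python
# Toy sketch of a made-up language mapping words to colored dots.
# All words and rules here are hypothetical, chosen for illustration.

# Primitive words map directly to a single colored dot.
PRIMITIVES = {"dax": "RED", "wif": "GREEN", "lug": "BLUE"}

def interpret(words):
    """Turn a word sequence into a dot sequence.

    'fep' (a hypothetical function word) triples the previous dot;
    'kiki' (another) swaps the dot sequences on either side of it.
    """
    if "kiki" in words:
        i = words.index("kiki")
        return interpret(words[i + 1:]) + interpret(words[:i])
    dots = []
    for w in words:
        if w == "fep":
            dots += [dots[-1]] * 2  # dot was emitted once; add two copies
        else:
            dots.append(PRIMITIVES[w])
    return dots

print(interpret(["dax"]))                 # ['RED']
print(interpret(["dax", "fep"]))          # ['RED', 'RED', 'RED']
print(interpret(["dax", "kiki", "wif"]))  # ['GREEN', 'RED']
```

The participants' job, in effect, was to recover rules like these from example word-and-dot pairs alone, without ever seeing the grammar written down.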

People produced correct dot sequences about 80 percent of the time, and when they failed, they made consistent errors, such as treating a word as standing for a single dot rather than for a function that rearranges the entire sequence.

After testing seven AI models, Lake and Baroni developed a method called meta-learning for compositionality (MLC), which trains the AI to apply different rules to newly learned words while giving it feedback on whether it applied them correctly. With this method, the AI performed at least as well as humans, sometimes making the same mistakes and sometimes doing better. They then compared MLC with two neural-network-based models from OpenAI and found that both MLC (which can understand instructions as well as the meanings of sentences) and humans outperformed the OpenAI models on the test.

The result is a major success, but for now MLC works only on the sentence types it was trained on and cannot generalize to new ones. The next step is to improve MLC's compositional generalization ability, which is one of the keys to human intelligence.

(Cover image: Index)