Meta wants to improve its AI by studying human brains | POPSCI


How Computer Scientists Are Investigating the Brain to Aid Deep Learning in Language Understanding

In collaboration with the neuroimaging center NeuroSpin and INRIA, Meta AI researchers analyze human brain activity alongside deep learning models trained on language or speech tasks. The findings may help explain why humans understand language far more efficiently than machines, and more broadly how both biological and artificial networks determine the meaning of language. The researchers examine the brain's responses to words to see whether those responses can inform how AI systems are trained. Using methods such as functional MRI (fMRI) and magnetoencephalography (MEG), they monitor brain activity in response to particular words and phrases down to the millisecond.
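To make the comparison concrete, here is a minimal, hypothetical sketch of the general "encoding model" technique used in this line of research: fit a linear map from a network's activations to recorded brain responses and score how well it predicts held-out data. The brain data below is random placeholder noise, and the tooling (NumPy, scikit-learn) is an assumption for illustration, not Meta's actual pipeline.

```python
# Hypothetical sketch: relate a language model's activations to brain
# recordings via a per-voxel linear "encoding model". The brain responses
# here are random placeholders; real studies use fMRI or MEG data.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_words, n_features, n_voxels = 500, 768, 100
model_activations = rng.standard_normal((n_words, n_features))  # e.g. hidden states per word
brain_responses = rng.standard_normal((n_words, n_voxels))      # placeholder fMRI/MEG signals

X_train, X_test, y_train, y_test = train_test_split(
    model_activations, brain_responses, test_size=0.2, random_state=0
)

# Fit a ridge regression mapping activations -> brain signal (one output per voxel).
encoder = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_train, y_train)
pred = encoder.predict(X_test)

# "Brain score": correlation between predicted and actual responses per voxel.
scores = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean brain score: {np.mean(scores):.3f}")  # near 0 for random data
```

With real recordings, a higher brain score for a given model layer suggests its representations align more closely with the measured brain activity.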

Detailed observation of the brain lets researchers determine which brain areas are engaged when a person hears words such as "dog" or "table." A team at Meta AI is building a collection of open-source transformer-based language models ranging from millions to billions of parameters. The largest, at 175 billion parameters, is comparable in scale to other industrial language models such as GPT-3. Understanding how the brain handles language may be essential to developing AI systems capable of natural dialogue, which might one day power virtual assistants. A solid language model is a crucial component of chatbots, conversational agents, machine translation, and text classification.
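Meta AI released this model collection as the OPT family, and the smaller checkpoints can be loaded with standard tooling. The sketch below assumes the Hugging Face transformers library and the publicly released facebook/opt-125m checkpoint; the 175-billion-parameter version uses the same API but requires far more hardware.

```python
# Minimal sketch of loading one of Meta AI's open-sourced OPT language models
# (assumes `pip install transformers torch` and the facebook/opt-125m checkpoint).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

prompt = "The dog sat on the"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: extend the prompt with the model's most likely next tokens.
output_ids = model.generate(**inputs, max_new_tokens=10, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```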

A transformer-based model "uses both a learned mechanism for encoding information sequences and a mechanism for attention," according to Joelle Pineau, head of Meta AI Research Labs. Meta AI is open-sourcing its language models to get feedback from other researchers, especially those investigating the behavioral and ethical consequences of these massive AI systems.
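As a rough illustration of the "mechanism for attention" Pineau describes, here is a toy scaled dot-product self-attention function in PyTorch (an assumed dependency). Real transformer layers add multiple heads, learned query/key/value projections, and positional information on top of this core operation.

```python
# Toy scaled dot-product attention: each position in a sequence builds its
# output as a similarity-weighted mix of every position's values.
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """q, k, v: tensors of shape (batch, seq_len, dim)."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5  # pairwise similarities
    weights = F.softmax(scores, dim=-1)          # attention distribution per position
    return weights @ v                           # weighted combination of values

x = torch.randn(1, 5, 16)                    # a toy encoded sequence of 5 tokens
out = scaled_dot_product_attention(x, x, x)  # self-attention: q = k = v
print(out.shape)                             # torch.Size([1, 5, 16])
```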

Source: https://www.popsci.com/technology/meta-ai-language-models/  

