How Machines Are Learning to Read Minds: From Emotions to Brain Interfaces

When it became clear that telepathy was something out of science fiction, humanity didn’t give up on the idea of reading minds. Instead, people shifted from mystical thinking to scientific approaches, turning to psychology, biology, and medicine for help. As technology advanced, artificial intelligence became one of the tools aimed at helping us unlock the secrets hidden inside the human skull. Here’s how people and machines are learning to read minds—and what’s coming out of it.

Reading Minds Through Emotions

One of the most well-known methods that comes close to mind reading is interpreting nonverbal signals, including micro-expressions. Many mentalists and so-called “telepathic” magicians use this technique: they analyze the audience’s nonverbal cues to guess numbers, names, or find hidden objects.

Paul Ekman, the American psychologist whose work inspired the TV series “Lie to Me” starring Tim Roth, approached mind reading through the study of emotions. Ekman studied the connection between emotions and facial expressions. As part of his research, he traveled the world photographing people’s faces as they experienced different emotions. Analyzing these photos, Ekman concluded that emotions and their expressions are universal across cultures. He conducted further studies to confirm this hypothesis, wrote the book “Telling Lies” (which inspired “Lie to Me”), and worked as a lie-detection consultant for various American organizations, including the FBI.

Ekman’s codification of basic emotions was even used by animators at Pixar and DreamWorks to create facial expressions for characters in movies like “Toy Story” and “Shrek.” But that’s not the only application of his scientific work. His research sparked a wave of studies in machine learning and computer vision focused on reading and analyzing micro-expressions and other nonverbal human signals.
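
To give a sense of what such systems do under the hood, here is a minimal sketch of an emotion-classification pipeline, assuming Python with OpenCV and PyTorch installed: OpenCV finds faces in a photo, and a small neural network assigns each one to one of Ekman’s six basic emotions. The network is untrained and purely illustrative, and the input file name is hypothetical; real systems rely on models trained on large labeled datasets such as FER2013.

```python
# Illustrative sketch only: detect faces, crop them, and classify each crop
# into Ekman's six basic emotions. The CNN below has random weights, so its
# predictions are meaningless until trained on a labeled dataset.
import cv2
import torch
import torch.nn as nn

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

# Haar cascade face detector shipped with OpenCV
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# Toy classifier: 48x48 grayscale crop -> 6 emotion logits
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
    nn.Flatten(),
    nn.Linear(32 * 12 * 12, len(EMOTIONS)),
)

def classify_emotions(image_path: str) -> list[tuple[str, float]]:
    """Return a (predicted emotion, confidence) pair for each detected face."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    results = []
    with torch.no_grad():
        for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
            crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
            tensor = torch.from_numpy(crop).float().div(255).view(1, 1, 48, 48)
            probs = model(tensor).softmax(dim=1)[0]
            idx = int(probs.argmax())
            results.append((EMOTIONS[idx], float(probs[idx])))
    return results

if __name__ == "__main__":
    print(classify_emotions("selfie.jpg"))  # hypothetical input image
```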

Tech companies have collected massive amounts of imagery showing human faces expressing emotions, including billions of selfies on social media, portraits on Pinterest, videos on TikTok, and photos on Flickr. Like facial recognition, emotion recognition has become part of the basic infrastructure of many platforms, from tech giants to small startups. Some companies use this data to improve security features (like Face ID), while others analyze emotional data to study user behavior. As early as the late 2000s, Sony released digital cameras with a “Smile Shutter” feature that recognized a smile and automatically took the picture.
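
That smile-triggered shutter is simple enough to sketch with off-the-shelf tools. The snippet below is a rough illustration assuming OpenCV and a standard webcam (not Sony’s actual implementation): it watches the video feed and saves a frame whenever a smile is detected inside a detected face. The detection thresholds are arbitrary.

```python
# Toy "smile shutter": save a photo whenever a detected face contains a smile.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture(0)          # default webcam
shots = 0
while shots < 3:                   # stop after a few automatic shots
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        # a high minNeighbors keeps the smile detector from firing on noise
        smiles = smile_cascade.detectMultiScale(roi, scaleFactor=1.7, minNeighbors=22)
        if len(smiles) > 0:
            cv2.imwrite(f"smile_{shots}.jpg", frame)  # "take the picture"
            shots += 1
            break
cap.release()
```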

These systems are already influencing how people behave and how social institutions operate, even though there’s little scientific evidence that they actually work. Automated emotion recognition systems are widely used today, especially in hiring. The recruitment company HireVue, whose clients include Goldman Sachs, Intel, and Unilever, uses machine learning to determine if candidates are a good fit for a job. In 2014, the company launched an AI system to analyze micro-expressions, voice tone, and other variables from video interviews, comparing job candidates to the company’s top performers. After heavy criticism from scientists and civil rights groups, HireVue dropped facial analysis in 2021 but kept voice tone as a key evaluation criterion. In January 2016, Apple acquired the startup Emotient, which claimed to have developed software capable of recognizing emotions from facial images.
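
HireVue has not published its algorithm, but the general idea of “comparing candidates to top performers” can be shown with a toy example: reduce each person to a vector of behavioral features and rank candidates by how closely they resemble an averaged top-performer profile. Every feature and number below is made up for illustration.

```python
# Toy illustration (not HireVue's actual method): score a candidate by cosine
# similarity to the average feature vector of "top performers".
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional feature vectors: [smile rate, gaze steadiness,
# speech tempo, pitch variation] -- the features themselves are invented.
top_performers = np.array([
    [0.8, 0.9, 0.6, 0.4],
    [0.7, 0.8, 0.7, 0.5],
])
candidate = np.array([0.6, 0.85, 0.65, 0.45])

prototype = top_performers.mean(axis=0)        # averaged "ideal" profile
score = cosine(candidate, prototype)           # 1.0 = identical profile
print(f"similarity to top performers: {score:.2f}")
```

Critics point out that this kind of scoring simply rewards resemblance to whoever already holds the job, which is one reason the approach drew so much scrutiny.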

However, these machine learning algorithms remain controversial among both human rights advocates and scientists. Paul Ekman was named one of Time magazine’s 100 most influential people for his contributions to science, but as his fame grew, other researchers began to scrutinize his work and found inconsistencies. Ekman studied micro-expressions within the framework of six basic emotions, and most of the photos he used were either staged or depicted “pure” emotions. In real life, people express complex combinations of emotions and intermediate states, like something between joy and sadness.

A machine can certainly recognize a smile or a frown, but it can’t make the same nuanced judgments a human does during communication. In real-life interactions, we evaluate not just verbal and nonverbal cues, but also the context in which communication occurs and the circumstances under which a person expresses certain emotions. So, at least for the time being, your face remains a closed book to machines.

Straight to the Brain

The face is only a reflection of our thoughts—their true source is the brain. So it’s logical that scientists decided to go beyond studying facial expressions and try to access human thoughts directly from neural connections. Research in neurobiology and neuropsychology, along with the use of MRI, has helped scientists study brain activity and the relationship between different brain regions and various behaviors, as well as pathologies (including behavioral disorders). With the help of artificial intelligence, it’s become easier to process, analyze, and apply the data obtained from these studies.

This has led to the emergence of an entire field known as neurorobotics, a mix of the natural sciences, robotics, and information technology, as well as of BCI (brain-computer interface) systems, which create a direct communication channel between the brain’s electrical activity and an external computer.
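
At its core, a BCI is a signal-processing pipeline: record brain activity, extract features, and map them to a command. The sketch below uses simulated EEG-like signals in place of real electrodes and decodes a toy “mental state” from alpha-band power, purely to show the shape of such a pipeline.

```python
# Minimal, self-contained BCI-style pipeline on simulated signals:
# signal -> band-power feature -> classifier -> decoded state.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FS = 250          # sampling rate, Hz
N = 2 * FS        # 2-second trials

def simulate_trial(alpha_amp: float) -> np.ndarray:
    """One fake EEG channel: noise plus a 10 Hz (alpha) oscillation."""
    t = np.arange(N) / FS
    return alpha_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, N)

def band_power(trial: np.ndarray, lo: float, hi: float) -> float:
    """Average power in a frequency band, via Welch's method."""
    freqs, psd = welch(trial, fs=FS, nperseg=FS)
    return float(psd[(freqs >= lo) & (freqs <= hi)].mean())

# Two imagined "mental states" that differ in alpha power (a common toy setup:
# eyes closed vs. eyes open). Label 1 = strong alpha, label 0 = weak alpha.
X = np.array([[band_power(simulate_trial(amp), 8, 12)]
              for amp in [2.0] * 100 + [0.5] * 100])
y = np.array([1] * 100 + [0] * 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("decoding accuracy:", clf.score(X_test, y_test))
```

Real systems replace the simulated trials with multichannel recordings from electrodes or implants and use far richer features and models, but the record-extract-decode structure is the same.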

Think BCI can help you take over the world or find all the mutants on the planet, like in “X-Men”? In reality, it doesn’t work that way. These systems are used to detect and diagnose brain disorders, treat pathologies, and study neural connections. Neural implants are already being used to restore vision, help paralyzed patients communicate, and address speech and sensory impairments. For example, in 2021, Stanford University researchers demonstrated a neural implant that allowed a paralyzed patient to type at about 15 words per minute by imagining handwriting.

A similar chip is being developed by Elon Musk’s company Neuralink. The implant was announced in 2019 and is currently being tested on animals. Studies have already been conducted with mice, pigs, and primates (the company claims that no animals were harmed). These chips are meant to help study brain activity, identify and overcome pathologies, and address speech and memory disorders caused by structural brain issues. These technologies are getting closer to the mind reading described in Marvel comics, but without the superheroics or dystopian overtones.
