
A smiley face on a screen is about as far from an actual human smile as a stick figure is from a photograph. Yet new research shows that when it comes to reading emotions, the brain processes a real person and a little yellow cartoon in surprisingly similar ways, at least in the first fraction of a second.
Scientists at Bournemouth University have found that the brain's electrical patterns when processing emotional expressions on emoji faces closely mirror those triggered by photographs of real human faces. So closely, in fact, that a computer algorithm trained to recognize emotion-related brain signals from one type of face could successfully identify those same signals when people looked at the other type. That discovery reframes how scientists think about the relationship between the tiny digital icons exchanged by the billions every day and the ancient brain wiring humans evolved for reading each other's expressions.
Published in the journal Psychophysiology, the study demonstrates that both kinds of faces activate overlapping patterns of brain activity, firing in similar ways, at similar times, across similar regions, regardless of whether the face on screen is a real person or a cartoon circle with dot eyes.
Two separate experiments used nearly identical setups. In the first, participants viewed color photographs of eight real people, each displaying one of four expressions: happy, angry, sad, or neutral. In the second, a different group viewed emoji faces pulled from six platforms showing those same four emotions. Both groups wore caps fitted with 64 sensors measuring electrical brain activity. Rather than examining brain activity one sensor at a time, the team used a method that reads the pattern across many sensors simultaneously, which can pick up subtle signals that no single sensor would reveal on its own.
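For readers curious what that pattern-based analysis looks like in practice, here is a minimal sketch in Python using scikit-learn on fabricated data. The trial counts, labels, and choice of classifier are illustrative assumptions, not the study's actual pipeline; the point is simply that one decoder reads all 64 sensors at once.

```python
# Minimal sketch of multivariate ("pattern-based") decoding of emotion from EEG.
# Shapes, labels, and the classifier are illustrative assumptions, not the
# study's actual pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_trials, n_channels = 400, 64                # trials x EEG sensors
emotions = ["happy", "angry", "sad", "neutral"]
y = rng.integers(0, len(emotions), n_trials)  # which emotion was shown on each trial

# Stand-in data: each trial is the pattern of voltages across all 64 sensors
# at one time point, with a small emotion-dependent shift added so the
# decoder has something to find.
X = rng.normal(size=(n_trials, n_channels)) + y[:, None] * 0.15

# One classifier reads the whole 64-sensor pattern at once, rather than
# testing each sensor separately.
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)     # 5-fold cross-validation
print(f"Decoding accuracy: {scores.mean():.2f} (chance = {1/len(emotions):.2f})")
```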
Within each experiment, the algorithms could reliably identify which emotion a person was viewing based solely on brain activity. For emoji faces, this signal appeared as early as about 70 milliseconds after the image appeared. For real faces, the signal emerged around 120 milliseconds. Both peaked over regions toward the back of the head, areas long associated with face processing.
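Those onset times come from running a decoder of that kind separately at every time point after the image appears and asking when accuracy first rises above chance. A sketch of that time-resolved analysis, again on fabricated data with an effect deliberately built in at roughly 70 milliseconds, might look like this; the sampling rate, epoch length, and threshold are all assumptions.

```python
# Sketch of time-resolved decoding: train and test a classifier at each time
# point to find when emotion first becomes readable from the sensor pattern.
# Sampling rate, epoch length, and the above-chance threshold are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 400, 64, 150  # 150 samples = 0-600 ms at 250 Hz
times_ms = np.arange(n_times) * 4             # 4 ms per sample
y = rng.integers(0, 4, n_trials)

# Fabricated epochs: emotion information "switches on" at about 70 ms.
X = rng.normal(size=(n_trials, n_channels, n_times))
X[:, :, times_ms >= 70] += y[:, None, None] * 0.2

accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])

chance = 0.25
onset = times_ms[np.argmax(accuracy > chance + 0.10)]  # first clearly above-chance point
print(f"Decoding rises above chance around {onset} ms after image onset")
```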
The more telling result came from what researchers call "cross-classification." Algorithms trained on brain data from one experiment were tested on data from the other, with different participants looking at different kinds of faces. Even so, the algorithms could still decode which emotion was being viewed. When trained on real faces and tested on emoji data, strong decoding emerged between 115 and 200 milliseconds. A second wave of shared activity appeared later, between 350 and 500 milliseconds, suggesting the overlap extends into the stage where the brain weighs the emotional meaning of what it saw.
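Cross-classification itself is conceptually straightforward: fit the decoder on one dataset, then score it, unchanged, on the other. The sketch below fabricates both datasets with a shared emotion-dependent pattern so that transfer is possible; every detail of the data is an assumption made for illustration.

```python
# Sketch of cross-classification: a decoder fit on brain responses to real
# faces is scored, unchanged, on responses to emoji faces. Both datasets
# are fabricated, with a shared emotion-dependent pattern built in so that
# transfer is possible.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n_channels = 64
shared_pattern = rng.normal(size=(4, n_channels))  # one sensor pattern per emotion

def simulate(n_trials, noise):
    y = rng.integers(0, 4, n_trials)
    X = shared_pattern[y] + rng.normal(scale=noise, size=(n_trials, n_channels))
    return X, y

X_real, y_real = simulate(400, noise=3.0)    # "photograph" experiment
X_emoji, y_emoji = simulate(400, noise=2.0)  # "emoji" experiment (cleaner signal)

clf = LinearDiscriminantAnalysis().fit(X_real, y_real)  # train on real faces only
transfer = clf.score(X_emoji, y_emoji)                   # test on emoji data
print(f"Cross-classification accuracy: {transfer:.2f} (chance = 0.25)")
```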
Emoji faces sometimes produced even cleaner signals than photographs. Researchers suggest this may be because emojis are designed with exaggerated features, including oversized frowns and wide grins, that make the boundaries between emotional categories especially stark. Real human faces are subtler and vary across individuals in ways that blur those lines.
By showing that a classifier trained on brain responses to one format can decode emotions from the other, the study offers direct evidence of overlapping neural coding across fundamentally different visual inputs. A cartoon smiley and a genuine human smile activate notably overlapping brain architecture, enough that the brain's electrical signature for one can predict its response to the other. In an era when much of human emotional communication happens through screens, that overlap may help explain why those tiny icons can feel less like decoration and more like the real thing.