Fairness And Empathy Are Not The Same


Most people are familiar with the classic trolley-style moral puzzle: sacrifice one person to spare several. But the real question isn't whether someone would make that choice; it's why they wouldn't. A new brain imaging study finds that when people refuse to sacrifice one person for the greater good, two distinct mental processes are firing in two different parts of the brain.

Published in PNAS Nexus, the study put participants inside brain scanners and had them repeatedly choose how to divide harm between one person and a small group. What researchers found challenges the idea that the instinct to protect an individual is a single, unified response. Instead, it splits into two dimensions, one rooted in perspective-taking, the other in an internal fairness calculator, and each activates a separate brain network.

That distinction matters well beyond philosophy classrooms. Understanding how moral decision-making actually works could, in theory, influence how societies navigate the tension between individual rights and collective welfare.

Researchers at Princeton University, Seoul National University, and Korea University designed a task that forced participants into uncomfortable trade-offs. In each of 150 rounds, participants decided how to split a painful experience, time spent holding a hand in ice-cold water, between one person and a group of three or four people.

Minimizing total discomfort always required dumping more burden on the single individual. Protecting that one person meant accepting more overall suffering for everyone else. Before entering the scanner, each of the 68 participants experienced 20 seconds of the cold-water test themselves, ensuring they understood what they were assigning to others. Sixteen were later excluded from brain imaging analysis for technical reasons, leaving 52 in the neuroimaging portion.
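The arithmetic behind that trade-off can be made concrete with a small sketch. The article does not report the actual durations offered in each round, so the numbers and function names below are invented for illustration only: they just show why concentrating the burden on one person always lowers the total, while spreading it raises the total but caps any one person's suffering.

```python
# Hypothetical illustration of the study's trade-off structure.
# The real durations used in the task are not given in the article;
# these numbers are invented purely to show the arithmetic.

def total_discomfort(individual_seconds, group_seconds, group_size):
    """Total seconds of cold-water exposure summed across everyone,
    assuming each group member gets the same group allocation."""
    return individual_seconds + group_seconds * group_size

# Two hypothetical allocations for a group of three:
efficient = total_discomfort(individual_seconds=60, group_seconds=10, group_size=3)
protective = total_discomfort(individual_seconds=20, group_seconds=30, group_size=3)

print(efficient)    # 90 total seconds, but one person bears 60
print(protective)   # 110 total seconds, but no one bears more than 30
```

The "protective" split costs 20 extra seconds of total discomfort, which is the same kind of premium the study measured when participants shielded the individual.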

More often than not, participants chose to protect the individual, doing so 59% of the time. To keep one person from bearing a lopsided share, they were willing to impose roughly 68 extra seconds of total discomfort on the group.

Researchers also tested whether participants simply preferred inaction, letting a default outcome stand rather than actively choosing who gets hurt. The data didn't support that idea. Participants favored the default option only when that default happened to protect the individual. People weren't motivated by avoiding action; they were motivated by shielding the worst-off person.

Researchers built mathematical models to test which mental calculations best explained participants' choices, and two separate components won out.

One captures the "maximin" strategy: a drive to minimize the maximum harm any single person faces. It functions as a way of mentally focusing on the person who would suffer most. A second component captures "agreeability," an internal threshold for what counts as a fair amount of extra burden to place on one person. Participants varied widely here. Some were comfortable assigning more discomfort to the individual before it felt unfair; others had a much tighter limit.
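Under the article's description, the two components might be sketched roughly as follows. The study's actual model specification is not given, so the functional forms, parameter names (`weight`, `threshold`), and numbers here are all illustrative assumptions, not the authors' equations.

```python
# A rough, hypothetical sketch of the two components described above.
# The study's real model is not specified in the article; everything
# here is an assumption made for illustration.

def maximin_value(individual_seconds, group_seconds, weight):
    """Maximin component: penalize the burden on whoever is worst off."""
    worst = max(individual_seconds, group_seconds)
    return -weight * worst

def agreeability_ok(individual_seconds, group_seconds, threshold):
    """Agreeability component: does the extra burden placed on the
    individual stay within this person's internal fairness threshold?"""
    return (individual_seconds - group_seconds) <= threshold

# A decision-maker with a tight fairness threshold of 15 extra seconds,
# comparing an efficient split (individual 60s, group members 10s each)
# against a protective split (individual 20s, group members 30s each):
print(maximin_value(60, 10, weight=1.0), agreeability_ok(60, 10, threshold=15))
print(maximin_value(20, 30, weight=1.0), agreeability_ok(20, 30, threshold=15))
```

In this toy version, the protective split wins on both counts: its worst-off person suffers less (30s vs. 60s), and it stays under the fairness threshold. Because the two components use independent parameters, a person can score high on one and low on the other, which matches the study's finding that the dimensions are barely related.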

These two dimensions were barely related to each other. Knowing how strongly someone used the maximin strategy told researchers almost nothing about where that person's fairness threshold sat, pointing to genuinely distinct mental processes rather than two sides of the same coin.

Brain imaging reinforced this. When participants weighed how much worse off the single individual would be, a network tied to perspective-taking and understanding other people's mental states became active. Participants with stronger maximin preferences showed even greater activity in these regions, alongside reduced activity in areas typically tied to calculating value and weighing costs, suggesting that focusing on the worst-off person can reduce the weight given to cost calculations.

Agreeability told a different neural story. Participants with similar fairness thresholds showed matching patterns of activity in regions linked to tracking equity and weighing fairness against efficiency, areas that are part of the brain's valuation system, entirely distinct from the perspective-taking network.

Prior work, including classic trolley-problem studies, couldn't distinguish whether someone who refused to sacrifice one person was motivated by a rule against harmful action, by concern for the victim, or by a sense of fairness. This study suggests that even within the "don't sacrifice the individual" camp, multiple mental processes are at work, each with its own neural signature.

Protecting the vulnerable isn't a single emotion. It's a two-track process, one part perspective-taking and one part fairness-monitoring, working in parallel at the very heart of what it means to live in a society with other people.