The fallacy

There are philosophical debates that seem to belong to another world. Discussions about whether genuinely selfless motivation exists, about what it really means for an explanation to be "simple," about whether natural selection operates only among individuals or also among groups. It's easy to dismiss them as academic exercises with no practical consequences.
But sometimes those debates are closer to everyday life than they appear. Not long ago, in a conversation about restructuring a sales team, someone said something that captures a very widespread assumption: "At the end of the day, people only understand money." It wasn't a provocation. It was a genuine belief, shared by many in the room. And on that belief, an entire system had been built: incentives, rankings, hiring processes, culture. The premise, at its core, was philosophical: human beings are driven by one thing, and everything else is window dressing.
Elliott Sober, philosopher of biology and of science, has spent decades dismantling exactly this kind of reductionism. Not from idealism, but from logic, evolutionary biology, and the theory of inference. His work touches three seemingly distinct areas — psychological egoism, the evolution of altruism, and the principle of parsimony — but they all point to the same thing: the temptation to reduce complexity to a single variable is understandable, but almost always wrong.
Egoism as an irrefutable thesis
Psychological egoism is the thesis that all human action is ultimately motivated by self-interest. When someone helps another, they do it because it gives them pleasure, because it avoids guilt, because they expect reciprocity. Apparent altruism is always, at bottom, egoism in disguise.
Sober treats this not as a self-evident truth, but as an empirical hypothesis about human motivational architecture. And as a hypothesis, it has to compete with alternatives. His strategy is to show that universal psychological egoism is a worse hypothesis than those competing with it.
The first problem is what Sober calls, following the philosopher Joseph Butler, the hedonistic fallacy. The egoist says: "When someone helps another and feels pleasure, that proves they were doing it for the pleasure." But this is a non sequitur. If I genuinely want my child to be well, and my child is well, I feel pleasure. The pleasure is a byproduct of having satisfied the desire, not the object of the desire. It's like saying a footballer didn't want to score the goal but to celebrate it.
The second problem is the burden of proof. Universal egoism claims that in no case, ever, does any human being have an ultimate desire that isn't about their own state. This is an extraordinarily strong claim. Motivational pluralism — the idea that we have a mix of selfish desires and desires genuinely directed at others — is a much more modest hypothesis and, a priori, more plausible.
But the most revealing problem is immunization. Psychological egoism has a suspicious property: it is compatible with any observation. Do you help someone? You're seeking pleasure. Do you sacrifice yourself? You're avoiding guilt. Do you die for another? You preferred dying to living with the burden of not having acted. This infinite capacity for reinterpretation is not a strength of the theory. It is a symptom that it is empirically empty: it makes no predictions that can distinguish it from the alternatives.
This happens constantly outside philosophy. Someone helps a neighbor and another says "they're doing it to look good." A professional shares knowledge generously and the suspicion is that "they're trying to position themselves." A parent devotes themselves to their children and someone comments that "they do it to feel like a good parent." Sober gives us the tools to see that this systematic suspicion is logically fallacious. The fact that an act has positive consequences for the person performing it does not prove that those consequences were the motive.
What evolution tells us
This is where Sober connects philosophy with biology in an especially original way. Together with biologist David Sloan Wilson, in Unto Others (1998), he developed an argument that rehabilitated an idea that mainstream biology had declared dead: group selection.
The consensus since the 1960s, established by George C. Williams and reinforced by Richard Dawkins, was that natural selection operates fundamentally at the individual (or gene) level. A selfish individual within an altruistic group will always spread faster, because it reaps the benefits of others' altruism without paying the costs. Before selection between groups can favor altruistic groups, selection within each group will have already eliminated the altruists.
Sober and Wilson don't attempt to return to the naive group-selectionism of old. Their move is more sophisticated: they argue that selection operates simultaneously at multiple levels, and that the right question is not "individual or group selection?" but "what is the balance between selective forces operating at different levels?" The conclusion is that altruism can evolve when the force of selection between groups exceeds the force of selection within groups.
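The multilevel logic can be seen in a toy simulation. The fitness function, the cost and benefit values, and the group compositions below are illustrative assumptions, not figures from Unto Others; the point is only the structural possibility the argument needs: the altruist fraction falls within every group, yet rises globally because altruist-rich groups grow faster.

```python
# Toy two-level selection model. COST, BENEFIT, and the group compositions
# are illustrative assumptions, not parameters from Unto Others.
COST, BENEFIT = 0.1, 0.5

def next_generation(groups):
    """groups: list of (altruists, egoists) counts. Returns next-generation counts."""
    new = []
    for a, e in groups:
        x = a / (a + e)               # altruist fraction in this group
        w_a = 1 - COST + BENEFIT * x  # altruists pay the cost and share the benefit
        w_e = 1 + BENEFIT * x         # egoists free-ride on the same benefit
        new.append((a * w_a, e * w_e))
    return new

groups = [(90, 10), (10, 90)]         # one altruist-rich group, one egoist-rich group
new = next_generation(groups)

for (a0, e0), (a1, e1) in zip(groups, new):
    print(f"within-group altruist fraction: {a0/(a0+e0):.3f} -> {a1/(a1+e1):.3f}")

total_before = sum(a for a, _ in groups) / sum(a + e for a, e in groups)
total_after = sum(a for a, _ in new) / sum(a + e for a, e in new)
print(f"global altruist fraction: {total_before:.3f} -> {total_after:.3f}")
```

With these numbers, egoists gain ground inside both groups, yet the global altruist fraction rises from 0.500 to about 0.546, because the altruist-rich group grows faster: a Simpson's-paradox pattern, and exactly the situation in which between-group selection outweighs within-group selection.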
But what matters most for our argument is the connection Sober draws between evolutionary biology and psychology. Natural selection needs parents to care for their offspring. What psychological mechanism is more reliable for producing that care? An egoistic mechanism would require a long causal chain: the parent perceives the offspring's need, calculates that helping will produce pleasure or avoid guilt, and then acts. An altruistic mechanism is more direct: the parent perceives the offspring's need, wants the offspring to be well, and acts. The direct mechanism is more robust because it has fewer points of failure. Natural selection, a blind but efficient engineer, probably produced mechanisms where the wellbeing of certain others is an ultimate end, not an instrumental one.
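The "fewer points of failure" argument can be made concrete with a back-of-the-envelope reliability calculation. The per-step success probabilities below are invented for illustration; the structural point is that every extra link in a causal chain multiplies in another chance to fail.

```python
# Back-of-the-envelope reliability comparison between an indirect (egoistic)
# and a direct (altruistic) motivational chain. The per-step success
# probabilities are invented for illustration.

def chain_reliability(step_probs):
    """Probability that every independent step in a causal chain succeeds."""
    p = 1.0
    for step in step_probs:
        p *= step
    return p

# Indirect: perceive the offspring's need -> predict own pleasure/guilt -> act
egoistic = chain_reliability([0.95, 0.90, 0.95])

# Direct: perceive the offspring's need -> act
altruistic = chain_reliability([0.95, 0.95])

print(f"indirect (egoistic) mechanism: {egoistic:.3f}")   # ~0.812
print(f"direct (altruistic) mechanism: {altruistic:.3f}")  # ~0.903
```

Whatever the exact numbers, the shorter chain always wins as long as each added step has any chance of failing, which is the shape of Sober's reliability argument.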
Genuine altruism, far from being a sentimental anomaly, may be a more efficient evolutionary solution than egoism. It's not that nature is good. It's that a plurality of motivational mechanisms works better than a monoculture.
The trap of simplicity
Sober reaches a similar conclusion from an entirely different angle: his work on parsimony. What we call "Occam's razor" — the preference for the simplest explanation — is one of the most invoked and least understood principles of thought.
Sober's central thesis is that parsimony is not a single principle with a single justification, but a family of distinct principles that operate differently depending on context. When a physicist says they prefer a simpler theory and when a biologist says they prefer a more parsimonious phylogenetic tree, they are doing conceptually different things, even though they use the same word.
Most incisive is his argument against parsimony as a metaphysical principle. There is no reason to believe the universe is inherently simple. Parsimony is an epistemic tool, not a truth about reality. When preferring the simple is rational, it's because the simple turns out to be more probable given the evidence — and that depends on the structure of the problem, not on a universal law.
Sober illustrates this with the problem of overfitting. If you have a dataset, you can always find a model complex enough to pass exactly through every point. But no one believes that model is the best explanation. We prefer simpler models even if they fit the observed data less well, because excessively complex models capture noise along with the signal and predict the future worse. Here parsimony has a precise justification. But that justification is context-specific: it cannot simply be exported to other domains.
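The overfitting point can be reproduced in a few lines. The data-generating process below (a line plus noise) is an assumption chosen for illustration: a degree-9 polynomial fits the ten training points almost exactly, while a straight line tolerates some training error.

```python
# Overfitting in miniature: a degree-9 polynomial can pass (nearly) exactly
# through ten noisy training points, yet predict held-out points worse than
# a straight line. The data-generating process is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=x.size)  # true signal: a line plus noise

x_new = np.linspace(0, 1, 50)   # held-out inputs
y_new = 2.0 * x_new + 1.0       # the noiseless truth we want to predict

results = {}
for degree in (1, 9):
    coeffs = np.polyfit(x, y, degree)  # degree 9 interpolates all ten points
    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    results[degree] = (train_mse, test_mse)
    print(f"degree {degree}: train MSE {train_mse:.5f}, test MSE {test_mse:.5f}")
```

The expected pattern is the one Sober describes: the complex model wins on training error and loses on predictive error, because it has fit the noise along with the signal.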
The lesson is subtle but profound: invoking "the simplest explanation" is not a shortcut to truth. It is a tool that works under certain conditions. And when those conditions don't hold, the simple is not just insufficient; it can be actively misleading. Reducing human motivation to "self-interest" is, in a sense, the psychological equivalent of underfitting: a model so simple it no longer captures the signal.
The self-fulfilling prophecy, in organizations
All of the above could remain academic philosophy if it weren't for the fact that these assumptions become design. And design constructs reality.
Companies often operate on a kind of institutionalized psychological egoism, inherited from neoclassical economics and agency theory. The baseline assumption is that people maximize their self-interest. The entire apparatus of individual incentives, bonuses, KPIs, and performance reviews is built on this assumption. If you want someone to do something, you have to make it worth their while.
Sober would invite us to ask: what if this assumption is as problematic as universal psychological egoism? What if people also have plural motivations — self-interest, yes, but also a genuine desire for the project to work, for the team to be well, to do work that matters?
The consequences of assuming one thing or the other are enormous. If you assume universal egoism, you design control systems: monitor, measure, incentivize, because without the right incentive no one will do the right thing. This produces high-friction organizations where every interaction becomes an implicit negotiation.
And it has a perverse property that Sober would appreciate: it tends to produce exactly the behavior it predicts. If you design a system where the only signal of value is individual performance, you attract and retain people who optimize for individual performance. Those who had plural motivations discover that those motivations are not only unrewarded but sometimes punished. The system selects against genuine cooperation and then says: "See? People only respond to incentives."
In biology this is called niche construction: the organism modifies its environment, and the modified environment selects for a certain type of organism, which in turn modifies the environment in the same direction. Organizations build cultural niches that select for the type of person who reinforces that niche. The cynical suspicion — "they helped the new hire because they want to look good to the boss," "they shared their code because they want people to depend on them" — is as unfalsifiable as psychological egoism, and produces the same corrosive effect: it destroys the possibility that genuine cooperation is recognized as such, and eventually destroys cooperation itself.
Some companies have experimented with systems built on the pluralist assumption instead, and the results tend to be revealing. It's not that self-interest disappears, but motivations emerge that the previous system was actively suppressing. Altruism isn't created from nothing. You simply stop destroying what was already there.
Beware of reductionist systems
Sober's intellectual move is always the same: where tradition sees a single principle, he sees many. Where a universal explanation is sought, he shows that explanations are local and contextual. Where simplicity is invoked as a reliable guide to truth, he shows it is reliable only under specific conditions.
He doesn't destroy parsimony: he understands it. He doesn't deny egoism: he positions it as one among several real motivations. He doesn't defend altruism as faith: he defends it as an empirical hypothesis more robust than the reductionist alternative.
If natural selection, blind and purposeless, was capable of producing mechanisms where the wellbeing of others is a direct end — because they turned out to be more efficient than the alternative — perhaps we should pay attention. Not out of idealism. Out of realism. The human motivational repertoire is richer than our favorite models capture. The ways of explaining the world are more diverse than Occam's razor, misunderstood, suggests. The ways of organizing collective work are more varied than the egoistic assumption allows us to imagine.
But the dominant business culture — especially the one radiating from Silicon Valley — pushes hard in the opposite direction. The narrative is familiar: people need pressure to perform, without someone behind them they won't move, talent is measured in individual "high agency," empathy is a luxury that doesn't scale, and whoever can't handle the intensity simply isn't cut out for it. It's a narrative that presents itself as tough-minded realism but, viewed through Sober's tools, is pure reductionism disguised as pragmatism. It reduces human motivation to fear and ambition. It reduces leadership to surveillance and pressure. It reduces organizational design to a problem of incentives and punishments.
And like all reductionism, it doesn't just describe reality poorly: it impoverishes it. Entire organizations built on these premises end up expelling exactly the kind of motivation that would make them more robust. The engineer who directly wants the product to be good responds better to the unexpected than the one who evaluates every decision against their performance review. The team where people genuinely care about the outcome is more resilient than the one that depends on each individual calculating whether contributing is worth it. But these direct mechanisms, more reliable, more efficient, don't survive in environments designed to be suspicious of them.
It's not about believing that people are good. It's about not designing systems — not models, not theories, not organizations — that guarantee they behave as if they weren't.