What is the good life? From that, what is the right way to live?
One might propose that both questions have more or less the same answer. That is: the good life is having something we are going to call “happiness,” and the right way to live is to cause the most happiness for the most people (or, in modern interpretations, for the most sentient animals).
Intuitive and plausible. And yet, there is a centuries-old fashion among ethicists of not being utilitarians. Professional ethicists, except for a few pop stars like Peter Singer and Derek Parfit, tend to belong to more obscure and esoteric schools, usually deontological ones (that is, schools holding that whether an action is ethical turns on moral laws about the actions themselves, not on their consequences). The position de rigueur at the moment is probably that of the care ethicists, but in the longue durée care ethics is just one more wave bashing against the rocks of That Old Bitch Kant. There have been centuries of such waves, and more will come in the years ahead, but the rocks will abide.
I think the two main reasons that few ethicists are utilitarians are these:
- Everything interesting to say on the subject has already been said.
- If utilitarianism is true, then a philosopher’s training is not very useful for answering ethical questions.
I am not, myself, a deontologist. My main objection is that, usually, when we want to think through whether a deontological position works, we think about whether the consequences of people following it would be good, which kind of defeats the purpose. My second is that the whole point of actions is that they cause effects in the world; if so, the effects might as well be what we grade actions by. My third is that each action can be classed in a number of ways, and it’s hard to keep the classifications from affecting which laws apply to which actions, yet we probably don’t think that different classifications of our actions should yield different moral verdicts. Some deontologists actually have neat solutions to this last problem, and a few have neat answers to the second. In the end, there are many deontologies that are internally consistent, and many of them match a number of our ethical intuitions, but they do not sway me personally. So from now on we’re going to take a utilitarian line.
I want to launch an alternative critique of utilitarianism: even taking utilitarianism for granted, it is very unlikely that utilitarian arguments can answer all of life’s ethical questions, because that would require happiness to have a very unlikely mathematical form.
Pretend we have so fantastic an understanding of the brain, and such reliable reporting on satisfaction, that we are able to totally rank how happy any individual is under any given condition.
In all likelihood, such a ranking is impossible, because there are multiple kinds of happiness that can’t be perfectly compared. There is satisfaction, and excitement, and love, and wonder, and pride, and awe. Granted, these may all just be different triggers for the release of dopamine. But the brain is complex, and not everyone loves heroin. Happiness is not a chemically simple phenomenon, so it is probably not morally simple either; it involves a balance of different factors which are difficult to compare directly.
However, it’s possible these happinesses aren’t as incomparable as they seem. After all, we compare them against each other every time we make a decision. Our decisions are affected by considerations other than our estimates of how happy they would make us, but it’s possible that, with all the science done, you could measure and grade all the many happinesses, ironing out beliefs about effects, expectations of future consequences, considerations of others, and all our pathologies of pride, guilt, and shame. Of course, it’s also possible that you’d pull away the cloth and find that happiness itself had been ironed away as well. Eh.
Multiple happinesses wouldn’t make utilitarianism useless. There would still be lots of questions utilitarian analysis could answer, because there are lots of situations in which one is less happy in every single way, such as physical torment, or when guilt and grief sour all joys. But if there were multiple happinesses, utilitarian analysis couldn’t help us judge the happinesses against each other.
However, that’s an old critique, and not the most interesting. Again, we constantly make decisions trading one happiness against another. So instead, suppose that the total-happiness-ranking is theoretically possible. Suppose that through neuroscientific genius, we have composed a happiness index that can measure the joy in any given soul, from the socialite to the scientist to the centipede, and spit out a number ranking how happy they are.
In such a case, we might have a perfect understanding of The Good Life and yet still not have the answers to most ethical questions. The reason is that a ranking is not enough.
What we would want, to have a solid utilitarian answer to every ethical question, is a ranking of the happiness not only of individuals, but of groups.
And, to get a ranking of the happiness of groups, happiness would have to have not only an ordering, but ratios. We would need to be able to say not only that one person was happier than another, but that they were twice or three times as happy. And not only would we need to be able to say this, we would need to be able to say it in one uniquely true way, as opposed to all the other ways that are false.
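In the language of measurement theory (my gloss; ethicists don’t usually put it this way), the demand is for a ratio scale rather than a merely ordinal one. An ordinal scale survives any increasing relabeling of its numbers, while a ratio scale survives only positive rescalings, and only the latter keeps sums and ratios meaningful:

$$\text{ordinal: } u \;\sim\; f(u) \text{ for any increasing } f, \qquad \text{ratio: } u \;\sim\; c\,u \text{ for any } c > 0.$$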
Why do we need ratios? Well, as a good utilitarian, I will often want to say that a decision is wrong even if it makes someone happier. That will be because it makes someone else sad, and because we think the sadness outweighs the happiness. For these comparisons to be well-defined, we will need ratios of happiness, not just a ranking, at least most of the time.
Not always! Suppose I am a jolly, carefree creature, and my brother a melancholy and wretched sort. One day my brother, simply for pleasure, shoots me in the back. This produces in him a mere fleeting moment of joy, in which he is not even as happy as I had been. In turn, I am left crippled and miserable, even sadder than he was before his savage escapade.
In such a situation, we could say that the action was wrong, even though it made one person happier and another sadder, using only a ranking of happinesses, because the action raised my brother in the ranking less than it lowered me. But the very contrivance of the situation demonstrates how rarely a simple ranking of happinesses will tell us whether an action is right.
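The contrivance can be made precise (a sketch in made-up notation, not anything from the literature). Say my brother’s happiness moves from level $b$ to $b'$ and mine from $m$ to $m'$, with the ranking $b' \prec m$ and $m' \prec b$: each new level sits below a distinct old one. Then for every increasing utility assignment $u$ consistent with the ranking,

$$u(b') + u(m') \;<\; u(m) + u(b),$$

so every possible cardinalization agrees that total happiness fell. A bare ranking settles a case only when this sort of dominance holds.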
Much more often, the morality of an action will be unclear. Suppose I love to be loud in restaurants. Just scream. Screeeeeeeeaaaaaaam. If I were instead tactfully quiet, I would be sadder than anyone else in the restaurant, and at full volume I am as happy as anyone in a restaurant can be. However, my screaming ruins the evening for many other customers, though they never become as sad as I would be if forced to be quiet.
Should I pipe down? Unless we can define ratios of happiness, the answer is unclear. Yes, I have moved farther up the ranking than any one diner has fallen. Yet I have lowered many more people (let’s say fifty) than I have raised (just myself). Since I can’t say whether my squeals and squawks raised my happiness fifty times as much as each diner’s happiness was lowered, I can’t say whether the action was immoral or not.
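To see how completely the ranking underdetermines the verdict, here is a toy sketch in Python (every number in it is invented for illustration). Two scales assign numbers to the same four ranked levels, quiet-me < disturbed diner < undisturbed diner < loud-me, and they disagree about whether my screaming raises total happiness:

```python
# Two order-preserving cardinalizations of the same happiness ranking.
# Ranking assumed by the story: quiet_me < disturbed < undisturbed < loud_me.
N_DINERS = 50

scale_a = {"quiet_me": 1, "disturbed": 2, "undisturbed": 3, "loud_me": 4}
scale_b = {"quiet_me": 1, "disturbed": 100, "undisturbed": 101, "loud_me": 1000}

def total_happiness(scale, i_scream):
    # My own level, plus fifty diners who are disturbed only if I scream.
    me = scale["loud_me"] if i_scream else scale["quiet_me"]
    diners = N_DINERS * (scale["disturbed"] if i_scream else scale["undisturbed"])
    return me + diners

for name, scale in [("A", scale_a), ("B", scale_b)]:
    quiet, loud = total_happiness(scale, False), total_happiness(scale, True)
    print(f"Scale {name}: quiet={quiet}, loud={loud} ->",
          "scream away" if loud > quiet else "pipe down")
```

Scale A says pipe down (151 beats 104); scale B says scream away (6000 beats 5051). Both scales respect the ranking exactly; they differ only in the ratios, which is precisely the information a ranking doesn’t carry.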
And the issue isn’t just about weighing dispersed losses against concentrated benefits.
Suppose your dying grandfather’s only wish is that you spend a day with him recording his stories and opinions. You’ve heard his stories and opinions before, so spending a whole day listening to them is kind of a bummer. Even though it brings your grandfather great joy to have his final wish fulfilled, he is not made quite as happy as you would have been if you hadn’t had to have this bummer of a day. Even with you listening to him, he’s still dying, after all. However, you are not made nearly as sad from your bummer of a day as he would be about dying unfulfilled and alone.
Even in this case, where the morality seems intuitively clear, a purely ranking-based utilitarian analysis cannot tell us whether helping your grandfather is the right thing to do. Because his rise does not carry him past where you began (a dying man, no matter how fulfilled, is not as happy as a carefree twentysomething), no dominance argument of the sort that worked in the shooting example applies, and we can’t say whether his gain was greater than your loss.
To be able to say that, we would need to have ratios of happiness. In other words, happiness would have to be cardinal, rather than ordinal. If we could say that your grandfather would go from 1/10 as happy as your carefree self to 9/10 as happy, while you only lost 15% of your happiness, then we could confidently assert that helping your grandfather is the right thing to do.
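Spelled out with those made-up numbers, on a scale normalized so that your carefree self sits at 1:

$$\Delta = \underbrace{(0.9 - 0.1)}_{\text{his gain}} \;-\; \underbrace{(1.0 - 0.85)}_{\text{your loss}} = 0.8 - 0.15 = 0.65 > 0,$$

so the visit raises total happiness. Notice that this verdict survives multiplying the whole scale by any positive constant, but not an arbitrary order-preserving relabeling; the cardinality is doing all the work.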
But to have uniquely determined ratios of happiness would be extremely odd.
Little of nature works that way. In science, the way we come to treat a phenomenon as “real” and “understood” is by fitting it into equations. Force = Mass * Acceleration. Mass and acceleration are fairly easy to measure, so we can measure force by proxy, and then we can use force in all kinds of useful equations. This is why we say that force, as a phenomenon, is real and more or less understood.
But instead of talking about force, I could talk about shmorce, where shmorce is simply the cube root of force. I could then take all of my equations regarding force and substitute shmorce³ where force was previously. All the math would be the same.
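Concretely, with $S \equiv F^{1/3}$ for shmorce, Newton’s second law can be written either way:

$$F = ma \quad\Longleftrightarrow\quad S^{3} = ma.$$

Every measurement that confirms the one confirms the other; nothing in the data privileges force over shmorce as the quantity nature “really” traffics in.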
What we would need, to say that happiness has ratios, is a set of empirically verified equations in which something like force clearly represents happiness and something like shmorce clearly does not.
But this is frankly implausible. In all likelihood, it is not merely that we cannot determine which of many different indices should represent happiness in our equations; it is that there is no answer to the question of which index is correct.
It would be very surprising if there were a way to correctly say that X is twice as happy as Y, rather than three or four times. Of course, a person might be willing to give up two hours of one happiness for one hour of another, but not three hours or four. But that’s just their decision about what they would prefer. A person only really knows one happiness at any given time (their current happiness), so we haven’t any reason to think that people’s preferences reveal anything fundamental about what happiness is, deep down. The same problem applies to how much money you would spend on something. Perhaps we can say, “Larry would pay $10 to have the karaoke room to himself, while Curly and Moe would each only pay $4 to share it.” But we cannot say, “Larry would pay $10 to be one person getting the karaoke room and another person not getting it, while he would only pay $4 apiece to be two people sharing it.” Perhaps money, like time, cannot measure happiness, because it has diminishing marginal returns.
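To see how diminishing returns poison the measurement, suppose (purely for illustration) that the happiness money buys grows with the logarithm of one’s wealth. Then ten dollars from a Larry who holds $1,000 carries less happiness than four dollars from a Curly who holds $20:

$$\ln(1010) - \ln(1000) \approx 0.010 \;<\; \ln(24) - \ln(20) \approx 0.182.$$

And of course, writing down that logarithm already presupposes a cardinal happiness scale, which is the very thing in question.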
But if we can’t use money to measure happiness, and we can’t use time, then money has diminishing marginal returns relative to what? Will we ever have an answer? And how would we know that answer was uniquely right?
I think the truth is more likely that utilitarian analysis can answer some ethical questions, and resolve some ethical dilemmas, but that on other issues it simply does not speak, because happiness likely does not have the right mathematical form to tell us any more.
For the remaining questions, we are left with other ethical concerns: fairness, desert, equity, perhaps piety or kindness. Happiness is only so important. There is more to the moral life.