# Utilitarianism
:::: {.centerpic data-latex=""}
![](img/crowd.jpg){width="50%" alt="illustration"}
::: {.centercap data-latex=""}
[Free-Photos](https://pixabay.com/users/free-photos-242387/){target="_blank"} at Pixabay.com
:::
::::
:::{.epigraph data-latex=""}
Actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.\
\
---John Stuart Mill, Utilitarianism
:::
**_So far we have examined_** a number of approaches, all of which are popular, but all of which have proven to be unsatisfactory in one way or another. This may lead you to wonder whether all of this philosophical analysis is really such a good idea. Shouldn't we have some results by now, after all of this investigation? Won't philosophers be capable of poking holes in every theory that comes along with their hyper-critical methods of analysis? The answer is a definite "yes and no." Although no ethical theory that has been developed is entirely without problems and criticisms, there are a couple that seem pretty good, even to us skeptical philosophers. In this chapter we will consider one of these more promising theories. It is known as utilitarianism and was developed in its most explicit form in the 19th century by two British philosophers, Jeremy Bentham and John Stuart Mill. To situate this theory in relation to the theories we have already examined, consider the following case:
>Fred is on his way to a job interview and happens to be running late. On his way to the bus station he notices a small child apparently drowning in a pool. He quickly glances at his watch and realizes that if he does not hurry, he will miss his bus and will be very late for the job interview. Fred decides not to stop and help. He catches the bus and not only makes it to the job interview on time, but gets the job.
**_Something is clearly_** wrong here. Fred should have stopped to help, even if it meant being late for the job interview. The simple reason he should have stopped was that someone else was in need of help. And the fact that this person was in need overrides his personal interest in getting to the interview on time. Now, in spite of how obvious this might seem, none of the theories we have considered so far can really account for this simple moral intuition. And this does not make them look very good as theories about our ethical obligations. Consider what each would say about this case:
- Relativism: "Stop and help if your culture values helping strangers."
- Divine Command Theory: "Stop to help if and only if God commands you to help strangers."
- Natural Law Theory: "Stop to help if and only if it is part of human nature to help others."
- Psychological Egoism: "Helping would be selfish anyway, so do whatever suits you best."
- Ethical Egoism: "Don't help since it is best to let people help themselves."
- Social Contract Theory: "Stop to help because it is in your long term best interests to help others."
**_None of these approaches_** can account for what seems compelling about this case, that is, that we simply should help in cases when someone is desperately in need and it would cost us comparatively little to help them. Their interests should count for us at least this much. This, in a sense, is the moral intuition that utilitarianism tries to account for. It is an explicit attempt to justify our normal, everyday, moral sense that we should take other people's interests seriously. Our systematic analysis of the varieties of egoism gives us a theoretical motive for considering this point -- if we can't defend selfishness, shouldn't we at least try to see whether the case for moral concern for others does any better? But our humanity should give us a deeper reason for examining the nature and justification for this very ordinary feeling of moral concern that we should feel and expect others to feel as well.
## Happiness and The Highest Good
**_Utilitarianism is one_** of the major contemporary philosophical theories about the nature of and justification for ethical principles. It has its roots in the writings of the Scottish philosopher David Hume (1711-1776), although the name "utilitarianism" is most closely associated with the works of Jeremy Bentham (1748-1832) and John Stuart Mill (1806-1873).
Utilitarianism is an attempt to provide a rational basis for ethical decision making. It focuses on the consequences of actions and measures their value by the amount of good they do for whoever it is that is affected by them. As Mill puts it,
> Actions are right in proportion as they tend to promote happiness; wrong as they tend to produce the reverse of happiness.
**_"Whose happiness?"_** you may ask. "Everybody's!" the utilitarian would answer. The sophisticated moral theory developed by Bentham and Mill in defense of this simple point is what we will be examining further here.
**_Both Bentham and Mill_** aim to make ethics into nothing less than a science of human happiness. This science starts out with a distinction that is important for ethical theories in general, a distinction between different types of value. Some things have a value only because they help to bring about other things of value. For example, a car is valuable to me since it helps me to get around easily. If I live in a place where owning a car is not necessary to get around, or it becomes prohibitively expensive because I cannot afford parking, gas or maintenance, then it loses this value. This type of value is known as "instrumental value." Something has instrumental value if it allows me to realize other goals I have. On the other hand, some things just have value in themselves. For example, I may want a higher salary since the extra money will enable me to rent a bigger apartment and this in turn will make me happier. But my quest for happiness itself is not for the sake of anything further. Being happy is something I want for its own sake and not for the sake of something else, unlike the bigger apartment or the salary increase, both of which I want for the sake of something else. Something that has value on its own, without being valuable for something else, has "intrinsic value," as opposed to instrumental value. Utilitarianism starts with the claim that the only thing that has intrinsic value is happiness. Everything else of value is valuable to us to the extent that we believe it will help us to attain happiness. Happiness alone is something that is desirable for its own sake. Thus it may make sense to ask someone who takes on an extra job to earn more money why she needs or wants the extra money, but it makes no sense to ask someone why they want more happiness out of life. It is the one thing we are all striving for, even if we each have our own unique perspectives on exactly what will make us happy.
**_Now the question_** arises, "what is the best way to attain happiness?" Is there a reliable, rational, "scientific" way to increase our happiness? Both Bentham and Mill answer that there indeed is a scientific way to attain happiness, and it is called "maximizing utility."
**_Each of us_** has his/her own idea of what will lead to happiness, but in general we are all out to maximize happiness. Now every pleasurable thing that we strive for has a cost, including time, money, effort, and/or opportunity costs. But both benefits and costs are uncertain. So the rational way of attaining happiness is by getting as much satisfaction for as little cost as possible given the uncertainties involved. This is "maximizing utility," or the rational approach to happiness.
**_The rational way_** to attain happiness is to strive to maximize utility by taking into account the costs and benefits (as well as their respective probabilities) of each possible choice we may have in a given situation. So far, we have only been speaking of the way each of us might satisfy his or her own selfish desires. Both Bentham and Mill argued, however, that this account can be the basis of ethics as well. If each of us is out to get the most pleasure for the least cost, then the ethical thing to do would be the course of action that enables the most people to satisfy their personal interests at the least cost. Ethical choices are choices that lead to the greatest overall utility.
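This kind of cost-benefit reasoning can be sketched as a small calculation. The example below is a toy illustration, not anything Bentham or Mill actually proposed; the options, costs, benefits, and probabilities are all invented for the sake of the sketch:

```python
# Toy expected-utility comparison: all numbers here are invented.
# Each option has a sure cost, a possible benefit, and the probability
# that the benefit actually materializes.
options = {
    "see a movie": {"cost": 10, "benefit": 15, "p": 0.9},
    "take a road trip": {"cost": 40, "benefit": 80, "p": 0.5},
    "stay home and read": {"cost": 0, "benefit": 5, "p": 1.0},
}

def expected_utility(opt):
    # Probability-weighted benefit minus the certain cost.
    return opt["p"] * opt["benefit"] - opt["cost"]

best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # "stay home and read": 5.0 beats 3.5 (movie) and 0.0 (road trip)
```

The utilitarian move is then to run this same comparison, but with the sums taken over everyone affected by the choice rather than over my own satisfactions alone.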
### Implications of utilitarianism{-}
**_So what, then,_** are the implications of utilitarianism? It has a number of positive implications. First, it makes ethics relevant to the real world, by defining ethical actions as those that have the best overall outcomes. Being ethical would not be a matter of personal conscience alone, but would be something that would be good for everyone. Second, the ethical nature of an action or a decision is something that could be measured. This idea was particularly attractive to Jeremy Bentham -- he was a strong advocate of legal reform and used as his criterion for evaluating laws the question of whether or not a given law was beneficial or harmful overall. Third, making ethical decisions would not require anything other than the ability to figure out how much our actions impact the interests of others. All of us are equally capable of adding up the expected good and bad results of our actions and deciding accordingly.
**_On the other hand,_** there are negative implications to the theory as well. We will be considering these in greater detail in the next section and in the next chapter, since the final position we will look at in the theoretical part of the course, Kantian ethics, is an explicit attempt to avoid the pitfalls of the utilitarian approach. Three problems stand out. First, since utilitarians claim that the moral worth of an action depends on its having better consequences than the alternatives, we may wonder how we can ever really determine these consequences when we are making a decision. Of course, some actions have obviously bad consequences, like tossing a burning cigarette butt into a dry field of tall grass. But can we really reliably predict the consequences of our actions in the majority of cases? After all, moral problems seem to arise in just those cases where it is not entirely clear what the best thing to do really is. In addition, as many large scale engineering projects have shown, determining whether the consequences of a decision are for the best or not in part depends on when you ask the question. For example, the building of dams as sources of cheap power may seem like nothing but a win-win situation, but the long term environmental consequences of such projects may end up costing more than the proponents of the project could foresee.
**_Second,_** since utilitarianism depends on comparing the benefits and costs of different actions on everyone who is affected by those actions, it assumes that it is possible to compare the impact of one's actions on different people. But how can we do this? Is there one neutral standard of comparison that might reveal that, for example, person A gets 3 units of utility from action X and 5 units of utility from action Y, while person B gets 5 units of utility from action X and only 1 unit of utility from action Y?
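To see why this matters, suppose for the sake of argument that such a neutral "unit of utility" did exist. The utilitarian comparison would then reduce to simple sums, as in this sketch using the numbers from the example above (which are, of course, stipulated rather than measured):

```python
# Stipulated utility units for two people and two actions -- the whole
# point of the objection is that no one knows how to measure these.
utilities = {
    "X": {"A": 3, "B": 5},
    "Y": {"A": 5, "B": 1},
}

totals = {action: sum(per_person.values())
          for action, per_person in utilities.items()}
print(totals)  # {'X': 8, 'Y': 6}: on these numbers, action X wins
```

The arithmetic is trivial once the units are granted; the objection is that nothing tells us how to arrive at the numbers in the first place.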
**_Finally,_** we have the problem that, if utilitarianism is correct, and the moral worth of an action is to be measured by the amount of good it does in the world, this seems to do away with the concept of rights. As long as an action leads to a sufficiently good outcome, anything goes. For example, suppose framing and executing an innocent person for murder would lead to great happiness for the whole of a community. As long as we could show that the benefits to everyone else outweigh the costs to this innocent person, a utilitarian would not have a problem with this. But, what about the rights of an innocent person not to be punished for a crime he or she did not commit? We will return to this question in the next chapter.
**_In spite of some_** of the troubling consequences of utilitarianism, however, it remains a popular theory. Part of the reason for its popularity is the plausibility of the argument for this view. It is to this that we now turn.
## Why should I care?
**_The argument for utilitarianism_** is straightforward and might go as follows:
::::{.center data-latex=""}
:::{.argument data-latex=""}
Everyone is out for the same thing -- happiness.\
The rational approach to happiness is maximizing utility.\
All of our interests count equally.\
\
Thus we should all strive to maximize overall utility.
:::
::::
**_That is,_** given a standard set of assumptions about human motivation and rationality, and adding an explicitly ethical premise (the third), we get the result that we should all strive to get the best possible outcome for the greatest number of people. Utilitarians claim that this argument shows why it is that we should use the good that our actions do for whoever it is that is affected by them as a measure of their ethical value.
**_So far,_** in our description of the position of utilitarianism, we have seen the grounds for buying the first two premises. The first of these claims is a straightforward, and pretty obvious, description of human behavior. The second depends on a pretty clear and uncontroversial theory of rational action. Acting rationally has to involve taking into account the costs of doing something and the probability of actually accomplishing it. So far so good, but these premises do not yet enable us to go beyond egoism. They are perfectly compatible with a complete lack of concern for others. The third premise goes well beyond anything that an egoist would accept and clearly calls for a more involved defense. The whole weight of utilitarianism rests on this claim.
**_So far,_** we seem to be admitting that people only care about themselves and getting what they individually want. Yet ethics is supposed to be about concern for others, or at least taking others into account when you are making a decision. Utilitarians, however, insist that we can construct an ethics on the basis of what we have so far said about human motivation. So what we have to show next is that we should not only act with our own interests in mind, but also have a compelling reason to take others into account. We can't just assume that others count; we have to demonstrate that ignoring others' interests is just not a rational option.
**_Consider the following,_** somewhat absurd, example. Suppose I have 10 dollars and would like to entertain myself on a hot sunny afternoon. I come up with the following two possibilities:
1. I could buy a ticket to go see a Hollywood action movie.
2. I could use the 10 dollars to buy a six pack of beer and then go up to my roof where there happen to be some bricks left over from the construction of the building. My plan would be to hang out and relax, enjoying the breeze, while occasionally throwing a brick down on to the crowded street below.
**_In case 1_** I'd get some relief from the heat and a little relief from my boredom. But this relief would only last for an hour and a half, and I'd have to sit through yet another typical Hollywood movie in which the good guys get the bad guys, with all of the usual predictable car chases and cliff-hanging scenes and all of the usual dazzling but fake-looking special effects. In short -- it just doesn't seem worth the money.
**_In case 2_** instead of the fake explosions and chaos, blood and guts of a Hollywood movie I could see the real thing. Maybe there would be multiple vehicle accidents and perhaps even a gripping real life chase, involving me too. This certainly seems more cost effective, a way of getting the most for my entertainment dollar.
**_The only problem here_** is of course that option 2 requires that I assume that nobody else's interests, or lives for that matter, really count. In this scenario I am considering only my own interests and treating others as if their pain and suffering just didn't matter. The question is then, do I have any way to defend my actions? Can I possibly come up with good reasons for believing that my interests mean more than the interests of the people whose lives I am putting at risk for the sake of entertainment? It is not enough here just to insist that the people I am endangering wouldn't want me to endanger them. This is because I may genuinely not care what they think of my actions. I am acting selfishly here as a matter of fact. The real issue is whether I can in any way rationally defend my selfishness. Well, can I?
**_It seems like_** I can't. Of course I can act selfishly, but that's not the same as convincing others, who are potential victims of my selfishness, that I am justified in acting selfishly. But that seems very unlikely -- why should you, someone who has his or her own needs and interests, agree to let me endanger you unless you were getting something in return? But if I am selfishly endangering you, then you are not getting anything, or at least enough, in return, so you'd have no reason to go along with my selfish schemes. Thus I cannot defend my own selfishness. And if I cannot do this, then I must accept the third premise of the argument for utilitarianism: "All of our interests count equally."
**_Where is all_** of this leading? To the claim that we don't really have any good reason for denying that others count just as much as we do. Once we are convinced of this it is a short step to the basic principle of utilitarian ethics, otherwise known as "the principle of utility." This states that the right thing to do in any situation is to look at all of the available alternatives and choose the one that gets the most benefit for the most people, or that maximizes overall utility. All of morality boils down to this one simple principle: do whatever brings the most benefits and the least costs to the greatest number of people. On the account of utilitarianism I have been developing here, this principle follows from the fact that none of us really has any good reason for denying that others count just as much as we do.
**_This is perhaps_** not a surprising conclusion. After all, isn't accepting the idea that others' interests count just as much as ours do a requirement for looking at things from a moral point of view? If we cannot accept this, then we are simply not moral agents. Utilitarianism simply makes this idea explicit and defends it with a clear argument.
## Problems, Problems
**_The difficulties_** faced by utilitarianism are of two types -- technical problems that arise from its claim to be based on a scientific calculation of the costs and benefits of our actions and deeper questions about its status as a moral theory. We will look at each of these in turn in this section.
### Technical problems {-}
**_These issues_** were already briefly discussed in the last section, but they deserve further elaboration since they may end up being more serious than defenders of utilitarianism seem willing to admit. The problems I mentioned there were the problems of measuring and comparing happiness, as well as the problem of determining what exactly the outcomes of our actions might be. At first glance these may seem relatively minor, but it seems to me that they begin to call into question the very claim of utilitarianism to be a viable theory.
**_So the first question_** is this: if utilitarianism claims that we can determine the moral worth of an action by the results of that action, and measure those results by the amount of happiness produced, how exactly might we go about measuring that? Of course it may seem obvious that I know for myself whether I am happy or not, whether my needs have been met, whether I am better off than I previously was in a variety of circumstances. It may also seem to be the case that we can compare different people and determine that one person is happier or better off than another, at least in general terms. But that already suggests a problem for utilitarianism -- how can we get beyond the somewhat vague and indefinite comparisons we might make in such a way as to support the claim that the calculation of costs and benefits can truly lead us to an objectively valid measure of moral worth for any particular situation we might face? Economists and other social scientists frequently face similar issues and so fall back on a convenient stand-in for happiness -- dollar value. When they examine human decision-making from a scientific standpoint they often ask which of two alternatives people would pay more for, or how much would be required to compensate them for their efforts in a given scenario. This works well enough in many scenarios, but it seems to be limited to cases in which a monetary equivalent can be given, and there are many situations in which monetary value is a poor substitute for other more qualitative valuations we might make. For example, how might we compare the long-term but not particularly intense satisfaction of seeing one's children graduate from college to a shorter-term and more intense pleasure, like that one might get from going skydiving? Any attempt to find a single scale on which to make an objective comparison must seem completely arbitrary.
Hence Jeremy Bentham's claim to have developed a "felicific calculus" that would enable us to calculate the precise quantity of happiness produced by any action invites parody, something which Bentham himself unwittingly provided with his convenient verse mnemonic:
>Intense, long, certain, speedy, fruitful, pure --\
>Such marks in pleasures and in pains endure.\
>Such pleasures seek if private be thy end:\
>If it be public, wide let them extend.\
>Such pains avoid, whichever be thy view:\
>If pains must come, let them extend to few.^[[@benthamIntroductionPrinciplesMorals1970]]
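Bentham never actually supplied a formula to go with the verse. A toy version of what a "felicific calculus" might look like is sketched below; the scoring rule is entirely invented, precisely to show how arbitrary any such scale must be:

```python
# Invented "felicific calculus": Bentham gave no actual formula, so the
# scoring rule below is made up purely for illustration.
def felicity(intensity, duration, certainty, extent, pain=False):
    # Score one episode of pleasure or pain; pains count negatively.
    score = intensity * duration * certainty * extent
    return -score if pain else score

# Episodes an action is expected to produce:
# (intensity, duration in hours, probability, people affected, pain?)
episodes = [
    (5, 2.0, 0.9, 1, False),  # my own enjoyment
    (2, 0.5, 0.3, 3, True),   # mild annoyance caused to three bystanders
]
total = sum(felicity(*e) for e in episodes)
print(round(total, 2))  # 9.0 - 0.9 = 8.1
```

Every choice embedded in that function (multiply the factors rather than add them? weigh extent linearly?) is arbitrary, which is exactly what the parody points at.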
**_This brings up_** the related issue of how we might be able to compare the results of an action across different people. Do we just end up counting heads? Is there a survey we can give to everyone involved to determine how much people really enjoyed the results of our actions or not?
**_The general problem_** here of determining the precise payoff of our decisions gets even worse when we think about when that payoff even occurs. After all, the consequences of my actions continue to spread out in all kinds of ways, like the ripples on a pond after you throw a stone into it, and there seems to be no non-arbitrary way to determine when exactly the further consequences no longer need to be taken into account. It is not as if the consequences of our decisions simply stop being relevant after a certain definite point.
**_And finally_** in this vein, we may wonder how we can even tell what consequences our actions may have. It seems to me to be no accident that the classic "trolley problem," where we are asked to decide whether to throw a switch that leads to the death of one person on one track as opposed to doing nothing and causing the death of more than one person on another, is cast on a railroad, where a speeding train car is locked into its trajectory. In the real world there are no such well-defined and pre-determined alternatives; instead we have to make at best educated guesses about what might happen as a result of our decisions.
### Deeper questions{-}
**_The deeper questions_** that utilitarianism faces have to do with its fundamental claim that we can and should define what is right in terms of what is good. Well, can we, and should we? As we will be seeing in more detail in the next chapter, the philosopher Immanuel Kant argues that the answer is no. In the absence of his particular arguments about why this is the case, we can here at least appeal to our moral intuitions. This isn't really a definitive proof that utilitarianism is wrong, but it does at least suggest that we need to look at things more carefully, which Kant will offer a way of doing.
**_The big worry here_** is captured by the question, "Can the ends ever justify the means?" Utilitarians answer in the affirmative -- given good enough outcomes, the pursuit of the greatest happiness for the greatest number of people may in certain cases lead us to endorse doing things that seem to be morally dubious. Should we ever risk the lives of innocent people in order to accomplish a "greater good?" Well, utilitarians might very well answer in the affirmative if they consider the payoff to be big enough. But then, if we can't really determine how big the payoff really is, how can we say when this might be the case? Are we really justified in causing real harm in the interests of avoiding even worse but merely hypothetical results if we acted differently? Since we have no real way of rewinding the tape and playing the scenario again with a different choice at the crucial moment, we are basically reducing the morality of any given decision to something that is basically unknowable -- what would have happened if things had been otherwise. Real life examples of this are easy to find, and it always must seem at least a little suspect to offer as a response to the victims of our actions, "Trust me, the outcome would have been much worse if I had done this instead of that."
## Slideshow Summary
:::{.slideshow data-latex="Here is a slideshow summary which can be \href{https://gwmatthews.github.io/ethics-slideshows/08-phl210-slides.html}{viewed online}, \href{https://gwmatthews.github.io/ethics-slideshows/pdf/08-phl210-slides.pdf}{downloaded} or \href{https://gwmatthews.github.io/ethics-slideshows/pdf/08-phl210-handout.pdf}{printed}."}
<iframe src="https://gwmatthews.github.io/ethics-slideshows/08-phl210-slides.html" width="100%">
</iframe>
:::
```{asis, echo=identical(knitr:::pandoc_to(), 'html')}
<div class="slideshow-buttons">
<div>
[download](https://gwmatthews.github.io/ethics-slideshows/pdf/08-phl210-slides.pdf)
</div>
<div>
[print](https://gwmatthews.github.io/ethics-slideshows/pdf/08-phl210-handout.pdf)
</div>
<div>
[new tab](https://gwmatthews.github.io/ethics-slideshows/08-phl210-slides.html){target="_blank"}
</div>
</div>
```
## Further exploration {-}
```{asis, echo=identical(knitr:::pandoc_to(), 'html')}
<br>
<br>
<hr>
**Editorial comments**
If you have a GitHub account and want to make any editorial suggestions, please do so here.
<script src="https://utteranc.es/client.js"
repo="gwmatthews/ethics"
issue-term="title"
theme="github-light"
crossorigin="anonymous"
async>
</script>
```