[personal profile] blimix
Disclaimer: I have not yet studied modern thought on Utilitarianism. (Gandhi took back the book he had lent me before I read it.) So this is simply from the perspective of a philosopher, not a philosophologist. For all I know, this has been said before.

Utilitarianism suffers from two weaknesses:

The first is derived from Utilitarianism's greatest strength: It aims directly toward maximizing goodness (or Quality, or whatever you'd like to call it). This is, I think, the secret aim of all other philosophies. Another philosophy might say, "Doing X is the best thing you can do." It makes X the ostensible goal. But it justifies that goal by stating that it is the "best"; i.e., the most good. It unquestioningly implies that whatever action is the most good is what one should do. The proponents of that philosophy may go to great lengths to prove that X is the best action, never noticing that the goal of X, and thus their ultimate goal, has been goodness the whole time.

But in driving right to the heart of the matter, Utilitarianism runs into the same difficulty already known to Buddhism (which also acknowledges goodness directly): The Good cannot be defined. This leaves Utilitarianism in a seemingly awkward position compared to most western philosophies. Its primary goal appears nebulous, while the others are fixed upon narrow paths whose goals, though incapable of being as worthy as goodness itself, are frequently better defined and thus more comprehensible.

Many Utilitarians circumvent this vagueness by using happiness as a measure of goodness. (That is, the quality of an action can be measured by the happiness it creates or preserves.) This is a generally useful schema, though it does leave them open to attacks (which are weaker than they seem [1]) based on the imperfect correlation between goodness and happiness. Rather than bow to this imperfection, John Stuart Mill expanded his definition of "happiness" to include other significant results of good actions, such as intellectual satisfaction, pleasure, and freedom from pain. But he got caught up in the problem of how to decide what is best when there is disagreement, because he still hadn't defined "good". So he fell back upon trust in the opinion of the majority, a method that has since been discredited by U.S. elections. No other easy schema could have sufficed: Any that could would have provided a simple definition for "good," which we already know to be an NP-hard problem.

The second weakness of Utilitarianism is its lack of guidance. It provides a goal, but no means to achieve that goal. Again, this makes sense, given that goodness itself varies with people's situations (internal and external). Even aside from that, no one lifestyle can maximize goodness for everybody, given that people have differing resources and abilities. Yet guidance is appealing in a philosophy, and occasionally even useful. It is certainly possible to compile sound philosophical advice, and even to tailor it to the needs of the individual. Such endeavors are Utilitarian simply by being worthy of doing. Yet the advisory content of such compilations seems to lie outside the scope of Utilitarianism. (On the positive side, this lack of completeness places it ahead of those philosophies and religions whose advice is useless or harmful. An independent Utilitarian is more likely to do good than an organization is.)

Footnote 1. For example, there's the "happiness pill" argument. If we had a cheap pill with no harmful side effects, which caused blissful happiness, would a society of people who spent their entire lives experiencing nothing but this drug-induced happiness be the best thing possible? The questioner knows perfectly well that it is not (even if they are unsure as to why), and hopes to lure the Utilitarian into admitting that a philosophy of happiness could lead to a result far removed from goodness. The simplest response is that such a society is not the best thing possible, on the absurdly simple grounds that it is not "possible" at all. For one thing, such a society is unsustainable; its populace would have its basic needs unmet. (Every counter to this objection that I have heard involves robots.) For another, such constantly euphoric people would never develop an emotional capacity for happiness; they would never move beyond the capacity for raw pleasure.

(no subject)

Date: 2005-06-29 12:39 am (UTC)
From: [identity profile] ratatosk.livejournal.com
I ran up against some of that while taking criminal law, when we were covering theories of punishment. I had previously sort of defaulted to something like utilitarianism, and regarded punishment for its own sake as sort of pointless, because it didn't affect anyone else.

Other reasons include deterrence, rehabilitation, and incapacitation. Deterrence isn't so great, because mostly what people find is that fear of getting caught is a way bigger deterrent than fear of what happens later, so carefully grading sentence lengths isn't necessarily doing anything for deterrence. There isn't a lot of evidence that we know what we are doing with rehabilitation or that anything we are doing now is particularly effective. And as to incapacitation, it's not only very hard to tell who would be a recidivist, but in many cases putting people in prison does not reduce total crime because you are just reducing competition for other criminals.

All three suffer from the problem of somebody making potentially incompetent decisions about what is good for society as a whole, the criminal, or the victim.

In light of that, "deontological" or "retributivist" motives for punishment seem almost more fair.

(no subject)

Date: 2005-06-29 03:48 am (UTC)
From: [identity profile] blimix.livejournal.com
Oh, they certainly are more fair. The problem, as you know, is that "fair" isn't the same as "good". "An eye for an eye leaves the world blind," according to Mahatma Gandhi, and I agree.

However, I am reminded of something (which I think [livejournal.com profile] leora pointed out to me): Being treated fairly doesn't really make people content, but believing that they are being treated fairly does. A system that tries to be fair, no matter how badly it fails in actuality, may still leave its citizens more content with it. Not that I think that the malinformed opinions of the populace about crime and punishment are a great guide to right action, mind you.

I'm interested in the subject, but still know little about what actually works. (In fact, this is my sister's realm of expertise.)

(Dictionary.com tells me that "malinformed" isn't a word. Yet I insist on using it here.)

(no subject)

Date: 2005-07-02 05:05 pm (UTC)
From: [personal profile] kirin
Re: Footnote 1:
What's wrong with involving robots? Do you think that most of our basic needs will *not* be met in some automated fashion within the next century? (I do find your further objection to this argument valid, though.)

(no subject)

Date: 2005-07-03 01:37 am (UTC)
From: [identity profile] blimix.livejournal.com
Science fiction doesn't frequently become science fact. Not only do I not believe that any significant portion of humanity could afford robot slaves to cater to all of their needs, I also don't believe that robots will be able to cater to all of people's needs until their AI is equivalent to sentience. Even granting the extremely unrealistic idea that this will be developed within the next century, we will then have a new race of sentient slaves: Hardly a shining example of the greatest happiness for the greatest number.

Automated catering to *most* basic needs doesn't satisfy the example; a human vegetable forever under the influence of a happiness pill will be incapable of satisfying *any* need beyond breathing.

Furthermore, any economy would collapse if all (or most) of its participants removed themselves from all interaction. So this race of euphoric people would be unsustainable. But I'm no economist, so I may be wrong on this point. Also, this point breaks down if the robots can meet all of their own needs as well as their masters'. But such robots would be arguably superior to humans, most of whom cannot even correctly care for a fish tank. So this exception presses the "slavery" point even harder.

Mainly, though, I'll go with the "extremely unrealistic" counterpoint.

(no subject)

Date: 2005-07-03 01:46 am (UTC)
From: [personal profile] kirin
I take it you're not a fan of Kurzweil's singularity theory, then. Of course, that theory doesn't really postulate the end of labor via robots as terribly likely - more likely is the end of labor via transforming humans into something that doesn't need much in the way of sustenance. Or rather, the distinction between a human and a robot might become moot. And at that point, a "happiness pill" might simply be code. Still not terribly appealing, though.

And you may be right that even in such a case, entropy would eventually break down the system if it got into a state where none of the entities that comprised it were paying any attention to anything other than a state of happiness.

(I may not be backing myself up terribly well here... way behind on LJ after a week on the road and trying to catch up.)