Nominating Oneself For The Short End Of A Tradeoff

I’ve gotten a chance to discuss The Whole City Is Center with a few people now. They remain skeptical of the idea that anyone could “deserve” to have bad things happen to them, based on their personality traits or misdeeds.

These people tend to imagine the pro-desert faction as going around, actively hoping that lazy people (or criminals, or whoever) suffer. I don’t know if this passes an Intellectual Turing Test. When I think of people deserving bad things, I think of them having nominated themselves to get the short end of a tradeoff.

Let me give three examples:

1. Imagine an antidepressant that works better than existing antidepressants, one that consistently provides depressed people real relief. If taken as prescribed, there are few side effects and people do well. If ground up, snorted, and taken at ten times the prescribed dose – something nobody could do by accident, something you have to really be trying to get wrong – it acts as a passable heroin substitute, you can get addicted to it, and it will ruin your life.

The antidepressant is popular and gets prescribed a lot, but a black market springs up, and however hard the government works to control it, a lot of it gets diverted to abusers. Many people get addicted to it and their lives are ruined. So the government bans the antidepressant, and everyone has to go back to using SSRIs instead.

Let’s suppose the government is being good utilitarians here: they calculated out the benefit from the drug treating people’s depression, and the cost from the drug being abused, and they correctly determined the costs outweighed the benefits.

But let’s also suppose that nobody abuses the drug by accident. The difference between proper use and abuse is not subtle. Everybody who knows enough to know anything about the drug at all has heard the warnings. Nobody decides to take ten times the recommended dose of antidepressant, crush it, and snort it, through an innocent mistake. And nobody has just never heard the warnings that drugs are bad and can ruin your life.

Somebody is going to get the short end of the stick. If the drug is banned, depressed people will lose access to relief for their condition. If the drug is permitted, recreational users will continue having the opportunity to destroy their lives. And we’ve posited that the utilitarian calculus says that banning the antidepressant would be better. But I still feel, in some way, that the recreational users have nominated themselves to get the worse end of this tradeoff. Depressed people shouldn’t have to suffer because you see a drug that says very clearly on the bottle “DO NOT TAKE TOO MUCH OF THIS YOU WILL GET ADDICTED AND IT WILL BE TERRIBLE” and you think “I think I shall take too much of this”.

(this story is loosely based on the history of tianeptine in the US)

2. Suppose you’re in a community where some guy is sexually harassing women. You tell him not to and he keeps doing it, because that’s just the kind of guy he is, and it’s unclear if he can even stop himself. Eventually he does it so much that you kick him out of the community.

Then one of his friends comes to you and says “This guy harassed one woman per month, and not even that severely. On the other hand, kicking him out of the community costs him all of his friends, his support network, his living situation, and his job. He is a pretty screwed-up person and it’s unclear he will ever find more friends or another community. The cost to him of not being in this community is actually greater than the cost of being harassed is to a woman.”

Somebody is going to have their life made worse. Either the harasser’s life will be worse because he’s kicked out of the community, or women’s lives will be worse because they are being harassed. Even if I completely believe the friend’s calculation that kicking him out will bring more harm to him than keeping him would bring to women, I am still comfortable letting him get the short end of the tradeoff.

And this is true even if we are good determinists and agree he only harasses somebody because of an impulse control problem secondary to an underdeveloped frontal lobe, or whatever the biological reason for harassing people might be.

(not going to bring up what this story is loosely based on, but it’s not completely hypothetical either)

3. Sometimes in discussions of basic income, someone expresses concern that some people’s lives might become less meaningful if they didn’t have a job to give them structure and purpose.

And I respond “Okay, so those people can work, basic income doesn’t prohibit you from working, it just means you don’t have to.”

And they object “But maybe these people will choose not to work even though work would make them happier, and they will just suffer and be miserable.”

Again, there’s a tradeoff. Somebody’s going to suffer. If we don’t grant basic income, it will be people stuck in horrible jobs with no other source of income. If we do grant basic income, it will be people who need work to have meaning in their lives, but still refuse to work. Since the latter group has a giant door saying “SOLUTION TO YOUR PROBLEMS” wide open in front of them but refuses to take it, I find myself sympathizing more with the former group. That’s true even if some utilitarian were to tell me that the latter group outnumbers them.

I find all three of these situations joining the increasingly numerous ranks of problems where my intuitions differ from utilitarianism. What should I do?

One option is to dismiss them as misfirings of the heuristic “expose people to the consequences of their actions so that they are incentivized to make the right action”. I’ve tried to avoid that escape by specifying in each example that even when they’re properly exposed and incentivized the calculus still comes out on the side of making the tradeoff in their favor. But maybe this is kind of like saying “Imagine you could silence this one incorrect person without any knock-on effects on free speech anywhere else and all the consequences would be positive, would you do it?” In the thought experiment, maybe yes; in the real world this either never happens, or never happens with 100% certainty, or never happens in a way that’s comfortably outside whatever Schelling fence you’ve built for yourself. I’m not sure I find that convincing because in real life we don’t treat “force people to bear the consequences of their action” as a 100% sacred principle that we never violate.

Another option is to dismiss them as people “revealing their true preferences”, eg if the harasser doesn’t stop harassing women, he must not want to be in the community too much. But I think this operates on a really sketchy idea of revealed preference, similar to the Caplanian one where if you abuse drugs that just means you like drugs so there’s no problem. Most of these situations feel like times when that simplified version of preferences breaks down.

A friend reframes the second situation in terms of the cost of having law at all. It’s important to be able to make rules like “don’t sexually harass people”, and adding a clause saying “…but we’ll only enforce these when utilitarianism says it’s correct” makes them less credible and creates the opportunity for a lot of corruption. I can see this as a very strong answer to the second scenario (maybe the strongest answer available there), although I’m not sure it applies much to the first or third.

I could be convinced that my desire to let people who make bad choices nominate themselves for the short end of tradeoffs is just the utilitarian justifications (about it incentivizing behavior, or it revealing people’s true preferences) crystallized into a moral principle. I’m not sure if I hold this moral principle or not. I’m reluctant to accept the ban-antidepressant, tolerate-harasser, and repeal-basic-income solutions, but I’m also not sure what justification I have for not doing so except “Here’s a totally new moral principle I’m going to tack onto the side of my existing system”.

But I hope people at least find this a more sympathetic way of understanding when people talk about “desert” than a caricatured story where some people just need to suffer because they’re bad.


377 Responses to Nominating Oneself For The Short End Of A Tradeoff

  1. orin says:

    Scott, the dilemma you are describing is one of the classic criticisms of utilitarianism, sometimes called “The magistrate and the mob”.

    • Scott Alexander says:

      I don’t think it’s the same as that at all.

      • hlynkacg says:

        Your example may be more general but it’s the same basic principle. The simple fact of the matter is that utilitarianism requires a substrate of deontology or virtue ethics to keep it from settling into a defect-defect equilibrium where everyone has to be on constant guard against being pushed in front of trolleys or having their organs harvested in the name of the “public good”.

        Granting that utilitarianism requires additional principles beyond “maximize utility/minimize suffering” is effectively a concession that utilitarianism has failed on its own merits, as any such rules/principles you add will put you back in deontology/virtue-ethics territory anyway.

        • Douglas Knight says:

          Sure, Scott is talking about rule utilitarianism. That addresses the Magistrate and Mob. This post is about what rule to choose.

          • Freddie deBoer says:

            Of course, rule utilitarianism just becomes another set of rules that are abstracted further and further away from the utility principle.

          • hlynkacg says:

            Of course, rule utilitarianism just becomes another set of rules that are abstracted further and further away from the utility principle.

            Correct, at which point they might as well be deontologists.

        • Frederic Mari says:

          What about the economic adaptation of utility? I.e. we are seeking to maximize utility but under constraints?

          In economics, the constraints are limited resources/scarcity. In utilitarian thinking, the constraints would be more ethically based but nonetheless rooted in the tribal/small-group survival instincts of our past.

          Hence, while harvesting the organs of a single healthy person to heal 5 sick ones could be deemed “utility maximizing”, it breaks a constraint around “your ingroup won’t put you to death randomly” and that’s based on the necessity of actually preserving an ingroup.

        • knockknock says:

          “…An equilibrium where everyone has to be on constant guard against being pushed front of trolleys … in the name of the ‘public good.’”

          Lol, yes, I was already thinking “trolley problems” with this one!

          If Utilitarians ever want a more high-profile media-friendly label for their movement, Trolley Pushers might do the trick. At the very least it would work for a Utilitarian softball team or garage band. Just imagine the T-shirts.

      • orin says:

        The point of the “magistrate and the mob” example hinges on the tradeoff between the punishment of an innocent versus the suffering of a mob that has “nominated itself to get the short end of a tradeoff” if the innocent is not punished.

        • 10240 says:

          In the version I’ve found on the internet, the trade-off is between executing an innocent, and the mob taking revenge on another, also innocent group. What is the version you refer to?

  2. Tulip says:

    I think my answer to the overall problem here is basically to gesture in the general direction of Ozy’s recent post on Silicon Valley Liberalism.

    Narrowing down on specific cases, though: the reason my instinct in 1 is not to ban the drug is that I don’t actually trust regulators to do bans in a net-positive-utility way, even if in a spherical-cow thought experiment banning the drug could potentially improve things. (And, indeed, when I run the relevant thought experiment as a pure thing, without taking into account concerns of knock-on effects, I find myself endorsing banning the drug.)

    And then the UBI case… I think my answer there is just, very straightforwardly, that I don’t expect the negative utility of the people who suffer in absence of jobs and who nonetheless choose not to get a job when on UBI to be anywhere near as great as the positive utility of the people who suffer under current having-a-job-is-basically-mandatory conditions and are freed from that. If someone were to convincingly argue that my expectations on that front were wrong, I might change my mind about the goodness of UBI; and, in the thought-experiment world where it in fact is the case that UBI produces net negative utility, I’m inclined against UBI for exactly that reason.

    • Peffern says:

      Wow, that article summed up my beliefs pretty much exactly, although I don’t live in Silicon Valley. I agree we need a better cluster word for these beliefs. I use “neoliberal” for myself but as Ozy says that is kind of terrible.

    • fluorocarbon says:

      I don’t think the views espoused in Ozy’s blog post are specific to Silicon Valley. They sound pretty similar to the Economist’s political stance. They call themselves “liberal” or “radical centrist.” I’ve also heard people say “neoliberal,” but like Peffern says, it’s usually as an insult.

      I think “liberal” is a good word to use and it might be better not to attach it to one particular place. Now that people on the left in the US are using the term “progressive” to describe themselves it makes things less confusing too.

    • Simon_Jester says:

      To specifically focus in on the moral question here…

      The biggest argument I can see here is that utility calculations tend to have high margins of error, especially when we’re talking about things that are poorly modeled. Things like what conditions will look like in a society where UBI has had 30-50 years to become ‘the new normal.’

      It’s relatively easy to crunch numbers and decide that having one person get run over by a trolley is objectively less bad than having two people get run over. Or that spending $1000 is objectively less bad than spending $1200.

      But if we find ourselves trying to calculate “how much harm results from people who live on UBI and are unfulfilled because they refuse to labor voluntarily” versus “how much harm results from people forced to labor involuntarily to survive…” We’re basically stuck flailing around trying to do anything better than get a Fermi estimate.

      Much of the time, there is no obvious calculable utility difference between two outcomes, because of the limits of our ability to calculate precisely enough. And at that point we have to break out some kind of heuristic principle, or we can’t make the decision at all.

      What Scott seems to be noting is that he, for one, is pretty comfortable with one of those heuristics being “it is generally less-bad to apply suffering to those who have nominated themselves to suffer through making some clearly bad decision or through ignoring a principle put into place to protect others, than to apply suffering to those who have not thus nominated themselves.”

      Like, in the magistrate-and-mob scenario, when a society with a strong commitment to human rights is faced with the choice to execute an innocent man for fear of a mob going berserk and lynching a bunch of other innocents… The usual solution is to deploy armed agents of the state to stop the mob, by lethal force if necessary.

      Because the innocent people haven’t done anything to nominate themselves for the “Congratulations, you just won a Very Bad Day” competition, but the mob has.

  3. noah says:

    “Here’s a totally new moral principle I’m going to tack onto the side of my existing system”.

    What could be more utilitarian than adding principles to utilitarianism that make it work better?

    To be a bit less flippant, why would we expect the most effective rules for maximising the happiness of organisms that are an accumulation of many different evolutionary paths, strategies, and selection pressures to be simple?

    I don’t know the name for it, but there has to be a fallacy that describes our desire to oversimplify complex systems, and expect there to be a reachable state of perfection. Taking as our first axiom that such states are inherently unstable seems like it resolves this whole discussion.

    • Tarpitz says:

      Couldn’t agree more. Absent a very naïve kind of moral realism, why should we expect the moral system we would feel most comfortable endorsing to be simple, elegant or readily expressible? I quite understand that it would be incredibly helpful from an AI risk point of view if we could come up with a comprehensible system everyone could broadly endorse, but the project strikes me as utterly hopeless.

    • jasmith79 says:

      Or to extend your evolutionary metaphor, if coming up with a simple, elegant, consistent moral system that actually works were that easy, someone would have likely done so by now.

      • baconbits9 says:

        It’s a good thing the concept of individual rights has never been tried.

        • vakusdrake says:

          Wait, are you being serious? Because if so, one should be aware that “individual rights” has its host of issues just like any ethical system:

          How do you determine what is and isn’t a right?

          In which ways does a right’s existence morally obligate people to act in a certain way?

          For instance, does it just mean people can’t act to deliberately infringe on that right? If so, what counts as deliberately infringing on a right, given that actions can have indirect consequences?

          Do rights morally require that people actively go out of their way to ensure other people’s rights are upheld, and if so, to what extent?

          The point is that the issues here are as abundant as those of most ethical systems, and the more you examine them in detail the more intractable those problems become (and/or the more you start having to tack on extra stuff because it makes the theory work better).

    • slightlylesshairyape says:

      I don’t know the name for it, but there has to be a fallacy that describes our desire to oversimplify complex systems, and expect there to be a reachable state of perfection.

      The not-very-catchy term for those who reject that is antiparsimonialism; I’ve heard it come up in the moral theory of Jonathan Haidt.

  4. stevemckay says:

    It’s about agency, I think. If someone of sound mind, knowing the consequences of their actions, chooses to do something, even if it looks like a terrible choice from the outside–well, that’s their right. They’re presumed to be maximizing utility using a different function than you do.

    • eric23 says:

      1) I think “They’re presumed to be maximizing utility using a different function than you do” demands more philosophical explanation.

      2) Drug addicts have only limited agency when it comes to their addiction.

      • Creutzer says:

        Drug addicts have only limited agency when it comes to their addiction.

        Yes, but only once they’re addicts. What’s at issue is the act of taking the drug for the first few times that turns them into addicts, when there is not yet a factor that diminishes their agency with respect to that act.

        • caryatis says:

          I disagree with this concept that “addicts have limited agency.” Addicts are humans who respond to incentives. Their behavior looks different because they place a higher value on having their substance than most non-addicted people. But, they can abstain temporarily or permanently if sufficiently motivated (everything from quitting completely to “I’ll give you $10 if you don’t drink today”). Any addiction you can think of is one that people have fixed for themselves without the intervention of do-gooders.

          It’s important to remember this given the long history of dehumanizing addicts, imprisoning them, and torturing them based on the claim that they are slaves to the drug and cannot help themselves.

          • gmacd18 says:

            You should check out Bryan Caplan’s paper “The Economics of Szasz”

          • cuke says:

            Some of my work as a therapist is with people in recovery from addiction. I agree with your disagreement, unless we agree that everyone has limited agency.

            It’s true that some mental illness hijacks motivation, reasoning, and executive function a lot. Substance use is often self-medication for other mental illness — like bipolar or depression or anxiety or a personality disorder — that then takes on a life of its own. So addiction is a distortion field on top of other distortion fields. If there’s too much distortion, a person can’t think logically or plan or anticipate consequences. But most people with addictions and mental illness can do some of those things at least some of the time.

            DSM diagnostics include measures of severity and functional impairment when it comes to mental illness and substance abuse. So it’s not “do addicts have agency or not?” It’s “what kind of supports does this person need to follow a plan to get better?”

          • Simon_Jester says:

            I would argue that “everyone has limited agency, drug addicts’ agency tends to be more limited than usual” is in fact the correct position.

            Totally perfect pure free will and totally predetermined inevitable forced choice lie at opposite ends of a spectrum.

            We all face certain constraints on our free will. We cannot choose to walk off a cliff without falling. We cannot, at least as a normal rule, will that two plus two equal both four and five at the same time, except perhaps by somehow bypassing our own inability to imagine what that would mean.

            Some of the constraints on our free will are biological. We cannot freely choose to take amphetamines to stay focused without experiencing the side effects. We mostly cannot choose to sleep for four hours a night forever without adverse consequences.

            We all have limited agency.

            Any philosophical position that is based on the assumption that humans have unlimited agency, and can make (and persist in making) whatever choice optimally maximizes their personal fitness algorithm… has a hole in it.

            Because true perfect, philosophically ideal agency doesn’t consist of a heroin addict quitting cold turkey. It consists of a guy deciding to be able to fly, and succeeding in doing so.

            The idea is inherently chimerical. All human life is the result of coexistence between our agency and the external and internal constraints that limit the actions we’re capable of choosing to carry out in any meaningful sense of the word ‘choice.’

      • Alex Zavoluk says:

        Then drug addicts should be treated like children: their caretakers have the right to punish them and take away their toys, and we don’t grant them all the same rights as everyone else.

        • christianschwalbach says:

          And that fixes the issue how? It just prolongs or dances around the problem. Determining what is neurologically aberrant and targeting it, in both a counseling and a medicated manner, is a reasonably effective methodology. That being said, I will admit that establishing consequences is useful, especially in initiating first steps toward treatment, or perhaps mandating it, but it can’t be left at that alone.

        • Edward Scizorhands says:

          What specific rights do you have in mind?

        • Alex Zavoluk says:

          christianschwalbach: I agree it doesn’t fix anything, but I don’t actually think that drug addicts have such limited agency. I was simply drawing the natural conclusion to that claim. We don’t let young children or the severely mentally ill wander the streets all alone and put the entire world in bubble wrap so they can’t possibly hurt themselves. They stay in certain places, or their parents or caretakers go with them and we all accept that they have control over their charges’ behavior, because their charges are not capable of making decisions on their own.

          Edward Scizorhands: I’m not sure exactly, but anything that we don’t allow children or the severely mentally disabled to do would be up for consideration. Their living situation is determined by someone else, and if they leave they’re considered runaways. Maybe they couldn’t vote. Consent laws are different. Definitely can’t buy guns or intoxicating substances.

          Again, though, I don’t think drug addiction actually works this way.

      • Squirrel of Doom says:

        You can think of things like food, water and oxygen as addictions we all have.

        Do we have limited agency when it comes to them? Only when we can’t get a decent supply, I’d say. When there is a good stable water supply, my agency is just fine.

        Maybe this is a bad metaphor for oxy or vodka, but I’m not so sure.

        • Watchman says:

          The concept of fasting might suggest this is wrong. And I think any attempt to define addiction as covering things necessary to life is doomed to fail as it then requires a new word for the compulsive urge to consume something not necessary for life in order to distinguish this behaviour from trying to stay alive…

          • Squirrel of Doom says:

            My point is that I’m not convinced there is an important difference between these behaviours!

            The mind-altering properties of some addictive drugs are a distinction, and maybe that makes them different in kind. But for many addictive drugs, those effects are mild, temporary, and arguably even beneficial.

            I’m currently caffeine addicted, for example. Maybe it’s The Demon talking, but I feel like my agency is just fine, if I get 2-3 cups a day.

          • Simon_Jester says:

            It’s mostly a matter of degree, and of side-effects.

            Your ‘oxygen addiction’ is strong enough that you might well commit burglary to ensure your continued access to oxygen. And indeed this wouldn’t even be blameworthy because without oxygen you would literally die.

            Your caffeine addiction will never realistically cause you to commit burglary to pay for your next cup of coffee. And if coffee was so expensive that it could do so, you’d probably have given it up and found a different ‘vice.’ If you were in a state of homeless poverty and had to constantly choose between buying food and buying caffeine, while you might sometimes choose caffeine, you probably wouldn’t keep doing it until you were a half-starved wraith.

            Whereas experience shows that heroin and meth addicts totally will consider burglary to pay for their next fix; that is, they behave, empirically, as if continued access to their drug is comparable in importance to oxygen, food, and water.

            And the addicts themselves often talk and behave as if this is the product of some coercion.

            (Well okay, they can BECOME necessary through screwing up your biochemistry to the point where you suffer physical problems, but the point is that you didn’t have a mechanism to make you dependent on this stuff before you started taking it in the first place).

            And indeed, addicts to strong drugs often do fall into this trap of repeatedly choosing the drugs over other things necessary to their life, or disregarding basic rules about how to take the drug safely because they feel that need for the drug.

            This suggests that modeling heroin and meth and such as Evil Pills What Make You Need More Evil Pills isn’t entirely unreasonable. Because instead of being substances you treat as necessary to your life because you do in fact need them to live, they’re substances that aggressively hack your body and force it to need more of themselves. They can force you to need them so strongly that the need can easily kill you if not kept under externally regulated control, because they’ve already partly disabled your sense of self-preservation.

            Most other things just don’t do that.

      • powerwizard says:

        2) Drug addicts have only limited agency when it comes to their addiction.

        Scott is willing to deny agency to the addicts on the basis of incompetence (I agree). The problem is that in this scenario, you’re also denying agency to the non-addicts, by depriving them of legal access to the chemicals. Which is a problem of utilitarianism, that your values can be hijacked by massive suffering which can occur for a number of reasons.

    • arlie says:

      The more I learn about the ways in which humans predictably make bad choices, the less I’m certain that there’s any such thing as a person of sound mind. Still, I find myself agreeing about the idea of nominating oneself for the short end of a tradeoff – not as an absolute thing, but as one more condition to include in decision making.

    • Dagon says:

      I logged in to say mostly this. It’s _absolutely_ about agency. This is the standard “do we care about opportunity or about outcome” conundrum. Do we accept more overall actual suffering in order to reduce potential suffering “if only people made better choices”?

      For myself, I’m OK with some unfairness – I don’t use the word “fault”, but I am willing to help those who I find more sympathetic at the expense of those I don’t. I sympathize more with the women than the molester, more with the depressives than the drug addicts, and with the zero-productivity workers more than the potentially-satisfied-workers-who-wont-try-hard-enough-without-financial-necessity.

      • acymetric says:

        Maybe this is looking too hard into the specific example and not at the point being made, but does the fact that the addicts likely had some mental-health problems prior to the addiction (depression or other) impact your sympathy towards the addicts at all?

        This is not necessarily directed at you as I don’t really know how you feel about these issues in a general sense, but it seems like there are a lot of people who are very vocal about supporting people with depression and the like, but have a very negative view towards addicts, which seems like a highly contradictory position to hold to me.

        I don’t think it changes the decision in this case (the drugs should be available), but probably changes what outcomes are acceptable for the addict group and what measures (other than banning the drug entirely) should be taken to protect or help the addicts, right?

        • AG says:

          Yeah, there have been a few good looks at how the current opioid crisis is largely a function of economic issues, since there have been previous surges in opioid usage without a corresponding increase in addiction rates.

          Similarly, the HIV crisis among homosexuals has since turned around with social acceptance and encouraging medical research, instead of stifling it in the name of moral desert.

        • Simon_Jester says:

          I think the moral principle of “in case of a tradeoff, let the bad stuff fall on the people who went out of their way to nominate themselves for being on the bad side of the tradeoff” breaks down when you’re talking about cases where there isn’t any specific identifiable victim.

          Deciding whether or not to ban a drug that helps depressives, to reduce the exposure risk of people becoming addicted, is a clear tradeoff. People (including nondepressives) who would become addicted suffer if the substance is widely available, while depressives who need the substance suffer if it isn’t.

          Likewise there’s a clear tradeoff between allowing an unrepentant sexual harasser to stay in the community for the sake of their own happiness, versus kicking them out for the sake of the targets of their harassment.

          But there’s no similar tradeoff involved in “to fund AIDS awareness or not to fund AIDS awareness.” The gays and needle-sharers may suffer most from not funding AIDS awareness, but nobody specifically suffers more because you did fund it. There’s some hypothetical opportunity cost from the financial effort involved, but that’s a straightforward calculation of QALY versus dollar bills, of the sort we already know how to do.

          Likewise, there’s a specific tradeoff between suffering for depressives (if you ban their medicine) and potential-future-addicts (if you ban the substance they might otherwise start abusing). There’s no similar tradeoff to funding methadone clinics to help the people who DO get addicted to put their lives back together; no specific group gets singled out for disproportionate suffering that way.

          So maybe the heuristic should be something like:

          1) If there is clearly a tradeoff, if solving A’s problem necessarily entails creating a specific problem for a clearly identifiable individual or demographic group B… When possible, default to allowing the suffering to land on whichever group deliberately did the most to contribute to the problem.

          2) If there is no clear tradeoff, if solving A’s problem doesn’t create a specific new problem of roughly equal size for B… Solve A’s problem.

          (1) says that if you find a way to make global warming affect only climate change denialists, and nobody else, go for it. 😛

          (2) says that if you can cure AIDS without having to do something super-disproportionately costly (“expend half of global GDP forever”), or something abhorrent that creates a new AIDS-sized problem for some other group (“make medicine out of ground-up human sacrifices”)…

          Well, you should default to curing AIDS, even if there are lots of people with AIDS whom you don’t like.

  5. Hoopyfreud says:


    I’ve hammered on this drum before, so feel free to ignore, but…

    I think you’re coming to the point where you need to reckon with Plato.

    It sounds to me like your utilitarianism isn’t really comparative; in many of these cases, A is “better” than B, but C, which requires B, is better than A. A world where depressed people can get the best treatment, and women aren’t sexually harassed, and people can self-actualize because they’re liberated from their janitorial duties is a *good* world, and by affirming world A we move away from world C.

    So, what IS world C?

    If you resist the siren song of repugnant conclusions (and I think you do), I think you’ll find that C is Plato’s Republic, but with Scott Alexander as its philosopher-king. If that’s the case, then your utilitarianism should really be understood as a moral pull in the direction of world C, and the problem disappears; the approach is utilitarian because it more closely resembles the response you’d give were you in world C.

    This isn’t necessarily strictly true, but I think it’s useful to illustrate the real point: if a more-or-less utilitarian framework can be assumed, the thing you think you identify as “good” isn’t what you think it is; it’s more complicated, and certainly less articulable. If you interrogate your conception of the “good” rather than searching through details, implications, and justifications, I think you’ll find a more satisfying answer.

    • slightlylesshairyape says:

      Uh, OK. But what are we supposed to do to get to world C?

      What if I calculate that an attempt by me to subjugate the world to my (or Scott’s) rule as philosopher-king will cause many people to become violently angry with me (or Scott, or probably both), and that we may lose in the ensuing chaos and end up far worse off than before? Or that even if we ‘win’ and Scott is now the Resident World Controller, it will be at great cost?

      That is to say, Plato might be fine in the same way that “Imagine you could silence this one incorrect person without any knock-on effects on free speech anywhere else and all the consequences would be positive” is fine. Sure. I can imagine it as a thing in itself, but I can’t imagine this world and that world ever being causally connected.

      • Hoopyfreud says:

        I’m not saying that Scott should uncritically embrace (or reject) this framework; I’m saying that “maximizing the good” seems not to be about an abstract good, but about congruence with world C. Scott’s world C may (or may not) be internally inconsistent, but there’s nothing really wrong with that – it’s an unattainable ideal anyway. And I don’t think there’s anything wrong with this sort of Platonism. I just strongly suspect that almost all utilitarians run like this under the hood and refuse to admit it. And it is causal – it causes Scott to act and think in the ways he does. That’s why I think it’s important to admit it.

    • pjs says:

      Can I take this in a different direction? I think you are right to note that there’s not just A and B, but also (and best of them all) a C.

      But the more I think about such examples, the more it seems that I prefer ‘B’-like states precisely when the further step to the relevant ‘C’ seems reasonable and easy. Maybe I’m subconsciously resenting a small roadblock being put in the way of a big improvement, or maybe I’m hoping that the extra step ends up being taken. It’s not just that I think of B as a moral pull towards C; I’m actually conflating them in some sense.

      I know it’s stipulated that the harasser can’t stop, but really, I can’t help thinking: why can’t he? And thinking about a single addict, why can’t he just not do it? (But reflection makes this seem naive, perhaps in the same class as ‘why can’t the depressed person just cheer up’.) You want meaning, get a job (but on second thought, in a world where the UBI exists, does that still work?). When I reflect and realize the B-to-C step is harder than I first thought (or change the example so that it’s obviously so), I become much more sympathetic to staying at ‘A’. For me this seems to generalize fairly well to other examples of this type.

      This stresses C not as an idealization, but as the opposite: something clear and plausibly attainable (the more so the better). Though it also takes a lot of the load off the choice/deserts moralizing.

      • cuke says:

        “I know it’s stipulated that the harasser can’t stop, but really, I can’t help thinking: why can’t he? And thinking about a single addict, why can’t he just not do it? (But reflection makes this seem naive, perhaps in the same class as ‘why can’t the depressed person just cheer up’.)”

        I’m inclined to think about this in less black-and-white terms than it often gets framed in. It’s not that addicted people or depressed people have no power or agency. Their agency is impaired in various ways, but not completely. People without these problems also have their agency impaired in various, perhaps less disabling, ways (relative to society’s current standards).

        I prefer to think of addicted people and depressed people (as well as sexual harassers or other people we might judge as deficient in various ways) as just like the rest of us. Which is to say, we have some degree of choice, even if those choices are somewhat constrained by biology or circumstance. We never know at any given point for any individual what are the outer limits of their constraints or how plastic those outer limits are.

        We know that addicts do recover and that depressed people do recover and that sexual harassers can learn different behaviors. It’s not as simple as “snap out of it,” but there’s a potential change process out there (though it’s far from a clear road map and isn’t accessible to everyone). We know that change process involves some amount of motivation and effort. We know people’s motivation can be enhanced in some circumstances and their efforts to make change can be bolstered (with the opposite also being true — we can undermine motivation and discourage effort as well).

        And we know all these problems that we describe in black and white terms actually exist on a spectrum (or several spectra). There are people quite disabled by very severe depression and there are people with mild occasional depression. There are depressed people who take medication or go to therapy and there are those who do neither. There are depressed people who are hurtful to the people around them when they are low and depressed people who aren’t. Doing all the things recommended to a person to recover from whatever they’re dealing with is not entirely in their control, but it’s not entirely out of their control either.

        I like the comments above about essentially abandoning the fantasy of a simple moral model. Addiction is not just a disease. Addiction is not just a bad choice. Recovery is not just a matter of finding the right treatment or recovery environment. It’s some mix of all these and other things. And so much of human experience is complex like this.

        So when I’m thinking about larger frameworks, I tend to think in terms of what conditions encourage all of us to make better choices, with the understanding that we will all be limited to varying degrees in taking advantage of all of those conditions. I’m also inclined to say it’s not even possible to make very good cost/benefit analyses in a utilitarian sense, because human systems are so dynamic. The person who is unhappy on UBI and refuses to figure out something to do with their time may next year figure it out. It may also be that the process of a society adapting to fewer earners and more people on UBI is a 100-plus-year adaptation timeline, and so can’t be optimized over any one person’s lifetime (kind of like the industrial revolution). Change over time and agency and subjectivity all complicate utility calculations (it seems to me).

  6. hnau says:

    I’m very tickled that you’ve rediscovered and adopted the same reasoning that many Christian thinkers (C.S. Lewis, for example) have used to defend the concept of Hell.

    • phi says:

      It can’t be exactly the same. There’s no tradeoff involved in the case of Hell, since God, being omnipotent, could presumably just send everyone straight to heaven.

      • DavidS says:

        Lewis’s view, as I remember it, is that essentially people send themselves to hell. From the (fantastically readable even if, like me, you’re solidly atheist) The Great Divorce:

        There are only two kinds of people in the end: those who say to God, “Thy will be done,” and those to whom God says, in the end, “Thy will be done.” All that are in Hell, choose it.

        Presumably here ‘a world with true free will’ is the good thing, and the tradeoff is at the cost of those who choose badly.

        • 6jfvkd8lu7cc says:

          I guess there is a bit more honesty in the Jehovah’s Witnesses’ approach: Hell is the place where the _remains_ of those who chose not to accept eternal life from the hands of God (and so got their chosen option of death-means-death and oblivion) are destroyed.

          Of course, Lewis tries to paint a milder Hell than most denominations do, so one could assume that the inhabitants there do not actively regret that they did not have the option of being obliterated.

        • Creutzer says:

          Lewis’s view, as I remember it, is that essentially people send themselves to hell.

          That doesn’t solve the problem because there is no reason why an omnipotent, benevolent god should make it so that they can do that. God is responsible for there being a tradeoff in the first place.

          This is very, very different from the situation Scott discusses here.

          • hnau says:

            DavidS’s explanation addressed this, actually, where he mentions “free will.” With free will in the picture, your phrase “make it so that they can do that” falls apart. Even omnipotence doesn’t make logically contradictory things possible, and “free will” without the possibility of rejection sounds pretty self-contradictory to me.

          • Creutzer says:

            No, free will doesn’t solve this at all. It’s not about their having an internal ability to choose; it’s about the world being such that the tradeoff is even there in the first place. There is no good reason for there to be an ugly tradeoff in which some people can nominate themselves for the short end. You could make everything the same and just remove hell from the equation – and an omnipotent and benevolent god would.

          • J Mann says:

            That doesn’t solve the problem because there is no reason why an omnipotent, benevolent god should make it so that they can do that. God is responsible for there being a tradeoff in the first place.

            I post this about once a month, but there’s an answer to at least the first level of that critique. It’s not super satisfying emotionally, but as far as I can tell, it’s rock solid as a matter of logic.

            1) When you say God is “omnipotent,” you mean either that:

            1.1) He’s otherwise all-powerful but logically bounded (for example, He can make it so that the sky is made of cherries or that it’s not, but He can’t simultaneously make it true that (sky is made of cherries) and not-(sky is made of cherries)); or

            1.2) God is so all-powerful that He’s not logically bounded. (So He can cause both P and not-P to be simultaneously true for any P and without contradiction).*

            2) Therefore:

            2.1) If God is otherwise omnipotent within logical bounds, then it’s possible that we’re living in the best possible world for reasons we don’t fully understand. It might be that morally imperfect beings can’t enter Heaven because that would change the nature of Heaven, for example.

            2.2) If God is omnipotent without logical bounds, then He can just make it so that this is the best possible world by any standard you choose to judge.

            3) Now even granting that, it does seem that within logical bounds, there are things God could do to improve the benevolence of the Universe. For example, even if the concept of Hell is necessary and even if imperfect beings can’t enter Heaven, He could just put each sinner into a matrix that seemed like Heaven to them. (But as said, this might be a worse world, even for the sinner – we can’t be sure.)

            * This is also the answer to “If God is omnipotent, can He make a rock so heavy that He can’t lift it?”

            – If He’s omnipotent within logical bounds, then He can either make the rock so heavy that He can’t lift it or He can lift it, but not both.

            – If He’s omnipotent without logical bounds, then He can make a rock so heavy that he can’t lift it, and He can lift it, and there is no contradiction because that’s how omnipotent He is.

          • woah77 says:

            I think one of the important stipulations about the God described (at least in the Bible) is that he cannot violate his nature. That would, in my opinion at least, suggest that God has some form of boundary (although we may not be able to catch him in it).

          • Creutzer says:

            @J Mann: Agreed. I just think that “it’s possible that we’re living in the best possible world for reasons we don’t fully understand” is slightly more absurd than Cartesian skepticism.

          • J Mann says:

            @Creutzer – I agree that trying to convince someone of the Abrahamic faith with Cartesian logic is unlikely to succeed, but it’s also very hard to convince someone out of it with the Problem of Evil.

            Somewhat upsettingly, one good solution for our hypothetical God would be to lie a lot, assuming that lying isn’t so immoral as to obviate the obvious benefits. If you’re living in a solipsistic universe that has been designed for your personal moral growth and testing, then we need a lot fewer “mysterious reasons Creutzer can’t understand” to at least get to a benevolent universe.

          • Bugmaster says:

            @J Mann:
            I could be wrong, but didn’t God create logic in the first place? If he did, then presumably he could change it at will; he also could’ve created logic to exclude himself, or to make all the dice always land in his favor (so to speak, since we know He doesn’t play dice), etc. If not, then this would imply that logic existed before God, and might in fact be more powerful than God; so, our next line of questioning would be, “who created logic, and why shouldn’t we worship that thing instead?”

          • woah77 says:

            @Bugmaster You’re presuming that Logic isn’t an inherent quality of God, something that I don’t think there is any reason to believe given the description the Christian Bible provides. If Logic is just an inherent quality of God, then it is likely that it is neither created by something external to God nor something that God can change.

          • DavidS says:

            I’d argue it’s not even that logic might be a property of God: it’s that omnipotence means being able to do any thing, and illogical statements like ‘give people free will while dictating what they decide’ (or for that matter ‘create a triangle with four sides’) just aren’t things at all. The fact that we can write these sentences doesn’t mean they refer to anything.

            I think the Problem of Evil can work emotionally and probabilistically, but not logically. The classic argument that omnipotence and benevolence are incompatible with any evil assumes that it’s impossible for evils and goods to be bound up together, and I don’t think that’s demonstrated by e.g. Hume when he makes the argument.

            Agreed with the point above that in practice the Problem of Evil is simpler to solve if you remain sceptical about other minds: my own life is not I think incompatible with a benevolent God, even if it’s not constant bliss. But it’s far harder to feel the same can be said for some others’ lives.

            Of course, the other solution, used by Lewis and others, is to face the scale of suffering but argue that it looks different sub specie aeternitatis – that the immensities of it pale compared to the infinite world that awaits. Whether you find this palatable varies, and for me the problem of evil ends up being as much an aesthetic/intuitive question as a logical one, lining up ‘Rebellion’ from the Brothers Karamazov or Candide against the dialogue with Mustapha Mond at the end of Brave New World and so on.

            The one thing that I think is overwhelmingly clear is that a straightforward observation of the world does not give us evidence of God’s goodness. When Leibniz said we lived in the best of all possible worlds it was because he thought that logically we must do, rather than because he thought when he looked at the world everything seemed peachy.

          • Bugmaster says:

            I gotta admit, I don’t really understand what it means for something to be an “inherent quality of God”. How is that different from “God prefers to do X” (in which case, he could sometimes do not-X)? Also, did God create logic or didn’t he? If he did, then logic is just a system that is under his absolute control, like gravity, so he could simply decree, “X and not-X cannot be true at the same time, unless you’re talking about me, in which case all bets are off”.

            I agree that “a triangle with four sides” is not a thing at all, given our understanding of the world. Our minds are, to a certain extent, logical, which means that we literally cannot conceive of such a thing. But if God created our minds (as he did everything else), then it is entirely possible that he is free from such limitations, isn’t it?

          • DavidS says:

            It’s possible our concepts of logic are flawed, but it’s also possible that they are simply identifications that some statements are incoherent. I think there’s plenty of room for suffering in a world with a good God (whether I think such a God is plausible from what I know of the current balance of suffering is a different matter).

          • Bugmaster says:

            I don’t think it is necessarily the case that our concepts of logic are flawed. Rather, if God does exist, and if he created everything including logic, then logic is just another tool in his toolbox, like gravity. There’s nothing special about logic; the fact that our brains work on logic as opposed to Grand Cosmic Understanding or whatever is just part of God’s design. Trying to apply logic to God would be like trying to measure his mass or electric charge.

          • woah77 says:

            @Bugmaster You’ve structured God as an entity that can make and unmake the rules of the universe, which is not something that follows from any description of God I’ve found in the Bible. Sure, in Sunday school, for children, there is the simplification that “God is all-powerful”, but that is like saying “gravity always pulls down”. It’s true in a limited context, good enough for children and for most reasonable uses, but it is false in any larger context. There are many scriptures that refer to God as unchanging, consistent, etc. So when you ask me about what is an inherent quality of God, it feels a bit like you’re asking me why a certain rock’s hardness is a quality of that rock. It is a quality of God because God is made up in such a way that he is logic. He can no more do something illogical than you can unmake your bones.

          • Bugmaster says:


            You’ve structured god as an entity that can make and unmake the rules of the universe, which is not something that follows from any description of God I’ve found in the bible.

            Wait a minute, are you saying that God (or his angels/avatars, even) cannot perform miracles?

          • acymetric says:


            Miracles do not necessarily break those universal laws in the way that some of these God Paradoxes would. Re-arranging someone’s molecules so that their blind eyes can now see, or their cancer goes away would seem to still be in-bounds here. Highly improbable but physically possible (though beyond our current technological ability) is not the same as paradoxical or impossible in the Universal (capital U) sense.

            *Disclaimer: I am not religious, just interested in being intellectually honest in this discussion

          • Bugmaster says:

            Well, technically, few things are “completely impossible”, given quantum mechanics. It is technically possible that, in the next second, a bunch of particles would spontaneously tunnel at the same time in such a way as to produce a fully-functional cuckoo clock right in front of me. It’s possible, but if it happened, I’d still call it a violation of physical laws.

          • J Mann says:

            @Bugmaster – if your understanding of omnipotence means “can create a triangle with four sides (and simultaneously make it true that all triangles have three sides and not be doing some kind of trick)” then God can just make it so that everything He’s doing is the best possible thing that can be done. Then anything else would be worse and he’s as perfectly benevolent as possible.

        • hnau says:

          Exactly. Thanks for adding this explanation.

          In this account (not necessarily mine, by the way – I’m just saying it’s a common account), “free will” is seen in the same kind of light as tianeptine or UBI in Scott’s examples, except that the trade-off isn’t just a practical matter: it’s a logical necessity.

          Without some kind of “free will” (or “responsibility” if you prefer – as with Scott’s examples it probably works even if you insist on determinism), the Universal-Value-O-Meter probably just sits at zero, because nothing interesting (in the sense of moral choices) is happening. Does free will send the Universal-Value-O-Meter to a net positive? Quite plausibly, yes; and even if it doesn’t, Scott’s aside on more people suffering with UBI suggests that it could still be justified in his framework.

          C.S. Lewis (who’s a bit radical in this regard) tends to see Hell as a very direct expression of free will: it’s simply the state of consciously rejecting God. Other Christian writers see it as a natural consequence – rather than an artificial punishment – of behavior that amounts to culpable rejection. But this distinction doesn’t make much difference for the argument here. Even John Lennon couldn’t seem to imagine a world where Heaven exists and Hell doesn’t.

          • Bugmaster says:

            C.S. Lewis … tends to see Hell as a very direct expression of free will: it’s simply the state of consciously rejecting God.

            I’m not sure that I could consciously reject God, assuming I was convinced that He was a) real, and b) all-loving. I couldn’t even consciously reject believing that the sky is blue (and no, pretending that I believe it’s really green with pink polka dots doesn’t count).

            Even John Lennon couldn’t seem to imagine a world where Heaven exists and Hell doesn’t.

            Maybe John Lennon couldn’t, but ancient Jews certainly could. In ancient Judaism, there’s no Hell, only Sheol, which is sort of a holding place for shades of the dead, regardless of their goodness or wickedness. Heaven does exist, but may or may not be inaccessible to all but a few especially spiritual people (I’m not 100% clear on this, admittedly).

            That said, I’m not even sure the concept of “free will” is coherent under Christianity. Can God (who is omniscient, and the creator of humans) predict, with perfect accuracy, each action that any human would take throughout that human’s life?

          • vaticidalprophet says:

            Even John Lennon couldn’t seem to imagine a world where Heaven exists and Hell doesn’t.

            Maybe Lennon just wasn’t as creative as you think. Positive afterlives existing and negative ones not is a common New Age theological claim, for instance.

          • 6jfvkd8lu7cc says:

            > Even John Lennon couldn’t seem to imagine a world where Heaven exists and Hell doesn’t.

            I think in «Imagine» the problem with Heaven is not that it implies Hell, but that it implies a Supreme Lord In Heaven with earthly lords claiming to act in the Lord’s name…

          • DavidS says:

            @Bugmaster: worth reading The Great Divorce. Rejecting God is rejecting God’s will and tends to involve things that come down to pride or desire for control/revenge.

            So e.g. someone won’t go to heaven because someone who they knew in real life as a criminal was there (the criminal having repented is welcome). A domineering wife won’t go unless she gets to run her husband’s afterlife because ‘he belongs to me’. A sophisticated theologian is unwilling to consider anything so gauche as a literal heaven where you walk around and see angels. And so on.

            Some personal axes being ground here clearly, but I think also quite an interesting take on psychology (ditto Screwtape). Well written too.

          • Bugmaster says:

            Fair point, I’ve never read The Great Divorce; I somehow got the idea that it’d be like Perelandra, which is the single most unreadable book I’ve ever encountered. On the other hand, I did enjoy The Screwtape Letters a great deal — although some of that enjoyment surely comes from my impression that the book is as close to being totally atheistic as Lewis could allow himself to get.

            So e.g. someone won’t go to heaven because someone who they knew in real life as a criminal was there… A domineering wife won’t go unless she gets to run her husband’s afterlife because ‘he belongs to me’. A sophisticated theologian is unwilling to consider anything so gauche as a literal heaven…

            These aren’t very good examples, IMO. They would work reasonably well if God were some kind of powerful human CEO, or a minor genie — in which case all these people would suspect his motives or abilities (and rightly so).

            But God is not merely a powerful being, he’s an omnipotent uber-Lord, so to speak; and he knows exactly what it would take to convince everyone to at least give Heaven a fair shake. You (or rather, Lewis) keep making this argument that various people could deny the evidence of Heaven before their eyes (or whatever spiritual sensoria that souls possess); but I don’t understand why the existence of Heaven is even deniable in the first place. By contrast, here on Earth, the existence of gravity is pretty much undeniable (except by brain-damaged people, I suppose). We might argue about the nature of gravity, but very few people would outright deny that heavy things tend to fall down.

          • DavidS says:

            @Bugmaster: some do deny it, and I think there’s a sense in which God can’t just overwhelm them (without being asked) without interfering with free will.

            But some of it is closer to the chapter “Rebellion” in the Brothers Karamazov, where Ivan says (and I almost feel bad posting this because it’s better to read the whole chapter: ideally the whole book, but the chapter stands alone and gives the emotional context):

            “Oh, Alyosha, I am not blaspheming! I understand, of course, what an upheaval of the universe it will be, when everything in heaven and earth blends in one hymn of praise and everything that lives and has lived cries aloud: ‘Thou art just, O Lord, for Thy ways are revealed.’ When the mother embraces the fiend who threw her child to the dogs, and all three cry aloud with tears, ‘Thou art just, O Lord!’ then, of course, the crown of knowledge will be reached and all will be made clear. But what pulls me up here is that I can’t accept that harmony. And while I am on earth, I make haste to take my own measures. You see, Alyosha, perhaps it really may happen that if I live to that moment, or rise again to see it, I, too, perhaps, may cry aloud with the rest, looking at the mother embracing the child’s torturer, ‘Thou art just, O Lord!’ but I don’t want to cry aloud then. While there is still time, I hasten to protect myself, and so I renounce the higher harmony altogether. It’s not worth the tears of that one tortured child who beat itself on the breast with its little fist and prayed in its stinking outhouse, with its unexpiated tears to ‘dear, kind God’! It’s not worth it, because those tears are unatoned for. They must be atoned for, or there can be no harmony. But how? How are you going to atone for them? Is it possible? By their being avenged? But what do I care for avenging them? What do I care for a hell for oppressors? What good can hell do, since those children have already been tortured? And what becomes of harmony, if there is hell? I want to forgive. I want to embrace. I don’t want more suffering. And if the sufferings of children go to swell the sum of sufferings which was necessary to pay for truth, then I protest that the truth is not worth such a price. I don’t want the mother to embrace the oppressor who threw her son to the dogs! She dare not forgive him! Let her forgive him for herself, if she will, let her forgive the torturer for the immeasurable suffering of her mother’s heart. But the sufferings of her tortured child she has no right to forgive; she dare not forgive the torturer, even if the child were to forgive him! And if that is so, if they dare not forgive, what becomes of harmony? Is there in the whole world a being who would have the right to forgive and could forgive? I don’t want harmony. From love for humanity I don’t want it. I would rather be left with the unavenged suffering. I would rather remain with my unavenged suffering and unsatisfied indignation, even if I were wrong. Besides, too high a price is asked for harmony; it’s beyond our means to pay so much to enter on it. And so I hasten to give back my entrance ticket, and if I am an honest man I am bound to give it back as soon as possible. And that I am doing. It’s not God that I don’t accept, Alyosha, only I most respectfully return Him the ticket.”

          • Bugmaster says:

            I’ve got to admit that I wasn’t as impressed by the Brothers Karamazov as I was perhaps supposed to be; there’s too much sophistry in it for my taste. Admittedly, it’s been a long time since I’ve read it, so maybe a re-reading is in order.

            That said, I don’t understand why God can overwhelm me with the notion of gravity (in the sense of “heavy things fall down”, at least) without infringing upon my free will; while doing the same with Heaven, or in fact some evidence of his existence, is a no-no.

            In the passage you quoted, Ivan is confronting the classic Problem of Evil, and he’s making the choice to stick with his own moral intuitions (and/or conclusions), as opposed to blindly accepting the divine order as good and righteous. You could argue that Ivan’s understanding of morality is flawed, and thus he doesn’t realize that God’s way is actually morally good — but whose fault is that? By contrast, Ivan doesn’t choose to believe that he can fly by will alone, since gravity is self-evident.

            In addition, many Christians (Lewis among them) often fail to make a distinction between believing in God’s existence yet refusing to worship him; and simply disbelieving that God exists (or even believing that some other god exists, instead). As far as I can tell, Lewis’s argument is that atheists and infidels secretly do believe in the Christian God, perhaps even on some subconscious level; but choose to deny him because of their pride, or desire to sin, or whatever. I’ve never found this argument particularly persuasive, seeing as it’s completely unfalsifiable and, IMO, rather intellectually arrogant.

          • The Pachyderminator says:

            In addition, many Christians (Lewis among them) often fail to make a distinction between believing in God’s existence yet refusing to worship him; and simply disbelieving that God exists (or even believing that some other god exists, instead). As far as I can tell, Lewis’s argument is that atheists and infidels secretly do believe in the Christian God, perhaps even on some subconscious level; but choose to deny him because of their pride, or desire to sin, or whatever. I’ve never found this argument particularly persuasive, seeing as it’s completely unfalsifiable and, IMO, rather intellectually arrogant.

            Not at all. Lewis explicitly affirmed, many times, that there are sincere unbelievers. And in The Last Battle (the last Narnia book, which is about the Apocalypse and last judgment), he depicts the salvation of someone who spent his life sincerely worshipping Tash, who is actually a demon.

        • nameless1 says:

          I think it is a mistake to link the concept of deserving to the concept of free will. Well, perhaps not when discussing religion. But in all other cases this implies a sense of cosmic justice that is not appropriate to human affairs.

          People often mean a gazillion different things when they talk about deserving, but I think the common ground is that *predictable* consequences are just and deserved, that they “serve them right”, whether they follow from natural law or from social agreement.

          So we see these videos where some stupid drunk college student tries an unsafe stunt with fire and gets some burns, and people say “well, what did he expect?” with strong undertones of “serves him right”. And I think it is not even about saying that “people who mispredict should suffer” but “people who mispredict MAY suffer, and that is okay”. That is, if he is lucky and does not get burnt, people will typically not say “too bad, he should be burnt”; they just don’t mind if he gets unlucky and does get burnt.

          I think this logic really predicts how people generally think about deserts. If we had unlimited resources, nobody would care if some people are lazy and do not work. But given limited resources, saying lazy people deserve to go hungry means they MAY go hungry, and that is okay. It is not that they MUST go hungry; they may have just hit the lottery, and that is okay too. But in a world of scarce resources, not working predictably leads to bad consequences, and it is that predictability that makes others turn a cold shoulder. “What did he expect?”

          This isn’t really about free will and cosmic justice. Ability to predict consequences does not require free will.

          • DavidS says:

            Surely predictable and chosen. If the college student had spontaneous combustion syndrome, burns might also be predictable but not deserved.

            I agree we sometimes use the ‘should have seen it coming’ reasoning, but I think inconsistently. For instance, if Alice is predictably fired for doing something Bob disapproves of, Bob will default to a ‘well, she should have seen it coming’ response (even if he’s unwilling to actually back the firing more explicitly).

            But if Bob approves of what Alice is fired for, even though the firing is just as predictable, the same reasoning isn’t used.

        • Paul Zrimsek says:

          I rather doubt that anyone actually says “Now that you ask, I’d rather go to Hell.” When someone speaks of people choosing it, they usually seem to be referring to the paragraph on page 61 of the 75-page popup window that says “By clicking Disbelieve, you indicate your acceptance of our Damnation Policy.”

          • arbitraryvalue says:

            I don’t think the “hell is voluntary” people are the same as the “hell is a lake of fire” people. The way I understand it is more like the story where heaven is people at a banquet table feeding each other with long-handled spoons, and hell is people at the same banquet table going hungry because they can’t feed themselves with their long-handled spoons.

          • hls2003 says:

            This perhaps doesn’t address the larger theological point, but it brings to mind a story about a famous Frisian king, Redbad. Redbad was a pagan who fought (and defeated, albeit temporarily) the Franks under Charles “The Hammer” Martel. According to tradition Redbad was strongly considering conversion to Christianity. Upon inquiring with the missionary, he was told that if he converted he would go to Heaven, where he would be with other Christians, instead of to Hell, with all the other pagans. He then announced that he would rather be in Hell with his pagan ancestors than in Heaven with the hated Franks.

            Relevant Wiki

          • DavidS says:

            Not in Lewis, though it’s one of his less orthodox moments. At least in The Great Divorce he sees people getting repeated post-mortem opportunities to change their minds.

          • Jiro says:

            the way I understand it is like the story of where heaven is people at a banquet table feeding each other with long-handled spoons

            That metaphor only works when someone goes to Hell because of obvious selfishness. It doesn’t work so well when it’s “by this long chain of abstract philosophical reasoning, you could have concluded that action A hurts people and/or separates you from God and action B helps people and/or brings you closer to God. By performing A anyway, you have chosen to go to Hell.”

            The metaphor also doesn’t work well when you ask why someone already in Hell can’t repent and go to heaven. If you start saying “once you’ve decided never to feed someone with your spoon you can never do so in the future, because you’re in the spoon-decisions-are-irreversible place”, the metaphor collapses.

      • A1987dM says:

        But how would heaven be any better than Earth if the former had its Hitlers and Stalins too?

        • 6jfvkd8lu7cc says:

          By making sure nobody can maintain power over anyone else without continued consent. If we add abundance of resources, safety + comfort + powerlessness might be preferable to obliteration for some dictators.

          • Conrad Honcho says:

            But “consent” doesn’t solve anything. Feminists can give you all sorts of reasons why “consent” can’t truly exist in a sexual relationship between a high-status person and a low-status person. Socialists reject the idea that capitalism is fine because the worker “consents” to work for the capitalist.

            And “equality” doesn’t solve these problems, even in heaven, unless heaven erases any sort of individuality you have. In heaven, a talented musician will be “higher-status” than I am by virtue of his talent, unless the musician is made untalented in heaven or everyone in heaven is a rock star.

          • L. says:

            @Conrad Honcho
            I can give you all sort of reasons why we should initiate a 40k style worldwide purge of all homosexuals, but that don’t make ’em good or true.

          • 6jfvkd8lu7cc says:

            @ Conrad Honcho

            Consent doesn’t fully solve anything. Neither does resource abundance. But together they might still make the place much better than Earth.

            Withdrawing force and resource control as methods of coercion can improve a situation (even though assuming this is possible is completely fantastic), even if it doesn’t fully solve anything. Preventing both resource control and coercive force would also reshape the notion of status quite a lot…

            I think the most easily available examples of the objections you raise are closely related to resource control…

            Note that the amount of status allocated to _some_ talented musicians is also related to market structures and scale effects (and amounts of luck no less than the amount of talent involved).

            I wonder if I can declare lack of mental energy an illness to cure (or maybe a missing resource) without wiping out individuality; I do agree that there are interactions where a person could safely walk away but lacks the energy to reflect and decide whether doing so would make them happier.

            (So we probably agree on real-world situations better than on where to put the most stress in a hypothetical one; I guess I just didn’t make it clear enough that I mean relative comparisons.)

          • Deiseach says:

            And “equality” doesn’t solve these problems, even in heaven, unless heaven erases any sort of individuality you have. In heaven, a talented musician will be “higher-status” than I am by virtue of his talent, unless the musician is made untalented in heaven or everyone in heaven is a rock star.

            Dante gotchu, fam 🙂

            As Dante ascends through the spheres of the heavens, he starts out in the Moon – the lowest sphere. He wonders if the spirits there do not wish to have ‘better’ places, to be ‘higher’ in Heaven. Piccarda, one of the Blessed Souls, explains to him (in canto III) that they are happy with their position, that to desire more would be discordant (and therefore cause misery), and that all the souls experience as much bliss as is possible to them as individuals. Beatrice (in canto IV) emphasises that the Souls he sees (and will see) are not actually stationed in the planets, they have come to meet him, and all are equally in Heaven (the Empyrean).

            Basically, asking “don’t you want to be happier, isn’t X happier because they are higher up than you?” is like asking “which is more full, a full pint pot or a full quart pot?” The quart may hold more, but when full to the brim both are equally full, neither is ‘fuller’ than the other, and trying to pour the quart into the pint will only spill and won’t add anything.

            Your musician may have more capacity for bliss, but he won’t be more full of bliss than you, and there’s no need for everyone to be rock stars or everyone to be nobodies: everybody enjoys to the fullest what they are capable of experiencing. And in Heaven, status is meaningless; everyone there is a saved soul and nobody is higher-status than anyone else – “not even Mary is in a better heaven than Piccarda, who is in the lowest sphere”.

            Paradiso, Canto III

            64 ‘But tell me, do you, who are here content,
            65 desire to achieve a higher place, where you
            66 might see still more and make yourselves more dear?’

            67 Along with the other shades, she smiled,
            68 then answered me with so much gladness
            69 she seemed alight with love’s first fire:

            70 ‘Brother, the power of love subdues our will
            71 so that we long for only what we have
            72 and thirst for nothing else.

            73 ‘If we desired to be more exalted,
            74 our desires would be discordant
            75 with His will, which assigns us to this place.

            76 ‘That, as you will see, would not befit these circles
            77 if to be ruled by love is here required
            78 and if you consider well the nature of that love.

            79 ‘No, it is the very essence of this blessèd state
            80 that we remain within the will of God,
            81 so that our wills combine in unity.

            82 ‘Therefore our rank, from height to height,
            83 throughout this kingdom pleases all the kingdom,
            84 as it delights the King who wills us to His will.

            85 ‘And in His will is our peace.
            86 It is to that sea all things move,
            87 both what His will creates and that which nature makes.’

            88 Then it was clear to me that everywhere in heaven
            89 is Paradise, even if the grace of the highest Good
            90 does not rain down in equal measure.

            Canto IV

            28 ‘Not the Seraph that most ingods himself,
            29 not Moses, Samuel, or whichever John you please —
            30 none of these, I say, not even Mary,

            31 ‘have their seats in another heaven
            32 than do these spirits you have just now seen,
            33 nor does their bliss last fewer years or more.

            34 ‘No, all adorn the highest circle —
            35 but they enjoy sweet life in differing measure
            36 as they feel less or more of God’s eternal breath

          • Bugmaster says:

            I find Dante’s explanation unsatisfying, possibly because I have ready access to cyberpunk tropes, while Dante didn’t. From my perspective, it would certainly seem possible to increase (or, in fact, decrease) the size of the cup pretty much arbitrarily.

            That said, traditional descriptions of Heaven always sounded like wireheading to me, anyway. It’s just maximum bliss, and only maximum bliss, forever. I acknowledge that this might in fact be the best possible outcome for humanity, but on a purely emotional level, I find it quite sad.

          • Deiseach says:

            From my perspective, it would seem certainly possible to increase (or, in fact, decrease) the size of the cup pretty much arbitrarily.

            Which is fine! Now you have a deeper, broader, more subtle and more sophisticated appreciation of the Good, the True and the Beautiful! And best of all, you getting a bigger cup does not harm me or deprive me of anything!

            Heavenly bliss is not a pie that, if you get a bigger slice, I have to get a smaller one because the pie can only be so big and be cut into so many slices.

            We have limited language to speak of such things, and limited concepts; for some, “bliss” does indeed connote “drugged, lotus-eating, mindless pleasure” but that is not necessarily so. The saved will be happy, and that does not mean “turn off brain and wirehead”, it means “we will achieve the end of our creation, which is to know, love and be happy with God”.

          • Bugmaster says:


            And best of all, you getting a bigger cup does not harm me or deprive me of anything!

            Then why doesn’t God give everyone a cup of maximum (ideally, infinite) size ?

            The saved will be happy, and that does not mean “turn off brain and wirehead”, it means “we will achieve the end of our creation, which is to know, love and be happy with God”.

            To me, that sounds exactly like “turn off brain and wirehead”. I am not trying to be flippant, I literally can’t tell the difference.

        • DavidS says:

          Well, the Hitlers and Stalins would have reformed (and wouldn’t have the power to do harm to others anyway)

          @ConradHoncho: without wishing to sound personal, the idea that heaven can’t be heaven because someone might have higher status than you is the sort of thing Lewis takes aim at (and that leads people to end up choosing Hell). In The Great Divorce, choosing heaven is not about losing individuality but it is about letting go of pride (and accepting God’s grace to help you let go of it: like Eustace letting Aslan strip the dragon skin away from him).

          Incidentally, there are people in his vision of heaven who are ‘high status’, but it’s not e.g. musicians. It’s people who were exceptionally loving and Christ-like on earth, often people you wouldn’t have looked twice at there. And nobody resents them for it; everyone is joyful about it.

    • JohnBuridan says:

      Of course, the attractive part of this conception is exactly what Scott finds attractive about the notion of “nominating oneself”: the person is the cause of their own misfortune. Extending the Christian metaphor to a sincere desire that all souls be saved, we would get the outcome that we treat the jerk in the group with respect and try to help him reform his ways as best we can before expulsion.

      I think this is the ideal answer. We should expel people who do evil from the community, not with the slightest trace of vengeance, but through a humane process which attempts their rehabilitation into the community.

  7. phi says:

    Initial Thoughts:

    1. My intuition says that in almost any real world situation, banning the drug would be negative utility.

    2. Direct utilities might be comparable, but secondary effects clearly favor kicking the guy out. (As Scott pointed out, keeping that sort of person in the group means adopting an exploitable policy.)

    3. Basic income is a complicated issue, but let’s assume we have robots to do all the work that nobody wants to do. Then once again, I can’t really think of a reasonable situation where implementing basic income lowers utility.

    So, on the one hand, I feel conflicted about all of these examples. But on the other hand, that could just be because my perception of the utilities involved differs from Scott’s insistence that “no, really, that choice is the one with higher utility”.

    • Acedia says:

      1. My intuition says that in almost any real world situation, banning the drug would be negative utility.

      This seems like fighting the hypothetical to me. You’re sidestepping the point Scott was trying to make instead of engaging with it.

      • phi says:

        Yes, but it’s difficult for me not to fight the hypothetical here. These examples are meant to appeal to people’s moral intuition. I know that my moral intuition tells me not to ban the drug here, but with these particular examples, I can’t really tell why. Maybe it’s because my moral intuition thinks that people who carelessly become addicted to drugs have nominated themselves to get the short end of a tradeoff. Then again, maybe it’s just because my intuition can’t reconcile Scott’s claims about expected utility with the story presented.

        • sandoratthezoo says:

          It doesn’t sound super hard to reconcile to me.

          If it helps, make it so the drug mysteriously doesn’t work for lots of depressed people. But those that it does work for, it works really well. This is not implausible: lots of psychiatric drugs work this way.

          Adjust the number of people it works for until the balance of utilitarian interests is clear to you.


          • fnord says:

            But now we’re left with a drug that should only be prescribed to a small number of people, but which is somehow making its way into the black market in large enough quantities to ruin many people’s lives. And, for the hypothetical to work, it has to be entering the black market through the medical system: if it’s being manufactured or imported illicitly, banning medical use won’t stop the abuse. So there’s unavoidably something implausible going on.

            Now, implausible thought experiments are fine, but being implausible also makes it unusual. So, to the extent that people have an intuition against drug prohibition based on standard cases (an intuition that Scott is inviting people to use by referencing a real case), the fact that this drug is (by hypothesis) much, much worse than standard drugs makes that intuition deceptive.

            It’s hard, I think, to intuitively model what the world would have to look like for there to be a slam-dunk case for banning this drug. I think that’s all the more reason why it’s important to be willing to shut up and multiply.

      • Mr. Doolittle says:

        If I posit a hypothetical where the assumptions are opposite of an existing philosophical or moral position, then I can fully expect to get pushback from individuals who happen to hold those positions.

        Maybe on some level we cannot truly separate the hypothetical from the context (who said it, what was their purpose, etc.)

        As a hyperbolic example, imagine a scenario where all women liked getting sexually molested. Would it be bad for men to sexually molest women in such a scenario? We can answer the hypothetical in the scenario with a solid No. I would not be at all surprised if a Feminist complains about it or still says that molesting women is wrong.

        This is subtler but more pronounced when the hypothetical cases are tricky and are trying to help us feel out the limits of our intuitions. We are toying around with areas we don’t quite understand, reacting to the gap between the fabricated situation and an intuition that was built on a different reality (which may be the real one, or our own hypothetical built on prior experience).

        It’s not a bad response to a hypothetical that stretches our mental boundaries to cry foul on the specifics. This would in fact be the preferred response if the hypothetical misreads the underlying tensions.

  8. Jack V says:

    Huh, interesting.

    My intuition is that there’s a big weight on “not screwing over a minority even if the average wellbeing would be improved by that”. This is the sort of calculation hospitals actually have to do with QALYs etc., but I think it’s better to have some weight on “helping the most screwed over” even if the calculations don’t make sense.

    And I guess my intuition says there’s a small extra weight on “letting people make their own mistakes”. I think this is often abused — e.g. if you print a 15-page list of warnings on something, everyone knows most humans won’t read it, and in practice that doesn’t absolve you of responsibility towards people who don’t. But I might think it more urgent to fence off a hard-to-see cliffside danger than to fence off a cliff with a sign saying “no diving, shallow water” that people ignore. If people just ALWAYS ignore it, then that’s who humans are and we need to prioritise our safety spending appropriately, but there’s also a place for “people can learn to take responsibility for themselves given an opportunity” and I’m not sure where the trade-off is.

    (There’s also a lot of “allow people autonomy to make their own choices even if they seem bad” because the people imposing the choices may not know as much as they think they do.)

    [1] with the caveat that “banning it” may not be the best solution anyway

  9. DavidS says:

    I’m not sure this is a usual concept of desert: specifically, I think the last case is very unlike the others. Intuitively, at least for most people in modern societies, we think of sexual harassment and drug abuse as personal failings with more or less of a moral overtone (especially the former). Whereas “not getting a job because you can survive without one, even though in the long run it would make you happier” seems to me at least potentially a far more fundamental instance of humans being bad at making certain trade-offs. This is relevant both for sheer numbers/utility and for how we think about desert.

    It’s less like “a few will deliberately abuse this drug” and more like discussions of whether we should sell foods that people compulsively want to eat and that are bad for them, given that only in a few limited circumstances or in tiny quantities are those sorts of foods actually good.

    I feel there’s some distinction here: we would clamp down on sexual harassment even if the utility calculation was heavily skewed in favour of allowing it, because it feels like a direct imposition of one person on another. But for all sorts of other things – noise pollution, various behaviour some find obnoxious, but also providing facilities that are sometimes/often/usually/almost always misused – we have to feel our way through and I don’t think we can apply the same sort of absolute principles.

    Someone ‘choosing the short end of a trade-off’ is definitely part of that, but we consider other things as well. Including whether those ‘nominating themselves for the short end’ are largely disadvantaged in other ways, for various reasons around social capital, education and impulse control (e.g. I frequently see the argument that liberalisation is like this on drink, drugs, sexual liberty, gambling… in the UK people also argue that e.g. zero hour contracts and payday loans even if they have some beneficial use are in practice largely harmful and especially to the most vulnerable).

  10. Chris Wooldridge says:

    Isn’t this just a basic biological thing? People have a strong inbuilt desire to see certain people who have transgressed certain societally contingent norms being punished for it. This makes the majority of people happier by boosting their sense of “justice” and “fairness” (which is just codeword for “unclean person has been quarantined, society is now cleaned of their impurity”). From a utilitarian point of view, this can be defended on the grounds that it actually does make everyone happier to keep such a principle in place (you know there will be a massive outcry otherwise and that’s not going to increase net happiness).

    • carvenvisage says:

      happier by boosting their sense of “justice” and “fairness” (which is just codeword for “unclean person has been quarantined, society is now cleaned of their impurity”)

      What makes you say/assume it isn’t the other way around? If people have weird intuitions (purity or otherwise) that just happen to match up with the idea of useful things like reciprocal disincentivisation, the obvious conclusion is that the weird intuitions are reflecting the latter, even if not by means of neatly placed rectangular grids.


      n.b. “people’s instincts aren’t always wrong” doesn’t in any way imply deontology is true. Deontology is a method which people love to mistake for a theory or description, and utilitarianism is a theory/description people love to confuse for the sole method. Sadly for the people who prefer the former mistake, true means “actually the case”, not “an easier-to-apply model”.

  11. Tarpitz says:

    I’m afraid I think you are typical-minding many – probably the majority – of desert theorists. I think most people really do occasionally delight in the pain of others who they don’t think deserve it; actively wishing suffering on those who do seems almost inevitable given that.

    At the very least, I can speak for my younger self. I am now firmly with Clint Eastwood in the “deserve’s got nothing to do with it” camp, and would not send Hitler to Hell, but for as long as I endorsed desert I certainly did view the suffering of those who deserved it as desirable, not merely necessary.

    Perhaps you mean only to speak of those with a fairly sophisticated view of desert, rather than the mass of naïve desertophiles. In that case, the claim is more plausible, though I still have my doubts – doubts, to be clear, about your cohort: I believe you are more atypically nice than you realize, not that you are mischaracterizing your own position.

    • J Mann says:

      1) I think you’re right about desert. Most people I’ve talked to have trouble watching people they feel are immoral thrive, and take some comfort in believing that sooner or later, dishonesty and other bad conduct catches up with you.*

      2) I’ll note that in Unforgiven, Will Munny’s response to surrendering a belief in desert is to return to a pre-civilized code of revenge killings, not to move to a more advanced state (where you hopefully are). He’s killing Bill because Bill killed Ned, whom Will loved, and he doesn’t care whether it’s right or wrong. If someone loves Bill and is a good enough shot, they can try to kill Will.

      It’s a super powerful scene, in part because it’s the culmination of Will’s surrender to nihilism. He’s so angry about Ned that he’s given up on right and wrong altogether. “Yeah, I’ve killed women and children. I reckon I’ve killed just about everything that walks or crawls at one time or another. And now I’m here to kill you, Little Bill — for what you done to Ned.”

      * This is older than steam – Glaucon and Adeimantus raise it very effectively in The Republic, and I don’t think Socrates adequately responds.

      • cryptoshill says:

        So I think most people who believe in desert want bad people to suffer to preserve some higher moral order of the world – not because they want bad people to suffer. They just aren’t losing any sleep over that particular group of people suffering, because preserving the social order is a net positive. For example – even those who hold this view are sympathetic to bread thieves, but they would not complain about bread thieves being arrested. The intuition is that “preserving the no-theft rule” is more important than the suffering of the individual bread thief, and that his individual suffering is bad – but they won’t be willing to march against the imprisonment of the bread thief.

        • Tarpitz says:

          I agree that most desert theorists do in fact believe in some higher moral order of the world and believe punishments serve it. I just also believe that in many cases – not all, perhaps not bread thieves, but many – they actively take satisfaction from the suffering of those they disapprove of. Both motivations are in play. And again, not all desert theorists. I believe Scott means what he says. I just think he’s atypical in this regard – as indeed one must assume SSC readers are as a whole.

          • cryptoshill says:

            The amount of moral satisfaction derived from such suffering is directly proportional to the amount of perceived danger the defection creates, i.e. to how dangerous it would be if we experienced norm-drift that made this behavior okay.

            The real problem with desert theory is that if a malicious actor cheats a desert-theorist’s heuristics on these things, that malicious actor can get them to approve of all sorts of awful behaviors.

      • Matt M says:

        “Deserve’s got nothin to do with it” is quite possibly my favorite movie quote of all time.

      • Paul Zrimsek says:

        We should be careful not to mix up belief in desert with belief in karma. Whether someone ought to suffer is orthogonal to whether they will suffer.

      • Tarpitz says:

        Completely agree with your analysis of Unforgiven (my favourite film). I was much more in the market for shoehorning in a reference to a great line than actually suggesting I endorse Munny’s behaviour; that said, I think the sense in which my state is truly “more advanced” (if it is) is in my greater ability to rationally explain why I am not a desert theorist. My preference for non-punishment of the guilty except insofar as is socially useful (frantic handwaving) is essentially an arbitrary product of my contingent sentimental preferences. If my best friend was murdered, perhaps I would in fact want to take revenge, and that would represent a change in my preferences, not my moral reasoning. Boo! to hypothetical me, says actual me, but that’s all.

    • Randy M says:

      Yes. For one, I believe that a universe where a man who murders, say, his wife, then walks out the door and gets hit by a car and dies is preferable to one where he then lives the rest of his life happy and crime free.
      The universe where he repents and devotes his life to helping others in recompense is perhaps preferable still. Nonetheless, wickedness leaves a debt that should be paid.
      Proportionality is important, and human justice systems need to be extremely cautious that they don’t cause worse consequences in administering it, and add whatever other caveats are necessary, but I and probably a great many people endorse karma or justice as principles.

    • Koken says:

      I think Will Munny would send everyone to hell, rather than no-one. After all, “We all got it coming”.

      • J Mann says:

        Will wouldn’t send his wife and kids to hell, even if that also meant sending Bill there. “We all got it coming” is, IMHO, limited to the hard men of the West – Will, the Kid, Bill, English Bob, etc.

        (I’m not even sure he would send Bill and Skinny there – he kills them out of revenge, but I’m not sure he wants them to suffer eternal torment).

        Here’s my take:

        1) In the beginning, Will is clinging to a sense of desert-based morality. His wife is a good person who took pity on them, and by taking care of their children, he’s doing some good that might make up for the ocean of evil he’s done.

        2) He resists Ned’s invitation to go on an assassination mission – he’s “not like that no more,” but he really needs the money, so he eventually rationalizes it into his morality. Maybe the cowboys have it coming – shouldn’t somebody look out for Silky, etc. He stays faithful to his wife’s memory when Silky offers him a “free one.”

        3) But ultimately, watching the cowboys die and learning of Ned’s death convinces Will to surrender to nihilism. Did the cowboys deserve to die for cutting Silky? Did Ned deserve to die for setting up the assassination? Does Bill or Skinny deserve to die? Does Will, who’s committed more atrocities than any of them?

        Will’s answer is that he doesn’t care. He’s going to kill Bill and Skinny on a pre-civilized rule of blood for blood. They’re all killers, so the fastest and luckiest can live, but there’s not much to it past that.

        • Dave92F1 says:

          “We all got it coming” is about death and sin.

          Death will come to all of us, and all of us have done morally wrong things and “deserve” death.

          There are no saints in Will’s world.

  12. 6jfvkd8lu7cc says:

    I think the case about UBI is also about inter-personal and inter-type utility comparison. The case about drugs… well, also like that a bit, I guess.

    Giving a person who doesn’t have realistic options a way to improve physical safety and comfort — versus something vague and abstract about meaning (where we didn’t even take away the option to use the old way, just stopped enforcing it)…

    It might be, by the way, that some people (me included — and likely you included, judging from some of the posts) _know_ that popular external ideas of comfort in life do not actually match the real internal preferences (neither the near-mode nor the far-mode ones), so the value of taking away «bad» options is naturally deemed negative (they would take away quite a few good options too in the process anyway), and the value of giving more options is considered quite significant.

    Are we also on a different side of the paradox of choice from the majority?

    (In the harassment case it is a safety-safety tradeoff, and you assumed that burden-of-proof questions are clear, so it is a question of long-term norm drift — you shun the harasser because this specific community wants to avoid slowly phasing out the activities that are made less safe by the thought of possible harassment. I find this is a different kind of tradeoff, in a way.)

  13. nameless1 says:

    I don’t know how to put it in a way that does not come across badly… but as someone struggling with mental health, I don’t trust my preferences. Preferences are predictions of which things are going to be good for me, and this isn’t working well for me. Don’t respect my preferences: do what is good for me.

    I know, I know. Preferences are kind of sacrosanct; the whole theory of modernity depends on them. Coercing people for their own good is known to be a slippery slope paved with moral hazards and all that. Still, they need to be a bit less sacrosanct. “Revealing true preferences” is dodgy for this reason, I think.

    In a utopia, we would do what is objectively best for people. But that requires predictions, and those predictions are fallible. People’s preferences as predictions matter for two reasons: 1) they know the details of their situation, 2) they have a strong motivation to get the things they predict to be good for them, not bad. On the other end of the scale are experts like doctors, who know the general logic of how a thing works but not necessarily the details of an individual person’s situation, and whose motivation to help may not be so strong – they may have other motivations, like professional success through trying out a new method one could write a paper about. Our loved ones know the details of our situation pretty well, and usually want good for us; this is why we let parents make decisions for kids.

    I personally know what is good for me, but I do not have a strong motivation to do it, due to the mental health stuff. I don’t think one can formulate a general principle, other than that we should try for the max() of professional expert knowledge, knowledge of the situation, and motivation toward the good. The latter two are usually what preferences are for. Usually, respect them. But if something is failing – if a person’s preferences lack predictive power, whether from not knowing their own situation, having a totally false theory of expert knowledge, or lacking motivation toward the good – then yes, preferences should be ignored, which really implies something like coercion.

    If it were up to me, I would probably create a system that piggy-backs on the existing one, where if your 95-year-old grandpa is demented, you can become his legal guardian – he effectively becomes a child in the eyes of the law. In the same way, there would be some kind of objective test for qualifying as an adult, rather than automatically getting adult rights at 18, and people not qualifying would be legally children for life, or at least until the condition preventing them from being really functioning adults is properly treated. I would probably not fully qualify, and seriously, having a guardian would do me good.

    • NoRandomWalk says:

      This was eye-opening for me, thank you.

      • Joseph Greenwood says:

        “Stumbling on Happiness” is one of the most influential books I have ever read. It’s about how bad people are at figuring out what will make them happy, and why we guess wrong about the future all the time. It taps into a lot of the same research that the rationalist community does, but goes in a very different direction with it.

        Basically, there’s good reason to think that people with mental disorders are not alone in being bad at predicting what will be good for them. I don’t self-identify as mentally disordered in any meaningful way, but I know I’ve guessed wrong about how much I will enjoy things before – this is especially true when I am sad and I think that if I (say) go on a walk I will still be sad… but instead going on a walk makes me happy.

        I still mostly want people to respect preferences because 1) the concept of free will is important to me, and 2) I don’t trust other people to consistently guess better than I do in regards to my happiness. But respecting preferences is not a sacred value for me–it’s just something that pragmatically seems to work.

    • durumu says:

      Have you looked into Coherent Extrapolated Volition? It’s Yudkowsky’s attempt to formalize something like what you’re talking about, and you might find it interesting. Here is a whitepaper (not as technical as the usual MIRI whitepaper) that explains it.

    • I relate to this strongly, but I wonder if there’s a sneaky way of reframing it? I’m pretty sure I always want to have my preferences respected, but it depends on the ‘I’ in question. So, for example, if I was non compos mentis, I’d want to default to the preferences of the sane version of me, not the one who’s telling the nurse that he’s a giant butterfly and plans to jump out the window. We’re forced to prioritize competing preferences amongst our various selves all the times (e.g. hyperbolic discounting) – is it that much of a stretch to prioritize them across time, too?

      There are obvious issues with this – which me is the real me? – but I guess that’s why we have contracts, power of attorney, various laws that constrain us in useful ways, friends that might forcibly stop us doing dumb shit, and all sorts of other mechanisms binding us to the preferences we held at a certain point in time/frame of mind. Personally, I’m super glad they exist!

    • Tarpitz says:

      In a utopia, we would do what is the objectively best for people.

      I understand and sympathise with what you are getting at, but the trouble is that there is nothing resembling a consensus on what is best for people – I think most people would say it’s overwhelmingly likely that there is no objective fact of the matter at all, and that people fundamentally and irreconcilably disagree. This isn’t just an abstruse metaphysical concern: it really does actually matter to what we as a society decide to do about all sorts of things.

      • nameless1 says:

        Yes, I am absolutely aware of how dangerous it is; it very, very easily leads to one group forcing their preferences on the other. The flip side of this is that humankind cannot be so wrongly calibrated as to constantly want the wrong things, so there is probably some kind of baseline of preference that points predictably at the good. Maybe this is what CEV is about – it was linked for me above.

        The thing is, the advice I would give to myself is far, far better than my actual choices, my revealed preferences. But if, for example, I uttered that advice in public, there would be a danger that I would be signalling – trying to make other people have a good opinion of me – and hence not be honest. This is why revealed preference is better: more honest. Still, the true advice I would give myself is better than my actual revealed preferences. If I put the advice I would give to myself anonymously in a closed envelope, and then this was somehow done to me, it would be good for me, albeit I might grumble.

        Isn’t it a bit of an id-ego-superego? Often the id generates our revealed preferences, pigging out on unhealthy food. At other times, our ego signals to others piously the importance of healthy lifestyles. But my best part is a kind of a superego, the kind of advice I would give to myself anonymously so without pious signalling, in a closed envelope.

        I wonder if part of the reason we have anonymous voting in politics is not just so that voters won’t be intimidated, but to cut down on signalling and make them vote for their more honest preferences. But it is kind of sabotaged by the whole shit-show of politics in general, which is not anonymous. What if we could create a system where our anonymous preferences are collected and taken into account – preferences that are less signal-driven than our publicly uttered ones, and less sabotaged by instinct, short-term desire, akrasia, addiction and whatnot than the preferences revealed in our actual actions?

    • fion says:

      Very well put.

  14. localdeity says:

    Consider the effect over multiple generations. People’s propensity to (a) ignore all rational and obvious warnings and abuse a drug, (b) harass women despite all attempts to prevent it, or (c) choose a lifestyle they find miserable over one they’d find less miserable and never experiment with the other and discover they should switch; that propensity should not be constant between generations. First, there’s the upbringing: there’s a long list of stupid things that parents and schools tell kids not to do, and I’m pretty sure it has some efficacy. Second, there are probably genes that affect things like impulse control, stupidity, and propensity to ignore parents’ warnings about self-destructive behaviors.

    If making the stupid choice (and thereby suffering) causes people to have fewer descendants, then this makes the suffering a self-limiting problem—the fraction of people that end up making the stupid choice should go down, and even if the utilities added up for the first generation of people appeared to support banning the stupid choice, eventually the balance will go the other way, and then we get the benefit of the drug summed up over all future generations, which is enormous (unless you strongly expect us all to die soon, in which case I hope you’re working on problems more important than idiots injuring themselves).

    Does making the stupid choice cause fewer descendants? (a): I hope so; if it “destroys your life”, it probably makes it more difficult to have and support kids. (If the welfare system setup means that people with “destroyed lives” actually have plenty of leisure and nothing significant preventing them from recklessly getting pregnant / fathering children with those with similarly destroyed lives, and such people actually end up having more kids grow to adulthood, then this is an extremely bad consequence of the welfare system—worse than the taxpayer burden it implies—and it should be burned with fire. I don’t know what welfare systems this currently applies to, but I suspect it has applied to some in the past.) If they already have kids, well, I hope the kids take the parents as an example to not follow; if they don’t, I hope that’s a consequence of general bad judgment on their part that also leads them to be less successful elsewhere and generate fewer grandkids. (b) Almost certainly yes: cutting off desired social connections can only decrease one’s ability to find mates (unless one then spends more time finding new social connections and those happen to be better for finding mates—but for someone whose sex drive seems important to them, their desired community probably ranked highly for that). (c) Eh, maybe somewhat: being miserable is probably not an attractive quality, nor is not having developed and proven competency at some valuable activity.

    I strongly, strongly support society being set up in such a way that people who would do self-destructive things despite the best advice of their family, friends, and teachers are allowed to do them, and in which this leads to them having less influence over future generations (either through teaching or genetics). There might be reasons to delay getting there, if a fast transition would cause too many short-term negative consequences and destabilize society, but I think there’s no reason to move away from there.
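    The “eventually the balance goes the other way” claim above can be illustrated with a toy model. All numbers here are invented for illustration (benefit, harm, initial abuser fraction, and per-generation shrink rate are assumptions, not anything from the comment): a first generation where banning looks utility-positive can still be outweighed once the abusing fraction declines over generations.

```python
# Toy model of the multi-generation argument above. Every number is invented
# purely for illustration; the point is the sign flip, not the magnitudes.
B = 1.0       # per-person benefit of the legal drug for non-abusers
H = 3.0       # per-person harm suffered by abusers
f = 0.3       # initial fraction of each generation making the "stupid choice"
shrink = 0.7  # per-generation decline in that fraction (selection + upbringing)

per_gen = []
for _ in range(10):
    per_gen.append(B * (1 - f) - H * f)  # net utility of legalization this generation
    f *= shrink

print(per_gen[0] < 0)    # first generation nets out negative, so a ban looks justified
print(sum(per_gen) > 0)  # summed over generations, legalization comes out ahead
```

Under these (assumed) parameters the first generation is a net loss, but the cumulative total turns positive within a few generations, which is the shape of the argument being made.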

  15. Bugmaster says:

    In all three cases, it sounds like you’re describing an addictive superstimulus. The antidepressant is a pretty typical case of chemically-mediated addiction. The harassment is a socio-sexual addiction. The UBI is, arguably, a socioeconomic addiction (e.g. to vegging out and playing videogames all day instead of working).

    The problem with superstimuli is twofold. Firstly, once you have indulged in a superstimulus, it changes your personality — perhaps irrevocably — to want more of the stimulus, to the detriment of whatever your goals were before that. Heroin addiction is an extreme example. But secondly, even before the addiction, the pleasure promised by the stimulus is too intense for some people to resist.

    If this is indeed the case, then banning the superstimulus makes sense, even from a Utilitarian point of view. It may not be the case that allowing the stimulus would only cause a few self-selected losers to overindulge in it; rather, it might embolden and even normalize the indulgence, swiftly overwhelming whatever tenuous barriers people have in their minds against dedicating their entire lives to the pursuit of this particular pleasure.

    What’s worse, as I said in some previous comments, I can’t even argue that pursuing such superstimuli is irrational; ultimately, I can’t build up a convincing logical argument against wireheading. On an emotional level, I feel that wireheading would be bad — but if I were wireheaded, I would feel nothing but pleasure all the time, so that’s not much of an argument…

    • localdeity says:

      If you spend your adult life doing nothing but wireheading (and, I suppose, eating and drinking and such when necessary—or just having an IV drip), then you will eventually die with no children. Meanwhile, those of us who have an “irrational” aversion to wireheading will continue to have children. Some fraction of those may go and wirehead too. To the extent that this “irrational” aversion can be passed on via upbringing or genes, it will be. Suppose it maxes out at a point where 60% of children still end up wireheading. Then couples can have 5 kids each and the population will be stable (or >5 kids for a growing population).

      The problem is self-limiting.
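      The replacement arithmetic in that hypothetical checks out, and can be sketched directly (the 60% wirehead rate and 5 kids per couple are the comment’s own hypothetical numbers; the starting population of 1000 is an arbitrary assumption):

```python
# Check of the hypothetical above: if 60% of each cohort wireheads (and has no
# children) while couples have 5 kids, the reproducing population stays level.
wirehead_rate = 0.60
kids_per_couple = 5

reproducers = 1000.0  # reproducing adults in generation 0 (arbitrary starting size)
for _ in range(10):
    births = (reproducers / 2) * kids_per_couple  # pair into couples, 5 kids each
    reproducers = births * (1 - wirehead_rate)    # only 40% grow up to reproduce

print(round(reproducers))  # stays at 1000: 5 * 0.4 = 2 kids per couple = replacement
```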

      • Bugmaster says:

        That makes perfect sense from the population genomics point of view, but it tells me nothing about whether I should prefer wireheading or not, rationally speaking.

        Additionally, we could even sidestep this problem completely, because if wireheading ever becomes technologically feasible, then so would cloning. Even in our current world, sperm/ova donation is pretty much routine (maybe less so for ova).

  16. Ivan Timokhin says:

    I’m confused. Didn’t you at some point write a post in defense of psych treatment of attempted suicide? Because it seems, superficially, to be the same kind of situation. There’s a policy — not treating people for attempted suicide — which seems to have some kind of a broadly-distributed upside (people that have carefully considered suicide get to commit it without interruptions, plus we free some amount of funding to be spent on other things), with the downside falling primarily on those who “chose” it. Sure, their choice probably didn’t get the careful deliberation it deserves, and they frequently end up regretting it later, but it seems that excluding choices that people end up regretting is going to eliminate the whole category almost by definition (well, except for the Caplanian kind, but those aren’t the problem).

    And, speaking of choices, aren’t pretty much all people demonstrably irrational? I don’t find it even slightly implausible that there’s a tradeoff on which one will choose a short end pretty much for everyone (and if there isn’t already, that’s what thought experiments are for). By this logic, shouldn’t we then enact all these policies at once, leaving everyone uniformly worse off (since, by assumption, the downside of each policy outweighs the upside)?

  17. Aapje says:

    There is a more general question about the extent to which liberty improves well-being.

    For example, many people seem to be poor at estimating the effects of their actions and thus make choices that leave them worse off. A person who underestimates the addictive effect of a drug or medicine may only realize that their actions resulted in addiction once they are already addicted.

    The more you see people as children who lack the ability to see the consequences of their actions, the more often it makes sense to have culture, the government or some other force limit people’s freedom to act, for their own well-being. This is similar to how, if a toddler falls off the stairs, we tend to blame the parents for not taking security measures, rather than argue that the toddler got her just desert and/or chose to fall.

    My personal point of view is that people are partially incompetent and thus should partially be protected from their own stupidity. Because people differ in their competence, but the protection can often not be tailored to the person, we cannot help that the protection is too strong for some, but too weak for others.

    This is then very similar to the case where you have different groups with different needs, where some have a legitimate desire and would be best off with a liberal policy, but where others have an illegitimate desire and would be best off with an illiberal policy. Ideally you’d differentiate between the groups, but if you can’t, you have to choose to what extent you will accommodate each group.

    • L. says:

      What should you be denied in order to be protected from your own stupidity?
      Drugs, guns, management of finance, porn, alcohol, tobacco, medical autonomy, internet, sexual autonomy, video games, TV, career choice, gender autonomy, kinks, etc.? Anything at all?

      • durumu says:

        Well, of course, we already (attempt to) deny people their drugs and medical autonomy, and don’t really deny people any of those other things, so I think it’s safe to call that a Schelling fence. Personally, I’m not sure that’s the best place for us to be (I would support, for instance, allowing FDA-approval-pending and FDA-unapproved drugs to be legally sold online-only), but I think that eg banning heroin is probably a positive-utility move.

        • Ghillie Dhu says:

          I think that eg banning heroin is probably a positive-utility move.

          Then I’m skeptical that you’re accounting for the costs of enforcement & violence associated with the black market in heroin.

        • L. says:

          @Guy in TN

          My question wasn’t meant to imply a slippery slope, but to semi-genuinely inquire what he thinks should be denied to him due to his own stupidity.
          Semi-genuinely because I strongly suspect the answer to that question is exactly the same as the one I would give.
          You may notice how Aapje’s proposition/protection has been presented as a good given to the people to whom it applies, something to improve their lives; yet it is a most unusual good.
          It is perhaps the only good in existence that we are so altruistic about that we are willing to give it to everyone else and keep literally none for ourselves; the only good that, when encountered in the wild by its beneficiaries, often makes them feel frustrated and angry; probably the only good whose beneficiaries will try to subvert and undo it even when fully seeing what it is.
          Either that’s an odd good or I’m an odd man, because I have yet to encounter that good in the wild and feel anything other than overwhelming desire to personally strangle every single one of my protectors and their supporters in their sleep.

      • Guy in TN says:

        We already regulate (i.e. control people’s access to):

        Drugs, guns, management of finance, porn, alcohol, tobacco, medical autonomy, internet, sexual autonomy, video games, TV, career choice, gender autonomy, kinks

        (With the exception of perhaps “gender autonomy”)

        And it has not led to slippery-slopes. By harnessing public preferences via democracy, the level of regulation expands and contracts as people gauge their various benefits and trade-offs.

        • Salem says:

          And it has not led to slippery-slopes


          • Guy in TN says:

            We ban heroin but we don’t ban Cheetos. We ban Gatling guns, but we don’t ban pitchforks.

            Arguments along the lines of “this will spiral out of control” have to reckon with the reality that it has not.

          • Nabil ad Dajjal says:

            @Guy in TN,

            You’re clearly not familiar with the UK’s proposed knife bans or the various campaigns here in the States against soda.

            There are absolutely slippery slopes here, which we’ve been slowly riding for longer than I’ve been alive.

          • Guy in TN says:

            The imagery of the “slippery slope” is that it is one-directional: once you go down, you don’t ever go back up.

            Marijuana was once banned; now it’s being legalized.
            Alcohol, porn, and sexual autonomy were also much more regulated than they are now. If regulation of these behaviors is a “slippery slope” towards ever-escalating control, then this shouldn’t be happening.

            Your counter-examples are just evidence that it’s not a slippery slope towards de-regulation either, but rather an ebb and flow that changes with time.

          • Nabil ad Dajjal says:

            You seem to be misunderstanding the metaphor of the slippery slope.

            Saying that something is a slippery slope means that there’s no obvious stopping point. It doesn’t mean that any changes are impossible to reverse, or that change must be immediate.

            And ever-increasing control sounds about right for today’s regulatory environment. Even the most insane and powerful despots of history wouldn’t have been able to enforce laws on the allowable curvatures of bananas. We’re in an absolutely unprecedented era of state control.

          • baconbits9 says:

            Saying that something is a slippery slope means that there’s no obvious stopping point. It doesn’t mean that any changes are impossible to reverse, or that change must be immediate.

            I would say that a core portion of the concept is that it is easier to go one direction than the other.

          • Jiro says:

            We ban heroin but we don’t ban Cheetos.

            The argument is not “banning heroin will lead to banning everything”. The argument is “banning heroin will lead to everything being banned or not banned on an arbitrary basis”.

      • Bugmaster says:

        I would say that we should deny such things that are likely to cause great harm, far in excess of their benefits. So, maybe we can allow handguns, but not rocket launchers, that sort of thing.

        • woah77 says:

          I have to say that the concept of “not allowing rocket launchers” never made much sense to me, because the cost of acquiring a rocket launcher legally necessitates a level of capital that suggests it will be unlikely ever to be used in a criminal or harmful way. I suggest that certain market effects create their own barrier to entry, which is far more rational than simply saying “You can’t have it.”

          • Bugmaster says:

            Firstly, I am pretty sure that I could afford to buy a rocket launcher, assuming they were legal. I am also pretty sure I’m exactly the kind of person who shouldn’t own one.

            Secondly, rocket launchers are difficult and expensive to obtain primarily because they are illegal. Yes, they intrinsically are more expensive to manufacture than handguns, but not by that much.

          • woah77 says:

            I think there are controls that could be implemented to keep them expensive to sell to civilians, barriers to entry of one form or another, such that only the extremely wealthy could ever purchase one for private use.

            Also, I’d like to point out that “I’d get myself killed” is not a reason to make something illegal, and just a reason to exercise self control.

          • Garrett says:

            because the cost of acquiring a rocket launcher legally necessitates a level of capital that suggests it will be unlikely to ever be used in a criminal or harmful way

            I decided to go check that out and apparently the cost for the current US single-use shoulder-fired rocket of choice, the AT4, is $1,480.64. That’s about as much as a higher-end COTS handgun or mid-range rifle. One use only, but definitely affordable. I’ll take 2!

          • Aapje says:


            That is what the military pays. The person you quoted said ‘legally,’ which for a US civilian means getting a $1000 per year federal license (at which point you are technically a dealer), as well as paying a $200 tax for each ‘destructive device.’

      • Aapje says:


        What should you be denied in order to be protected from your own stupidity? Drugs, guns, management of finance, porn, alcohol, tobacco, medical autonomy, internet, sexual autonomy, video games, TV, career choice, gender autonomy, kinks, etc.? Anything at all?

        Child labor, unsafe work conditions, slavery contracts, unsafe building construction, statutory rape, being required to pay for social security, etc.

        Your examples are quite typical for someone who takes many ‘nanny state’ benefits we get for granted and who thus merely recognizes the things where they disagree with the nannying.

        • L. says:

          So you consider yourself so stupid that you feel you cannot trust yourself to make a decision when unsafe work conditions or a slavery contract would be beneficial for you to take?
          Too stupid to make a decision whether or not you want to live in subpar buildings or make a budget in which you determine how much of your earnings goes toward your social security related stuff?
          If so, I must congratulate you, for not many people would have the courage to admit that they are so stupid they cannot trust themselves to not sell themselves into literal slavery unless absolutely necessary.

          Out of the six things you listed, I violated one until I got too old to violate it, am currently violating one, and would be involved in violating another were it my employer who was exposing me to a similar level of risk instead of my doing it myself.

          • Aapje says:

            This is not about me personally, but about society in general. There are indeed many people who make rather dumb and shortsighted decisions.

            You are also assuming a well-informed person, which is not necessarily the case. There is no way I can realistically know whether the buildings I go into are built safely.

            Thirdly, you are assuming that the person is not dealing with strong pressures. A person may feel pressured to sacrifice their well-being or take substantial risks because otherwise their well-being is reduced in other ways. Social approval/stoicism/a protestant work ethic plays a major role here, where people may self-sacrifice rather than care for themselves.

            Laws can stop/unwind purity spirals, stopping competitions where people prove their fitness in societally harmful ways.

            Of course there is something to be said for putting these decisions in the hands of individuals who best know their circumstances, but there is also something to be said for not doing that. Like usual, a mixed system is best where the state intervenes when excesses happen. Of course, what is an ‘excess’ is a matter of contention.

        • cuke says:

          This is a wondering I have about libertarian views. From the outside they sometimes seem to me to be unaware of certain historical experiences. I get that libertarianism has a certain utopian idealism about it that makes it aspirational, but then at some point it has to engage with how we got here. I’m positive there are a lot of really smart people who are libertarians and so imagine many of them must not be ignorant of history.

          Proposing that society protect people from child labor, statutory rape, slavery, faulty construction, etc and the response being “are you so dumb you couldn’t protect yourself from these things?” is one of those moments where someone like me who is sympathetic and interested in understanding more about libertarian views will come to certain unhelpful conclusions about libertarianism. I’m saying this by way of feedback if the goal is to have a productive conversation with people who don’t already hold your view.

          • ec429 says:

            Proposing that society protect people from child labor, statutory rape, slavery, faulty construction, etc and the response being “are you so dumb you couldn’t protect yourself from these things?”

            The point, though, is that there are times when a child really, no-seriously-he-knows-what-he’s-doing, wants to labour. Children worked in mills because the alternative, given the amount of wealth that existed at the time, was generally starvation (or back-breaking rural poverty, if they stayed out of the cities).
            And sometimes a couple are sufficiently emotionally mature to give informed consent, but the law declares they’re underage and therefore they must wait or else it’s rape.
            The faulty construction one is similar to the child labour one, while with slavery it’s important to make a distinction between being forcibly enslaved and selling oneself into slavery (or ‘perpetual indentured servitude’). The sort of ‘protections’ you speak of are only to prohibit the latter, because the former is already prohibited by it being, y’know, theft of someone’s freedom. Historical slavery like the African slave trade was invariably of the coercive kind (generally African blacks were captured by other African blacks to be sold as slaves), and treating the other kind as equally repugnant is a case of the noncentral fallacy.

            In general what’s happening is that the paternalistic gentry-liberal (who has no skin in the game) says “outcome X is so horrible that I can’t believe anyone would ever choose it. Therefore, let us ban X”. So now all the people whose options were X or Y are stuck with Y, which is even worse (else they wouldn’t have been choosing X in the first place).

            So what the libertarian is asking is “are you really so dumb that you’d choose X even when it’s a terrible idea and not at least somewhat near to the best of a really terrible option space?”

          • Guy in TN says:

            So now all the people whose options were X or Y are stuck with Y, which is even worse (else they wouldn’t have been choosing X in the first place).

            This is a point of contention: the assumption that people are, in fact, always making decisions that are best for themselves. Scott alluded to the problems as they relate to drug addicts.

            You acknowledge that a parent might know what increases a child’s utility, more than the child himself, right? So given the variation among adults in knowledge and expertise (and clouded judgement due to addiction, or moments of passion), you can’t determine whether X or Y is “better” or “worse” for a person simply by looking at which one they choose.

          • L. says:

            You’re both missing the overarching argument I’m trying to make and misrepresenting the implied one.
            Overarching argument is that the protection suggested by Aapje is not only something which people almost universally reject for themselves and see as an unwanted burden, but something whose very basis they reject when it comes to themselves. I am willing to bet Aapje is one of those people himself.

            As for the implied argument, it’s not a general argument that society shouldn’t protect you from slavers and faulty construction, as that would mean that anyone can freely at their whim make you a slave or sell you a house that crumbles the moment you purchase it.
            No, the argument is that society shouldn’t protect you from your own choice if upon being informed precisely what the slave contract says or what the exact condition of the house is, you choose to take the slave contract or purchase the house.
            If a man tries to sell me heroin by telling me it’s a harmless cough suppressant that has no known side-effects, then yes, society should interfere, but if I’m well aware of what heroin actually is and still choose to buy it, then I neither desire, accept, nor consider justified society protecting me against that choice.
            To give you an example from my own life, I am well acquainted with what several plants do to the human mind and the risks that come with them, yet due to the benevolence of a society that is very serious about protecting me from my own choices, I had to jump through multiple hoops and risk prosecution in order to get said plants; I wish to have nothing to do with this benevolence and reject it in all its incarnations.

          • L. says:

            @Guy in TN

            Suppose that is true to a significant enough degree; are we to accept that our freedom is so shallow that it should only exist when we are making the best choices for ourselves?

          • Guy in TN says:


            Suppose that is true to a significant enough degree; are we to accept that our freedom is so shallow that it should only exist when we are making the best choices for ourselves?

            We shouldn’t try to micromanage all behaviors, but we should control some behaviors.

            For example:
            If someone just went through a nasty breakup and decides to eat a tub of ice cream, I probably can’t say, as someone who is not them, whether the emotional benefit of the ice cream really outweighs its disutility in physical health, or whether the person’s judgement is too clouded. So let them eat the ice cream.

            But if someone goes through a nasty breakup and decides to eat a tub of cyanide, I probably can say (based on our shared humanity) that their utility-judgement is being clouded by strong emotions, and the best outcome would be to prevent them from making their own decisions. So they shouldn’t have the freedom to eat cyanide.

            Freedom of choice, like every other value, has utility trade-offs. It’s not my terminal value, and I’ll gladly kick it to the curb if it gets in the way.

          • L. says:

            @Guy in TN

            Who are you to decide their best outcome and where do you draw the line?

            Look at your ice-cream example, you claim we shouldn’t try to micromanage all behaviors, yet the only argument you bring as to why you aren’t stopping them from eating the ice-cream is ignorance of utility math.

            Also, you claim their utility-judgement is clouded by strong emotions, but that’s not true. Their utility-judgement isn’t clouded by strong emotions, it is incorporating strong emotions.
            Emotions are a powerful source of values, so for example when I’m very angry I value violence and suffering; that’s not my utility-judgement being clouded by strong emotions, that’s my emotions being a source of values that go into the utility-judgement. I am still able to think and make the utility-judgement that those desired things would be against several other values I hold.
            The strong emotions of your suicidal dumpee just happen to be a source of value that makes death the best choice.

          • Guy in TN says:

            Also, you claim their utility-judgement is clouded by strong emotions, but that’s not true.

            Have you ever considered that, among survivors of suicide attempts, the proportion of people who are glad someone saved them is >0%? Which is to say, the will-to-revealed-preferences pipeline actually does get clouded sometimes?

            Their utility-judgement isn’t clouded by strong emotions, it is incorporating strong emotions.

            If you aren’t convinced by the suicide example, I wonder if you would apply the same reasoning to the behavior of children? That is, if a toddler gets the emotional urge to run in front of traffic, a parent shouldn’t pull him away?

            There’s nothing that happens in the human brain when someone turns 18 where we can go “ah, now they can fully comprehend the consequences of every action.” We’re all swayed by bursts of fleeting irrationality, and our behavior at a given moment should not be taken as conclusive evidence of a person’s utility function.

          • L. says:

            @Guy in TN

            Sometimes in anger I hit things I later wish I hadn’t hit, but that isn’t a sign of a clouded judgment, as I wanted to hit those things and found it enjoyable; it’s a sign of different preferences across time.

            I would not apply the same reasoning to children, as they are not fully developed human beings with all of the capacities of an adult human being; once they become that, they can dance on the freeway for all I care.

            We all may be swayed by bursts of fleeting irrationality, but you haven’t shown that. Your case of the suicidal dumpee is an example of a person in great emotional pain (however trivial it may be to us), pain that creates a value of release from that pain. As the emotional pain is great, so too is the value, which causes it to overshadow all other values. Like it or not, of the options that offer release from the pain, death is likely the quickest and easiest, which makes it a rational choice.

          • Guy in TN says:

            it’s a sign of different preferences across time.

            If that’s the way you want to conceptualize it, you could say that I’m advocating for mandating the fulfillment of expected future preferences, at the expense of fulfilling short-term current preferences. In certain circumstances, at least.

            I don’t see why the creation of value now necessarily takes precedence over future value creation.

          • Guy in TN says:

            once they become that, they can dance on the freeway for all I care.

            Because you are assuming they fully comprehend the consequences of their actions, right?

            But what if this wasn’t the case? There are plenty of aspects of life that I don’t understand. What a certain medication does. How safe a certain gas is to breathe. What makes a building earthquake-resistant or not.

            Really, the set of things in life whose consequences I can say I absolutely understand is rather small.

            Even in your “dancing in the street” example: I can’t tell you if getting hit by a car going 30 miles per hour would kill me on the spot, or merely leave some bruises.

          • Aapje says:

            A major issue with voluntary slavery and contracts in general is that it undermines exactly the ability to freely make decisions.

            For example, I think that the strongest argument in favor of divorce is the ability to escape an abusive marriage.

            In my opinion, the idea that this issue can be solved with a combination of people informing themselves & writing contracts that describe what is and what is not allowed is not realistic. In most cases, being fully informed is impossible. Writing fully exhaustive contracts is nigh impossible for non-trivial situations. Society cannot function without trust and making reasonable assumptions. These two issues mean that basing society purely on contracts won’t work. It will result in endless rule-lawyering and abuse of loopholes.

            It will (ironically) greatly restrain people, as smart people will not dare to make any contracts but the ones that have been gradually improved over time to feature the fewest loopholes. The few people who dare to be different will face enormous bureaucratic costs, as they will need immense contracts for any non-standard agreement they want to make.

          • ec429 says:


            Society cannot function without trust and making reasonable assumptions.

            I don’t think anyone’s saying courts can’t interpret contracts and make reasonable assumptions about what they mean for things they miss out. That, however, is different from the unconscionability doctrine whereby the court says something like “Well, the contract said that A would enter indentured servitude to B, but no-one would ever be willing to do that so we’re gonna declare it void”. Which in turn is different from the fraud case of “A reasonable-man would not be aware that that was what he was signing”.
            You seem to be conflating “everything is up-for-contractual-grabs, even slavery” with “contracts are subject to literal, even mechanical, interpretation and inflexible enforcement”. Which may be an argument worth having (contracts in such a world could, for instance, incorporate-by-reference an existing and thorough code of interpretation and equity, and also a separate code of, say, building safety standards if there’s a building involved) but I don’t think it’s the discussion anyone else in this thread thought they were in.

          • SaiNushi says:

            You are confusing “all libertarians” with “anarcho-capitalists”. As a libertarian, I believe that the government should intervene if a person is interfering with another person’s rights. I believe that every person has a right to autonomy, bodily integrity, personal property, freedom of association, freedom of speech, and a basic education (up to 8th grade level, plus basic safety courses and intro to logic). I believe that a local government should deal with things that affect all the locals, the state government should deal with things that affect everyone in the state, and the federal government should deal with things that affect everyone in the country. A lot of libertarians would love to see something like Scott’s archipelago solution, with each town being its own government, and assistance to help people move if they prefer a different government that’s available elsewhere.

    • Faza (TCM) says:

      You do realize that this approach is trivially attacked, I hope?

      If people are incompetent (to whatever extent) and require protection, then it appears necessary that there exists a competent authority that shall institute the protections. If the authority charged with protection is, itself, incompetent, the cure could well be worse than the disease, no?

      We need a competent authority and we’ve already established that people are incompetent. Quite the conundrum, eh?

      Of course, anyone interested in assuming the role of such authority would claim: “I am competent. It’s all those other people who are incompetent and need to be protected from themselves.”

      To which the correct answer is: “Well, you would say that, wouldn’t you?”

      Given that we cannot ensure that whatever authority we may establish to protect people from their own incompetence is itself competent enough to do a good job, and that anyone who actually wants the job should probably be kept well away from it (for reasons of its dictatorial nature), the only way forward that I see is to create a Superintelligent AI that will be supremely more competent than any human in order to protect us from our own incompetence.

      And thus the end begins…

      • albatross11 says:

        There’s also a huge agency problem here. Once I’m in a position to make decisions for your own good, there’s a lot of temptation for me to make some of those decisions for my own good instead.

      • Guy in TN says:

        If people are incompetent (to whatever extent) and require protection, then it appears necessary that there exists a competent authority that shall institute the protections. If the authority charged with protection is, itself, incompetent, the cure could well be worse than the disease, no?

        We need a competent authority and we’ve already established that people are incompetent. Quite the conundrum, eh?

        Not a conundrum, because you are switching from “people are incompetent” to “all people are equally incompetent, all the time”, which is a different thing.

        The scientists at the FDA are more competent than I at determining what drugs might kill me. And the people at the EPA are more competent than I at determining what gases might give me cancer.

        • Salem says:

          The FDA have far more knowledge about what causes cancer than you do. They also have far less knowledge about you than you do, and far worse incentives. For instance, they get far more blame for allowing an unsafe medication to be sold, than for preventing a safe one from being sold, which makes them too conservative in allowing new products onto the market. By some calculations, they have cost millions of lives.

          Faza (and James Madison) is right:

          If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself.

          • Guy in TN says:

            The object-level question of the net-utility of the FDA aside, a utilitarian could theoretically acknowledge that:
            1. The FDA knows less about me personally than I do.
            2. The FDA has a different set of incentives than I do.

            And still come to the conclusion that giving drug-regulating power to the FDA is a good idea, because it’s worth the trade-off for their superior scientific knowledge.

          • baconbits9 says:

            And still come to the conclusion that giving drug-regulating power to the FDA is a good idea, because it’s worth the trade-off for their superior scientific knowledge.

            No, they can’t. They can come to that conclusion, but they can’t come to a utility-maximizing conclusion, as that would require knowledge that they cannot have.

          • Guy in TN says:

            Utility-maximizing inherently involves working within given constraints, including imperfect knowledge.

            In the trolley problem, we don’t say that the utility-maximizing option is for the trolley to sprout wings and soar into the air. Instead, we’ve got to work with what we’ve got.

          • baconbits9 says:

            No, the trolley problem aims at as much certainty as you can get: either five dead or one dead. Organizations like the FDA obscure the relevant data; you don’t get to look at the last 20 years and judge a group of outcomes that used the FDA against a group that went it alone. The utilitarian argument requires that you have the information that the FDA will prevent you from collecting.

          • Guy in TN says:

            The utilitarian argument requires that you have the information that the FDA will prevent you from collecting.

            It certainly does not. Like I said earlier, you can make utility-maximizing decisions without having perfect information. Even in the trolley problem, we don’t know everything useful, like the people’s ages (I’d save one baby over two 110-year-old men, for instance).

            So the lack of controlled studies regarding the existence/non-existence of the FDA does not mean that utilitarians are left flying blind, making decisions via coin flips. You can make an inference via larger trends, lived experiences, and heuristics.

          • baconbits9 says:

            It certainly does not. Like I said earlier, you can make utility-maximizing decisions without having perfect information

            Yes it does, because your decisions aren’t one-offs. It isn’t “should the FDA regulate either every single thing that exists or nothing”, it’s “what should the FDA regulate”. Restricting information prevents this decision-making from being accurate, which prevents the possibility that you are making utility-maximizing decisions by any method other than chance.

            Even in the trolley problem, we don’t know everything useful, like the people’s age (I’d save one baby over two 110-year old men, for instance)

            This information is not being obscured by the nature of the trolley problem, the question of “what would the rate of heroin addiction be absent a federal ban on heroin” is obscured by a federal ban on heroin.

          • Guy in TN says:

            This information is not being obscured by the nature of the trolley problem, the question of “what would the rate of heroin addiction be absent a federal ban on heroin” is obscured by a federal ban on heroin.

            Declining to obtain information is often the utility-maximizing decision, particularly if there are second-order effects that would be suspected of creating disutility.

            For example: I don’t know whether jumping off a 100 foot high cliff would be amazing and feel fun, since I’ve never done it before, and I can point to no experiments involving my particular body and that particular cliff. So I could jump off that cliff, and obtain previously unknown knowledge that would help me make more utility-maximizing decisions in the future.

            You wouldn’t say the utility-maximizing decision is to jump off the cliff, right?

          • Bugmaster says:

            @Guy in TN:
            To be fair, that’s a rather biased example. On the one hand, you have plenty of strong evidence to suspect that jumping off the cliff would kill you (based on what you already know of gravity, the durability of your body, direct observation of other jumpers, and so on). Yes, obtaining data is important, but in this case the net expected value is quite low.

            You could improve your example by saying something like, “I’ve managed to accidentally synthesize some kind of liquid while in a drunken stupor last night; I have no idea what it is or does, so should I drink it now ?”. But, in this case, there are many more ways for a liquid to turn out poisonous than for it to turn out pleasant; even if you know nothing else about the liquid, drinking it still has a very low expected value.

        • Faza (TCM) says:

          [Y]ou are switching from “people are incompetent” to “all people are equally incompetent, all the time”


          The question isn’t one of relative, but absolute competence, that is: whether the designated authority is competent enough.

          No disagreement that the level of incompetence will differ between people, but once you nominate a person to decide what another person may or may not do “for their own good”, it becomes imperative that the authority is able to make decisions that are, in fact, better for the subject than those the subject could have made for themselves.

          It is not at all clear or certain that this will, in fact, be the case and the latter part of my reply touched on this.

          Authorities tend to act non-persuasively – they don’t convince you to do or not do certain things, they tell you (and often have means to compel compliance). Of course, this does not preclude you being persuaded to follow their dictate without need of compulsion, by… ahem… deference to authority. There is a subtle trap here though:

          The scientists at the FDA are more competent than I at determining what drugs might kill me. And the people at the EPA are more competent than I at determining what gases might give me cancer.

          You presume the above to be the case, but I posit that by virtue of the fact that you are less competent in the subject matter in question, you are in no sound position to judge whether the FDA or EPA are themselves competent. If you were competent enough to say with certainty that the FDA or EPA are competent in their areas of expertise, you wouldn’t need either.

          • Guy in TN says:

            You presume the above to be the case, but I posit that by virtue of the fact that you are less competent in the subject matter in question, you are in no sound position to judge whether the FDA or EPA are themselves competent. If you were competent enough to say with certainty that the FDA or EPA are competent in their areas of expertise, you wouldn’t need either.

            Oh, come on. Do you trust that Stephen Hawking knew more about physics than a random TV celebrity? And how can you say, without yourself being such a world-class expert in physics that you don’t even need to read his trivial, elementary observations?

            It’s a standard you apply to no other aspect of life, I imagine. I posit that people routinely (and accurately) rank a person’s competence in a field, without having knowledge of the actual inner-workings and content of that field.

          • Faza (TCM) says:

            With great power comes great responsibility.

            Frankly, it doesn’t matter very much whether Hawking knew more about physics or not, because Hawking’s knowledge did not have a political dimension (at least, as far as I’m aware).

            Look, I’m an economist, by education. Here’s what I know about economics:

            1. Despite popular misconceptions, economics is a science. Certain parts of economic theory are as solid as possible – given the sheer complexity of the system studied (compared to economics, physics is easy).

            2. The moment economics gets intertwined with politics (which happens most of the time), it ceases to be a science and becomes politics. This is true of both the statist left wing and the libertarian right wing.

            Economics isn’t the only branch of knowledge that is subject to this problem. Every time you have science being used to drive policy, the science becomes subservient to policy. If you don’t believe me, I propose hunting down some of the actual publications underlying specific “science-based” policies and seeing how much of the nuance gets lost between the study and the policy. Looking at predicted effects at the time a policy is instituted and the actual effects down the line is also instructive.

            I’m as big a fan of science as the next man, but scientism (the blind faith in science providing the right answers to difficult questions of policy) is a cancer. It’s like the replication crisis never happened!

          • Guy in TN says:

            Economics isn’t the only branch of knowledge that is subject to this problem. Every time you have science being used to drive policy, the science becomes subservient to policy.

            The FDA, because it is a part of the state, is swayed by the goals and values of the state. I agree with this, it seems pretty basic.

            But if the FDA were run by a corporation, it would be swayed by the goals and values of the corporation. And if the FDA were run entirely by a single busy and eccentric individual, it would be swayed by that individual’s goals and values.

            There’s no getting around that science must be conducted by humans, who operate under normative values distinct from the science they are conducting.

            I propose hunting down some of the actual publications underlying specific “science-based” policies and see how much of the nuance gets lost between the study and the policy.

            This seems like an argument that the scientists and the policy-makers need to be more closely intertwined, so that they have better communication and coordination, and prevent these misunderstandings. Without the FDA, policymakers would have to try to convince MedGloboCorp (or whoever is doing studies) to tell them what they know. I mean, the science->policy translation has to happen somehow, and doing it in-house seems like the most error-free approach.

            I’m as big a fan of science as the next man, but scientism (the blind faith in science providing the right answers to difficult questions of policy) is a cancer. It’s like the replication crisis never happened!

            I’m not saying the FDA is right all the time, I’m saying their opinions are more right than mine, because that is their field of study.

            They may have poorly replicated studies regarding, say, sodium benzoate, but I don’t even know what sodium benzoate is, so they still come out ahead.

      • Bugmaster says:

        I would argue that it isn’t the case that people are totally incompetent. Everyone can agree that being e.g. a heroin addict would be really bad — primarily because we have lots of heroin addicts around to use as examples, despite the fact that no one wants to become a heroin addict personally. Perhaps (and I’m just entertaining this as an idea, not claiming that it’s true) there’s something about heroin that makes it uniquely difficult to resist in certain real-life conditions. So, we can all get together and agree to ban heroin, knowing that many of us — including those who voted “yes” on the “ban heroin” proposition — would individually be unable to resist it.

    • Bugmaster says:

      I somewhat agree with this argument, if not with its tone. As I said in my comment above, rather than labeling people as “incompetent”, I’d say that there exist some superstimuli so alluring that they are beyond the average person’s capacity to resist. It’s not a matter of insufficient training or wilful ignorance; it’s a matter of how people are wired. If that is true, then yes, banning the substance (or practice) in question would be reasonable.

    • LadyJane says:

      @Aapje: That may very well be the case, and I acknowledge that. I simply prioritize “freedom to make one’s own decisions” as a higher value than happiness, health, or overall well-being. I am well aware that this might be an extremely uncommon position to take, but it’s my position nonetheless.

    • baconbits9 says:

      The more you see people as children who lack the ability to see the consequences of their actions, the more often it makes sense to have culture, the government or some other force limit people’s freedom to act, for their own well-being.

      Where are these people coming from? You are positing two classes of people, one who can’t be trusted to make judgments for themselves and another who can be trusted to make judgments for everybody.

      If a toddler falls down the stairs you don’t get a bunch of toddlers together to vote on how to prevent further stair falling. The more you view people as toddlers the less useful government becomes, the less likely that having a ‘culture’ will help you, and the less likely that you will be able to make progress on issues.

      And toddlers that aren’t allowed to make their own mistakes have trouble growing up.

      • Aapje says:

        You are positing two classes of people, one who can’t be trusted to make judgments for themselves and another who can be trusted to make judgments for everybody.

        In a representative democracy, it’s the toddlers who vote on which parents act in their interests. So there are more or less two classes of people, but it’s not that we trust the one over the other per se. We trust one class to make detailed decisions, while we trust the other to make a more high-level decision on how well those detailed decisions get made.

        And toddlers that aren’t allowed to make their own mistakes have trouble growing up.

        I’m not advocating unlimited nannying. There is a middle ground between kicking toddlers out of the nest and expecting them to fly (I may be mixing up metaphors here) and never giving them any opportunity to fail.

    • cuke says:

      Well said, Aapje. I agree with this and find it helpful to be articulated this way, partly because I think it clarifies an essential world view disagreement that different political ideologies find themselves on opposite sides of, but that gets lost in arguing about object level stuff.

  18. tmk says:

    In the first example, many of the “recreational” users will be heroin addicts desperate for heroin, or a passable substitute. I don’t think you get to count them as “nominating themselves”.

    • 6jfvkd8lu7cc says:

      On the other hand, if we are looking at the heroin addicts already out there as a reason to ban a new cheaper drug that becomes a substitute for some of them, is it the main part of the effect calculation? Arguably the main part of the utilitarian tradeoff is new addicts…

      • Simulated Knave says:

        Not just new addicts, but that subset of new addicts who wouldn’t have just tried something else instead and gotten addicted to THAT.

  19. caryatis says:

    I can’t think of a situation in which it’s useful to think about what someone “deserves.” We do not live in a world in which an omnipotent, omniscient entity hands out consequences based on the state of one’s soul. We live in a world in which some quantum of suffering has to be suffered by someone, and it is just for the suffering to be imposed on the guilty instead of the innocent. (And if some people enjoy the suffering of the guilty – it’s distasteful to me, but hard to see how it makes a difference.)

    In case my hints aren’t obvious enough, I think that this “desert talk” is a remnant of religious ideas which we should banish from our minds and discourse.

  20. Krisztian says:

    In addition, how do you want others to treat you?

    The ordinary way: “Joe is (to a large extent) a responsible individual. If he screws up we’ll blame him.”

    The sophisticated utilitarian way: “Joe is not responsible for his actions; those are due to genetics+upbringing+other chance. We shouldn’t blame or praise him. Sure, we’ll pretend to be angry if he screws up, to give him the right incentives. We don’t neglect his well-being (we are rational utilitarians), but there is no fundamental difference between how we treat him and how we treat a dog (except for his utility receiving some extra weight).”

    I want people to treat me the ordinary way. It’s really hard for me to imagine someone opting for the latter. I think it’s easy to say “don’t blame XYZ, she isn’t responsible”, but how many people want others to treat them like that?

    • Krisztian says:

      Come to think of it, how do you view yourself?

      The ordinary way: “If I do something bad (say, sexually harass someone) tomorrow, I will be blameable. Others will be justified to blame me.”

      But what do you do if you don’t believe in the ordinary notions of blame?

      Do you say something like: “I shouldn’t sexually harass someone tomorrow (it’s wrong). But if I do so, I won’t be responsible, since I only had unlucky genes+environment+chance.”

      The convoluted version: “I shouldn’t sexually harass someone tomorrow (it’s wrong). If I do so, I won’t be morally responsible. HOWEVER, I will pretend that I was. I will pretend to feel guilty and angry at myself, for the sake of the correct incentives.”

      Can you really wrap your head around something like that?

      • Gazeboist says:

        Why is blame a binary thing? Why can’t I say: “I shouldn’t do [bad thing] for [reason]. I will avoid situations that make it more likely for me to do [bad thing]. If I do [bad thing] anyway, my culpability will depend on the degree of control I had over my circumstances.”

      • Tarpitz says:

        “If I sexually harass someone at work tomorrow, it won’t be wrong. “Wrong” is a lie-to-children. However, I like neither the idea of treating people in ways that upset them nor the prospect of the guilt which I would (sincerely) feel if I did so, so I won’t.”

      • Illuminatus major says:

        “I shouldn’t sexually harass someone tomorrow (it’s wrong). If I am still in my body when my body commits such an act, there must have been some mistake from genes+environment+chance, since not doing that, to the best of my abilities, is part of the algorithm by which I consciously determine my behavior. It is possible that my body will be borrowed by some algorithm which is not me, and for them it may not be accidental, but it would be accidental for me to allow that algorithm to usurp me, unless that algorithm is actually morally superior in some way I can tell, in which case I should sexually harass someone tomorrow (it’s right), but that’s totally hypothetical.”

    • Unirt says:

      I certainly want people to treat me in the sophisticated utilitarian way, at least in the aspects of life that I tend to screw up. Could it be that you like the ordinary way because you don’t screw up much and you don’t mind getting credit for that? I would go for the Ordinary Way in everything I do well, and the Sophisticated Way in everything I do badly (and cannot realistically do better, even though people blame me anyway).

  21. Norman says:

    The discussion overlooks the information problem by assuming that someone (the government) can actually carry out the utilitarian calculus. In general, evolutionary learning (whether cultural or genetic) and individual learning are competing optimization methods. Moral intuitions regarding just deserts are the result of evolutionary learning. The advantage of evolutionary learning is that we have more information available collectively than individually, and we can explore a larger solution space (i.e. “the wisdom of ages” is a real thing). The disadvantage of evolutionary learning is that it is slow, and may give wrong results in new circumstances. Since circumstances are always changing, this is a pervasive problem. Your utilitarian analysis is an example of individual learning applied to three particular problems. The advantage of individual learning is that we can adapt quickly to new circumstances, and the disadvantage is that we are individually dumb, relative to our ability as a group. (See generally Henrich, The Secret of Our Success.) Your three examples explicitly assume that you (or someone) can individually figure out the correct utilitarian answer. Your lingering disquiet with the utilitarian answer is your moral intuition reminding you that that assumption is almost certainly factually false.

    Which answer is more likely to be actually correct depends on how much circumstances have changed, and the consequent likelihood that the moral intuition is wrong, and how much of the relevant information we now have (and our ability to process it), and the consequent likelihood that the individual learning answer is correct. That trade-off is impossible to assess with certainty, so one’s position on this debate will likely turn on one’s self-confidence. A big difference between conservatives and liberals is how confident they are in their own intelligence, relative to evolutionary learning. Put another way, liberals are intellectually arrogant and conservatives are intellectually humble: see Burke, Reflections on the Revolution in France.

    I should add that while “arrogant” has negative connotations and “humble” has positive connotations, I don’t mean to say liberals are always wrong and conservatives are always right.

    • Tarpitz says:

      Which answer is more likely to be actually correct

      What on earth would it mean for one or the other answer to be “actually correct”?

      • Norman says:

        “And we’ve posited that the utilitarian calculus says that banning the antidepressant would be better.” I mean “actually correct” in the same sense that Scott means the word “better”.

  22. Watercressed says:

    >A friend reframes the second situation in terms of the cost of having law at all.

    I don’t think this is as strong as you think. You could rewrite the second example as a general case, saying, “on average, people who only harass women a little bit lose more from being kicked out than the women do from being harassed, and therefore we should not exclude people for small amounts of sexual harassment”. This still lets you have law; laws can handle different levels of severity just fine, but I don’t think it will be convincing to many people.

  23. Thegnskald says:

    I am continually confused by the idea that determinism has something to say about morality; the idea that your behavior being predetermined alleviates moral responsibility for that behavior.

    I can see the moral intuitions that give rise to it – given two people who killed their spouses for cheating on them, we might have a little more sympathy for the one whose spouse was seduced by an evil agent who knew it would result in their murder.

    We have more sympathy for somebody whose circumstances were brought about by an evil genie who knew exactly what would cause that person to commit evil, than somebody who was just unlucky. Determinism feels like the whole universe is an evil genie; you weren’t merely unlucky, you were fated to be unlucky.

    But the moral intuition involved isn’t about the universe, it is about dealing effectively with hostile intelligences.

    There is a utility-specific version of this problem, which this post kind of circles around, which is the problem of utility monsters. Utilitarianism is vulnerable to single-shot utility blackmail: give me 100 utility right now, or I will make myself 80 utility worse off. There is also the more typical utility monster version of this: Group X must lose 20 utility, or my group will “choose” to lose 100. Whether or not we call it a choice doesn’t matter; we have the same broad issue, which is that we are dealing with potentially hostile intelligences, who can use any explicit rules against us.
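
    A toy calculation makes the one-shot blackmail concrete (the 100/80 numbers are the ones from the paragraph above; the code framing is an illustrative sketch of my own, not anything from the original discussion):

```python
# Toy model of one-shot utility blackmail (illustrative sketch).
# The blackmailer demands 100 utility, threatening to make
# themselves 80 utility worse off if refused.

def total_utility(pay: bool) -> int:
    if pay:
        # Paying transfers 100 from everyone else to the blackmailer:
        # the total summed across all agents is unchanged.
        return 0
    else:
        # Refusing: the blackmailer carries out the threat and
        # destroys 80 utility outright.
        return -80

# A naive act-utilitarian picks whichever option yields the higher
# total, so they always pay -- which is exactly what makes the
# blackmail profitable to attempt in the first place.
best = max([True, False], key=total_utility)
```

    The point being that any explicit “maximize the total” rule can be gamed by an agent willing to threaten self-harm.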

    And we don’t want to just rule out giving people a bit of extra utility, even at everyone else’s expense. Because, you know, sick children and whatnot. So instead we have a loose rule that can be adjusted at will, depending on whether a case is perceived as defection.

    • Faza (TCM) says:

      The idea of “utility blackmail” is new to me and I rather like it (on the “why didn’t I think of that?” level, not the “do this at the earliest available opportunity” level).

      I’m somewhat less convinced by the final paragraph, though. If we choose a “loose rule that can be adjusted at will”, we are almost certainly engaging in a sleight-of-hand where we claim to be following the rule, but are, in fact, using the rule to justify our moral sentiments.

      The thrust of my objection is that if we may apply or suspend application of a rule at will (or adjust the degree to which it applies), it is no longer a rule – a function that gives deterministic output for any set of inputs. It is at best a guideline that we can choose to apply or no, depending on our current mood and fancy – that’s what “at will” means.

      I don’t mind people forming moral judgements based on mood and fancy – as long as they don’t claim to be following rules while doing so.

      • Thegnskald says:

        All rules were written based on our moral sentiments; they are cases where our moral sentiments are completely predictable, so we stopped worrying about them and just wrote them down. The rule doesn’t exist to justify our moral sentiments, but to express them, and will be changed if ever our attitudes towards the action in question change.

        • Faza (TCM) says:

          No disagreement that this is what actually happens. I’m not even particularly opposed to the idea of rules as expression of sentiments.

          I am opposed to claiming rule-following when the rule is subject to change at a moment’s notice. If a rule doesn’t occasionally force you to accept a result you don’t like, it’s no rule at all.

          I’m under the impression that people like to feel principled (and to have a no-thinking-necessary mechanism for making decisions) and hence have a tendency to put forward Rules of Guaranteed Moral Behaviour (anything from revealed divine command, through categorical imperatives, all the way to postulates of maximising utility – whatever that may mean). If such rules are meant to be broken, however, the dispassionate observer is inclined to take any claims of principle with a large grain of salt.

          This makes virtue ethics look like the most honest approach of the Big Three (Four?), if only because adherents are up front about trying to do their personal best, rather than claiming to follow rules that they shall choose to ignore once they become inconvenient.

          • Thegnskald says:

            I would suggest trying to write a rule that bans all undesired touching, sexual and otherwise, but allows all desired touching. The rule has to be absolute, such that nobody can rules-lawyer it. I will also reiterate that all desired touching must continue to be allowed. So, to explain, you must devise the rules under which one human can touch another human.

            We’ll iterate. Every time you write the rule, I will find a way to get around it.

            Do you think you will be able, after five iterations, to prevent me from designing a scenario in which your rules cause problems?

            I’ll even give up the first case right now: Medical necessity. I’ll give up the second case: Religious exemption for medical necessity. I’ll give up the third case: Child abuse masquerading as religious exemption. Fourth case: False accusation of child abuse centered around a contentious divorce and a disagreement about the medical procedure. Fifth case: True accusation of child abuse claimed to be false.

            That is centered around the specific scenario of medical care, and I am pretty sure it is already sufficiently illegible to prevent the writing of a rule.

            Discretionary rules exist for a reason.

          • Faza (TCM) says:

            We continue to be in violent agreement that it is not possible to devise a rule that “just works” – in that it consistently gives us the result we would like from it.

            But that’s not what I’m talking about.

            Rule following means that whatever result is consistent with the rule is good and any result that is inconsistent with the rule is bad. Rules-lawyering is only a problem if you expect the rule to give one result and it ends up giving another. If I only care to see whether the rule has been followed, how can you design a scenario that causes problems? (I am presuming you mean a scenario where the rule was followed, but I didn’t like the result.)

            This is what Stack Overflow calls an X/Y problem. You want people to only be touched when they want to be (Y) and you’re looking for a rule that will help you achieve it (X). My answer to this riddle is: stop looking for X.

            My position all along has been: “Let’s stop pretending we’re following rules, because we’re not. Instead we are passing judgements about situations as we find them, maintaining a mere pretense of consistency while doing it. It’s okay.”

          • Thegnskald says:

            I feel like you are attaching a significance to “rules” that I don’t share, which renders this conversation largely orthogonal; we aren’t discussing the same thing.

            I am talking about “The things we write down so people have an idea of where the borders of good behavior are”.

            You are talking about “A sacred trust that everybody pretends to be a part of while quietly defecting whenever that trust gets inconvenient for them”.

            If I am arguing for discretionary rules, I am arguing against the concept of rules as… uh… whatever it is you are pointing at, which I don’t have a word for. The sacred trust in rules, similar to but not identical to rule of law. The conceptual superset of rule of law? Rule of rules? Or just, I don’t know, Rule?

            Deontology, maybe? I haven’t ever had an in-depth conversation with a determined deontologist, but my impression is that most deontologists don’t think they know all the rules; they’re working with a pale mortal copy of the ur-deontology.

          • Faza (TCM) says:

            The problem of “the things we write down so people have an idea of where the borders of good behavior are” is exactly the one you’ve already pointed out – wherever we draw those borders, we’re gonna find out (probably soon) that they don’t lie there at all.

            Trying to set borders of acceptable behaviour is counterproductive, if it turns out that sticking within those borders doesn’t actually result in acceptable behaviour (acceptable behaviour outside the borders is less immediately problematic, but still leaves us without a clear idea of just what is acceptable).

            Funnily enough, we probably have a very similar idea of what the end result – in terms of how people relate to one another – should be. My opposition to the use of the word “rule” is that it is a loaded term – people may rightly expect that if something is a “rule”, they’ll be okay if they just stick to it.

            If people expect “rules” to be solid, predictable principles and instead find that the rules change on a whim, because we don’t like where following the rules got us, they will not only find it difficult to know where the boundaries lie, but will also lose faith in the system altogether (because the system itself does not keep the faith).

            There’s probably a D&D angle to this I could work, to illustrate where the problem lies.

    • Ivan Timokhin says:

      But the moral intuition involved isn’t about the universe, it is about dealing effectively with hostile intelligences.

      I’m not sure how relevant that is, and I may just be misinterpreting you, but it seems important to me that “morality” is really a mix of two unlike things that should be kept separate.

      First, we have axiology, i.e. a notion of what we value, what we want to achieve. And then there is decision theory, which is all about how to achieve that, including in the presence of hostile intelligences, and shouldn’t properly be a part of morality at all.

      So, for example, the refusal to give in to utility blackmail has everything to do with a decision-theoretic solution to the Newcomblike problem of “people who give in to blackmail get blackmailed much more often” and nothing to do with axiology at all. Likewise, the alternative answer to problem 2 that Scott gives is of this kind.

      This seems relevant, since I don’t think anyone really claims that there is no notion of deserts at the decision-theoretic level; for example, we probably should punish criminals at least to disincentivize crime. The whole debate, as I understand it, is about whether there’s also a purely axiological desert; whether some people deserve their misfortune not because of any reasons why it might be necessary for the common good, but as a terminal goal.

      • Thegnskald says:

        I am puzzling over the distinction. It has a sort of… mind-body duality flavor to it, to me, that I am having trouble putting my finger on.

        I can agree that “What” and “How” are distinct questions, but I am not sure they belong to distinct magisteria? Particularly given that a lot of the differences in morality ultimately boil down to ethical concerns about the “how”.

        Maybe that is a large part of the issue with morality – improperly combining these things, a la the is-ought problem.

        I will have to consider that for a bit before I have a useful reply. It isn’t a direction I devoted much thought to, and I don’t have a conceptual library to draw upon to quickly understand it.

        • Ivan Timokhin says:

          I can agree that “What” and “How” are distinct questions, but I am not sure they belong to distinct magisteria?

          Maybe that is a large part of the issue with morality – improperly combining these things, a la the is-ought problem.

          Actually, I think I can try to use the is-ought problem to shed a bit more light on this idea, hopefully clarifying the “distinct magisteria” claim.

          Per the is-ought problem, to do any kind of “ought” reasoning one needs some base-level “ought” claims that are accepted without further justification — moral axioms, if you will. That is the “What” part.

          The “How” part, then, is the part where you combine these moral axioms with various “is” statements that you already possess to actually determine what to do on a day-to-day basis.

          The “separate magisteria” thing then comes from the observation that these are produced by very different processes, and obey very different rules. We have some desiderata on moral axioms — they should probably be consistent with themselves and one’s moral intuitions — but in the end, they are whatever you say they are; indeed, this seems to me to be the whole point of the is-ought problem (Edit: or, rather, the orthogonality thesis). But once you have determined those, the “How” part should, at least in principle, be more or less objective. That is, “do these policies allow me to obtain the goals that I have set” seems like a factual question that we should be able to answer objectively.

          And while the language in the previous paragraph seems like it presupposes some form of consequentialism, I think other ethical systems could fit here as well. For example, we may have a moral axiom of “do not kill [other people]”, and then debate whether this or that policy successfully prevents us from killing people (e.g. “don’t stab people” seems like the sort of policy that we could approve; “stab everyone” much less so).

          Or maybe I’m still presupposing consequentialism and just fundamentally misunderstand how deontology works.

    • Simulated Knave says:

      Re determinism and morality: I think this is because you’re misunderstanding what morality is. Morals aren’t about dealing EFFECTIVELY. Morals are about dealing rightly or correctly or fairly. Morals are about what is good, not what is most effective. Some people argue that what is most effective IS what is good, but that’s not the same thing as morals being about what is most effective.

      Note that almost all moral systems claim to be right – it’s not just about how you interact with others. It’s about how you are claiming others should interact with you. And with each other, for that matter. About what behaviour should be rewarded, and what should be punished. And note that frequently moral systems DO say that the universe will punish people for breaching them. Moral systems aren’t about what works. They’re about judging what is right and what isn’t. And a core component of judgment is determining responsibility.

      In most moral systems, your responsibility for your actions corresponds directly to your ability to influence them. You are less responsible if you have a brain injury, or if you are a child, or if you are sleepwalking, or someone spiked your drink, or if the consequences of your actions were harder to foresee. Various principles underlie this, all of which make a ton of intuitive sense.

      And so when determinism wanders along and says “no one actually has any ability to influence their own actions”, the logical conclusion of pretty much all moral systems is that if determinism is correct that means no one has any moral responsibility for anything they do. It is not fair to make people pay for losing a rigged game (or, for that matter, to pay out to people who won one).

      Of course, five seconds of thought creates meta-determinism, where we are either helplessly swept along by the seas of fate (in which case we must act as if we have free will, since we have no choice but to do so, and accepting determinism doesn’t really change anything about our day-to-day lives) or we aren’t (in which case determinism is bullshit and we should ignore it). Frankly, whether we have free will or just look like we do, it doesn’t actually change that much.

      Determinism is, like cultural relativism, self-defeatingly pointless. It is not as sophisticated as its proponents think, nor the end to debate they take it to be.

  24. Murphy says:

    I believe 1 and 3 are a bit different to 2.

    I think some of this hinges on how highly you weight harm vs freedom.

    I like the idea of minimising harm and maximizing happiness… but I also like trying to maximize for freedom. Give people the freedom to jump off cliffs and some of them will end up splattering on the rocks.

    Have their preferences been fulfilled? Probably not very well, but I’m not purely maximizing for their preferences being fulfilled; I’m also happy to partly maximize for their ability/right to choose how they attempt to maximize their own utility.

    Show me a society where the average utility points’ worth of happiness/fulfillment everyone gets per day is 7 but nobody gets much choice about how they go about getting it – perhaps the local community political officer plans their day, perhaps their supervisor AI decides when to inject some happy drugs.

    But line it up next to a society where the average utility points’ worth of happiness/fulfillment everyone gets per day is 6 but everyone gets to choose what they want to do, which may include screwing up and thus lowering the average… and I may pick that one, because the freedom to choose gets some utility points as well.

    Item 2, I think, is a slightly different issue from the others, since there’s an aggrieved party.

  25. Jiro says:

    These people tend to imagine the pro-desert faction as going around, actively hoping that lazy people (or criminals, or whoever) suffer. I don’t know if this passes an Intellectual Turing Test.

    I’m pretty sure it does. Claiming that people don’t actually believe this because it seems absurd to you is typical-minding.

    When I think of people deserving bad things, I think of them having nominated themselves to get the short end of a tradeoff.

    I think this idea has problems, and it’s basically a patch to fix up utilitarianism where it doesn’t work. This isn’t the first time you’ve discovered problems with utilitarianism and tried to patch it rather than considering that utilitarianism might not be correct.

    Saying “people have nominated themselves for X” about people who, if asked, would tell you “I haven’t nominated myself for X” sounds to me a lot like “she said ‘no’, but her eyes/clothes/slutty demeanor said ‘yes'”. If someone denies or would deny asking for X, especially where X is harmful, I refuse to claim that they are implicitly asking for X anyway. If I’m going to do X to them, I’m going to admit I’m doing it to them against their will, not try to fit it into a framework where I can claim they chose X anyway in some sense.

    Also compare “by doing Y, you have chosen to go to Hell. God isn’t inflicting the punishment for Y, God is innocent, you’re really doing it to yourself”.

    • albatross11 says:

      By sticking your dick into that sausage grinder, I’d say it’s you and not the sausage grinder who has made a bad choice and will now suffer.

    • Deiseach says:

      These people tend to imagine the pro-desert faction as going around, actively hoping that lazy people (or criminals, or whoever) suffer.

      You don’t need to be pro-desert or anti-desert or anything else, you just need to be a human being. Has anyone here never, ever wished misfortune on somebody in a burst of anger or outrage? Nobody ever thought “that guy who threw living beings into a wood chipper should have someone stick his hand into a meat grinder, see how he likes it”?

      Yes, there are pro-desert people who use that as a cover for their sadistic wish to see others suffer, or to gratify other urges about proving their superiority or seeing enemies punished or any of the other reasons humans behave like assholes to one another.

      There are also pro-desert people who don’t want to see people suffer for the sake of suffering, but if it is the inevitable consequence of a bad choice they made uncaringly, then it seems no more than a balancing of the scales. Joe preferred to drink his wages away and now his kids are going hungry; if Joe ends up with no money to buy more drink and suffers withdrawal symptoms, then Joe deserves that (in the sense of “brought it upon himself by his choices, which involved drinking to excess such that cutting off means withdrawal symptoms, and his suffering should not be considered with more sympathy than the suffering he was willing to cause others – for example, letting his kids go hungry – in order to satisfy his own desires”).

      If you stick your hand in the fire, you are going to get burned. You know that, so you have no right to act surprised when it happens, or demand that others relieve you of the consequences (that does not mean “deny you medical treatment for burns”, it does mean “deny that we make it so that you can stick your hand in the fire without getting burned but only when you and nobody else does it”).

      I’m treading on thin ice here, but last night I read a post about the increase of HIV/AIDS in America, particularly in the South, and the whole list of reasons why this is the fault of society etc. was trotted out – and sure, poverty and the rest of it are real reasons – but there was nothing at all about “maybe part of this increase is people having unsafe sex? now that the next generation post-AIDS has come along and forgotten all the warnings? indulging in risky behaviour?” Because I’m going to suggest that at least some of the increase in STIs and allied diseases is down to carelessness/complacency/risky behaviour amongst the younger generation, but the entire post was all politics and nothing about “what are the people themselves contributing to this?” I can understand why – they didn’t want to engage in victim-blaming or shaming – but it was clear that the idea never even entered their minds: any increase had to be the fault of The Man and come from the top down, nothing to do with any behaviour on the ground (unlike this site, which does acknowledge risky behaviour as contributory amongst the rest of the political and social causes).

      • L. says:

        and sure, poverty and the rest of it are real reasons

        I’m sorry, what now?
        What exact mechanisms are involved in the spread of HIV by poverty? I know the mechanisms involved in the spread of HIV by the way of sticking one’s dick and/or needle in places where they ought not to be, under conditions that ought not to be, but the ones involved in the spread of HIV by poverty are unknown to me.
        Seriously now, we all know how HIV spreads and poverty ain’t and can’t be one of those things.
        In the West, HIV is largely spread by stupid people doing stupid shit, while having absolutely no justification for doing stupid shit, as they have been repeatedly informed that the stupid shit they are doing is in fact stupid shit.
        We can sit here listing contributing factors to the spread of HIV until we die of old age, but at the end of the day if a dick or a needle aren’t where they ought not to be, HIV ain’t going nowhere.
        It’s the same shit with teen pregnancy, where people will look for the cause of it literally everywhere except in the fact that two idiots fucked.

        • cuke says:

          Soooo, I’ll bite. The cost of condoms and other birth control may be a factor. Literacy and access to sex and health education may be a factor. Access to healthcare may be a factor. We’re not talking about causes, but contributing factors to a population-scale increase in a long-standing public health problem. I’ll let the public health researchers weigh in here if they want to about how socio-economic factors contribute to public health problems.

          We can say the rise in HIV infection rates in the southern U.S. is just purely about dumb people’s dumb behavior. But that doesn’t really help explain the increase — why now, why in the South more than other places now? The answers to those questions might yield better solutions than just saying “yeah, some people are dumb and they’re getting what they deserve.”

          Deiseach’s take seems to point to this even while suggesting individuals’ choices are to blame: a new generation of young people has “forgotten” the risks of unprotected sex. Since it’s probably not that individuals have literally forgotten something, it’s more likely that the institutions that previously educated young people about the risks of unprotected sex are not doing so to the same extent that they did during the height of the AIDS crisis. Maybe that’s due to shifting policy priorities or education priorities or some other thing, but it seems like that thing would be a large-scale thing and not a thing having to do with the sudden increased stupidity of individuals.

          • L. says:

            I’m not disagreeing with what you’re saying, but I dislike losing sight of the way things actually are and placing blame and responsibility where they do not belong; I find this to be important for both moral and practical reasons.
            If one were to lose sight of the nature of most people responsible for the spread of this disease, one might be tempted to treat them like people to be reasoned with, instead of animals to be poked and prodded, with condom prices and daily reminders of where to stick what under which conditions, into getting them to stop sentencing themselves to death.
            Contempt on the inside, flyer with a talking condom on it on the outside.

            If you spend some time on any gay site that isn’t tripping over itself to be all woke and shit, you will find that a forgetting of a kind has indeed happened.
            Quite a few older gay men have noticed that effective AIDS medicine and generational change have caused the gay community to largely forget the horrors of HIV from the ’80s and what HIV can really do, which in turn has caused the resurgence of some questionable behavior.
            I find myself in agreement with them; that Grindr can simultaneously exist and have a feature to remind one to test oneself for HIV every few months is something that I cannot understand.

          • cuke says:

            L., I think I’m hearing what you dislike and object to. I don’t understand what you’re proposing as a solution for this “forgetting,” because you seem not to like there being any “reminding.” What’s your suggestion?

          • L. says:

            What in my writing suggests to you that I’m against reminding people?
            As for my suggestions, I don’t really have a horse in this race except getting enraged by shifting of the blame, so I’m not giving any.
            My prediction is that the disease itself will give its lessons to those who need it and thus take care of itself, just like it did once before.

          • cuke says:

            Yes I’m catching on that your goal here is expressing your rage about various things. That helps me clarify that this isn’t a useful conversation for me.

          • L. says:

            My goal is to clarify things.
            You may disagree with my lack of care for what is useful, but you didn’t disagree with me on the way things are.

  26. Jiro says:

    I think a correct answer to this is going to assign some importance to each principle and weigh them. It’s good that a drug helps depressed people, but it’s bad that people abuse it. If the rate of abuse when releasing the drug is high, and the amount of benefit to depressed people is low, the tradeoff is going to be different than if the rate of abuse is low and the benefit is high. You actually have to consider whether the tradeoff is worth it, not just automatically make the tradeoff in one direction because one side has nominated themselves for the short end.

    The same goes for the sexual harassers example, but in cases where we consider the tradeoff good enough that we will let the men act, we wouldn’t even call their actions “sexual harassment” in the first place.

  27. Salem says:

    Here’s one way to think about some of what’s going on here:

    Utilitarians like to claim that interpersonal utility comparisons aren’t as hard as detractors claim, we are comfortable making them all the time. For example, when I decide whether to give up my seat to a stranger on a train, I’m probably making some sort of “utility comparison,” whether or not I call it that. So while these comparisons may not work in theory, they work in practice. Utilitarians then carry over that “folk” understanding of interpersonal utility comparison into their broader theory.

    However, I think our real-world “utility comparisons” are more sophisticated. For instance, it would never occur to you to give up your seat to a particularly lazy man, who just really enjoys sitting down. Instead of assigning people the utility they actually get from an action or state (call this U), we assign them the utility we think they ought to get (call this Uo). So I don’t give up my seat to the lazy man.

    This also explains our objection to “utility monsters” in a better way, and some of the so-called paradoxes. We accept an incredibly strong preference not to be slaughtered, so utilitarians need bite no bullets to say that majorities shouldn’t slaughter minorities even if they really hate them. We don’t accept an incredibly strong preference to enslave others – you shouldn’t enjoy that so much! – so standard utility monsters are a tough bullet to bite.

    So utilitarians present cases where U is roughly equal to Uo by common consent, and use it to argue that utilitarianism is plausible. Then, in more controversial cases, they use their Uo to argue for particular policies, which is why they get accused by non-utilitarians (or fellow utilitarians with different Uo) of cooking the books. Here, Scott has found cases where his Uo for particular events is pretty strongly bounded (refuses to take the “SOLUTION TO YOUR PROBLEMS” door…), notes that real people’s happiness may sharply differ from his Uo, and so worries that his “intuitions” are now rebelling against the utilitarian calculation. Rather, I claim almost no-one is doing a real utilitarian calculation to begin with.

    So what is to be done? One option is to embrace Uo over U in utilitarian calculations. This solves many of Scott’s concerns. But is such a vision broadly supportable? “I don’t care how much you enjoy sugary drinks/smoking/etc, I don’t think you should enjoy it, so I’m going to take it away.” On the one hand, this is the politics of most self-describing utilitarians anyway, so why not make it more honest? On the other hand, one of the appeals of utilitarianism is that it looks like a liberal, universal, pseudo-objective system – abandoning people’s own U and substituting your personal Uo makes it an illiberal, subjective one. Remove the mask of faux-objectivity, and these claims look a lot less appealing.

    Another option would be to enquire deeply into U across the board. But we already know this is basically impossible, both theoretically (unknowable) and practically (incomputable), which is why we are using these short-cuts in the first place. And even if we could do that, we might well end up with utilitarianism leading us to very different places than we now think it does – the new reflective equilibrium would cause many current utilitarians to abandon ship (although perhaps there would be new recruits to replace them).

    My preferred option would be for greater introspection and self-awareness among utilitarians, and the application of a more pluralistic set of ethical methods. But I won’t try and flesh that out, because this is already way too long.

    • DeservingPorcupine says:

      It seems to me that fully embracing Uo basically kills utilitarianism entirely. I mean, perhaps instead of considering utility to be this sort of personal thing, you could expand the notion of “utils” to include facts/states of affairs and still preserve the fundamental computational/maximization/consequentialist aspects of the theory. So, someone could say that the loss of individual satisfaction is more than made up for by the amount of “total justice utils” generated. But I can’t imagine many utilitarians being happy with that parsing.

      • Salem says:

        Would it? I guess it depends what you like about utilitarianism. If it’s “the greatest good for the greatest number,” then not necessarily. After all, utilitarians fight amongst themselves on what utility is (happiness? preference-satisfaction? wellbeing? something else?) – these are very different things to be maximising, but moving from one to the other is generally thought to be no big deal. In fact, I see no evidence that different concepts of utility lead to different substantive views. That’s pretty shocking when you think about it. So I don’t see why another change in what utility means should be so devastating.

        I think it would actually help utilitarians speak to and persuade non-utilitarians. Once we’re clear about where the utility is coming from, and why, it’s easier to reason about. One classic utilitarian “paradox,” deeply related to Scott’s (2), is whether A should be allowed to sexually harass B if he likes it more than B dislikes it. Utilitarianism Classic says … Maybe? But our brand new Diet Utilitarianism says “Feel free to set A’s utility from the act to 0 if you don’t like what he’s up to. Or set it negative, because he ought to be deeply regretful about what he did, even if he isn’t.” So more people will be on board.

    • vV_Vv says:

      One option is to embrace Uo over U in utilitarian calculations.

      Then you obtain a real-valued version of deontology, why bother calling it utilitarianism?

      Utilitarianism (both the preference version and the hedonic version) is intended to reduce “ought” questions to “is” questions, based on people’s mental states. If you replace people’s actual mental states with more “ought” rules, then you’re going in circles.

      • Salem says:

        Well, to be clear, that’s not my recommendation. But I think it’s a bit less farcical than you suggest.

        If you think actually-existing utilitarianism is a successful project, I agree that this fix will seem unnecessary. And I agree it’s ugly. But the less well you think utilitarianism works, the more open you might be to changes that would make it easier to speak to non-utilitarians. If you think real-world utilitarian calculations don’t sneak this reasoning in by the back door, feel free to be horrified. If you think they do, then you might be open to making them go in the front door and wipe their feet. To be clear, this kludge wouldn’t make me a utilitarian, but it would make me a little more open to it as a mode of reasoning.

        You are right that the theory pushes back where this Uo is coming from, and that presumably this is another moral theory. But I don’t see this as so bad. In the real world, almost every utilitarian has other moral theories too, and they’re pretty widely shared. Few utilitarians think utility monsters are an argument in favour of utilitarianism.

        • vV_Vv says:

          I think utilitarianism is a fundamentally flawed project.

          I wasn’t objecting that the moral theory that you propose doesn’t make sense, in fact I think that empirically it’s closer than utilitarianism to how people actually form their moral intuitions.

          My objection was that I don’t think it’s very useful to try to salvage utilitarianism by keeping some elements of its language while fundamentally altering its core premise.

  28. DeservingPorcupine says:

    Long ago I imagined a similar argument against utilitarianism where everyone in the entire world suddenly decides they want to torture an innocent child, and they dutifully line up to take their turn to do so. I’m given a gun with infinite rounds and can choose to kill each subsequent attacker or let them commit the atrocity. What do I do? My intuition is that I’m actually quite OK killing every one of them, even if that leaves only me and the child as the last of the human race. This kind of scenario has always weighed much more heavily on my mind than most of the other famous thought experiments regarding utilitarianism and makes me think that the notion of moral “deserts” is wrongfully disregarded by utilitarians.

    • L. says:

      I had a similar reaction a long time ago, when Sam Harris made an argument in favor of utilitarianism along the lines of how we can all agree that the worst possible misery for everyone is objectively bad.
      It made me imagine a hypothetical situation where I’m made the God of a universe in which each person has just finished a 1,000-year brutal torture of another innocent person by murdering them.
      To me, making that universe a place of the worst possible misery for everyone is just the right thing to do.

    • Bugmaster says:

      To be fair, this could be interpreted as an argument against Utilitarianism, or as an argument against trusting your moral intuitions, depending on where you stand.

    • Kentyfish says:

      I think in this case the best course of action would be to turn the gun on yourself. Do you really want to live in a world where everyone wants to torture innocent children? This scenario reminds me of the book (and not so much the movie) I Am Legend. You’d be the odd one out in this scenario; who are you to impose your morals on the rest of humanity?

    • carvenvisage says:

      Saving this example. (I also think it’s more striking and illustrative than many thought experiments I’ve read – thanks.)


      As a self-proclaimed utilitarian, I find your conclusion the obvious one too.

      In fact, I have trouble even seeing the conflict between the theory and the conclusion, unless I try to remember patterns of some other people who claim to be utilitarians. Like, even if you leave the child out of it, a world of eight billion demons will obviously have more suffering than one of eight billion corpses. (beyond the immediate ultra-short-term of the executions)


      Two things I think are important to try and avoid when applying utilitarianism as a method of analysis (and not just a theory of morality’s nature, which it primarily is):

      1. Extrapolating utility to the end of the immediate sequence rather than towards eternity. If someone is in the middle of stabbing someone, and you shoot them, the *immediate* consequence is causing horrific destruction and pain. Only with a slightly bigger picture, and as we carry it forward in time, do we see that it also hopefully stops the stabbing. This one is pretty easy, but as you carry things beyond the scope of the immediate ‘scene’, and/or pile on more and more scale, it can become hard not to lose track of the fact that the consequences won’t stop as the main drama of the vignette winds down.

      2. For the kind of edge case where utilitarianism is commonly broken out, i.e. where common sense and heuristics aren’t getting you an answer you’re happy with: trying to combine it with pre-existing doctrinal simplifications developed for everyday life in a harmonious society, e.g. in this case, “it’s always wrong to kill”, “every life is equally worthy”, “you mustn’t judge people”, “mind your own business”, “don’t play god”. Plugging practical simplifications for everyday living into a debugging tool for situations where heuristics tend to break down is just nonsensical; it’s like inserting a sentence from a novel into a maths problem. (Before someone asks, “don’t torture children” isn’t a convenient simplification for life in a harmonious society; it’s an, as OP put it, ‘atrocity’, equally applicable to soldiers as soccer moms.)

  29. J Mann says:

    My instinct as a utilitarian is to jump up a level to larger incentives. People should believe they get what they deserve, because that belief incentivizes better behavior in the future.

    So if a campus group deliberately invites a controversial speaker to make a point about free speech, and then so many protesters plan to show up that it’s going to cost the college half a million to keep the peace, the math on overall utility is so vague that it’s hard to do. At some level, it’s easier to say:

    1) Deserve: The campus group deserves to be prevented from doing this. They and Controversial Speaker are assholes who know that they’re stirring shit up, and actually intend to do the stirring. Utilitarian: if we spend the half million to protect this speech, we’re going to see more provocative speeches. Mixed: if the Controversial Speaker doesn’t have anything important to say, they deserve less protection and there’s less of a utilitarian case for protection.

    2) Deserve: The protesters deserve to be shut down. They’re trying to exercise a veto over speech, and they shouldn’t get away with it. Utilitarian: If we allow a heckler’s veto here, people will use it more broadly, and on both sides. Mixed: See above.

  30. arbitraryvalue says:

    I’m not a utilitarian, so I don’t have a horse in this race, but I think that even within the framework of utilitarianism, the issues you raise are not paradoxes if you consider the abstract concept of “utility” to mean “freedom” more than “pleasure”. (With freedom defined as the subjective experience of having chosen to do what one did, which remains even if that choice was predictable based on genetics and upbringing.)

    • cuke says:

      Thank you for saying this. It seems really obvious to me that there’s variation in how people prioritize what might be competing values: honesty, mercy, justice, freedom, happiness, etc. And within one person, they may prioritize those values differently in different moments or situations.

      Could a utilitarian here school me, or link to something, that would help me understand how the framework accounts for these competing values?

  31. gbdub says:

    I think I need more explanation of why this needed an explanation? I don’t think I’m what you’d call a strict desert theorist but the core concept seems straightforward: the consequences (good and bad) of a choice should fall mainly on the person that made the choice. Everything else is just arguing about what counts as “choice”. And sometimes arguing about whether there should be a control loop on the system that pulls everyone toward “average consequence”.

    It’s not about wanting people to suffer, it’s about not wanting to force a person who made good choices to bear the costs of someone else’s bad choices.

  32. Matt M says:

    It’s important to be able to make rules like “don’t sexually harass people”, and adding a clause saying “…but we’ll only enforce these when utilitarianism says it’s correct” makes them less credible and creates the opportunity for a lot of corruption.

    I think this is very important, mainly because nobody trusts that “when utilitarianism says it’s correct” won’t be roughly executed as “to benefit the powerful and screw over the powerless.”

    I recall that when I was in the military, if someone got a DUI, it would be handled in two ways.

    Option 1: This fucking idiot did the stupidest thing possible, the thing we train them constantly not to do. How disgustingly irresponsible. Throw the book at them, let them sit in jail for a few days, take away their driving privileges on base, start processing them for administrative separation and kick them out.

    Option 2: This otherwise effective leader is clearly suffering from the disease known as alcoholism. Let’s send them off to a fancy rehab for a week, ensure this incident is filed away somewhere nobody will ever hear of it, and give them another chance. It would be a shame to lose such a valuable resource over such a minor mistake that they really couldn’t control anyway.

    You wanna guess how the options were selected? Enlisted got Option 1, Officers got Option 2.

    That’s not to say that Option 2 isn’t logically sound. It might very well be the best policy for handling a DUI offense. But the actual execution of the policy was to bail out the powerful while continuing to punish the powerless. In real life, this is what utilitarianism often looks like.

    • CatCube says:

      Things must have changed somewhat. By the time I was in, a DUI was a severely career-limiting event for an officer.

    • Faza (TCM) says:

      I must admit that in this specific context, the policy looks sensible at first glance.

      Whilst I would expect that in an armed forces context people of all ranks are seen as expendable and replaceable (by virtue of the main job requirement), some are more expendable and replaceable than others.

      Enlisted men are cogs in the machine, raw manpower to be directed by the officer corps, to be expended as necessary in a situation of conflict. Discipline is the number one virtue in an enlisted man and if the loss of any particular individual has an impact on the effectiveness of the force as a whole, someone’s done something very wrong.

      Officers are the nervous system of the organisation and the ones that actually give it direction. There is a much greater investment of resources in each individual officer and the effectiveness of the force on its various levels is much more closely correlated with the actual individuals. Unless you want to concentrate all decision making at the top (which is a bad idea for a host of unrelated reasons), individual officers are likely to have some degree of autonomy at their level of command and their individual characteristics may well affect how well your army does its job. There are, therefore, much greater incentives to maintain your officer corps relatively unchanged and only get rid of those who are completely useless at their main job.

      All of which certainly does not preclude favouritism, cronyism and… ahem… esprit de (officer) corps. Birds of a feather, and all that. I’m simply struck by the fact that the “unfairness” rests on a different level of the system.

      • Matt M says:

        But why stop at the military?

        That same logic can be applied to society at large. Why shouldn’t OJ get away with murder given the massive utility he provided to society via his athletics and celebrity?
        Harvey Weinstein has been integral to the production of countless highly successful movies generating hundreds of millions of dollars of economic activity. If the price we have to pay for that is letting him molest a few random wannabe actresses, so be it, right? Why shouldn’t a homeless person accused of a crime simply be executed on the spot? Even if we were wrong, he wasn’t making society any better off anyway, right? Nothing to lose from the utilitarian perspective.

        Like I said, this is why a whole lot of people (and low-status people in particular) reject utilitarianism. Because they might not be rocket scientists, but they’re smart enough to see that it leads us to questions like this…

        • Faza (TCM) says:

          Just to clarify: I personally reject utilitarianism most strongly.

          As to why stop at the military? Why indeed?

          However, if we are to maintain the highest standards of reason, we must acknowledge that what’s good for the goose needn’t always be good for the gander.

          The military is something of a special case, because – at the end of the day – it is a group of people trained and equipped to kill comparable groups of similarly equipped and trained people. This places some constraints on how we can effectively organise such a force.

          I’m trying to tread carefully here, in order not to offend, but the military very much stands and falls by its hierarchical structure. Without it, you don’t have an army, you have an armed mob. Maintaining the structures is a top priority and the individual people making them up are of secondary concern, if that. I hope you will agree that this is the case.

          It isn’t necessarily clear that removing OJ from the picture – were it to happen – would actually affect athletics on a society level in any meaningful way. Hell, it might not have even affected his team’s (whichever) performance if we were to move the case back in time.

          Similarly Weinstein – it’s not like he’s a key man for anything really, other than being in the right place, at the right time. It’s kind of how corporations are supposed to work: you can replace anyone and everyone right up to the CEO and work will carry on regardless.

          Regarding executing homeless men: I take it you are referring to the Magistrate and the Mob. The reason not to do this is the same as why you don’t negotiate with terrorists – appeasement in the face of threats only buys you more threats, until you run out of homeless men. The correct response is the Riot Act (I jest, but darkly.)

          That said, there are situations where the correct response is to preserve the elite and sever the plebs. I don’t particularly like it (not being anywhere close to any sort of elite), but I find accepting and routing around it is more useful than getting stroppy about it.

  33. Icedcoffee says:

    First, each of these hypos is a false binary choice. In each case, there is a more desirable middle ground that avoids the negatives of the extremes.

    Second, incentives are only one piece of the puzzle. You also have to consider the human factors governing what decisions people will tend to make or not make. People are not hyper-rational actors, making decisions based purely on incentives. Blaming people for their “bad choices” ignores the structural factors pushing them towards those choices. We could just as easily place the blame on the people who made the structures that led people to make bad decisions in the first place.

    IMO it only makes sense to use incentives as a deterrent if those incentives actually have a deterrent effect.

    Side note, there’s an ancillary discussion to be had here about free will, and if it makes sense to penalize people for genetic/environmentally determined willpower, reasoning skills, etc.

  34. Balioc says:

    I realize that I’m bringing object-level applied human psychology to a theoretical ethics fight, and am therefore boring and lame. But since you do seem to have found yourself contemplating a seemingly weird and incomprehensible tangle of a situation, it may be worth discussing the object-level issues in scenario (3), with the UBI recipients who supposedly suffer from not doing a job (and won’t do so voluntarily).

    …I don’t think that’s an accurate and helpful way to describe almost anyone’s situation.

    There are vanishingly few people in the world who actually benefit from doing a job, qua doing a job. You would not expect anyone, in the normal course of events, to go do office work or repetitive manual labor recreationally; anyone who did that would be a tremendous weirdo. Instead, you get some combination of the following:

    * People who slip between the paid and non-paid versions of doing a task that is, itself, inherently rewarding to some people: artists, scholars, athletes, etc. We all understand this one perfectly well, and I don’t think anyone serious is really afraid that there’s going to be less art or scholarship or athleticism in the world of UBI.

    * People who benefit from the dignity and social status afforded them by employment. Having a job (or a “real” job, in some contexts) is a marker that in some important ways you have cleared the most basic tests of worthiness and you can definitely be counted as a successful member of society. You’re a cut above street-hoodlum scum and basement-dwelling NEET loser-dom. Of course, this only works in a society where having a job actually serves that social function. Someone who gets a substantial UBI check is having his material needs met, but he’s not doing anything to prove that he deserves dignity or status — and that wouldn’t change if he got a random low-to-mid-level job, probably, because a society that’s set up to provide a substantial UBI is different from ours in a lot of key ways that are going to change the relevant social dynamics. Having a job “because you feel like it,” in a world where employment is not a proof that you can “do what is necessary,” where all sorts of employers are eager to pay people at low wages because the labor market has become incredibly tight, just isn’t going to mean the same thing. The hypothetical people turning down those jobs aren’t idiots, on this axis, they know exactly what they’d be getting and it’s not the thing they need.

    * People who derive value from feeling helpful and needed. This one is a little trickier, but it’s important to note that — while this is in fact a very common thing for people to want, and we should probably work hard to address it one way or another — conventional modern employment is an incredibly bad way to satisfy that desire. Like, feeding-a-baby-on-skim-milk bad. Neither a big faceless employer nor a middle manager is likely to actually care about you, or your work, in a way that will push those “I have done good for someone” buttons. In fact, a very large amount of your work is likely to end up in either the “I am a cog doing a thing that anyone else could do just as well” bucket or the “I am doing something that actually benefits no one, in any real sense, but is demanded anyway because of some bureaucratic hoopla or development-process stupidity or something” bucket. Point being, yeah, in UBI-world people probably won’t flock towards jobs in order to feel needed even if they do in fact need to feel needed, but this is mostly because people with jobs mostly aren’t feeling very needed now. “How do you help someone who needs to feel needed, in a world that doesn’t actually need him for anything, without holding everyone else hostage to his psychological well-being?” is a genuinely excellent question… but it’s one for which we need an answer, not one for which we already have one.

    Floating around all of this, of course, is the vague sense that having a job is good for people in the sense that it gives them a “task –> completion –> reward –> new task” loop. But of course video games can do that too, and in fact do it better than any job ever could. It may well be, in fact, that “stop making people feel bad about playing the video games that might actually make them happy” is one part of the needed solution.

    The point being, it’s neither fun nor sensible to be a toiling ant if in fact you don’t need to toil to survive the winter. Sad listless UBI recipients who won’t go do random jobs are, almost certainly, not failing to walk through a SOLUTION TO ALL YOUR PROBLEMS door; they are correctly recognizing that what they need is not actually jobs, but rather some unknown thing that has some features in common with what jobs are today.

    • JulieK says:

      * People who derive value from feeling helpful and needed.

      Point being, yeah, in UBI-world people probably won’t flock towards jobs in order to feel needed even if they do in fact need to feel needed,

      There are a lot of people nowadays who derive value from doing a job not because their employer needs them, but because their family needs them as breadwinners.

      • Balioc says:

        This is true. And here we find ourselves faced with a very stark version of the “without holding everyone else hostage to his psychological well-being” problem. From a top-down social view, we don’t want that dude’s family to need him as a breadwinner — we want his family to be A-OK if he keels over dead, if there’s a divorce, etc. Kind of like how we don’t want to have to depend on vigilante superheroes to defend us from crime, no matter how rewardingly heroic it is to be a vigilante superhero; the fact that there’s a need for that kind of thing suggests that the baseline systems aren’t functioning as they should be, and in the long term the goal is to eliminate as much dependency-type need as we can.

        So breadwinner dude has to suck it up and recognize that this particular need/desire of his cannot be met without making civilization worse overall. But that’s the beginning of an answer, not the end of one, because breadwinner dude matters too, and if he’s suffering because his family doesn’t need him, we have to find something else that will provide him with a comparable form of psychic good.

      • John Schilling says:

        Even if their family really doesn’t need them as a breadwinner, even if their society’s safety net would provide as much bread as their job does, there are people who experience real value in being the breadwinner. And in feeling needed as such even if they objectively aren’t.

        • cuke says:

          Can you play this one out a little bit more?

          What’s our duty as a society to people who want to feel needed when they objectively aren’t?

          • John Schilling says:

            If the person in question has a family, then objectively that family does need to be provided for. If the person in question is capable of providing for his family, and wants to provide for his family, and we went and paid for an institution or a robot or whatever and said, “this is going to take care of your family for you from now on, sorry, we don’t need you any more”, then that was an objectively wasteful thing for us to do and an objectively harmful thing for us to do and we should maybe not have done that.

            If the person in question is not capable of caring for his family, then maybe he bought himself the short end of a tradeoff, or maybe his existence is a tragedy. Need more data.

          • Balioc says:

            @John Schilling:

            No, I don’t think that’s an objectively wasteful thing for “us” to do.

            I mean, yes, in our current social setup, going and paying for some extraordinary measure like a personal work-robot would be immensely wasteful and weird. Spending our valuable doing-good-things-for-society money in a fashion that piecemeal, and that arbitrary, would be stupid.

            But — if we have a general-purpose policy of “we will pay money to everyone, no questions asked, in order to provide for everyone on a society-wide level, and that includes your family members,” it’s not stupid at all. That spreads wealth to workers and non-workers alike, most of whom are likely to benefit from it tremendously. That prevents the family from being dependent on breadwinner dude, who may be abusive or improvident or just plain mortal.

            His desire to provide for his family, and to win whatever psychic benefit comes from occupying that role, is not even close to being the dispositive factor here.

          • cuke says:

            @John Schilling, I’m trying to understand your position. Re-stating here what I’m hearing:

            Balioc questions the value of much work that’s available. JulieK says the value may be in supporting a family, not in the work itself. You raise that the work may be valuable for meeting the psychological fulfillment needs of the worker, because some people like to feel needed by their dependents.

            My question to you was: what’s the duty of a society to meet the psychological needs of someone who likes to be needed?

            I think you answered that it would be wasteful to have someone else provide economically for these children if the worker can do it through their wages.

            I thought we were talking about this in the context of Scott’s third dilemma, where UBI would create a situation in which some people chose not to work even though working might make them happier. And Scott asks whether we should factor their unhappiness into the cost/benefit of UBI even though the door is open for them to choose to keep working.

            Are you saying that yes, the person who could work but is not working because they don’t have to, their unhappiness should be factored into the cost/benefit calculus because they have a need to be needed that not being forced to work for wages will deprive them of?

          • John Schilling says:

            But — if we have a general-purpose policy of “we will pay money to everyone, no questions asked, in order to provide for everyone on a society-wide level, and that includes your family members,” it’s not stupid at all. That spreads wealth to workers and non-workers alike, most of whom are likely to benefit from it tremendously.

            And it “spreads wealth” from people who created it and earned it and had other valuable things to do with it. Or is this the all-too-common socialist thing where we imagine wealth comes from the Tax Fairy or from Santa Claus or from Robber Barons who we can feel good about taking it from?

            Whatever. There is a finite amount of wealth [X] that we can obtain by taxation, etc., before causing what even you will acknowledge are unacceptable consequences. There is also some guy named Dave, who is presumably capable of creating some finite level of wealth [D] but will only do so if he believes this constitutes “providing for his family”. Otherwise Dave will just take opioids to make himself feel better until he makes himself dead, or you’ll have to devote more wealth to extraordinary measures to keep him alive.

            So, one plan is to e.g. subsidize Dave’s work until it’s enough to provide for his family, which gives you X+D of wealth for your society’s support-families measures and gives you a happy Dave. The other plan is to give money to Dave’s family directly, which gives you only X wealth for your society’s support-families measures and leaves you with a drug-addicted Dave and then a dead Dave.

            How is the second plan not the wasteful, stupid one, by at least value D?

            That prevents the family from being dependent on breadwinner dude, who may be abusive or improvident or just plain mortal.

            Dave being abusive is something you just pulled out of your ass to make the stupid wasteful plan seem necessary. If it turns out to be the case, there’s ways to deal with it. If you can’t even imagine the possibility of dealing with it, without also telling all the poor but good and decent fathers that we don’t need them any more, I’m going to have some choice words for you and they aren’t none of them going to be compliments.

          • Balioc says:

            @John Schilling:

            Look, to the extent that this conversation is about “the value of centralized social welfare versus an economy with fewer government payouts” in general terms — that’s a big topic, we’ve all heard the basic arguments on both sides a million times, it’s not particularly germane to the original discussion, I think we can just skip it for now.

            But specifically with regard to Dave-the-breadwinner’s psychological need to continue being a breadwinner:

            There are costs to not meeting that need. There are also costs to meeting it.

            The costs to not meeting it are pretty much the ones you outline. Dave will take a pretty hard blow to his psychic well-being, and as a society we’ll either have to (a) find some other way to shore that up or (b) accept a more-likely outcome of deterioration and despair. Also, it’s very possible that society-at-large will lose out on some amount of productive activity that Dave would otherwise be motivated to do.

            The main cost of meeting that need is, well, the continued dependence of Dave’s family. Which is fine if being dependent on Dave is fine, but there are all sorts of reasons that being dependent on an individual can be a big problem. Dave can get run over by a bus. Dave can turn out to be very bad with his money. Dave can, yes, be abusive, such that it’s very valuable to be able to cut ties with him without having to worry about food and shelter. I did not pull these possibilities out of my ass, they are all things that happen often enough that a social engineer would be grossly negligent not to take them into account in his calculations.

            Beyond that, having a supplemented income and a greater diversity of income sources can do a lot of good for Dave and his family — he’s freer to switch jobs or explore other options, freer to negotiate terms with his employer, etc. — even if it’s overall beneficial for him to continue working.

            Beyond that, on the level of abstract non-utilitarian morality, I think it’s grotesque to hold anyone or anything hostage to Dave’s psychological need. It is not good to need to be needed — that is a motivation to constrain the people around you and make them lesser — and insofar as society is trying to push at people’s personalities, it should be helping them to get over this kind of thing rather than encouraging it. Certainly this bothers me a lot more than any abstract injustice associated with taxation.

            Overall, I think that the calculus favors cutting the dependency, in a pretty strong way. I’m guessing that you feel very differently. But one way or another, there is an actual argument to be had here, with terms on both sides of the equation.

          • cuke says:

            Beyond that, on the level of abstract non-utilitarian morality, I think it’s grotesque to hold anyone or anything hostage to Dave’s psychological need. It is not good to need to be needed — that is a motivation to constrain the people around you and make them lesser — and insofar as society is trying to push at people’s personalities, it should be helping them to get over this kind of thing rather than encouraging it. Certainly this bothers me a lot more than any abstract injustice associated with taxation.

            I agree with this entirely.

            I can’t tell if making this more real in the details would be helpful. My mother was able to leave a husband who was very abusive to her and her kids because she happened to receive a small, unexpected independent source of income. I could walk through in detail how my life would have been measurably so much worse if that bit of luck had not happened. Shortly after it did happen, my mother gave $1000 to a friend of hers who was being beaten by her husband, and that $1000 enabled her friend to move out and rent an apartment and begin to figure out how to be economically independent after being a stay-at-home mom who had supported her husband through his education and career.

            I am a psychotherapist now. I work with women and men who are in abusive relationships of various kinds who cannot yet figure out how to get money to get their own health insurance or time for work or childcare for time to work so that they can live independently from the breadwinners who are abusive. Other situations do not involve domestic violence but are substance abuse situations where the addicted partner refuses to get treatment and the other partner doesn’t want children raised in a high-risk environment because of the substance use. This is far from some kind of rare edge situation. All you have to do is look at rates for domestic violence or substance abuse and remind yourself that rates are grossly underreported.

            On the breadwinners’ side, I also work with men and women who are trapped in horrible jobs with abusive bosses and can’t yet gather the resources to leave because their children would lose health insurance, because they would have to move for an equivalent job and they don’t have the money for moving plus health insurance, etc.

            I’ve also worked with quite a number of couples where one of them is managing a chronic illness, such that even without children, they are unable to work full-time and are dependent on their partner’s income. People with chronic illnesses, in our context of inadequate health insurance and rising healthcare costs plus inadequate disability benefits, are sometimes trapped in abusive relationships that they cannot get out of, and for them it’s sometimes that or homelessness. I’ve worked with a couple of people who eventually chose homelessness. Choosing between homelessness and an abusive relationship when you’re chronically ill has got to be one of life’s crappiest situations.

            Out of all the economically-trapped-in-bad-relationship situations I’ve encountered in my work, only one of them was ever reported to the police and became a statistic. This was a situation where the woman was beaten almost to death twice, the second time happening despite a restraining order being in place.

            So when we talk about weighing Dave’s psychological need to be needed up against other factors, those are the situations I have in my mind. I also work with a lot of people around career dissatisfaction and work stress, so I’m also very sympathetic to the need men and women have to feel useful, to have a sense of purpose and meaning, and to feel that they contribute. I like to believe we can care about both the needs for meaning/purpose on one hand and safety on the other.

    • Garrett says:

      People who derive value from feeling helpful and needed. This one is a little trickier, but it’s important to note that — while this is in fact a very common thing for people to want, and we should probably work hard to address it one way or another — conventional modern employment is an incredibly bad way to satisfy that desire.

      I’d go a little more abstract and say that there are people who derive value from being desired (for whatever version of desire applies). How do you fulfill the value of being desired while not necessarily being desirable?

  35. vV_Vv says:

    Another option is to dismiss them as people “revealing their true preferences”, eg if the harasser doesn’t stop harassing women, he must not want to be in the community too much. But I think this operates on a really sketchy idea of revealed preference, similar to the Caplanian one where if you abuse drugs that just means you like drugs so there’s no problem. Most of these situations feel like times when that simplified version of preferences breaks down.

    Unless you are going to bite the bullet like Caplan does and say that addicts are rationally satisfying their preference for being addicts, then you’ll have to admit that act-utilitarianism mandates that you take away from some people the freedom to harm themselves, in some cases even at a cost to other people.

    More evidence that act-utilitarianism is broken.

  36. Deiseach says:

    Do people “deserve” bad things to happen to them? I think this is complicated; we all have examples in our mind of “bad” people whom we wish poetic justice upon, and if they ended up suffering we would probably think they deserved it, and we’d certainly enjoy them getting their comeuppance. The fact that we have a term like “comeuppance” means we do, in some sense, think “bad people deserve bad things to happen to them”.

    So there’s that. Even the very nicest people will have someone they think is not so nice and if something happened to them, well, it would be awful but at the same time…

    As for deserving it, in the case of drug addicts and so on, no, probably no-one deserves bad outcomes. But there is contribution to it, and even making all allowances for environment, genetics, abusive upbringing, bad luck and so on, there still remains the tiny grain of ‘you made the choice’. Some people are genuinely incapable of seeing that doing X is going to lead, inevitably, to outcome Y. That’s a problem, and it’s not one with an easy solution, because that means either letting them go on to one disaster after another and trying to clean up their messes for them (which in practice means “assign an overloaded social worker to them, once they burn out rotate in a new one every three months, nobody ever does anything effective because they’ve got sixty other clients like this, and are just getting to know the client and the details of their case when they’re shifted and a new person comes in and has to start from scratch all over”), the kind of paternalism which is now very much out of favour (e.g. involuntary admission to hospitals, the old style asylums and other similar social structures to control the lives of those who could not control them for themselves) or the hard-faced “okay, you’re perfectly free to spend every last cent on heroin and I’m not going to stop you because you have no claim on me and I have no responsibility for you and you’re free to make your own decisions, but you are also perfectly free to starve in the streets after that because you have no claim on me and I have no responsibility for you and this is the outcome of your free decisions”.

    Re: drug abuse, again yeah, after all allowances are made, there is that one remaining grain of “you did this yourself the first time”. Last week I had my post about being very happy on that anti-anxiety drug. And that little pill bottle sitting on my desk is a big temptation, because I know I can have that happy feeling again. I could pretend to myself “Oh but I really need it, I really do suffer from anxiety” and that’s true. But I’m not having an anxiety attack now, what I have is that bloody heredity strikes again. An awful lot of the paternal side of the family end up on “pills for me nerves”, a very close family member ended up with a diazepam addiction, and we tend to have personalities prone to psychological addictions.

    The main and only reason I’m not taking that happy pill is because even though I have a long history of making damn stupid decisions, I am more scared by the consequences of what I know will happen if I do (not might, would or could, will happen). If I take that pill without needing it, I know where I’m going to end up, and I’m bad enough already without a prescription drug habit on top of everything else.

    Do I blame my genes? Well, not much can be done there, but since I know the family tendency, if I go ahead anyway it’s stupidity on my part and I will have contributed to my own problems and I will, in a sense, deserve what happens.

    Do I blame the doctor? No, they were very clear about only prescribing a small amount at a low dose, warning me of the problems, and only offering it because (a) I arrived in after seven hours straight of jittering anxiety, presenting with physiological symptoms that were really psychosomatic (b) if they didn’t prescribe something to help with the anxiety at its worst, I would be every ten minutes in their surgery presenting with physiological symptoms which were really psychosomatic (c) they acknowledged that this prescription is basically a psychological crutch, that having something I can take if I have a really bad episode is going to help me calm the fuck down [not a direct quote] and not escalate into having a really bad episode and thus needing to take something.

    (EDIT: Trying to talk myself down does not work. The rational part of my brain is going “Okay, now, you know this is not as bad as it feels, this is X, Y and Z”, the other 99.9% of my brain is screaming AAAAAHHH NO HORRIBLE HORRIBLE GONNA DIE AWFUL TERRIBLE BAD THINGS BAD THINGS UNSPEAKABLE DREADFUL APPALLING THINGS AAAAHHHH!!!!).

    Do I blame ignorance? Nope, I was warned, I know from seeing it up close how it can happen, and I know the pitfalls. So if anything happens, it will be damn stupidity and I will have caused my own problems.

    Sure, there’s the whole genetic/environmental/stress/other reasons ball of wax for why I might abuse this drug, but in the end it does rest with me if I take that first step.

    Re: “but if you don’t have to work, how will you get any meaning out of life and you’ll be miserable” – hey, work is a curse due to the Fall, remember? Some people may indeed be very wretched if they’re not working, in which case I’d say “is it because you would miss that particular job, or is it that you feel you need to be contributing something to society?” in which case find some other way of contributing. Because for the majority of us, we are all replaceable in our jobs. If you didn’t turn up for the interview that got you this current job, they’d have hired someone else; there are very, very few people where an employer will go “Well, we can’t get Zacharias, might as well close up shop because we’re screwed without him”.

    I’ve had jobs I liked and enjoyed, but I would be perfectly happy tootling along without a job if I had a basic income to make sure my bare necessities were covered. I really don’t get the mindset of someone who has to work a 9-5 job to feel like they are being productive or have meaning in their life, I have plenty to do even when not at work. (Yeah, yeah, you lot are all thinking “Aren’t you the one who was just talking about being a prospective junkie, you mare?” but being in a job isn’t going to stop that happening if it ever does, believe me).

    I’m sure there are people who, if Jeff Bezos said “I’ve just had a spiritual experience and I’m going to live as an ascetic in the desert, here, have my bank account and all its contents”, would still go into work or have some kind of a job because they couldn’t see themselves as a person if they were without a job. But there are plenty of ordinary people who, if they won the lottery, would quit work in the morning and never regret it.

    Re: sexual harassment, being aro-ace I have no entitlement to give an opinion on what you weird sex-havers insist on doing about having sex and romance 🙂 (I can’t give any useful opinion on this, since every time I hear this kind of tale my instinctive reaction is “But why was it so impossible to keep your trousers buttoned? Nobody needs sex!” but, eh, you sex-havers are weird about “we do too need it”).

  37. Nabil ad Dajjal says:

    You’re failing to properly understand the opposing position here.

    Try to rank these three situations in order of best to worst case scenario:
    1. A bank robber tries and fails to rob a bank; no money is stolen and he is caught.
    2. A bank robber robs a bank and lives happily off of his crime.
    3. A bank robber robs a bank but the money is destroyed during the getaway, leaving him with nothing.

    Your ranking is almost certainly 1 > 2 > 3. It’s best for society if bank robberies are prevented, but if a robbery happens it’s better for the robber at least to benefit. His pleasure is counted as a positive in the equation.

    I, and many other people, instead rank it as 1 > 3 > 2. The best case scenario is where bank robberies are stopped and the robber punished, but if a robbery happens it’s better for the robber not to profit. His pleasure is counted as a negative in the equation.

    Giving everyone the same reward or punishment regardless of what they do is injustice. Good behavior deserves to be rewarded and bad behavior deserves to be punished.

    • Bobobob says:

      Vincent Vega from Pulp Fiction, about some dude keying his car: “Boy, I wish I could’ve caught him doing it. I’d have given anything to catch that asshole doing it. It’d been worth him doing it just so I could’ve caught him doing it.”

    • Civilis says:

      I think two things are getting confused here. I’d prefer scenario three over two, not because of any karma on the bank robber’s part, but because it discourages others from taking up robbing banks.

      Compare these two scenarios:
      2a. The robber robs a bank. He fakes his death during the getaway and lives happily ever after, but the public believes he died horribly.
      3a. The robber robs a bank. He seems to get away, but actually his risky getaway plan fails and he dies horribly. However, the public is convinced he survived.
      (You could probably do better using D. B. Cooper in these examples than a bank robber).

      The problem with your scenario two is not just that the robber lives and escapes karma, but that the robbery was successful, thus encouraging more people to rob banks. The rational approach is to prefer scenario 2a over 3a.

      The problem with Scott’s scenario two is that it’s a tradeoff not between the minimal damage of the one sexual abuser and the damage he receives by being fired for sexual abuse, but the tradeoff between all the sexual abuse that happens and the damages inflicted on the first abuser to be punished (and ultimately, the tradeoff between the damage from the abuse and the damage from the measures necessary to limit the abuse to an acceptable level).

      • Nabil ad Dajjal says:

        Ethical thought experiments always fall apart when you try to add “and nobody ever finds out.” That’s just not how the real world works; there’s always some chance that news will spread.

        That said, I would say that 3a. is still better than 2a. under those conditions. It’s definitely better by a narrower margin, because the marginal would-be bank robber is that much more likely to try to emulate the seemingly successful escape. But the fact that a guilty man was unable to escape punishment counts for a lot even if we accept arguendo that somehow nobody discovers it.

  38. Levin says:

    It sounds a lot like you are a utilitarian who gives higher weights to people with more agency (I can certainly confess to such a preference). You could maybe modify the experiment to test this:

    1) There are only n people (for example n=1) who can be cured by the new drug, and a million people who would suffer from addiction if the drug is made legal.
    2) The only people for whom the antidepressant is effective are the kind who would abuse it if they discover that they can, and discovering the snorting trick is a purely random event.

  39. blacktrance says:

    Not a utilitarian (so I give non-utilitarian answers to 2 and 3), but the answer to 1 is that there’s a difference between “not deserving X” and “deserving not-X”. The drug addict doesn’t deserve to suffer from their addiction, but they also don’t deserve to enjoy non-addiction at the expense of the depressed people.

    I’ve also lately realized that while thought experiments are valuable for helping us isolate moral principles, there’s a danger of assuming that real life is more similar to the hypothetical than it really is. So questions like 2 might be interesting in theory, but if you stipulate that, in the scenario, kicking the guy out might be suboptimal from a utilitarian perspective, and draw the conclusion that it’d be actually wrong to kick him out, and then apply it to real life, your real-life community will be terrible as a result. This is why rule utilitarianism beats act utilitarianism.

  40. AG says:

    This is all a lead-up to a full in-depth examination of The Good Place, right?

  41. Sigivald says:

    And this is true even if we are good determinists and agree he only harasses somebody because of an impulse control problem secondary to an underdeveloped frontal lobe, or whatever the biological reason for harassing people might be.

    If we’re really good determinists, shouldn’t we just accept that our reactions are just as determined and not care about them?

    (Of course, we do not, indeed cannot.

    This is my pet philosophical solution to the “problem of Free Will” – we almost certainly don’t literally have it, but we also inexorably have the experience of it and can practically only act and react as if we have it.

    So it doesn’t matter that it’s not “real”.)

  42. belvarine says:

    So the government bans the antidepressant, and everyone has to go back to using SSRIs instead.

    “The government” didn’t “ban” tianeptine. Michigan classified it as a Schedule II substance. Has the government banned Ritalin or oxycodone?

    Tianeptine is drying up in the US because credit card processors now refuse to service online vendors. In the US, patients can’t obtain prescriptions because Johnson and Johnson discontinued clinical trials several years ago, whereas tianeptine prescriptions are freely available in France, Germany, etc.

    The state is willing to offer access to this substance. A network of private, unelected business conglomerates is preventing tianeptine from reaching US consumers.

    What are tianeptine users supposed to do, boycott online payment processors? Stop using Johnson and Johnson products until they decide to restart clinical trials?

  43. Deiseach says:

    Hmmm. Reading all the comments, it seems people are very big on Mercy but forget about, don’t care about, or ignore Justice. That is, it’s not fair/right/nice that sinners suffer, they should get their own local version of Heaven or be made non-existent or be made so that they wouldn’t sin in the first place.

    And this gives me a big clue as to why everyone/the majority on here seem to think “pro-desert people” are all some kind of sadists going around with our tongues hanging out looking for a rack and a heretic to stretch on it.

    • Faza (TCM) says:

      I feel I must call out Problem of Evil here.

      I’m not sure whether a concept of big-J Justice actually makes sense without an intelligent creator to set in stone what is right and what is wrong, so I will just assume God for the sake of argument.

      If God is both omniscient and omnipotent, then it is impossible for God to do something He did not intend to do. He has the power to do exactly what He wants to and He knows exactly what the result of anything He chooses to do will be.

      If God is both omniscient and omnipotent, and God has created the world, the world – and everything in it – is exactly as God intended. Everything that happens in the world is exactly as God intended.

      If God is both omniscient and omnipotent, and God has created the world, and something in the world happens that isn’t to God’s liking, God has only Himself to blame.

      If God is both omniscient and omnipotent, and God has created the world, God is the proximate cause of sin, through creating a world where sin can exist and sinners to sin.

      I’m not sure where Justice fits into this, to be honest. If God made sinners, that they may sin and hence deserve punishment, it really does start to look like sadism on His part – unless the case can be made that He was somehow forced to create sinners, in which case I’m not sure where that leaves omniscience and omnipotence.

    • cuke says:

      From my perspective, it seems to me we have an entire legal system devoted to the value of justice. Mercy is occasionally applied in small doses by some people when it comes to individuals and their specific circumstances.

      The primary way we have dealt with poor people, the mentally ill, and the drug addicted historically has been to shame them and lock them up. My sense is we are only just barely a little bit emerging from that history.

  44. I find all three of these situations joining the increasingly numerous ranks of problems where my intuitions differ from utilitarianism. What should I do?

    Wouldn’t preference utilitarianism resolve all of these scenarios, and related ‘utility monster’ style problems, without having to add a clunky bolt-on?

    1. Depressed people prefer having effective antidepressants available. Recreational drug users prefer to take ten times the recommended dose, crush it up, and snort it. Everyone gets to maximize their own utility function.

    2. The sexual harasser prefers to harass women, but his victims strongly prefer not to be harassed, and the community prefers not to harbor harassers. Everyone except the harasser gets to maximize their own utility function.

    3. The UBI recipients who don’t want to work get to sit on their asses and play video games. Those who do want to work are free to do so. It’s always preferable to have option B and option A, rather than option A alone. Everyone gets to maximize their own utility function.

    Obviously there are some pretty bleak outcomes in scenarios 1 and 3, but at least this framework gives you the ‘correct’ (lesser-of-two-evils) answer without having to break anything.

    Something I’m really curious about is whether any philosophers still believe in Bentham-style hedonic utilitarianism? I thought it had been long since abandoned, but maybe I’m confused. As far as I can tell (no relevant expertise, could be completely off-base) it seems to clash with pretty much everything we’ve discovered – or rediscovered – about the nature of actual human flourishing/wellbeing in recent decades. What’s the story?

  45. eqdw says:

    Pre-comment: Halfway through this post I realized that my argument isn’t as rock solid as I thought. I have a vague sense that the main weaknesses are just due to specific framing details, and the core of it is still strong. Please exercise charity.


    An observation:

In two of the situations, there’s a fundamental asymmetry between the two sides: an asymmetry between being active and being passive. In all three of the situations, the ‘utilitarian solution’ imposes an externality on the passive party where previously no such externality existed.

In the first situation, the people suffering from depression are in the passive role (since the hypothetical stipulates that they just kind of have depression, not that it’s a consequence of some more fundamental thing). And the people abusing tianeptine are in the active role (since they have to specifically go through collecting 10x as much drug, crushing it, and snorting it).

    In the second situation, the harassed women are in the passive role (they’re just trying to live their lives. They didn’t do anything). The harasser is in the active role.

    The third situation is a bit of an outlier, and I’m not going to argue it. But I think the pattern holds. The people miserable because they are not employed are kind of in a passive role (since they’re _not_ working) but they’re also kind of in an active role (as in, they are actively choosing to not work).

    In all three scenarios, the active role is actively doing something to cause the situation they find to be harmful. And in all three scenarios, the utilitarian prescription for fixing this, even if it’s provably better (as stipulated by the hypothetical), creates an externality onto the passive party.

    In the first scenario, the utilitarian intervention (banning the drug) imposes an externality onto the passive side (they cannot enjoy an effective antidepressant). This externality is caused by the active party (because the drug problem is a consequence of the active party’s intentional action of taking the drugs, and because the externality of a drug ban is a consequence of the measures taken to fix the drug problem).

In the second scenario, the utilitarian intervention (tell the women to deal with it) imposes an externality onto the passive side (the women’s safety and comfort is degraded). This externality is caused by the active party (because the degradation of safety and comfort is a direct result of their harassment).

    In the third scenario, the utilitarian intervention (ban UBI) imposes an externality onto the passive side (they do not get to receive free money). This externality is caused by the active party (because the nihilistic angst problem is caused by them choosing not to take jobs available to them, and because the UBI ban is a consequence of the measures taken to prevent nihilistic angst).

In all three scenarios, there is no such externality in the opposite direction if the hypothetical scenario were reversed.

There is no reasonable argument that depressed people taking responsible amounts of tianeptine are causing the addicts to get addicted. At best they are enabling it.

    There is no reasonable argument that the presence of women at the event is causing the harassment. There is no reasonable argument that the women are responsible for the harasser getting banned. At best, the women enable the harasser to get banned, and at best the group itself is responsible for the harasser getting banned. But all of this stems, fundamentally, from actions that the harasser took.

    There is no reasonable argument that the presence of a UBI is causing the nihilistic angst. At best, it is enabling it. The cause is still their choice not to take a job.


    In each scenario, this active/passive asymmetry and the externalities that fall out of it form a consistent basis for coming to the conclusions that Scott comes to. Because in each scenario, the problems befalling the active side are caused by the active side, and also in each scenario the solution to those active side problems creates problems for the passive side, which are ultimately also caused by the active side.

    Because of this asymmetry, the net result of the utilitarian solution in every case is “allow the active side to impose a cost on the passive side”. In pretty much any other situation, most everyone would agree that this is morally unacceptable, and if the only other alternative is for the active side to eat the cost, then that is how it has to be. This situation is similar.

To illustrate this with a hamfisted analogy: Let’s say that Alice is a drug addict, and Dave is a depressed person. If Dave could take tianeptine, he would be fully cured. Dave works as an employee making $50,000/yr. He was up for promotion to management, with a salary of $80,000 per year, but he was rejected due to fears that he would be emotionally unstable. If he had access to tianeptine, he would not have been rejected.

Consequently, and thanks to this super convenient hypothetical setup, we can put a concrete dollar amount on the cost of his depression: $30k/yr. Times 40 working years, that’s a lifetime cost of $1.2M.
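The back-of-the-envelope figure can be checked in a couple of lines (a sketch using only the comment’s hypothetical salaries, not real data):

```python
# Hypothetical figures from the comment: $80k with the promotion,
# $50k without it, over a 40-year working life.
salary_with_promotion = 80_000
salary_without = 50_000
working_years = 40

# Annual gap times working years gives the lifetime cost of the depression.
lifetime_cost = (salary_with_promotion - salary_without) * working_years
print(lifetime_cost)  # -> 1200000, i.e. the $1.2M cited
```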

We already know per Scott’s original construction that the benefit to the addicted Alices is greater than the cost to the depressed Daves. So even though this is a gigantic cost, it was officially worth it. On utilitarian grounds, we should be OK with this.

    Now, imagine an alternative timeline. In this timeline, Tianeptine is legal. Dave got the promotion. He makes $80k/yr now.

    Until one day when he’s leaving his office at the end of the day. Alice pulls a gun on him and demands $1.2M.

    (Of course she’s using it for addiction treatment and is distributing it to other addicts or whatever. This stolen money will be efficiently spent).

    From a utilitarian perspective, this hypothetical is exactly the same as the original one. Yet I suspect that most people would intuitively conclude that “drug addicts demanding a million dollars at gunpoint is always wrong, even if it’s mathematically the utilitarian thing to do”.

I believe this line of reasoning explains the paradox between the “correct” utilitarian position vs the “correct” intuitive position.

  46. vpaul says:

I don’t think your criticism of the utilitarian argument against UBI would pass the Intellectual Turing Test. My argument against it would first emphasize that the higher taxes needed to pay for it would lower economic growth, greatly harming future generations. My second point would be that a UBI would discourage work, further distorting the labor market, lowering economic growth, and hurting future generations (see Tyler Cowen for the moral argument in favor of economic growth).

The argument about work and happiness seems like a sort of conservative / religious moral critique of UBI. I’m not even sure that it is correct, since the link between money and happiness could be larger than the link between work and happiness.

  47. Guy in TN says:

    Another option is to dismiss them as people “revealing their true preferences”, eg if the harasser doesn’t stop harassing women, he must not want to be in the community too much. But I think this operates on a really sketchy idea of revealed preference, similar to the Caplanian one where if you abuse drugs that just means you like drugs so there’s no problem. Most of these situations feel like times when that simplified version of preferences breaks down.

It’s better to think of the translation of will-to-revealed preferences as a spectrum. Too often, I see libertarians/classical liberals treat preferences along the lines of: “If you do a thing, that reveals that you wanted to do that thing. Bingo, simple as that.”

    But as you say, examples such as chemical addiction complicate this story. We both recognize that a heroin user might, deep down, want to quit, while simultaneously not quitting. And since the human body and mind is governed by chemical responses, can we not also say that moments of extreme sexual lust, anger, or grief, might also cloud the will-to-revealed preferences pipeline?

    So don’t think of revealed preferences as an on/off switch, but rather a gradient. For someone who is addicted to heroin, the cause of their addiction can simultaneously be that they “chose” it and it was “imposed” on them by outside forces, with the exact proportions of each cause being impossible to tease out.

Your aversion to worrying too much about the UBI guy who chooses not to work is based on your rational inference from context that his non-working is mostly his revealed preference. Your aversion to saying to heroin users “well, this is the life you want” is based on your rational inference that it is mostly not what a human would actually want.

    Caplan and the like are incorrect to abuse the concept of revealed preferences in the way they do, but the concept is not without merit, if tempered with the idea that human will is weak, and easily compromised.

  48. Huitzilopochtli says:

I think these examples hinge on evaluating utility over the short term, and having that valuation come out in favor of the counter-intuitive option. If we were to evaluate the utility of each option over the longer term, we could determine a much better course of action, at the expense of short-term utility.

In each of these cases the option to accommodate the drug addicts/harasser/job lovers is deemed to give greater utility, but it should also be evident that each of these groups has issues that cause them to act this way, so in the longer term it would be optimal to develop a solution to fix their situations, such as rehabilitation or reeducation. Of course these hypothetical solutions would take a lot more resources and time to enact, and in the meantime, according to the short-term valuation, society would lose utility. But this kind of longer-term benefit at short-term expense seems like something any correct utilitarian evaluation ought to tackle, in the name of finding optimal results and actually considering all available possibilities.

Additionally, given that we’re working on the assumption that accommodating the addicts gives greater utility, this seems like a warped incentive: especially in cases 1 and 3, a beneficial development is being cut short in the name of immediate benefits, which could lead to stagnation. It could be the case that developments producing much better outcomes in the long term produce immediate losses, so considering only the short-term outcome, absent solutions to the variables that tilt the calculation toward immediate benefit (the existence of addicts), seems like an evidently flawed way of determining utility.

This leads back to the topic of not using the word lazy to refer to lazy people. We saw that considering only short-term benefits cut off beneficial developments in the examples, and letting flawed people nominate themselves for the short end seems like an intuitive protection against the stagnation produced by weighing only immediate benefits. But it also leads to dismissing the needs of these people, which is wrong. Nor can we assume that the consequences of their choices will lead to self-improvement; assuming no knock-on effects from the desired development, they might genuinely lead to a permanently lower-utility outcome with no guarantee of eventual improvement.

    So the most robust solution is actually to fix the situation of the outcasts. But because dismissing them serves as a protection against their influence stopping development, words like lazy serve to write them off as products of their own choices, of their own nominating themselves for what they got. Instead, their issues should be identified, the causes of their laziness or addiction, which would lead to a greater understanding of their situations and make them easier to treat.

    Admittedly, some medical words do serve as a way to categorize a set of symptoms associated with certain causes and possible treatments, but the difference is that there’s a more robust understanding of those causes and treatments, so calling people by the sickness they have is not dismissive or damning. Lazy has no similarly robust understanding behind it, and so it serves mostly to dismiss people rather than help them.

  49. Douglas Knight says:

    Intellectual Turing Test

    crystallized into a moral principle

    Aren’t these the opposite concepts?

They’re both good concepts, but it’s important to acknowledge that they’re different, and that you wound up at the end of the essay answering a different question than the one you framed at the beginning. This is the classic problem of steel-manning.

  50. alcoraiden says:

I’m going to change your sexual harassment example so that it’s less charged with modern issues, but otherwise remains logistically the same. Let’s say a guy is going around and, what’s something people hate…breaks a window. This guy, once a month, smashes your window, lets all the mosquitoes in, everything is awful. But if you kick him out of the community, he basically loses his friends, family, everything. It’s a given in this situation, as it was in your example, that nobody’s broken window is worse than him losing everything but his life. You said, “Even if I completely believe the friend’s calculation that kicking him out will bring more harm on him than keeping him would bring harm to women, I am still comfortable letting him get the short end of the tradeoff.”

    You said this was true even if he, say, had some kind of malfunction or disability or such, even if the *sum total of all the harm to all his victims is not as bad as his punishment.* Really? We’re not even going eye-for-an-eye here? You’re willing to be *worse* on him? I would say that you’re being pretty awful there. Nobody’s window, literal or no, is worth disproportionate punishment. I think “cruel and unusual punishment” starts where the punishment exceeds the crime.

    • Ghillie Dhu says:

In order for the punishment and the crime to be equivalent in expectation, the realized punishment must exceed the crime, due to (often hyperbolic) discounting on the part of offenders and the potential to get away with it altogether.

      • baconbits9 says:

        No it doesn’t, it has to exceed the gap in opportunity cost between the behavior and the next best behavior.

        • Ghillie Dhu says:

          Fair. I was implicitly assuming inaction as the alternative.

          • Ghillie Dhu says:

            Addendum: assuming that the punishment is a function of the crime and not the particular criminal (so the incentive is meant to be effective for all those to be deterred), the relevant gap would be between the behavior and the next best behavior common to all those being deterred; the latter may well be inaction, so the requirement reduces to my original formulation.
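The expectation point in this sub-thread can be made concrete with a toy calculation (a minimal sketch with made-up numbers, assuming inaction as the alternative behavior, per Ghillie Dhu’s original formulation):

```python
# Toy deterrence-in-expectation model (illustrative numbers only).
# An offender gains `gain` from the offense, but is caught with
# probability p_caught, in which case the punishment is realized.
# For the expected punishment to equal the gain, the realized
# punishment must exceed the gain by a factor of 1 / p_caught.

def required_punishment(gain: float, p_caught: float) -> float:
    """Smallest realized punishment whose expected value equals the gain."""
    return gain / p_caught

# If only 1 in 4 offenses is ever caught, an offense worth 10 "units"
# to the offender needs a realized punishment of 40 units just to
# break even in expectation.
print(required_punishment(10, 0.25))  # -> 40.0
```

Hyperbolic discounting would push the required figure higher still, which is the sense in which the realized punishment generally has to exceed the crime.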

    • arbitraryvalue says:

      I’d exile the guy breaking my window the same way I wouldn’t let someone transplant my organs to save the lives of multiple people. It’s my window and my organs. You don’t get to take them even if you need them more than I do.

      • alcoraiden says:

        I tend to weigh myself as equal to everyone else. I don’t think I’m a more valuable person than they are just because I’m me. (This is really important to my moral code.) While I think I can do more good alive than saving the handful of people my organs would save, if I knew I was going to be useless to humanity for the rest of my life otherwise, I like to think I’d offer myself up.

  51. Angra Mainyu says:


    While I think some (many) people deserve to be punished, I don’t think the examples work. I will address the first one. Please let me know if you’d like me to address the two others.

    1. Antidepressant.

I agree that it would be unjust to deny people their antidepressant only because some other people will use it to harm themselves. Something else would be needed to justify that (like the number of people getting hurt being so high that there would be a catastrophe or something, with third parties suffering even a lot more, etc.). Now suppose that a person attempts to abuse the antidepressant, but as it happens, she fails to acquire it, or she happens to have some sort of resistance, or whatever, and suffers no serious ill consequences. It is not the case that something unjust happened. Moreover, those attempting to abuse the antidepressant but failing to do so do not deserve to be punished for their attempt by being made to suffer the negative consequences of successfully abusing the antidepressant.
My point is that the injustice of depriving people of their antidepressant is not about what those abusing it deserve to suffer, in this case. What’s it about? I don’t know. I think our moral assessments come before general hypotheses about what makes behaviors immoral, unjust, etc., and I don’t have one such hypothesis to offer. Rather, I make an intuitive moral assessment (which I think is also what you’re doing when you say “Depressed people shouldn’t have to suffer because you see a drug that says very clearly on the bottle “DO NOT TAKE TOO MUCH OF THIS YOU WILL GET ADDICTED AND IT WILL BE TERRIBLE” and you think “I think I shall take too much of this””).

    I find all three of these situations joining the increasingly numerous ranks of problems where my intuitions differ from utilitarianism. What should I do?

While I don’t know what evidence you have in favor of utilitarianism (i.e., which arguments you’ve read, which successful tests you’ve seen it pass, etc.), my assessment here is that utilitarianism is false. When tested against our moral sense, it regularly fails to pass the tests.

An analogy: Let’s say a hypothesis posits that green objects are those that – under some pretty common standard conditions – reflect light in the 470-540 nm range. We test it against our color vision, and while sometimes it gets it right (maybe most of the time), it gets it clearly wrong a good number of times, so it’s false. Now, our color vision is probably less susceptible to errors than our moral sense, so it’s an analogy, not the same thing. But even so, our moral sense seems to be the main proper tool for checking general moral claims against specific examples (what else do we have?), and the evidence against utilitarianism is just too strong, in my assessment.

  52. JohnBuridan says:

I like the way you are gesturing at “desert” here. It reminds me of Samuel Johnson’s Brief to Free a Slave, in which he argues not that slavery is in itself abominable and should be banned, but that one’s slavery must be the result of personal choice (like committing murder). Murdering people, in his view, nominates one for slavery. The defendant runaway slave in his case had done no wrong and harmed no one; he had been forced into slavery at birth.

But how to reconcile Scott’s cases with utilitarianism? I would say the cases we are talking about here all concern a tension between prudence and justice. The utilitarian calculus allows us to weigh all the outcomes and make the prudent choice for the greatest good. Justice, however, demands a (Rawlsian?) society in which people, especially the unfortunate, get opportunities to take needed antidepressants, be free from harassment, and receive a basic income.

Does it seem to you that we are dealing with some type of rule or heuristic that precedes implementing utilitarianism?

  53. Picador says:

    The debates you’re having with people seem to leave out some pretty pivotal theories of punishment that have been on the table in legal and philosophical thought for the past several centuries. Everybody knows about retribution, restitution, deterrence, incapacitation, and rehabilitation, but for some reason people don’t seem to discuss the moral-pedagogical theory of punishment: punishment as a public ritual to reaffirm communal values, as articulated by e.g. Durkheim.

    Maybe this is because it has some elements in common with deterrence theories, except that the goal is to send a signal not just to potential criminals but, just as much if not more so, to people who are not at risk of becoming criminals. Or perhaps it’s because the theory sounds barbaric on its face. Indeed, it is barbaric, literally. “Barbarian” honor cultures tend to carry out a lot of these public rituals with the explicit aim of affirming communal values, and with very little regard as to whether the punishment satisfied some sort of technocratic, “civilized” notion of the overall good.

    I find, as I get older and more conservative (or something), that my opinions are becoming more and more pro-barbarian. Utilitarianism seems to be caught in a sort of question-begging tautology about the nature of the good, and barbarian cultures seem to provide some pretty compelling answers to questions that utilitarianism refuses to even ask. The more I read about the lex talionis and the sorts of legal settlements it generally led to, the more I’m convinced that “barbaric” codes of justice understand something about game theory and mass psychology that we civilized folk have all forgotten.

    • Picador says:

      I should have added: the application of Durkheim’s theory to some of your hypotheticals seems to me to get very close to people’s moral intuitions about why you might punish the transgressor, and why that might be okay. The most obvious case is the sexual harasser: by exiling him from the community, yes, you accomplish incapacitation (he can’t harass anyone else in the community) and deterrence (other handsy men in the community mind their Ps and Qs lest they end up like him), but more than both of those things you signal to the women and those in solidarity with them that the community values their well-being and will act decisively against those who harm them. This strikes me as the primary purpose of such punishments in many cases like this, for good or ill. I can’t entirely say it’s wrong, even if it does lend itself to certain abuses.

  54. John Schilling says:

OK, you’ve just perfectly captured my objection to just about every coercive approach to suicide prevention. The dead do not get to dictate terms to the living, and the people who want to be dead shouldn’t have that privilege either. And they have more explicitly nominated themselves for the short end of a tradeoff than anyone. But I seem to recall you are generally supportive of coercive suicide prevention. Is that an exception to the general case or intuition you are describing here? Perhaps worth exploring.

    But, please, not by any attempt at utilitarian or consequentialist ethics. You can handwave “…they calculated out the benefit from the drug treating people’s depression, and the cost from the drug being abused”, but nobody can actually do that. This isn’t counting the integer number of people tied to trolley tracks, this is the sort of apples-and-chickpeas comparison that is mathematically intractable in any real application. Might as well say that, as a good materialist, you’re going to practice psychiatry by solving the wave function of your patients’ brains.

    You’re going to need to define a rule, or find a relevant virtue, or just go with your intuition.

    • NoRandomWalk says:

      I think ‘the dead…the people who want to be dead’ is doing a lot of work here.
      We consider someone who commits a punishable offense as the same person throughout the process of transgression and punishment.

      I think if someone tries to kill themselves, and we know they will want this for all time if we prevent them, then my intuitions agree with yours.
      I think where we differ is that if someone wants to kill themselves | X, and we can change X, then I have a strong preference for changing X even if the best I can do is knowing that with some significant positive probability X will go away over time.

      • John Schilling says:

        But “coercive measures of suicide prevention” is pretty much the opposite of “change X”. Bob wants to kill himself because Alice dumped him and he can’t imagine finding a happy and meaningful life without her. Changing X, means convincing Alice to take him back, or helping him meet a woman who is an even better match than Alice, or helping him find a way to live a happy and meaningful life alone. I’m in favor of any of those things, if they can be done at reasonable cost (i.e. not enslaving Alice).

        Coercive measures of suicide prevention, means e.g. locking Bob in a padded cell where he probably wants to kill himself even more than he did yesterday, but can’t. And probably Charlie as well, who didn’t want to kill himself but said the wrong words when he was venting to his psychiatrist.

        • NoRandomWalk says:

          “even if the best I can do is knowing that with some significant positive probability X will go away over time”

I am explicitly making the claim that, the vast majority of the time, if Bob wants to kill himself because Alice doesn’t love him, and we keep him in prison for, let’s say, a year, by the end of it he will have mostly gotten over how much her not loving him hurts him. Just because of how the brain responds to the passage of time.

Some things, like illnesses that are not treatable and don’t get better over time without treatment, are justification to let folks kill themselves in my view. But my belief (I don’t know where it comes from, I could be wrong) is that for at least the majority of suicide attempts, if we just lock people up in a straitjacket for a moderate amount of time without any further care, their future self will on reflection opt not to try to kill themselves again.

          Do you disagree with my object level claim above?

          • acymetric says:

I’ll jump in and say that I agree, except that “keep him in prison” is likely to lead to the same or a worse outcome. The problem with coercive suicide prevention isn’t that it is a bad idea; it is an exceedingly good one (if combined with other rehabilitating measures). It is just that we are particularly bad at implementing it, through a combination of lack of understanding and lack of resources.

            In your example, now Bob no longer wants to kill himself over Alice, but very well may want to kill himself because he is homeless, broke, unemployed, and possibly without whatever support network he had before he was committed.

            So, I agree that time spent unable to kill oneself is a good preventative measure that will likely lead towards not wanting to kill oneself in the future for a lot of people if and only if the quality of life during that time and the expected quality of life after that time meet some criteria of goodness. Torture/cruelty (using loose definitions of the words here) and destroying the outside life of the person involved with no path towards rebuilding will likely have no effect or even a negative one.

          • John Schilling says:

            Do you disagree with my object level claim above?

            Your object-level claim starts with the assumption that Bob wants to kill himself, but here Bob is conspicuously not-dead. And the assumption that everyone who ever downed a bottle of pills wanted to kill themselves, as opposed to simply wanting someone to pay attention to them for a bit, is almost certainly false at the object level.

So you’re combining locking people up for something they haven’t done and you aren’t even sure they want to do, with locking people up while the next step of the plan is a hope that someone else will go do the hard work of actually solving the problem. I am opposed to both of these individually, and particularly to both of them combined.

If you’re willing to work with Bob all the time between now and then, the second objection mostly goes away. And there’s a good chance that you’ll figure out some of the mistakes in the first part. But if you’re willing to work with Bob all the time between now and then, then you probably don’t need to actually lock him up to keep him from killing himself. It’s only if you’re going to put that part on some vague to-do list that you need to put Bob in a padded cell. And that maximizes all the downsides that acymetric points out.

        • cuke says:

          Coercive measures of suicide prevention generally amount to involuntarily hospitalizing someone for 72 hours so they can be assessed and treated. Maybe Scott can weigh in here — I don’t think a person can be indefinitely involuntarily hospitalized just because they are really depressed. They can’t be forced to attend counseling or take medication once they’re out. If they then opt to kill themselves and succeed, then that resolves the situation. If they opt to get treatment and don’t kill themselves, then that resolves it. If they try to kill themselves again and don’t succeed, then they go back to 72 hours of hospitalization.

          I don’t mean to defend this as a way of proceeding, only to say it’s not quite as stark as it’s characterized here in terms of how freedom/intervention unfolds.

          Outpatient mental health workers talk to people who express suicidal ideation all day long. It’s a rare instance that those conversations lead to involuntary hospitalization. I don’t know the statistics, but my sense would be that more people are hospitalized after failed suicide attempts or are voluntarily hospitalized for suicidal ideation, than are involuntarily hospitalized for saying the wrong words when venting to their psychiatrist. Mental health workers are pretty well trained not to do that (which doesn’t mean it never happens, obviously). Suicide risk assessment is a core skill for mental health workers and it takes a bit of time to perform it; people are not supposed to be involuntarily hospitalized without that kind of assessment being done, which means a patient has repeated opportunities to clarify how seriously and imminently they intend to kill themselves.

          If someone who is suicidal is involuntarily hospitalized in a situation where they really preferred to kill themselves than get treatment, they will have other opportunities very soon. Some people who attempt suicide and survive are really very happy to have gotten treatment and another chance at life.

          • John Schilling says:

Compulsory suicide prevention is 72-hour(ish) holds, plus sometimes longer confinement, plus sometimes being released from confinement on condition of taking mind-altering drugs indefinitely, plus gun control, plus blocked access to overlooks, rooftops, and bridges that would otherwise offer spectacular views, plus mandatory reporting, plus inadequate access to pain medication for people who are in constant pain.

            I’m opposed to all of that. And I’m not so much opposed to that because I think it makes it impossible or even extraordinarily difficult for people to kill themselves if they really want to. As you say, death manages. I’m mostly opposed to all of that for the obstacles it places in the way of people who want to live and enjoy life, but are liable to be paternalistically constrained because someone is afraid they might kill themselves if we e.g. let them have more than a five-day supply of pain medication.

          • cuke says:

            There’s a lot of stuff mixed in here. I’m not sure what you mean by gun control in this context of suicide prevention.

            I have never heard of a person who attempted suicide or expressed suicidality being required to take mind-altering drugs indefinitely or even immediately after inpatient discharge. How would that be enforced?

            Mandatory reporting I understand to refer to “mandated reporters” and that pertains to elder and child abuse. What do you mean here?

            Pain medication has long been highly regulated — are you saying that cutting back on opioid prescription as a result of the epidemic has gone too far, quite apart from suicide prevention?

            I gather some blocked access stuff is a public safety issue — i.e., falling people can harm other people, cause accidents, etc.

            To me I guess these things seem like minimal public safety tradeoffs, but then I’m a healthcare provider with a public health background. I get they don’t seem like that to you. Our paternalistic buttons will get pushed by different things. I think banning large soda drinks is paternalistic.

          • John Schilling says:

            There’s a lot of stuff mixed in here. I’m not sure what you mean by gun control in this context of suicide prevention.


            There are many people who argue, explicitly or implicitly, that private ownership of firearms should be banned or severely restricted because that would make it harder for people to commit suicide and reduce the number of people who commit suicide. Some of them, I suspect, are not being honest about their true motives, but some are sincere. And our host has made something very like that argument in a top-level post here fairly recently, so I’m surprised that this is a new idea for you. But, OK, now you know.

          • The Nybbler says:

            Pain medication has long been highly regulated — are you saying that cutting back on opioid prescription as a result of the epidemic has gone too far, quite apart from suicide prevention?

            As a result of the opioid epidemic, prescriptions in NJ have been cut back to 5 days. The last time I was prescribed opioids was for an LC1 pelvic fracture, which is a fracture of three bones. It is extremely painful. Treatment is rest and using crutches until you heal, some 3 months later (actually much longer than that, but after 3 months I could kinda walk unassisted, though still with pain). The pain after 5 days (or 15 days — the old rule — or even 30 days), especially if you accidentally put too much weight on the broken side, is not small.

            Yes, theoretically you can ask for more. But not really, because asking for opioids under any circumstances risks getting you put on a list of drug-seeking patients. If you’re on that list, you don’t get any opioids again. Ever. So it’s not worth it.

  55. helloo says:

    Let’s scale back a bit and look at the priors-

    Why does it matter if someone “deserves” to be punished if you are going with utilitarianism? Isn’t that a virtue in and of itself?

    Not saying it does not matter, but rather that whether they are counted as part of the utility function or the governing procedures, the question of how to consider their intent, responsibility, etc. should already have been considered when creating them.
    That is – if you feel that someone who willingly chooses to do bad things to themselves shouldn’t have their negative utility counted – state it as such by design!
    If you feel that it should be moderated by their ability to rationally choose, but are unable to measure it, either choose some method to approximate it (always give them the benefit of the doubt, a fraction based on the probability it’s one or the other, etc.), or realize that this might also interfere with your ability to calculate the utility of it.

    Second, regarding the adoption of laws in a utilitarian sense.
    Laws can range from purely discretionary to strict.
    A strict law is not exactly utilitarian until you consider the consequences and consider it as your preferred solution.
    That is, a strict law is almost guaranteed to have winners and losers, but might still be adopted if it’s better than any other solution.
    A law with discretion will by definition vary according to the discretion of the one who exercises it. For better or worse, this allows everything from the reduction of errors in unusual situations to the possibility of discrimination and corruption.
    Either way, it’s not that “the law allows cases where it isn’t utilitarian” but rather that the acceptance of the possibly “unfair” consequences of the law should already have been considered when choosing to adopt it. There might not be a law that fully encompasses the utility function unless you state it as such (which, as you mention, might cause issues in itself).

    Lastly, I think a better example would be of lottery tickets. In your worldview, would you ban them or not?

  56. VNodosaurus says:

    I don’t think these examples have a utilitarian calculation that supports the post’s conclusion. Furthermore, I’d argue that in the extreme cases where the utility inequality is as described, the utilitarian course is reasonably close to moral intuition.

    Depression is something that very significantly lowers quality of life – not as badly as heroin addiction, but it’s not so small as to be ignored in the heroin comparison. Meanwhile, the people using the drug like heroin – what would they be doing, if the drug was banned? I suspect that a fair few of them would be using heroin proper, or something similar to it. If the availability and addictive qualities of the drug are such that a significant fraction of its users end up addicted, and that it is much easier to obtain than any other drugs of comparable addictive potential, I think that’s a fairly strong argument that the drug should be at best strictly regulated.

    (Or you can go full preference-utilitarian and say that people can get addicted if they want, and that this is a feature rather than a problem.)

    The example of the sexual harasser – well, ‘harassment’ can mean a lot of different things, and can have major knock-on negative effects well beyond the person directly affected. But generally, people do tolerate minor anti-social behaviors among people in their community. That attitude of ‘sure, he’s a jerk, but he’s still one of us’ is hardly unheard-of, especially in communities that attract jerks. Now, past a certain level of misconduct people will get kicked out, especially if it breaks laws, but that’s a matter of setting precedents.

    The case of UBI – if UBI genuinely causes people to suffer in ennui to a greater extent than causing them to be free of the suffering of work, I would oppose UBI, and I think so would almost everyone. In practice, the fraction of people who both require work to have meaning in life and wouldn’t work if given the option not to is, I suspect, pretty small. If you really do believe that most people are like that (which Scott doesn’t seem to), you should oppose UBI.

  57. Lillian says:

    This seems like as good a time as any to discuss a thought that’s been bouncing around my head for a while. A lot of discussions about morality seem to be discussions about territories, whether it’s better to live in Utilsylvania or Virtuenia, or whatever. However i think much of the time people are actually talking about maps to substantially the same territory. It seems to me that much of humanity shares substantially the same core moral intuitions, so a lot of arguments about morality wind up being arguments not about which territory is best to live in, but about which maps best depict that shared terrain. So when i look at Scott’s post it sounds much like, “This utilitarian map I’m using, which serves me very well most of the time, shows a plateau in this spot where I’m pretty sure I see a valley. While I usually trust the map, in this case it obviously needs to be amended.”

    This falls in with my general intuition that moral systems are tools for a purpose, means rather than ends. Even a well-stocked machine shop may find that sometimes a specific job exceeds the capabilities of its available tools, in which case it may need either to get a new one, or to use the existing ones in non-standard ways. This does not, however, invalidate the usefulness of the machine shop as a whole. It also means that when presented with a hypothetical job, saying “that’s never going to come up in real life” is a valid defence of the shop: it’s built to handle real-life jobs in the real world, not every hypothetical you can possibly think of.

    • JohnBuridan says:

      Which ultimately just leaves all ethical dilemmas up to the individual to decide based on what he/she thinks best, correct?

      Ethics becomes a type of skill about knowing which tool can resolve a dilemma in favor of your intuitions. Then of course we are stuck with intuition as our source for morality, and cannot appeal to any rules with regard to moral arguments. In this view things like “divine law”, “virtue”, “greater good”, “natural law” and “individual rights” are just intuitions shared by a bunch of people, and those people who deny the existence of any of these things and act accordingly cannot be reasoned with. This seems like a bad result of your view… definitely doesn’t appeal to my intuitions.

      I like being able to appeal to something a little more objective. Yes, this objective thing should appeal to intuition, but it should mostly be a series of appeals to realities outside of the individuals discussing the moral dilemma. “Who is harmed and who benefits, and in what way, in a chosen course of action” seems to me the stuff of factual discussion (e.g., my recent admin discussion about whether we should continue to offer physics in 11th grade). Which course of action to choose seems to depend not only upon the intuitions of the individual, but strongly upon the “intuitions of the community.” However, communities don’t have intuitions; what they do have is some hierarchy of values (usually unspoken) and goals (usually explicit).

  58. aphyer says:

    (I’ve typed this comment out twice — when I saw a typo after I posted the first version I tried to edit it, and I was told ‘you cannot edit this any more’ and then my comment disappeared. Did something eat it?)

    I think my main complaint here is that it’s basically using utilitarianism as a motte-and-bailey:

    The MOTTE of utilitarianism is as follows:

    Utilitarianism isn’t really a moral system — it’s a framework for thinking about moral systems. You can be a utilitarian paperclip maximizer — it’s actually easier than being a utilitarian human! Utility = # of paperclips, done!

    If (e.g.) Alice supports factory farming and Bob opposes it, utilitarianism can’t necessarily resolve this. It’s entirely possible that Alice will write a utility function that puts no weight on chickens and Bob will write a utility function that puts positive weight on chickens.

    But utilitarianism can sometimes help you think more clearly about moral problems — for example, you can try to work out what the ratio between chicken weight and human weight in your utility function would need to be for factory farming to be bad (spoiler alert: not very high).

    The BAILEY of utilitarianism is as follows:

    The RIGHT utility function is this utility function here that I have just written! I just analyzed my preferred policy, and it is positive-utility under this function! So you need to support my preferred policy, or you’re not a utilitarian!

    When Scott says e.g. that banning the drug in Example 1 is positive-utility, I assume he means something like this:

    Our best projection is that banning the drug will make 1000 depressed people suffer more, losing 10 QALYs each. However, not banning the drug will make 1000 potential drug addicts get addicted, losing 15 QALYs each. Overall, therefore, banning the drug has a utility of +5000 QALYs.

    But this is the same ‘sneaking in your own utility function’ mentioned in the bailey above! It’s assuming a single utility function of ‘add up total # of QALYs across all humans’. And there’s an obvious competitor utility function that is only a bit more complicated:

    Assign each human a moral weight based on their behavior — higher for people who behave better.
    Add up total # of QALYs again, but this time weight each human’s QALYs by their personal moral weight.

    Using the drug example again with the numbers above but with this new utility function:

    A: If we assign weight 1 to both the depressed people and the addicts, the utility of banning the drug is +5000 QALYs.

    B: If we assign weight 1 to the depressed people but weight 0.5 to the addicts (we care about them less), the utility of banning the drug is -2500 QALYs.

    C: If we assign weight 1 to the depressed people but weight 0 to the addicts (we don’t care about them at all), the utility of banning the drug is -10000 QALYs.

    D: If we assign weight 1 to the depressed people but weight -0.5 to the addicts (we actively want them to suffer), the utility of banning the drug is -17500 QALYs.

    Note that all of the last three oppose banning the drug, though for very different reasons: B accepts that the addicts’ suffering is bad, but is more willing to sacrifice them because their suffering is self-inflicted, while D considers their suffering to be an actively good thing.
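    A minimal Python sketch of this arithmetic, using the illustrative counts and QALY losses from the comment (the `utility_of_ban` helper and the weight values are hypotheticals for this comparison, not a real policy model):

```python
# aphyer's weighted-QALY comparison (all numbers are illustrative).
N_DEPRESSED, LOSS_DEPRESSED = 1000, 10  # QALYs each depressed person loses if banned
N_ADDICTS, LOSS_ADDICTS = 1000, 15      # QALYs each would-be addict loses if not banned

def utility_of_ban(addict_weight, depressed_weight=1.0):
    """Net QALYs of banning vs. not banning, under per-group moral weights."""
    harm_prevented = addict_weight * N_ADDICTS * LOSS_ADDICTS
    harm_caused = depressed_weight * N_DEPRESSED * LOSS_DEPRESSED
    return harm_prevented - harm_caused

for label, w in [("A", 1.0), ("B", 0.5), ("C", 0.0), ("D", -0.5)]:
    print(label, utility_of_ban(w))  # A +5000, B -2500, C -10000, D -17500
```

    The sign of the answer flips as soon as the addicts’ weight drops below 10/15 ≈ 0.67, which is the whole point: the policy conclusion is driven by a weighting choice the utility function usually leaves implicit.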

    And (coming back to motte-style utilitarianism) I think utilitarianism as a means of looking at moral problems like this is MUCH CLEARER than other ways! In fact, I think pretty much the whole of Scott’s post can be summarized as follows:

    Many of Scott’s friends interpret people who talk about ‘desert’ as assigning NEGATIVE moral weight to people who do bad things. However, ‘desert’ can much more reasonably be interpreted as assigning LOWER moral weight to people who do bad things. This still leads to many of the same policies, and is much more understandable.

    • Charles F says:

      I think the first sentence of your summarization is accurate(ish), but the second gets the post’s point wrong. The idea of desert presented in the post is not the same as assigning a lower moral weight to people who do bad things. If I’m interpreting Scott correctly, in the drug scenario, he would not advocate for causing some arbitrary harm to the druggies instead of a somewhat smaller harm to the depressed people. It’s specifically the harm done by misusing the drug that’s being weighted lower while other, unrelated, harms to those people get similar weights to what they would with non-druggies.

      • aphyer says:

        Not sure I agree here. Imagine that we could follow a policy of banning the drug but then somehow moving 12.5 units of utility from each potential druggie to each depressed person. (Redistributive taxation using some magical knowledge of which people would be druggies?) This is a strict improvement over not banning the drug.

        • Charles F says:

          I’m not sure what your point is. It’s true that that policy would be a strict improvement, but as far as I can tell that doesn’t imply a lower moral weight for druggies.

          • aphyer says:

            The point is that if you weight ‘harm caused by drug’ lower than ‘other harm to depressed people’, you will pass up strict improvements like that.

            Moving from a world where the drug is not banned to a world where the drug is banned but then utility is transferred from druggies to depressed people in order to make up for it is a strict improvement under most ways of thinking about the problem. However, if you artificially apply a lower weight to ‘harm caused to druggies by drug’ than you do to ‘harm caused to druggies by any other source (e.g. magical redistributive taxation)’, you will pass up that improvement.
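            With the numbers from the drug example, the strict improvement is easy to verify (the 12.5-QALY transfer is the hypothetical from this comment, and equal group sizes make the transfer one-to-one):

```python
# Per-person QALY changes relative to the no-ban baseline
# (numbers from the drug example; the transfer is hypothetical).
depressed_change = -10.0  # banning costs each depressed person 10 QALYs
addict_change = +15.0     # banning saves each would-be addict 15 QALYs

transfer = 12.5           # magical redistribution, addict -> depressed person
depressed_change += transfer
addict_change -= transfer

# Both groups end up strictly better off than in the no-ban world (+2.5 each),
# yet a rule that discounts only "harm caused by the drug" would reject the ban.
assert depressed_change == 2.5 and addict_change == 2.5
```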

    • Bugmaster says:

      Assign each human a moral weight based on their behavior — higher for people who behave better.

      Under Utilitarianism, you cannot do this without presupposing some sort of a specific utility function — because, without one, words like “better” have literally no meaning.

      • aphyer says:

        Yes, this is true. You also can’t define a QALY without presupposing some sort of utility function, for the same reason. In practice it’s essentially never possible to completely define a utility function: instead I will plead ‘you know what I’m trying to get at, right’?

        • Bugmaster says:

          It sounds like you’re getting at something like, “Utilitarianism is meaningless because there’s no universally agreed upon utility function, and (by definition) no way of comparing utility functions against each other; so we might as well forget all about it and just go with our moral intuitions, instead”.

          • aphyer says:

            Ah, no, sorry if I haven’t been clear. Let’s try this as an explanation; if this doesn’t work I’ll just chalk it up to me being tired and incoherent.

            I like utilitarianism a lot. This isn’t because I think there’s universal agreement on a utility function! I don’t even think I can clearly and unambiguously define my OWN utility function, much less expect that if I somehow did and then you did the same we would agree! I like utilitarianism because I think that if you try to talk about moral questions ‘in a utilitarian way’ (yes, I know this phrase is itself fuzzy and ill-defined) that makes it much easier to make good arguments and much harder to make bad arguments.

            Consider as a hopefully-not-too-super-controversial example ‘should we allow markets in organ donation’? (I think the answer to this is obviously ‘Yes’.)

            If you’re not trying to be utilitarian, the typical argument against this is something like ‘I can’t believe you think poor people should be cut up and used as spare parts to keep rich billionaires alive!’ If you insist on expressing things in utilitarian terms (even with the understanding that the stated utility functions will be fuzzy approximations), it gets MUCH HARDER to make bad arguments and MUCH HARDER to hide the weak/evil points in your position. A (potentially strawmanned) dialogue:

            Q: I just think it’s wrong for poor people to be treated as nothing more than sacks of organs!

            A: 20 people die every day due to a lack of organ transplants. Can you state what term in your utility function penalizes this treatment of poor people so severely that it outweighs 20 deaths per day?

            Q: Well, it seems unfair that rich people get to live longer than poor people!

            A: Can you state what term in your utility function penalizes this unfairness so severely that it outweighs 20 deaths per day?

            Q: Well, I like people living longer, but I don’t like rich people getting to live longer than poor people. Maybe my utility function is something like [Average Life expectancy] – [Difference in life expectancy between average rich person and average poor person]?

            A: This implies that your utility function actively rewards you for shortening the lives of rich people. If Bill Gates dropped dead tomorrow, by the utility function as stated above you would say ‘Yay’! Does that not seem somewhat evil to you?

            I realize this is not an entirely fair way of putting the object-level argument, and if you disagree with me about the object-level question please try to ignore the unfairness and not get diverted. The point I’m fuzzily trying to get at is that, even though we can’t do utilitarianism genuinely WELL, attempting to do things in a sorta-kinda-utilitarian way seems to make things much clearer, and seems to make it much harder to make bad arguments and much easier to identify bad arguments.
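            The perverse incentive aphyer points at can be demonstrated with made-up numbers (the populations and life expectancies below are purely illustrative):

```python
def utility(poor_ages, rich_ages):
    """Q's proposed function from the dialogue:
    [average life expectancy] - [rich/poor life-expectancy gap]."""
    everyone = poor_ages + rich_ages
    avg = sum(everyone) / len(everyone)
    gap = sum(rich_ages) / len(rich_ages) - sum(poor_ages) / len(poor_ages)
    return avg - gap

poor = [75] * 1000
rich_before = [85] * 10
rich_after = [85] * 9 + [60]  # one rich person dies 25 years early

u_before = utility(poor, rich_before)
u_after = utility(poor, rich_after)
# The average barely moves, but the gap term shrinks a lot, so the
# function scores the world as *better* after the early death.
assert u_after > u_before
```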

          • Hoopyfreud says:

            I think my objection boils down to the fact that you’re defining bad arguments as those which cannot be expressed in utilitarian terms. For myself, this compromises the foundations of my positions so much that my answer to,

            Can you state what term in your utility function penalizes this treatment of poor people so severely that it outweighs 20 deaths per day?

            Would be “No, I cannot contort my morality into the shape of a utility function for the sake of this discussion. It’s not locally linearizable here.” And then what?

          • Bugmaster says:

            I don’t think arguments along these lines are very persuasive. If I were a Utilitarian, I’d say something like, “your solution creates perverse incentives for harvesting organs from poor people, and such incentives will ultimately create too much disutility”. If I were a Deontologist, I could say, “nope, organ trade is just wrong, period”. If I were some other flavor of moral philosopher, I could say, “under your system, it seems like we could justify almost anything for the Greater Good; historically, that hasn’t worked out so well”. These are just some examples off the top of my head, I’m sure there are hundreds if not thousands more.

          • Faza (TCM) says:

            [U]nder your system, it seems like we could justify almost anything for the Greater Good; historically, that hasn’t worked out so well

            Isn’t that pretty much the core of all critiques of utilitarianism?

            Scott’s post revolves around the problem of inflicting suffering on the ostensibly blameless party, in order to prevent an even greater suffering being inflicted on the culpable party. The math checks out, so it’s all good!

            All flavours of utilitarianism I am familiar with tend to gloss over the fact that utility functions are being mapped over real, living, breathing people. People who might understandably object to having been selected for the negative term of the function, regardless of how many children in Africa are saved.

          • Bugmaster says:

            @Faza (TCM):
            I would like to agree with you, but I’m not sure if I can, given the existence of the Trolley Problem.

          • JohnBuridan says:

            I think what aphyer likes about utilitarianism is that it emphasizes weighing and inspecting the consequences and tradeoffs of human action with a doggedness which no other ethic shares.

            Virtue ethics says, “Seek Prudence, the virtue that pertains to cognition and foresight in various matters!” Yeah, thanks for the help, Aquinas, but you could use a little more development.

          • Faza (TCM) says:

            Could I persuade you to unpack this a bit, because I’m not sure how the Trolley Problem relates to what I wrote.

          • aphyer says:

            @JohnBuridan: Ha. That does sound pretty accurate, yes.

  59. Philosophisticat says:

    This is the flip side of what I think is a fairly commonsense way to think about desert in the context of punishment, self-defense, etc. It is natural to think that people have by default a kind of moral shield against certain sorts of things being done to them even for the common good – i.e. being locked up, tortured as a deterrent, etc. But through exercising their agency by committing crimes, threatening someone, etc., they can weaken this default shield, and consequentialist considerations of deterrence etc. can get a grip. This is also a view on which you don’t punish people because bad people suffering is itself a good thing – you punish people for consequentialist deterrent reasons, but their free choices are what allow these reasons to overcome the rights people have against being used in this way.

    The view here seems to work the same way but starting from a consequentialist default – by default, everyone is entitled to having their interests considered impartially etc. But through exercising their agency, they can weaken this default entitlement, etc.

  60. Jake Abdalla says:

    I wonder if the “desired” answers here just have higher utility. I know there were disclaimers to the contrary in the examples, but I think some magical godlike utility calculus would be heavily weighted in favor of the preferences you mention here up until some more extreme point where self-nominating suffering dwarfs the involuntary suffering.

  61. I think that most of these dilemmas that seem to express themselves as a conflict between recalcitrant moral intuitions and utilitarian calculations are really conflicts between self-interest and utilitarianism. In the Trolley Problem, e.g., people can often acknowledge that the right thing to do really is to push the fat man off the bridge, but they also say they probably just couldn’t do it because they would feel too guilty. (We know that our feelings of guilt are imperfectly calibrated to moral outcomes that we reflectively endorse, and we have a decent idea why.) Nobody likes to feel guilty, but avoiding feelings of guilt is a self-interested aim, not a moral aim.

    The examples provided here don’t seem susceptible to this analysis, but I think they might be. In each case, my tendency to reject the utilitarian solution is due to having greater empathy for the people who would get the short end in that case. I feel greater empathy for people who only take drugs according to the recommended dose, who don’t harass people and who are capable of rational time-discounting because that is how I think of myself and I naturally empathize with people like me. Wanting things to go well for people like me isn’t exactly self-interest, but it is close enough. Overall I want a world that is more conducive to people like me than people who are very different than me, especially people I don’t like very much.

  62. deciusbrutus says:

    What’s the observable difference between giving the short end of the stick to people with brains that have addictive tendencies who are constantly addicted to any addictive substances in their environment, and giving the short end of the stick to kids who melt if they accidentally take the miracle anti-addiction pill that solves all the addiction problems of the first group?

    Do your intuitions say “The kids poisoning themselves on the cure are innocent, but the people with a documented inability to control their addictive behaviors without this cure are nominating themselves for the short end of the tradeoff”, or do your intuitions say “The people who need the medicine are nominating themselves for the short end of the tradeoff, and the kids who don’t know better than to pick the lock on the medicine cabinet are the innocents who need to be protected”?

    It seems to me that since either party in any of those stories could easily be interpreted to be “nominating themselves for the short end of the stick”, the entire idea is the result of an attack on your ability to reason about harm, by slipping in the idea that some people deserve harm more than others, and that coincidentally those people are always the people who get the short end of the stick in the solutions to those problems that you prefer.

  63. JulieK says:

    I think in a lot of potential tradeoffs, you could think of reasons why both parties bear some responsibility for their own predicament. So this approach is not going to be so helpful, because it will be easy to convince yourself that side X bears more responsibility and ought to get the short end of the stick.

  64. ing says:

    I would like to fight the hypotheticals. : )

    In case (2), the hypothetical is that somebody is serially harassing women, and we’re asked to consider the possibility that maybe the one person’s pain is more significant than the large number of people he’s hurting. I think this is a weird hypothetical.

    In case (3), the objection being offered to basic income is that it might make some people sad because they would choose not to work. This is a stupid objection. The correct objection to basic income is that it costs a lot of money, which presumably gets taken from people who do work, and then those people get tired of supporting everyone else and stop working and the whole system collapses. It’s not clear whether, in this hypothetical, we’re somehow getting a lot of money from automation, or we’re just not thinking about the cost of our policies.

    In case (1), I admit that I’m not totally clear on how addiction happens. This drug is described as one that nobody could ever get accidentally addicted to. Do real drugs work like that? I suspect that real drugs do not work like that.

    I think we should be careful about drawing conclusions from these weird hypotheticals and trying to generalize those solutions to reality.

    • deciusbrutus says:

      Case 2 isn’t a hypothetical. It’s a poorly summarized, poorly anonymized thing that is actually happening/has actually happened. And it’s setting up to be very divisive between people who think that 17-year-old girls serially moving in with much older men are “nominating themselves for the short end of the stick” and those who think that the older men they don’t like are.

  65. raj says:

    I’m a utilitarian and sympathetic to desert. I don’t see the issue with any of the three scenarios, because in reality the correct utilitarian answer agrees with desert theory (which evolved as an intuitive way to enforce a more-or-less sane incentive structure). The logic contradicting it seems to be a sort of straw-man utilitarianism, only able to concern itself with the immediate happiness of those affected – rather than second-order effects which we intuitively understand to be significant, but which are harder to see and quantify.

    Case (1): I’m surprised anyone here is actually in favor of drug prohibition from a utilitarian perspective. In particular a hypothetical wonder-drug. I’d like to see any evidence that prohibition helps people. Don’t addicts just find replacements? And do we really want to burn the commons because some minority tends to hurt themselves there? Let that behavior select itself out.

    Case 2 hardly bears mentioning. In the iterated prisoners dilemma, you need to do something like tit-for-tat or you will be eaten.

    Basic income currently isn’t a trade-off between “miserable jobs” and “pointless lives”, but “miserable jobs” and “a totally unrealistic economic policy”. It’s a discussion for another day.

    • Edward Scizorhands says:

      Basic income currently isn’t a trade-off between “miserable jobs” and “pointless lives”, but “miserable jobs” and “a totally unrealistic economic policy”. It’s a discussion for another day.

      Some day Scott will make a post about UBI where he isn’t talking past the sale about the inevitability of AI domination, and he will address serious problems with the proposal. Today is not that day.

      I’m not just worried that UBI recipients will sit around feeling unfulfilled. I’m also worried that we will get even more NEETs seeking the meaning of life through Twitter mobs trying to get the people who do the actual and necessary work fired.

      • JohnBuridan says:

        I think a UBI would likely enable new forms of wealth extraction through the creation of more economic fundamentals. My napkin sketch of the things society requires for work but doesn’t help me acquire: a car, a phone, internet, a computer (laptop?).

        A car costs about $6,000 per year, is required for work, and in my state you have to pay a tax for owning it. A decent cell phone will probably cost $800 a year. Home internet costs $500 per year, and a laptop will probably cost about $450 per year.

        If I make $32k and get 24k after taxes and only 16k after paying for the “luxury items” that allow me to do my job then I need to get food and rent and health insurance for 1.3k per month. It’s difficult to do.
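        Spelling out the napkin math (all figures are the rough estimates above):

```python
# Checking the napkin math (all figures are rough estimates from above).
gross = 32_000
after_tax = 24_000
work_costs = 6_000 + 800 + 500 + 450  # car + phone + internet + laptop
remaining = after_tax - work_costs    # what's left for everything else
per_month = remaining / 12            # budget for food, rent, insurance

assert remaining == 16_250            # the "only 16k" figure
assert round(per_month) == 1_354      # roughly the 1.3k/month figure
```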

        I worry that a UBI will just cause society’s corporate infrastructure to add some other item to the list of “required items to engage in the economy.”

        Edit: I do think a UBI would help the poorest people, and people who would like to take more entrepreneurial risks.

        • The Nybbler says:

          What job requires a car, home internet, and a laptop, and pays $32,000/year? Phone is cheaper than that too, if you go prepaid and use it for little.

          • JohnBuridan says:

            Civil servants, teachers, local government employees, event coordinators, day cares, account managers for small business, data entry jobs, graduate school assistants, interviewers, real estate agents -1sd.

          • The Nybbler says:

            Most of those shouldn’t need a laptop, or if needed it should be provided by the employer. Several of those (day cares, data entry jobs) shouldn’t need home internet or a laptop.

        • jaimeastorga2000 says:

          The hell? Your numbers are ridiculously overinflated.

          A car costs about $6,000 per year, is required for work, and in my state you have to pay a tax for owning it.

          My current (used) car cost about half of that to purchase outright. Are you also counting insurance, gas, maintenance, parking, tolls, and traffic tickets as part of the cost of ownership of the car? That plus the amortized cost per year of the car might get you somewhat close if you have a long commute, but I still think you are overestimating by at least a factor of two.

          A decent cell phone will probably cost $800 a year. Home internet costs $500 per year, a laptops will probably cost about $450 per year.

          I got my laptop for $200. I expect it to last me at least five years on average from the date of purchase before breaking or being lost or getting stolen or becoming so obsolete that I have to upgrade, so the amortized cost is at most $40/year. That’s an order of magnitude less than what you think a laptop “probably” costs.

          My cellphone cost $20 (the $200 price you see now is because the model has been discontinued) and I purchase a $100 card once a year to keep it working. It’s not a “decent” cellphone, but if I already have the internet and a laptop, why do I need a smartphone and a data plan? Conversely, if you DO have a smartphone with unlimited data, why do you need home internet? Just tether the phone to your laptop and browse from there. Or, if your job doesn’t require 24/7 internet connectivity and only requires you to check your e-mail once a day or something, maybe just forego paying for internet altogether and use the library, which is free.

          Overall, I feel like you could really use some Early Retirement Extreme mindset. I recommend “Frequently Asked Questions”, “How I live on $7,000 per year”, “How to retire in 5 years”, and this awesome reddit comment.

          • The Nybbler says:

            AAA puts total cost of ownership of a new car over the first five years at $8850. This is very high, but the key point is “new car”. Average age of the fleet right now is over 11 years.

          • jaimeastorga2000 says:

            AAA puts total cost of ownership of a new car over the first five years at $8850. This is very high, but the key point is “new car”. Average age of the fleet right now is over 11 years.

            Yeah, sounds right. My car is from 2005/2006 (can’t remember which right now), and I got it a few years ago. My mom still drives a 2001 car that was purchased new. Neither of us is spending $8000+ per year on our cars.

  66. slate blar says:

    This argument basically boils down to “Society shouldn’t harm some people so that other people who constantly make short-term-gratification decisions can be helped, regardless of the net utility calculation”.

    You ‘choose’ (or are deterministically driven) to satisfy your immediate urge, and that damages your long-term utility.

    Society can prevent you from doing that (at some cost to some people) or not.

    An interesting, sort-of-related point: if you combine this with a deterministic world-view, this is sort of a probabilistic form of eugenics. You are identifying individuals with poor traits (or at least traits the society views as poor) and somewhat reducing their reproductive fitness (though not by 100%).

    The converse, harming long-term thinkers for the benefit of short-term thinkers, does the opposite.

    This latter view of it makes me quite uncomfortable, though I think it describes the nature of what is happening when societies punish certain behaviors (though is this biological determinism? who knows).

  67. jonm says:

    Strangely these examples just don’t hit the intuition pump for me.

    1) Provided the utility of banning the drug outweighs the utility of treating those patients, I would be in favour of the ban. Drug addiction has a lot of externalities so it’s not just the lives of the drug users you’re harming by letting them abuse it.

    2) I think your appeal to rule utilitarianism seems like the right response here and I doubt the claimed balance of utilities in the real world.

    3) Basic income appeals to me insofar as it helps solve social problems. If it empirically doesn’t, then it’s not very appealing, even if it fails because of lack of willpower (see also abstinence-only teen pregnancy prevention).

  68. dlr says:

    Does the ‘utilitarian calculation’ include the changes in behavior that are going to occur if you stop exposing people to the consequences of their actions, so that they are no longer even slightly incentivized to do the right thing?

    Once you don’t kick out the jerk, everyone else (and the jerk) gets the message that that kind of behavior is OK, and it becomes way more common. I bet that ‘utilitarian calculations’ don’t include this first-order effect.

  69. Dindane says:

    Is it okay to nominate someone else for the short end of a tradeoff?

    If not, why is it okay for one part of a person’s mind to nominate their entire mind for the short end?

    I think I also like this as a new argument against naive conceptions of revealed preferences. “Alice is always mean to Bob, so clearly the Alice-and-Bob system has a revealed preference for the Bob part of it being miserable” is clearly bunk, so why would it be automatically valid when the system is a single human whose mind is still a complex object?

  70. realamerican1 says:

    Natural Selection as a moral principle (think Herbert Spencer) – this seems to be the paradigm which best explains this phenomenon for me. Have you ever stopped to think that your instincts are the product of thousands of years of natural selection and that they don’t need any kind of justification? You have the genes which cause these instincts because these instincts are successful life strategies (at the individual and societal level).

    1 – It’s OK to let drug addicts die – I think there is a reason we feel the need to help people who are in distress through no fault of their own, and that we often don’t feel the same way about those who are suffering due to their own foolishness or stupidity.

    2- Societies which purge sexually harassing deviants survive.

    3- The UBI case seems to be a natural disgust for those who lack self control.

  71. Nietzsche says:

    I find all three of these situations joining the increasingly numerous ranks of problems where my intuitions differ from utilitarianism. What should I do?

    Ummm… finally give up utilitarianism as an irredeemably flawed moral theory? People love it because it is easy to understand, you can apply it in every situation, and the idea of calculating utility functions has a nice aura of mathematical exactness about it. Those are bad reasons for accepting a bad theory.

  72. syrrim says:

    The problem, perhaps, of applying utilitarianism to these situations is that it has great difficulty considering the really long-term effects of our decisions. In short, catering to a particular group makes that group more powerful in the future. Or: over long enough time scales, all actors behave rationally.

    Consider the termites that invade your home. They destroy your walls, and you want to get rid of them. A utilitarian enters and points out that, due to their number, the preference of the termites outweighs your own. Furthermore, you have the rational capacity to leave your home, whereas they do not have the same capacity. You would have to kill them to get rid of them, but if you left you would stay alive. What is your response? Mine is this: these termites may not learn their lesson if I kill them, but every time a termite is killed, termites as a whole will become smarter. If I let them invade my home, their genes would be very successful, and their babies would go on to invade more homes, thereby making the world an exponentially (in each generation) worse place. If I kill them now, this won’t happen. Furthermore, the termites remaining will be slightly more inclined to avoid houses. Repeated enough times, this would cause them to invade fewer homes, thereby worsening fewer people’s lives, and crucially, making themselves less likely to die as well. To summarize: even as termites individually are irrational, termites as a whole are rational, and therefore respond to incentives.

    The same thing, I think, applies to humans, and to the 3 situations listed. The primary difference with humans is that we should never kill them (or castrate them) to try and control their behaviour. The people abusing drugs shouldn’t be shot, nor given drugs, nor even allowed to have drugs. But if they happen to get drugs despite our best efforts, and if they die as a result, we shouldn’t shed too many tears. This law also applies even when death isn’t the result. Someone who is rejected from a community might have less capacity to have children, even as they live their full lifespan. Someone who is made more sad may invest less in their children, worsening their genetic fitness.

    • acymetric says:

      It seems like you have to start considering the time value of utils to really get this right though. I mean, the 7th generation thing seems like a nice principle, but there is a reason it is hard to get society to coordinate in that way (and maybe reasons why they shouldn’t). The improvements you are talking about are generations in the making (and it may not even turn out to work out that way, which would be a real disappointment after all that time and apparently needless suffering), and in the meantime a lot of people other than the addicts themselves are also going to suffer as a result of the now floundering addict class. Of course, this is also predicated on some false dichotomy where we either help addicts or help some other group, leaving addicts and potential addicts twisting in the wind. It seems likely that a good utilitarian function would require providing assistance to both, not choosing one over the other.

      • syrrim says:

        I don’t think there can be a time value of utility. I think that this must be equivalent to egoism. It amounts to preferring your kin over others, which in turn amounts to preferring yourself. Utilitarianism is that mode which steps back as an ambivalent actor and asks “what would be best for this society?”. Surely such an actor would not prefer one point in time over another, just as they don’t prefer one individual over another.

        it is hard to get society to coordinate in that way

        In general, people who are less happy, who are hurting themselves, will naturally be less fit. The only coordination required is to avoid increasing their fitness, which most people are all too happy to do.

        Of course, this is also predicated on some false dichotomy where we either help addicts or help some other group leaving addicts and potential addicts twisting in the wind.

        One real-world example of this phenomenon might be opiates. These are prescribed to people to help deal with pain, an obvious utilitarian positive. Some fraction of people get addicted, and finding their prescription running low or too expensive, will seek illicit variants. Their habit might start hurting them, and they might eventually overdose. The solution to this is to ban prescription opiates – hurting those who benefit from them. It is less obvious that overall (given a single point in time) this would be a utilitarian positive, but I don’t think it is overly contrived to say it might be. Note though that we needn’t leave the addicts twisting in the wind. We can still offer to help them, we just avoid doing so at the expense of the other group.

  73. Michael Cohen says:

    There’s another thought experiment that intuitively favors desert utilitarianism. (That’s the name for when the utility of good people counts for more than the utility of bad people; it doesn’t necessarily mean the utility of bad people counts for nothing.) Imagine two possible universes. In universe 1, planet A has 1000 virtuous people with utility 100, and planet B has 1000 vicious people with utility 5. In universe 2, planet A has 1000 virtuous people with utility 5, and planet B has 1000 vicious people with utility 100. If we’re not perfectly indifferent between these two possible worlds, it suggests some form of desert utilitarianism.

    For me, I find it persuasive, and I’m not sure, but I think I still reject this as a faulty intuition.

  74. scottmauldin says:

    I was inspired to write a response to your Whole City is Center post, around the difference that you touch on between explanation and excuse – To Explain is not to Excuse.

    This shows up in many other areas, such as body shaming, poverty discourses, etc. I think there’s a balance to be struck between “explain everything, condemn nothing” and “condemn everything, explain nothing”. It’s important to break the perception that explanation and condemnation are some kind of substitutes for one another and that they exist on the same spectrum. Rather, one can ideally strive to explain everything and then figure out what to condemn after the fact, and not let the status quo become synonymous with the should.

  75. ec429 says:

    One option is to dismiss them as misfirings of the heuristic “expose people to the consequences of their actions so that they are incentivized to make the right action”. I’ve tried to avoid that escape by specifying in each example that even when they’re properly exposed and incentivized the calculus still comes out on the side of making the tradeoff in their favor.

    So, you kind-of dance around this in the following paragraphs with all the talk of Schelling fences, but I suspect what’s really behind this is something like the NTICTD rule: just because the utilitarian calculus favours the non-desert solution doesn’t mean you can ever have enough evidence to believe that in the presence of uncertainty. If Omega comes down from on high and tells you that banning the drug will increase total happiness, what are the odds that actually happened, vs. you hallucinating Omega?

    Misaligned incentives and agency problems are so overwhelmingly common (mumble inadequate equilibria mumble) that it makes sense that we’re deeply suspicious when someone says “This intervention screws up incentives, but don’t worry because it’ll still work out just fiiiine”.

    And people in whom that heuristic dominates, tend to become libertarians. I would say “welcome to the club”, but you’ve been hanging around just outside the door for years now 😉

  76. Worley says:

    It seems to me that you’re looking at this somewhat incorrectly. The real question is, “Will society A out-compete society B in real-world situations?” The answer to that tells you what sort of world will exist after a long enough period of contact between the two societies.

    There’s no way to escape that sort of analysis, because the principles we use individually to decide “How should society work?” are just memes that have been installed in our heads from the great pile of memes that constitute a society that has been very successful against other societies. The popularity of utilitarian arguments shows that societies that give respect to utilitarian arguments have an advantage over those that do not …

  77. Sniffnoy says:

    So, I hope you’ll forgive me for bringing up once again such an old, combed-over CW topic, but–

    This, or maybe not this but something like this, is one of the things that bothers me about the whole incorrect-but-unquestionable-feminist-restrictions-on-dating thing (as discussed in your Meditations series on your old blog and to a lesser extent here). Because, you see, occasionally you see these incorrect things defended with, well, that’s what it takes to get the message through to ordinary people; it sucks that some people take these ideas so seriously, but that’s what it takes. And to my mind, like, that’s just not right. If you have to choose between telling the truth, so that those who take ideas seriously will get the right message, or distorting things, so that those who don’t know how to take off their common-sense goggles get the right message, you choose the former. It is wrong to (implicitly) punish people for taking ideas seriously.

  78. angularangel says:

    Hmm. I’m a bit late, so someone’s probably already said everything I’m going to say, but there’s a chance no one has and I don’t want to dig through and check every post, so I’m gonna post anyway and hope it’s of use.

    I think that the heuristic you put forth here is of some use. I would generally agree with your decisions in all three cases. I also think, however, that it has significant potential to be applied dangerously, and that you should be very careful to hedge against people doing so.

    For example, not banning the drug despite the fact that it does more harm than good, because the harm falls on those who choose (in as much as anyone can choose to do anything) to misuse it, seems reasonable. (Banning it when it does more harm than good is fairly basic utilitarianism.) But extending that reasoning to “We’ll mount a massive drug war that only makes things worse, jail drug users by the hundreds of thousands, criminalize everything to do with them, etc., etc., because they’ve nominated themselves for the short end of that trade-off” is much less reasonable, and is how I worry that people would actually apply that heuristic.

    I suppose, really, this point boils down to “People are stupid limited meat-creatures with limited meat-brains who will misapply your heuristics, and thus anytime you’re going to give them a heuristic, you need to make sure they also are taught to apply it correctly, or at least warned away from some of the most egregious failure states.”

    Also, when it comes to this heuristic in particular, it has the potential to play into our culture’s (possibly all cultures’, or even all humans’?) perverse need to hurt people, which is something I would want to be very careful about. :/

  79. rangerscience says:

    From what you’ve described, I’m in (almost) complete agreement with you. Where we might disagree is what kind of services to give the people who’ve decided to give themselves the short end of the stick, but I think that just follows from what I think the underlying premise actually *is*, which would be: keep giving people more choices AND keep not making those choices for them.

    I would point to two concepts: “causal entropic forcing” and “instant karma”.

    CEF is basically a formalized, mathematical way to both say and implement “do the thing that opens up possibilities”. In your first two cases, this is clearly what is going on: You are giving people *more* choices and then standing back. They might choose things that limit their possibilities, but *you*, the notional civil organizer, have made the choices that open up more possibilities.

    Karma (as I have experienced it), possibly better termed “instant karma” since AFAIK original karma only applies on death, isn’t so much “you’ve done good things and good things happen” as that the results of your actions come back to you in such a way that you can learn from them… and that it’ll keep happening until you learn from what you’re doing, good AND bad. This isn’t quite the case in the third example, but it could be (as it’s non-repeating, or rarely-repeating): rather than ostracize the person, bring it up with them all the time, every time, so long as they keep up the behavior.

    The problem with addiction – and then also ostracizing – is that there is a category of choice you make from which you cannot learn. A possibly defining aspect of this category is “cannot make this choice more than once”.

    Huh. This was a bit make-it-up-as-I-go, but I could come back to this? I think there’s something here, where the two principles in opposition (find the balance point, find the good place!) are “give people choices” and “problems with choices you can make only once”.

    Edit: OH! Choice Empowerment VS Akrasia…?

    So, (when?) should you fight people’s akrasia on their behalf…?

  80. Andrew Klaassen says:

    It seems that you’re coming around – from an odd direction – to an idea that most of us try to teach our children: Actions have consequences; you have personal responsibility for your actions.

    Parents differ in how strictly we stick to the letter of this law. Some parents strictly apply all promised consequences: You have transgressed, you will be punished. Some parents mix that with sympathy for situational factors: I’ll let you off this time, because I know you’re exhausted. And some parents don’t enforce promised consequences at all, nor do they let any natural consequences take their course if they can intervene.

    I try to be in the middle group, where it seems that you’re also trying to end up. I’m making it up as I go along, and I wouldn’t even dignify my process with the description “heuristic”. It’s guessing, and learning as I go, and hoping I’m not completely screwing up. I tried to come up with consistent moral systems when I was younger, but since then I’ve realized how completely that pursuit falls apart when one is dealing with ambiguous moral situations multiple times per day. There isn’t time to do the calculations of moral logic, and they probably wouldn’t be correct in the end anyway.

    I think that you could call any particular system of moral logic a heuristic: It gives an answer in a reasonable amount of time, but it isn’t guaranteed to be the best answer. Keep in mind that that’s what you’re pursuing. Moral logic is never going to give you anything better than a heuristic.

    Being a caregiver of a young child presents you with moral dilemmas constantly, and you might find working in a daycare an interesting exercise if you want to test your moral ideas and moral processes in the real world. The height of human violence happens around the age of two – though thankfully the violence isn’t very effective – and most of us are more selfish at that age than we will ever be again. As a caregiver, you have to make choices about how you explain the moral world; your explanations will have different effects on different children. You have to make choices about how and when to intervene. Do you impose artificial consequences? Do you let natural consequences play out? What if letting natural consequences play out leads to a roomful of screaming, fighting children? What moral choices do you make when you’re exhausted and you’ve been hit in the face with a phlegm-covered rubber duck for the twelfth time today? And how do you judge yourself for them?

  81. MB says:

    Most of the people who would end up being basic income recipients may get bored with idleness and take up not a job, necessarily, but harmful or self-harmful activities.
    The reasoning behind this assertion is that if they had good self-control, interesting and useful hobbies, were self-motivated and disciplined, if they were trying to find meaning in their lives in a socially approved way, then they would fare quite well under the current system.
    So it’s more likely that the guaranteed income recipients would end up abusing drugs, enjoying petty thefts and vandalism, picking quarrels with acquaintances and strangers, getting tattoos or other forms of self-abuse to help them express their true personality — anything, that is, but another routine job.
    One can already find fulfillment in many careers and occupations, if one is reasonably easy to please. Or many people take unfulfilling jobs in exchange for more time with their families or hobbies that they find meaningful.
    But obviously we are not talking about such people. Given a basic income, many of the currently unsatisfied people would just give in to their worst instincts and become an even greater burden on society than they already are right now, when they are employed in a menial make-work job.
    This may not be the argument SA is responding to, but definitely this is what most people refer to when they talk about “structure and purpose”: people on basic income could end up not only wasting their lives, but also harming themselves and others.

  82. Manx says:

    So, someone probably already said this, but there are a lot of comments I haven’t read yet. I don’t think you can draw an equivalence between 1 and 2 and 3. I don’t necessarily agree with the conclusion of 3, but it is not the same kind of thing regardless. Numbers 1 and 2 involve people blatantly violating rules which are clearly stated and everyone knows. Number 3 is a lot trickier. It’s a matter of social engineering and social norms – which is not a choice an individual makes. If the norm is to not work, then people will assume that is fine. People might not realize that stopping working is going to make them depressed. They might assume that they are in the group that can handle it, and then they can’t, and then they are too unskilled or depressed to join the work force. Maybe that is the majority of people. I don’t know. Again, I’m not saying this is true, I’m saying it is a different class of problem.

    Some other examples – Atomization in America. People grow up thinking this is normal. It makes them isolated and depressed and they don’t know why or how to fix it. It’s a *societal* and *cultural* problem. Maybe subsidizing all those highways to the suburbs, and expecting everyone to go far away for college, and letting everyone put their parents in free nursing homes wasn’t that great an idea after all…

    Another drug example that is more relevant and more similar to #3 than to #1: Opiates for chronic pain. Sure docs tell patients that they are addictive, that they shouldn’t take too much, and shouldn’t be on them for too long… But once the patients are on them, it is really, really, really hard to give up that pain relief. Once tolerance develops, it is really, really, really hard not to take more than the doctor prescribes. And once you’ve been taking enough for long enough, it is really, really, really hard for your whole motivational structure to not be centered on opiates. So maybe, you know, we should be a lot more careful with how much and to whom we prescribe these drugs? Even if it means that someone who would have used them totally responsibly might not be able to get them?

    I really feel you motte-and-baileyed this one.

  83. SaiNushi says:

    I shared this post with my partner, who likes to poke holes in things. Here’s what he pointed out as a problem with each scenario. (He was doing a long drive, and needed to be kept awake, and I ran out of things to talk about over the phone…)

    1. The idea that nobody can get addicted by accident is false. Pranks happen. Imagine, a friend gives you cupcakes, and beyond a friendly “don’t eat them all at once!” gives you no warning. You disregard the line, because it’s akin to “don’t spend it all in one place”, and sit down with a giant mug of milk to binge on Netflix and cupcakes. You suffer a bad trip, and get rushed to the hospital, but the damage is done. Pretty soon, you enter withdrawal, and are forced to ask your friend what was in the cupcakes.

    2. My partner believes that there are therapies which can help a man who compulsively harasses women once a month. As he tends to have more random knowledge than me, I’m inclined to believe him. Thus, the first solution is to see if therapy can help the man, not kick him out of the community.

    I don’t remember what his objection to the third scenario was… there might have been a topic change before we got to it.

    • Sniffnoy says:

      2. My partner believes that there are therapies which can help a man who compulsively harasses women once a month. As he tends to have more random knowledge than me, I’m inclined to believe him. Thus, the first solution is to see if therapy can help the man, not kick him out of the community.

      I mean, this is just rejecting the hypothetical. This is saying, but you don’t have to make a tradeoff! OK, so maybe you don’t in this case. But what about when you do?

      1. The idea that nobody can get addicted by accident is false. Pranks happen. Imagine, a friend gives you cupcakes, and beyond a friendly “don’t eat them all at once!” gives you no warning. You disregard the line, because it’s akin to “don’t spend it all in one place”, and sit down with a giant mug of milk to binge on Netflix and cupcakes. You suffer a bad trip, and get rushed to the hospital, but the damage is done. Pretty soon, you enter withdrawal, and are forced to ask your friend what was in the cupcakes.

      Again, I don’t think this has much to do with the substance of #1. The substance of #1 is about balancing getting help to people who need it vs. providing people with the ability to hurt themselves. What about people who use this to hurt others? Well, then they’ve committed a crime or tort against that person, which is an entirely separate matter, which can be handled by the legal system in the usual manner, and which doesn’t really touch on this tradeoff.

      • brmic says:

        this is just rejecting the hypothetical

        Yes! So what?
        Most non-canonical hypotheticals are horribly lopsided and implausible and wear some real-life scenario like a Halloween mask. They should be rejected because they don’t tell us anything beyond the trivial fact that if you put your thumb on the scales hard enough, anything can be made the correct conclusion.

    • J Mann says:

      On #2, the original hypothetical was:

      Suppose you’re in a community where some guy is sexually harassing women. You tell him not to and he keeps doing it, because that’s just the kind of guy he is, and it’s unclear if he can even stop himself. Eventually he does it so much that you kick him out of the community.

      So this guy has had warning and an opportunity to desist, but hasn’t. Presumably, during the time for his warning, either he didn’t sign up for therapy or he did go to therapy and it didn’t work, at least not yet.

      Now it’s possible that more therapy might work, but you’re still presented with the choice of do you (a) kick him out of the community until you are reasonably confident that he won’t harass anyone, or (b) do you leave him in the community while he’s working in therapy and accept the risk that he’ll harass more community members in the future?

  84. These people tend to imagine the pro-desert faction as going around, actively hoping that lazy people (or criminals, or whoever) suffer.

    Judging by my moral intuition, lazy people and criminals are in different categories. A lazy person doesn’t deserve to have lots of nice stuff, since he hasn’t done the work to earn it. But if he happens to get lots of nice stuff–wins the lottery, or is born on a tropical island, or has rich parents–there is no reason for me to object or to want it not to happen.

    On the other hand, if someone has actively done bad things–stolen, murdered, tortured people, or the like–I do want him to have bad things happen to him. It isn’t just that he, like the lazy person, doesn’t deserve good things. He, unlike the lazy person, does deserve bad things.

    I suspect that that intuition is very widely shared.

    I should probably add that I stopped reading comments on this post quite a while back, so apologize if I am repeating what someone else already said.

  85. antilles says:

    That just sounds like a version of contractualism. Which is fine, I like contractualism.

  86. wiserd says:

    Why do so many people assign zero utility to personal choice itself? If we assigned some measure of inherent utility to personal choice, even when choices lead to bad outcomes, at least some of these problems would be less problematic.

  87. en says:

    Utilitarianism by itself is a meta-ethical framework. It only becomes ethics proper when you substitute in a utility function, and I claim with some confidence that most other value systems (though the disbelievers may not like it) can be described in a utilitarian framework, allowing for things like infinite utility for deontology, etc. Indeed, utilitarianism’s most powerful use is as a mirror that allows us to reflect on and compare our value functions in some common meta-ethical language.

    The post’s conclusion that the examples brought forward somehow question “utilitarianism” is suspect. They at most question a particular utility function: one that assigns equal value to all human experience, with no weight afforded to freedom of choice and no weight afforded to social standing or other contextual information. I think that’s a stupid utility function, in conflict with our shared moral intuitions, and in conflict (a la mob and magistrate) with the creation of a society where virtue as we intuit it is a stable behavioural attractor.

    I am always puzzled by how intimately LW-types tend to marry this function to utilitarianism as a whole. To speculate a little bit in bad faith, I think that this attachment arises out of a deep-seated moral indignation at the wrongness of unnecessary, punitive suffering that seems to correlate strongly with LW-typedness. This emotional spark then nucleates the protective armor of utility and expands in volume until the function-substitution step is forgotten and the ethics becomes partly conflated with the meta-ethics.

  88. dwietzsche says:

    I think this sort of thing mostly reveals just how useless conventional moral hypotheticals have been for the task of making sense of what morality is. The problem with utilitarianism is that it is built around the idea that you can avoid making a choice (a tic that goes all the way back to the beginning of western moral philosophy) — i.e., that a moral decision is in fact a kind of logical decision with a fixed solution that it is the purpose of a moral philosophy to reveal. In the fullest expression of this kind of systematic approach to moral reasoning, human beings are effectively logic-chopping automatons whose moral effectiveness can be measured by how well they adhere to some vaguely idealized axioms that supposedly govern correct human behavior (despite nobody ever having invented a convincing example of any such system). It doesn’t really matter what kind of system you use, whether you’re a utilitarian or a virtue ethicist or a deontologist: as long as you approach morality with a decision-maximizing heuristic, i.e. the intuition that there is an objectively correct choice for any moral dilemma, you are going to get screwed by something. There will always be some way of assembling the rails of the trolley that results in you being obliged to commit an atrocity on behalf of your claimed priors (which it turns out you don’t actually have, because otherwise you wouldn’t recognize the atrocity for what it was).

    Instead you might just ask yourself, starting from the assumption that the world is full of trade-offs and that in many cases you cannot arrive at an ideal solution to problems of policy and politics: “which world would I rather live in — a world where a certain percentage of people who are prone to drug abuse have their lives ruined by antidepressant X, but where many depressed people receive the benefit of an effective drug, or a world where those abusers’ lives are not ruined, but that set of depressed people is deprived of the drug’s benefits?” And the important thing here is that you can go either way; there really isn’t a right answer. I am inclined to agree with you here, that the potential for drug abuse shouldn’t be a reason to prohibit an effective drug (and perhaps we might do something on the side to try to ameliorate the problems of those abusers if we’re concerned enough to offset those effects). But if I thought I was likely to be among the abuse cohort, or there were many people I knew who I thought would fall in that group, I might find myself going the other way. Life’s full of trade-offs just like that, and very many dilemmas really are dilemmas in the six-of-one, half-a-dozen-of-the-other mold. Which is just to say: you have to make a decision, and since from any maximizing calculus the decision is arbitrary, you need some other basis for making it — and that basis can be anything from advancing a particular aesthetic to just flipping a damn coin.