Ethics Offsets

I.

Some people buy voluntary carbon offsets. Suppose they worry about global warming and would feel bad taking a long unnecessary plane trip that pollutes the atmosphere. So instead of not doing it, they take the plane trip, then pay for some environmental organization to clean up an amount of carbon equal to or greater than the amount of carbon they emitted. They’re happy because they got their trip, future generations are happy because the atmosphere is cleaner, everyone wins.

We can generalize this to ethics offsets. Suppose you really want to visit an oppressive dictatorial country so you can see the beautiful tourist sights there. But you worry that by going there and spending money, you’re propping up the dictatorship. So you take your trip, but you also donate some money to opposition groups and humanitarian groups opposing the dictatorship and helping its victims, at an amount such that you are confident that the oppressed people of the country would prefer you take both actions (visit + donate) than that you take neither action.

I know I didn’t come up with this concept, but I’m having trouble finding out who did, so no link for now.

A recent post, Nobody Is Perfect, Everything Is Commensurable, suggests that if you are averse to activism but still feel you have an obligation to improve the world, you can discharge that obligation by giving to charity. This is not quite an ethics offset – it’s not exchanging a transgression for a donation so much as saying that a donation is a better way of helping than the thing you were worried about transgressing against anyway – but it’s certainly pretty similar.

As far as I can tell, the simplest cases here are 100% legit. I can’t imagine anyone saying “You may not take that plane flight you want, even if you donate so much to the environment that in the end it cleans up twice as much carbon dioxide as you produced. You must sit around at home, feeling bored and lonely, and letting the atmosphere be more polluted than if you had made your donation”.

But here are two cases I am less certain about.

II.

Suppose you feel some obligation to be a vegetarian – either because you believe animal suffering is bad, or you have enough moral uncertainty around the topic for the ethical calculus to come out against. Is it acceptable to continue eating animals, but also donate money to animal rights charities?

A simple example: you eat meat, but also donate money to a group lobbying for cage-free eggs. You are confident that if chickens could think and vote, the average chicken would prefer a world in which you did both these things to a world in which you did neither. This seems to me much like the cases above.

A harder example. You eat meat, but also donate money to a group that convinces people to become vegetarian. Jeff Kaufman and Brian Tomasik suggest that about $10 to $50 is enough to make one person become vegetarian for one year by sponsoring what are apparently very convincing advertisements.

Eating meat is definitely worth $1000 per year for me. So if I donate $1000 to vegetarian advertising, then eat meat, I’m helping turn between twenty and a hundred people vegetarian for a year, and helping twenty to one hundred times as many animals as I would be by becoming vegetarian myself. Clearly this is an excellent deal for me and an excellent deal for animals.
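
To spell out the arithmetic (a minimal back-of-the-envelope sketch; the $10 to $50 per convert-year figures are the Kaufman/Tomasik estimates above, and the $1000 is just my own valuation):

```python
# Back-of-the-envelope sketch of the offset arithmetic. The per-convert
# costs are the cited Kaufman/Tomasik estimates; nothing here is a new claim.
donation = 1000                  # dollars per year spent on vegetarian ads
cost_low, cost_high = 10, 50     # dollars per person-year of vegetarianism

best_case = donation / cost_low      # 100 person-years of vegetarianism
worst_case = donation / cost_high    # 20 person-years of vegetarianism

print(f"{worst_case:.0f} to {best_case:.0f} converts per year")
# => 20 to 100, i.e. twenty to a hundred times the effect of just
#    becoming vegetarian myself
```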

But I still can’t help feeling like there’s something really wrong here. It’s not just the low price of convincing people – even if I was 100% guaranteed that the calculations were right, I’d still feel just as weird. Part of it is a sense of duping others – would they be as eager to become vegetarian if they knew the ads that convinced them were sponsored by meat-eaters?

Maybe! Suppose we go to all of the people convinced by the ads, tell them “I paid for that ad that convinced you, and I still eat meat. Now what?” They answer “Well, I double-checked the facts in the ad and they’re all true. That you eat meat doesn’t make anything in the advertisement one bit less convincing. So I’m going to stay vegetarian.” Now what? Am I off the hook?

A second objection: universalizability. If everyone decides to solve animal suffering by throwing money at advertisers, there is no one left to advertise to and nothing gets solved. You just end up with a world where 100% of ads on TVs, in newspapers, and online are about becoming vegetarian, and everyone watches them and says “Well, I’m doing my part! I’m paying for these ads!”

Counter-objection: At that point, no one will be able to say with a straight face that every $50 spent on ads converts one person to vegetarianism. If I follow the maxim “Either be vegetarian, or donate enough money to be 90% sure I am converting at least two other people to vegetarianism”, this maxim does universalize, since after animal suffering ads have saturated a certain percent of the population, no one can be 90% sure of convincing anyone else.
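
As a toy illustration of how much that maxim actually demands, here is a rough sketch that assumes (my assumption, not anything claimed above) that conversions arrive at an average rate of one per $50 of ads and can be modeled as a Poisson process:

```python
# Toy model only: conversions assumed Poisson with mean (donation / 50),
# i.e. the conservative end of the $10-$50 per convert-year estimate.
# The Poisson assumption is mine, purely for illustration.
from math import exp, factorial

def p_at_least(k, lam):
    """Probability that a Poisson(lam) variable is at least k."""
    return 1 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

donation = 0
while True:
    donation += 10
    expected_converts = donation / 50
    if p_at_least(2, expected_converts) >= 0.90:
        break
print(donation)   # => 200: roughly $200/year to be 90% sure of two converts
```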

As far as I can tell, this is weird but ethical.

III.

The second troublesome case is a little more gruesome.

Current estimates suggest that $3340 worth of donations to global health causes saves, on average, one life.

Let us be excruciatingly cautious and include a two-order-of-magnitude margin of error. At $334,000, we are super duper sure we are saving at least one life.
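
In case the provenance of that number is unclear, it is just the cited estimate multiplied by a hundredfold safety margin:

```python
# The cited $3,340-per-life estimate times a two-order-of-magnitude margin.
cost_per_life = 3340
safety_factor = 100
print(cost_per_life * safety_factor)   # => 334000
```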

So. Say I’m a millionaire with a spare $334,000, and there’s a guy I really don’t like…

Okay, fine. Get the irrelevant objections out of the way first and establish the least convenient possible world. I’m a criminal mastermind, it’ll be the perfect crime, and there’s zero chance I’ll go to jail. I can make it look completely natural, like a heart attack or something, so I’m not going to terrorize the city or waste police time and resources. The guy’s not supporting a family and doesn’t have any friends who will be heartbroken at his death. There’s no political aspect to my grudge, so this isn’t going to silence the enemies of the rich or anything like that. I myself have a terminal disease, and so the damage that I inflict upon my own soul with the act – or however it is Leah always phrases it – will perish with me immediately afterwards. There is no God, or if there is one He respects ethics offsets when you get to the Pearly Gates.

Or you know what? Don’t get the irrelevant objections out of the way. We can offset those too. The police will waste a lot of time investigating the murder? Maybe I’m very rich and I can make a big anonymous donation to the local police force that will more than compensate them for their trouble and allow them to hire extra officers to take up the slack. The local citizens will be scared there’s a killer on the loose? They’ll forget all about it once they learn taxes have been cut to zero percent thanks to an anonymous donation to the city government from a local tycoon.

Even what seems to me the most desperate and problematic objection – that maybe the malarial Africans saved by global health charities have lives that are in some qualitative way just not as valuable as those of happy First World citizens contributing to the global economy – can be fixed. If I’ve got enough money, a few hundred thousand to a million ought to be able to save the life of a local person in no way distinguishable from my victim. Heck, since this is a hypothetical problem and I have infinite money, why not save ten local people?

The best I can do here is to say that I am crossing a Schelling fence which might also be crossed by people who will be less scrupulous in making sure their offsets are in order. But perhaps I could offset that too. Also, we could assume I will never tell anybody. Also, anyone can just go murder someone right now without offsetting, so we’re not exactly talking about a big temptation for the unscrupulous.

This is a bullet I am weirdly tempted to bite. Convince me otherwise.


537 Responses to Ethics Offsets

  1. Ben A says:

    I can’t imagine anyone saying “You may not take that plane flight you want, even if you donate so much to the environment that in the end it cleans up twice as much carbon dioxide as you produced. You must sit around at home, feeling bored and lonely, and letting the atmosphere be more polluted than if you had made your donation”.

    Correct. Unfortunately, if one takes the view that there is a strong moral obligation to reduce carbon, the obligation is to both buy the offset *and* cancel the flight. It’s not like the connection between the offset buying and the trip is a physical law — the only connection is the hypothetical traveler’s motivation, which we generally take to be within his power to change. He’s making his purchase of an offset contingent on the trip. But why? He’s negotiating with himself.

    • Scott Alexander says:

      But that ends with having to devote 100% of your income to carbon offsets.

      Unless you want to accept that obligation, your best bet is to limit the percent of your resources you spend on fixing the world to some high but finite amount, and allow yourself to use the rest freely. See this post for details.

      Then you can dip into your free portion for offsets.

      • I’d agree with Ben in that there’s no philosophical connection between the two acts. It’s more of a mental strategy of using our guilt at one act to convince ourselves to do another we wouldn’t otherwise do. But hey, outcomes! Low hanging mental fruit! So it’s all good.

      • Illuminati Initiate says:

        Here is my “solution” to this dilemma.

        You are in a sense morally obligated to donate everything you have beyond the bare minimum to charity (carbon offsets are not a very good form of charity), though expecting anyone to actually do that would be extremely unrealistic, and I don’t believe in the idea of justice, so getting mad at people for not doing so would be pointless.

        From the perspective of the murderer there is no connection between the murders and the charity- they have the money anyway, what they “should” do is donate without killing anyone.

        From the perspective of other people who become aware of the plot what you have here is essentially a hostage situation, in which the malaria victims are hostages and the direct murder victim is the demand. Whether they should tell the cops or not is an interesting question.

        • Drew says:

          I like the ‘hostage’ explanation, and think it’s getting at the key dynamic. The problem with ransoms is that they encourage more hostage-taking.

          In this context, people ‘take hostages’ by inflating their baseline destructiveness (or minimizing their baseline donation rate) so they’ll have something to trade down.

          So, if someone’s really proposing stuff like donations-for-murder, it’s evidence that we’ve set their “acceptable donation” threshold way too low.

          • Deiseach says:

            “Murder for donations” on the basis of “this crime for that good”? And how many revolutions have been on the basis of “yes, we’re wading in blood up to our knees but it’s for The Greater Good”? What arguments do we see made nowadays about “collateral damage” in drone strikes because sure, a few civilians out foreign were killed, but it’s protecting us from the ‘only 24 hours to find the bomb under the kindergarten’ mad bombers out there?

            Look at the apologias made for the Soviet Union under Stalin – yes, a few (hundred thousand) peasants may be starving or hauled off to the Gulag but the Great Dawn of Progress will be all worth it!

      • Av Shrikumar says:

        Can we argue from the perspective of creating shallow local minima? Sure, the murderer has already donated 10% of their income, and so gets a badge of “discharged moral obligation”. But there are other nice round numbers that could act like Schelling points – 20%, 50%, 90%, depending on how much spare wealth you have. Think of them like bronze/silver/gold medals, and people may upgrade to higher and higher levels as they morally evolve. If we put a stamp of acceptability on someone who has donated 10% to use their free portion to make morally neutral tradeoffs as they please, I feel like it may make them less likely to morally evolve so that they jump to the 20% point. Kind of like getting stuck in a local minimum between 10% and 20%.

      • Deiseach says:

        My opinion is that carbon offsets are a scam; they let countries dodge any genuine effort to reduce carbon emissions by “We’ll continue to let our polluting factories operate, but under the treaty we can pay for so many acres of trees to be planted that will soak up the carbon”.

        Which sounds great, but is not without controversy; usually the tree-planting takes place in a different country to the one doing the polluting (quite often a Third World or less developed country), so how much of the pollution is being ‘soaked up’ is debatable; trees take years to grow while the damage is being done right now; trees can be cut down, die or go on fire; and there are problems with monoculture and invasive species, and massive blocks of land being bought up for these projects while the local people lose land for agriculture and may even be displaced/driven off their lands. So it’s a sacrifice that is not really a sacrifice on the part of the person/corporation/country buying the offsets, and reminds me more of an environmental version of tax-dodging than anything else.

        • Anonymous says:

          The carbon is atmospheric gas. There is no large difference between concentrations in differing parts of the world because the gas laws would quickly diffuse any local excess.

          Judicious planting can help with erosion and lack of firewood even if it turns out that the earth is warming because it’s in an interglacial period and humanity is moot.

          • Deiseach says:

            Yeah, but it’s juggling with the figures. ‘We’ll continue to pump this much real carbon dioxide into the atmosphere and buy off the consequences by paying for planting trees here, which will in time soak up a notional amount of carbon dioxide.

            Meanwhile, we’ll keep the factories going and keep planting trees, and so keep pumping out the carbon dioxide faster than it’s being absorbed each year, but always with the “in ten years’ time when this forest is mature it will absorb the gases you produced when you first planted it” – never mind that the damage has already been done’.

          • Glen Raphael says:

            Deiseach: The obvious first step for truly committed environmentalists is to make paper recycling illegal. We should harvest forests as soon as they become mature, waste as much paper as possible and bury it in landfills thereby sequestering the carbon for a while. Then we can plant new trees wherever the old ones were logged and continue the cycle.

        • McMikey says:

          A number of offsets amount to paying people (rural farmers and such) to NOT utilize more carbon-emitting technologies. It’s akin to a “fat offset” – fat people pay skinny people to stay skinny, and pretend they lost weight.

          Viewed pessimistically, many carbon offsets appear to be a modern day sale of indulgences. Person goes on long plane trip, pays extra for pleasing fiction about saving the world. Net output of carbon is the same.

          • Eggo says:

            I was waiting for someone to use the “I” word. If we’re going to accept moral offsets as a principle, one of the rules for claiming moral neutrality would probably have to be “do your due diligence to ensure the ‘penance’ is actually having the real world effects required to assuage your conscience.”

            The fact that simply ignoring this part makes it much easier to silence your nagging (1st world/yuppie/white) guilt probably dooms ethical offsets as a practical possibility.

    • Anon says:

      Ben’s criticism can be made more general.

      When you introduce an act that is, in isolation, net ethically bad, you have reduced the total potential ethical good in the system. You will always be left with a world that could be incrementally more ethical by exactly the value you’ve traded off. If you want to argue that your world includes a resource of infinite ethical good then I’ll say you can trade away your whole world of ethical goods and be left with all the infinite ethical good you started with. But, I don’t think that world drives useful ethical intuitions.

      (Just saw your (Scott’s) reply. )

      Yes, it does end with (nearly) 100% of your income spent on carbon offsets. In the general case, if your goal is to be maximally ethical, 100% of your income – less the bare minimum needed to keep you functioning well enough to keep improving the net ethical state of the world – will be spent on others. If you wish to value your own happiness/pleasure/etc. over the most ethical world possible, that’s fine. But you’ll have to concede to being non-optimally ethical. How to choose how ethical is enough is a hard problem. Is that what you are trying to discuss?

      • Anon2 says:

        > How to choose how ethical is enough is a hard problem. Is that what you are trying to discuss?

        I doubt it. The question is: is taking the flight and donating at least as ethical as doing neither? Is murdering and donating at least as ethical as doing neither?

        The further question of “is it acceptable to do neither” is also interesting, but can be left to the reader to decide. Most people evidently believe that the answer is yes, so “neither” makes an interesting point to compare against.

        • Anon says:

          Anon2 — Yours is a reasonable interpretation, but that makes Scott’s response to Ben strange.

          Having said that, it is not as ethical as doing neither. You’ve proven yourself capable of making a more ethical world and you’ve not chosen to use the resources you have to do so. That is a less ethical act even if the effect is the same.

          You were capable of a more ethical choice but you *chose* not to make it. The world is less optimally ethical for your neglect.

          • Anonymous says:

            “You’ve proved yourself capable” hides a few assumptions, I think. You could also be capable of making the choice to do both, and then choose to do neither. The question is, then, “is it less awful to knowingly avoid helping with total consequences X, or to knowingly hinder and then help with the same total consequences X?”

    • David Moss says:

      Sounds like G.A. Cohen’s response to rich Rawlsians. According to Rawls, people should be allowed to become much richer than others iff it is necessary to benefit the worst off, e.g. by incentivising them to produce more, making everyone wealthier. But Cohen replies that the rich can’t offer this as a justification of themselves being allowed to become richer, because they *are* the rich and ex hypothesi don’t need to be made much richer in order to provide.

      Interestingly enough he offers the example of the hostage-taker as well. People can say “the parents ought to pay the kidnapper of their child, to get their child back” but the kidnapper can’t argue “Children should be with their parents. I shall not return your child unless you pay me. So, you should pay me.”

    • grendelkhan says:

      Speaking of, what is the best way to buy carbon offsets? Past a certain standard of additionality, they’re completely fungible, right? The California market is trading at around twelve dollars per ton; buying them here is fifteen euros; TerraPass sells them for about thirteen dollars. However, The Nature Conservancy sells them for fifteen per ton and they’re a charity, so my employer will match my donation, so that seems like a better idea, but am I missing something?
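
      Here’s a rough sketch of the comparison I have in mind, treating the employer match as 1:1 (that ratio is an assumption; actual matching policies vary):

      ```python
      # Rough effective-price comparison using the per-ton prices quoted above.
      # The 1:1 employer match for the charity donation is an assumption.
      prices = {
          "California market": 12.0,
          "TerraPass": 13.0,
          "The Nature Conservancy": 15.0,
      }
      match = {"The Nature Conservancy": 1.0}   # assumed 1:1 match

      for seller, price in prices.items():
          effective = price / (1 + match.get(seller, 0.0))
          print(f"{seller}: ${effective:.2f} per ton out of pocket")
      # With the match, The Nature Conservancy works out to about $7.50/ton,
      # assuming the offsets are comparably additional.
      ```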

      (Also, I seem to remember seeing wildly different prices when I looked into this a year ago, from two dollars a ton up to around fifty, but it’s settled down considerably since then. I think that’s a good sign?)

      Is there a good primer on knowing what the heck you’re getting? Much like with charity in general, there’s a lot of glowing claims and warm fuzzies.

  2. Anonymous says:

    >So. Say I’m a millionaire with a spare $334,000, and there’s a guy I really don’t like…

    I basically had approximately this same thought a year (or so) ago when considering why biblical Christian doctrine (as far as I can tell) rejects the notion that good works can override evil deeds. (There’s also the fact that our deeds are not, according to the text, even good to begin with – c.f. Isaiah 64:6 – but this is icing on the rhetorical cake of morals here.)

    I am speaking from the perspective of soteriology here, that is, discussions of what is needed for salvation; while I speak for the rather ornery baptist crowd here, it sure seems like throwing good works at everyone and everything cannot undo the effect of a murder, rape, etc. Such actions (as possible by mere humans) cannot effect a true “restitution”, as it were.

    • Alex says:

      This looks like an “abandon utilitarianism” scenario. Not that I disagree with it, but it will be a hard pill to swallow for the rationalist crowd. (I’m a devout Catholic BTW, and still take Utilitarianism seriously, just not as an absolute).

      • Erik says:

        This looks like an “abandon utilitarianism” scenario

        Abandon specific common forms of utilitarianism such as sum-over-happiness, at least. It’s possible to write a really weird form of utilitarianism where the moral weighting assigns utils for following deontology, virtue ethics, or something else.

        • Anonymous says:

          Yup. That is a simplified way of describing my personal system – a little more convoluted, and I need to spend some time writing it down.

        • Anonymous says:

          It’s possible to write a really weird form of utilitarianism where the moral weighting assigns utils for following deontology, virtue ethics, or something else.

          I’ve thought about this in the past, and I don’t think it works. We’re just mislabeling. In fact, we can pull the same trick in the other direction – we can say, “I really believe in a weird form of deontology, where the only rule is to maximize total utility.”

          • Erik says:

            I think it’s not so much a problem of mislabeling as the issue that the labels were vague in the first place. The word “Utilitarianism” doesn’t pick out a util-counting rule, but describes a family of moral philosophies based on util-counting, and some methods of util-counting are more or less intuitive than others.

          • Anonymous says:

            Sure, but if we’re choosing (or rejecting) one of the major meta-ethical branches in lieu of the others, we’re probably going to ignore the fact that we can dress them up in pathological ways just to take them to the masquerade.

      • Arthur B. says:

        It’s depressing that the “rationalist” crowd clings to utilitarianism (which somewhat implies moral realism) despite the fact that the moral sentiment is fully explained by evolutionary game theory.

        Moral behavior is what your brain tells you is moral. There’s no territory to map, and so there is no guarantee that your moral preferences will always be consistent or amenable to calculus.

        Live a life where you can respect yourself and your actions.

        • Brian says:

          What then causes the evolution of moral algorithms? Why those algorithms and not others? Is there no cost/utility to any moral algorithm, resulting in a completely random walk?

          • vV_Vv says:

            What then causes the evolution of moral algorithms?

            Random mutations + natural selection, of course. You could argue whether it was more a matter of group selection vs. kin selection, but ultimately genes that caused us to have the type of moral intuitions that we do were replicated more than competing genes.
            There are probably some non-trivial frequency-dependent effects that cause some moral phenotypes such as the sociopath and the zealot to neither dominate nor become extinct, but instead reach some equilibrium.

          • Brian says:

            Well, then it looks like those evolutionary/game-theoretic pressures are the territory mapped by our moral intuitions, so to speak.

          • Anonymous says:

            Well, then it looks like those evolutionary/game-theoretic pressures are the territory mapped by our moral intuitions, so to speak.

            This strains the map-territory analogy too much, making it essentially useless.

        • Wirehead Wannabe says:

          “Live a life where you can respect yourself and your actions.”

          If our morality is all in the map, why not torture kittens instead?

          • grendelkhan says:

            I think the standard answer here is that I don’t want to torture kittens, and I don’t want to modify myself so that I would want to torture kittens. Those preferences are real, even if we know how they got there in the first place.

          • Wirehead Wannabe says:

            @grendelkhan

            But if morality isn’t real, why should those preferences matter? If I prefer eating chocolate cake over being murdered, why does it matter which one I receive in the absence of moral realism?

          • Arthur B. says:

            Pretty much what grendelkhan said.

            The nasty trick is that belief in moral realism is evolutionary adaptive. People may be more likely to act morally if they perceive morality as an external constraint than if they perceive it as a, more malleable, personal preference.

            Morality feels so real that you have a hard time even conceiving that it could be meaningful while being a purely subjective experience.

          • Arthur B. says:

            @Wirehead:

            Matters to whom? The universe doesn’t care.

            Does it matter to you? Sure it does, that’s what it means to have a preference.

            Does it matter to your mom? Probably, provided she empathizes with you and generally enjoys your preferences being met.

            Does it matter to me? Somewhat; even though I don’t know you personally, I empathize with my fellow man.

            Does it matter to a psychopath? Probably not.

          • Jaskologist says:

            The nasty trick is that belief in moral realism is evolutionary adaptive.

            If you’re going to tug at that thread, at least have the decency to follow it through to Reason itself.

        • syllogism says:

          I’m a utilitarian, but definitely not a moral realist.

          I can’t “live a life where I can respect myself and my actions” without utilitarianism. Any other way, and I know I’m a fraud.

          I want to believe that I’m someone who genuinely cares about the welfare of others — and I know I want to believe that, so I’m going to do my best to trick myself.

          So, the only stable solution is to give myself no credit for anything but results. Everything else is too easy to fake, so it can’t convince me.

        • Anonymous says:

          Moral behavior is what your brain tells you is moral. [citation needed]

          Seriously, this is the type of bold, unsupported assertion that usually dominates r/badphilosophy.

        • TheAncientGeek says:

          System 1 can be improved by System 2 even in the absence of a territory.

      • syllogism says:

        In an earlier post, Scott referenced a distinction between virtue accrued and utilons achieved. I think that’s a useful perspective on this.

        Basically: intending isn’t achieving, and achieving isn’t intending. And we’re interested in both.

        Let’s say you’ve got two altruists. One wins the lottery, the other doesn’t. The lucky guy gives a larger total amount of money, while the other guy gives a larger percentage.

        It’s good that the lottery winner gave a lot to charity. But it’d have been even better if the more altruistic guy had won — he’d have given even more.

        If we don’t allow for any concept of “virtue”, it’s very hard to talk about this situation. If you try to talk about *people* as more or less morally preferable, but you confine yourself to only considering outcomes, you end up having to say the lottery winner is a better person, morally, just from having won the lottery.

        But, I’m not suggesting we give virtue some special meta-ethical status. I think it’s sufficient to say that we have a utilitarian interest in people’s character. This thing we call “virtue” is a useful prediction of how they’ll act in the future. We want to incentivise virtue, and we want to entrust virtuous people with more power, so that better things happen.

        I think of someone’s “virtue” in a fairly simple way: it’s some factor by which they weight their own utility as more important than other people’s. It’s how selfish they are.

        So, here’s my answer: we’re morally repulsed by MurderScott because the guy he’s killing loses so much more than MurderScott seems to gain, which betrays a worryingly high selfishness factor. The offsets that he puts down can be seen as a sacrifice to show that he really gained a lot from killing the guy, and we’re reading him wrong. But, we’re not convinced. We read the situation as the lost money being very low utility to him.

        If MurderScott is fairly selfish, but also very capable, we’re pretty uneasy about him. We’d really like him to be less selfish, and we think moral blame might be a useful incentive here. Contrast this with a hypothetical MormonScott. We really wish MormonScott could get a clue, and do something useful. But, we can’t shout him competent — so we’re happy to say he’s virtuous, even though his utilon ledger stinks.

      • maxikov says:

        That should actually be pretty easy. To make the categorical imperative look bad, you have to invent some weird creatures like killers politely asking you where your family hides. To make utilitarianism look bad, you just have to exist – for nearly every practically occurring problem, you may find a utilitarian solution that looks completely crazy. Therefore, as a practical strategy for applied ethics, deontology is much, much more efficient and thus rational than utilitarianism. Or, to put it another way, a person who never sacrifices their rules for the greater good may be a heartless bureaucrat, but someone who does that all the time is literally Voldemort.

        • Samuel Skinner says:

          Except “killers asking politely” isn’t crazy- they are called the secret police. Given that the world’s most populous country lived under that condition I’d hardly call it an edge case.

      • Anonymous says:

        The scenario we’re looking at has essentially no relevance to the real world. Scott made a number of completely implausible assumptions, including “I won’t get in trouble for this crime” and “nobody will be terrorized that there’s an uncaught murderer at large” and “there’s nobody who will be sad that the person I murder is dead” and “there’s no political aspect to my grudge”. Essentially he’s assumed away many of the negative consequences of murder, and then he concludes that murder isn’t so bad after all.

        In logic, if you start by assuming something that’s false, you can go on to prove whatever ridiculous result you want. It’s an error to take that ridiculous result and try to apply it as though it had relevance in the real world.

        Don’t give up on utilitarianism yet.

    • Nate Gabriel says:

      You probably already know the Romans 3:23 answer, that God judges on a binary standard of perfect/not perfect. Even if every act you ever do is offset enough that your life is well above neutral you would still never be perfect.

      The other answer is that all sin is considered to be a sin against God. You can buy as many offsets as you like, maybe even a large enough sum to the family that your victim agrees to be killed, but the Bible explicitly states that God doesn’t accept ethics offsets. (Not even joking. 1 Samuel 15:22.)

      • Alex says:

        The problem with this is that it fails the trolley problem test if enough people are tied to the original track (also modeled as the “kill Hitler” scenario)

      • Jaskologist says:

        A lot is loaded into the phrase “There is no God” in this post. After all, why even buy the offset if you are soon to perish?

        Establishing that right and wrong exist at all is kind of a crucial pre-requisite for this kind of discussion, but I can see why that might be less interesting.

        • Randy M says:

          I don’t think that is less interesting (to Scott) so much as viewed as already decided in the negative, and he’s left with trying to persuade people to make the most pleasant world regardless.

          • Jaskologist says:

            I mostly meant “less interesting” in the sense that “it has already been talked about way more and I can’t build everything from the ground up for every single post and this particular less-discussed aspect of ethics caught my fancy right now so let’s just assume the other stuff for the sake of argument.”

      • Brad says:

        Technically, the only ethical offset acceptable is the shedding of blood, which leads into penal substitution theories of the atonement (which I feel accurately describe how Jesus Christ saves Christian believers.) Or more simply, the wages of sin is death. cf. Romans 6:23, and see also Hebrews 9:22 and Isaiah 53:5, among others.

        This, however, is hardly the sort of thing a utilitarian would view as an “offset” in this case; but it *is* the sort of thing that makes perfect sense for a perspective of Justice, and ethical systems predicated thereupon.

    • Harald K says:

      Scott’s entire post becomes rather silly (or appalling) from a principled perspective (or, as the philosophers like to call it, deontological). But let’s take a look at the part that matters for us, the attempted principled argument:

      “Either be vegetarian, or donate enough money to be 90% sure I am converting at least two other people to vegetarianism”

      This attempt at universalizability fails, for one important reason: It incorporates other people’s moral choices. This is not permissible. Kant’s reformulation of the categorical imperative in terms of treating other people as a means makes this very clear.

      You must always assume people are moral subjects, capable of coming to the right moral conclusion themselves. That is a requirement of being a person in the first place. If it’s right to be vegetarian, you must assume everyone in the world could in principle spontaneously have come to that conclusion without your preaching/propaganda, even though that would seem unlikely. Thus, you can never count the people you’ve “saved” – whether absolutely or probabilistically – towards your own moral balance. It’s already accounted to their own.

      • Peter says:

        The second formulation? Clear?? AFAICT most people can’t even quote it correctly.

      • Anonymous says:

        Your argument assumes that the three formulations are equivalent. Kant may do so, but that doesn’t mean Scott has to. (I’m not sure I’ve ever met a philosopher who thinks that Kant actually sufficiently justifies this stated equivalence.)

        Seeing as Scott made a claim about what universalizability dictates, not what Kant dictates, the second formulation is irrelevant. You also seem to be pinning the second formulation on Deontology as a whole, which is equally problematic.

        • Harald K says:

          It doesn’t really matter if you think Kant argued well enough that the formulations were equivalent. In his scheme for universalisation, at least, this kind of sentence would not be valid.

          I don’t know of any “competing” schemes for universalisation that would permit them either,
          and I’d say that any such scheme would not really be a universalisation – at least not over the moral universe that includes as subjects the one point eight people (on average) converted to vegetarianism.

          • Anonymous says:

            What do you mean? The competing scheme is simply to follow the first formulation of the categorical imperative, the one that is actually explicitly about universalizing your maxim (or to follow some similar way of universalizing).

            So, you would act according to a maxim only if you could consistently at the same time will that maxim be a universal moral law. No mention of means or ends required, except in so far as they can be derived from this principle.

          • Harald K says:

            That’s not a competing scheme, that’s a distortion of Kant’s scheme.

            It is not obvious what “universalizing your maxim” means. A sentence about something you might want to do must be turned into a sentence about what people in general should be allowed to do, but with quantifiers and predicates of various sorts there to confuse you, it’s not trivial to explicitly and completely define how to do that transformation.

            You don’t provide one. And Kant doesn’t quite provide one either, sadly, but when asserting that the second formulation is equivalent, he at least narrows it down somewhat (and excludes certain possibilities, among them the one Scott proposes.)

          • Anonymous says:

            The most plausible interpretation of what Kant meant by claiming the three formulations to be equivalent is that they dictate the exact same set of duties (see: http://plato.stanford.edu/entries/kant-moral/#UniFor). This immediately rules out there being a case where one of the formulations is meant to be silent but where we can use one of the others to resolve the issue.

            I agree that it is not at all clear how to apply the first formulation (or any of them) in practice, but that is another matter. The fact is that it is intended to provide an independently followable moral rule.

          • Harald K says:

            Well, whoever said there was a case where one of the rules was silent but we can apply another? Not me. What I said was that the formulations can be semantically ambiguous (for instance, depending on how we translate a phrase about my actions to one about the permissibility of actions in general, “universalisation”). Which isn’t surprising since it’s still natural language.

            But since Kant asserts that they are equivalent, you effectively get a hint about how he intended these ambiguities to be resolved, or in other words, you can infer what he meant.

            And he could not have meant that the kind of “universalisation” Scott tries is a proper one, because it quite clearly uses the 1.8 people as a means only – if it regarded them as an end, it would correctly credit their vegetarianism as their choice, not Scott’s.

      • Anonymous says:

        “You must always assume people are moral subjects, capable of coming to the right moral conclusion themselves. That is a requirement of being a person in the first place. […] you can never count the people you’ve “saved” – whether absolutely or probabilistically – towards your own moral balance. It’s already accounted to their own.”
        Itchy thought.

  3. Randy M says:

    “This is a bullet I am weirdly tempted to bite. Convince me otherwise.”

    Is your temptation to excuse murder or abandon utilitarianism because it does so, when approached with sufficient logical rigor?
    Because this post is pretty good encouragement to steer clear of utilitarianism, I think.

    • novalis says:

      Sorry, where does utilitarianism enter into this? Utilitarians act to maximize the total utility in the world. The philanthropic murderer doesn’t.

      • Nathan Cook says:

        Probably a good idea to distinguish between classic utility maximizers, who are more or less theoretical entities, and utility commensuralists willing to assign values on actions and sum over them, which describes the ethical praxis of many people. The philanthropic murderer compares the net utility of both murdering and donating to doing neither, without adopting the ethical principle of maximizing over utility, which would require him to donate and not to murder.

      • Randy M says:

        Utilitarians don’t necessarily act to maximize utility, they simply believe that all goodness and badness can be calculated using the same currency and that goods of the same value are interchangeable.

        A utilitarian who wanted to be the best person he could be would act to maximize utility; Scott is here (apparently) concerned with being slightly better than null, but he is still using his utilitarian moral framework to balance the scales.

        Of course, why he is doing this is unclear; under what authority is he going to be judged? He has no God or gods, and convincing a nation to adopt this as a judicial principle, if it would even be a good idea, is a rather tough sell. Basically it is an intellectual exercise, with the end result of him knowing the penance he must pay to consider himself not bad.

    • Lambert says:

      Perhaps it means we should be looking out for donations of $334,000 to the Against Malaria Foundation, and if we see one, put Nick Land in the Witness Protection Program. 🙂

  4. Alex says:

    In this case Utilitarianism is giving us an aberrant result that goes against our moral instincts. My options at this point would be to abandon Utilitarianism (if I hadn’t already due to Eliezer’s dust speck vs. torture scenario) or move up another metaethical level, where Utilitarianism is a tool, but not the end all metaethical system. I choose the second, but haven’t really found a way to formalize this next level. I suspect it might not be computable, but can’t prove it either way.

    • Illuminati Initiate says:

      nitpick: utilitarianism vs deontology vs whatever are ethical systems, “meta ethical” usually means something like moral realism vs nihilism vs noncognitivism vs whatever. They are generally unconnected.

      • Azure says:

        I think it’s better to phrase it as Consequentialism vs. Deontology. Utilitarianism is a form of consequentialism, maximizing over happiness or the like, but you can have different Consequentialist programs.

        • Illuminati Initiate says:

          Yes, you are correct (hmm… there are actually three levels of moral philosophy here)

        • I think that this post proves that even if consequentialism is true, it cannot be used safely by humans. You need layers of virtue ethics and deontological heuristics on top of it to actually make any kind of reasonable moral choices in real life.

          • Alex says:

            I believe with that comment you just made a strong utilitarian argument for religion. I’ll be eagerly awaiting Scott’s conversion. 🙂

          • TheAncientGeek says:

            Alex,

            Do you really believe it is impossible to embrace deontology without embracing religion?

          • Randy M says:

            I do, for an intellectually rigorous materialist.
            Of course, a consequentialist might well believe that it is best not to be intellectually rigorous in all regards.

          • Anonymous says:

            Randy

            So where is your rigorous proof that deontology is necessarily religious?

          • Alex says:

            @TheAncientGeek

            No. I believe there are other ways to embrace Deontology; but religion is the fast track to it.

            Now to kill the joke by explaining it (there’s truth to this argument though):

            Assuming c+v, you have the choice to be born in a world where people pick the moral course based only on their thoughts or one where a Higher Authority is assumed to exist and gives us a list of 10-20 moral axioms that are to be respected by all (e.g. don’t kill).

            Which one will lead to a better humanity (we are talking real humans here, not perfect rationalists)?

          • Azure says:

            You also need a rigorous proof that religion gives you deontology. God threatening you with punishment is not a rigorous foundation for ethics any more than a very skilled police force handing out draconian sentences.

          • Alex says:

            @Azure,

            Since I agreed to a religion that gives me moral axioms, that is how I get deontology. In the real world that is harder to prove, but I don’t see any major religion that does not end up giving a set of moral rules, so I conclude that is a reasonable outcome of religion.

            Re: God’s fear being a good foundation. It is as good a foundation as police and laws to promote good behavior in society. If everybody could be good humans without it they would not be necessary, but we’re not, and the pragmatic course is to take them for the good they give us even if we can’t prove they are always correct. If you’re at the stage where you can have this argument rationally, believe me, the laws were not written thinking of people like you. (That’s not a free pass to break them, btw.)

          • Azure says:

            @Alex

            (Disclaimer: I’m a former missionary Baptist and the ‘former’ is in large part because of divine judgment. I don’t think I’m being emotionally driven here but just in case I am and don’t see it, it seems more honest to say that I might be.)

            Pragmatically I’m not sure if the fear-based disincentive of punishment in the Hereafter works. If I can swing into analogies with Secular Authority again, I’ve seen claims that immediate, mild punishments are much better at changing behavior than distant, draconian ones. This was used to justify the use of probation with ‘weekend jail’ or other measures effectively sending adults to their room for criminal behavior over long, punitive prison sentences. This suggests to me that if God existed and wanted to make people behave well to each other, it would be more likely to give you a cold or make you spend a weekend in Wyoming or cause you to be unable to smell anything for a day or two if you were unkind to others. I’m not sure about murder, maybe increasing pain or amnesia or some such the closer you got to actually performing the deed.

            I’m not sure how good an indicator it is, but ‘private’ sins like extramarital sex seem much more common among those who claim belief in God than ‘public’ sins like murder and theft and being horrible to your fellow man and other things that people might be aware of. This might suggest that the worry about the Last Judgement doesn’t hold too much weight compared to temporary gratification.

            I suspect that what keeps most people behaving nicely to each other is a socially fostered virtue ethics. People have the idea that if they are Kind or Compassionate or have Self Control they will be loved and respected. If they are Truthful they will be listened to and the things they say will be valued. They take pride in being Rational and Consistent. They pick up the idea by listening to other people that being Spiteful and Angry are ugly, repulsive qualities. It’s certainly true that someone could lie a lot and try to foster the idea that they’re truthful, or live any other kind of double-life. But in practice most people like to think of themselves as possessing, rather than just simulating, various positive qualities.

          • Alex says:

            @Azure,

            Yes to all you said. As a relapsing Catholic (my pastor hates it when I say that) I agree to the points you make, but will rationally justify the existence of the Church for the social pressure it exerts in (mostly) the right direction. I went through much of what you describe in my 10+ years away, and rather than discard what I learned I look for ways to integrate it; accepting that many times the Church has been/is/will be wrong, much to the amusement and frustration of my fellow church members.

            My views on Heaven/Hell are also somewhat unorthodox, and I even think that rejecting the church’s teachings is not an immediate one way ticket to the Bad Place (TM). That last one has gotten me in trouble though. 🙂

            Also, this thread does NOT describe my reasons for believing, that is different conversation for another day.

          • @Alex

            You don’t just have those two choices. You have the further option of forging rules which are provably correct, rather than hoping some inherited set of rules will suit present circumstances.

            Re fasttrack: I’m not sure why you think a SSC regular would be more interested in speed than correctness.

          • Alex says:

            Sure. It’s a thought experiment. Come up with a complete, provably correct ethics theory and precommit to it. Just consider what the chances are of it actually being followed in the real world.

            Fast-track in the sense that it requires very little mental effort. You, me and everyone reading this are privileged to be able to spend mental cycles on choosing the right thing. Most other humans don’t have the opportunity or choose not to spend time on it.

          • Jaskologist says:

            I feel like Godel is relevant here, but I have only managed to understand his famous Theorem a small handful of times, and it always slipped away after a few days.

            If we cannot build up a consistent, complete formulation of rules for mathematics, shouldn’t that tell us that it’s going to be impossible for everything else, too?

          • Samuel Skinner says:

            Gödel doesn’t say everything is inconsistent; it says you can’t prove a system is consistent using the system. Basically you need another turtle.

          • TheAncientGeek says:

            A fourth thing you can do is to forge your own arguable, if not provably correct, good-enough ethics.

      • Alex says:

        Nit appropriately picked. I meant meta in the Hofstadter sense, should have used quotes. 🙂

  5. In the case of tourism or flying, the action is not inherently unethical but has unfortunate consequences under present conditions; offsets might be appropriate. Lobbying for cage-free eggs, or some other sort of mitigation of animal suffering while eating meat—same story.

    But that does not translate to cases where the action to be offset is inherently wrong, or (in the case of vegetarian indulgences) you wrongly believe it to be so.

    • Wirehead Wannabe says:

      What does “inherently wrong” mean here? If we’re using preference or hedonic utilitarianism, that phrase doesn’t make any sense. If not, then surely polluting the environment counts as being inherently wrong at least?

      • RTO Dude says:

        Pollution may be unambiguously “wrong”, but CO2 emissions aren’t unambiguously “pollution”.

        edit: obviously ymmv on CO2 != pollution, but as we’re developing our personal belief system and mine says that’s true, for me the problem remains.

        I had the same reservations JC did with Scott’s first two examples.

        Aside: I had to de-lurk prematurely! (edit: not for this comment, of course – for discussing this excellent topic in general.) This illustrates the problem with what’s otherwise a stellar (heh) venue for figuring out important life issues such as a formalized personal belief system. There’s so much high value post traffic that to achieve an optimal level of interaction it’s necessary to chat as soon as a topic comes up, which may be difficult to fit into our personal schedules.

      • Anonymous says:

        > If not, then surely polluting the environment counts as being inherently wrong at least?

        Um, no? It’s trivial to construct a deontology where murder is inherently wrong and pollution is only instrumentally bad. And that would align with my intuitions.

  6. Robert Liguori says:

    I note that there’s a bit of a jump between the offset arguments. Carbon offsets are (in general terms) affecting the people you’re hitting with the initial carbon set, and on a fungible axis of harm; vegetarian offsets are kind of a null issue, because at the end of the day most people just don’t treat animals as moral actors. I mean, the example itself says “You are confident that if chickens could think and vote, the average chicken would prefer a world in which you did both these things to a world in which you did neither.”

    Since you’re not confident that most people would accept “I am murdering you, but I am donating enough money to you to offset the utility of your murder!”, you’re not really comparing like to like here. I do think that we are, in essence, making this tradeoff whenever we accept the necessity of going to war; we’re trading the deaths of our enemies and the civilians we kill against the greater deaths and harm of not fighting.

    • Levi Aul says:

      In other words, in a contractualist-with-veil-of-ignorance metaethics, you wouldn’t sign the charter that says someone could kill you in exchange for a large-enough offset—so why would anyone else?

      • > you wouldn’t sign the charter that says someone could kill you in exchange for a large-enough offset

        …and if your victim would? Some people are willing to give their lives for causes.

        (but you don’t want to, y’know, actually strike such a contract with the persons for reasons)

        • Levi Aul says:

          I have a suspicion that, at least under utilitarianism, suicide-bombing is incoherent—something about “people who go on living having infinitely more chances to experience utility” or some such. I’m not sure how to rephrase this intuition into a contractualist problem, though. (Maybe that no human would sign a contract to be a part of a society that gives a paperclipper an equal vote in how things are run? Except with not-quite-as-divergent goals.)

          • lmm says:

            I think the incoherency is in utilitarianism, not in suicide bombing. See the well-known repugnant conclusion: either it’s better to have a large population whose lives are barely worth living than a small happy population, or it’s good to kill people who are particularly unhappy.

          • Grenouille says:

            The repugnant conclusion equivocates between two meanings of “barely worth living”.

          • lmm says:

            @Grenouille you’re going to have to be clearer than that.

        • Harald K says:

          “Some people are willing to give their lives for causes.”

          But not just any cause, and it’s in the veil-of-ignorance assumptions that you would not know which cause.

        • P says:

          Levi said meta-ethics and “veil of ignorance”, not applied ethics. The ethics are laid down before we know who will agree to sacrifice themselves for a cause. And before you say it – yes, we could consider a rule that says “ask the potential victim, and if they answer…”, but that still poses complications which probably prohibit that approach. (Edit: like Harald K points to.)

          Edit: Paul Torek here. I guess I didn’t log in properly.

      • Wirehead Wannabe says:

        If we’re going by the veil of ignorance, I might sign a contract that killed one unknown person in exchange for two. That seems to strengthen the argument.

      • Erin M says:

        … are you suggesting you wouldn’t sign that contract, even though you’re twice as likely to be one of the two people who will die of malaria if Scott doesn’t buy the ethical offset?
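
        (To spell out the arithmetic behind “twice as likely”, a minimal sketch from behind the veil, with an arbitrary population size N and everyone equally likely to occupy any role:)

        ```python
        # Chance of dying, from behind the veil of ignorance, among N equally
        # likely roles. N is arbitrary; only the ratio matters.
        N = 1000
        p_die_without_contract = 2 / N   # the two malaria deaths happen
        p_die_with_contract = 1 / N      # one murder, the two malaria deaths averted
        print(p_die_with_contract < p_die_without_contract)   # => True
        ```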

    • Randy M says:

      Right, in the first cases there is restitution to the damaged parties, but that cannot be done in the case of murder. Which is enough to break the theory [edit: of moral offsets], I think, but isn’t even all the objection, as you are taking away the agency of the people in deciding that their murder is acceptably counter-balanced by the donation. I may choose to give my life for others, but that doesn’t allow you to have the pleasure of killing me as a reward for saving a couple others.
      Of course, I think the rationalist/utilitarian view, which eliminates free will as a meaningful concept, would necessarily arrive here.

    • Ben Kuhn says:

      This relies on an act/omission distinction, though, since by *not* donating you are murdering someone else by omission. If you restrict yourself just to these two choices, you’re not deciding to murder someone but offset the utility of murdering them; you’re deciding which of two people to murder.

    • Wirehead Wannabe says:

      “I am murdering you, but I am donating enough money to you to offset the utility of your murder!”

      This objection is even stronger if we imagine presenting it to the person being saved rather than the victim. “I’m saving your life so I can have permission to kill someone” is something I suspect many people might object to. Important to note though that it seems to be relevant under (for example) preference utilitarianism or CEV, but not hedonic utilitarianism.

      • Anonymous says:

        That’s a pretty good point. Most people don’t like the idea of someone being murdered; but it’s okay, because I bought you and 2,000 other people an anti-malarial net.

    • Deiseach says:

      If chickens could think and vote, I wouldn’t be eating them in the first place.

  7. Stephen says:

    Well, on the one hand I’m very tempted to bite the bullet as well and call this an ethical act. On the other…I still want to reserve the right to call this guy a jerk and not want to be friends with him. And it obviously seems weird to not be friends with someone for committing an ethical act. Though I guess in that case I wouldn’t be shunning him for the act per se, but instead just for having really creepy desires (which would be more of an aesthetic/revulsion thing than an ethics thing).

  8. Levi Aul says:

    An easy way to think about this is to move the moral responsibility around. If instead of a person, it was a company or a government trying to “offset murder”, what would change? I mean, companies and governments don’t often have strong terminal goals related to the deaths of individual humans—but they have terminal goals that can directly, predictably cause the deaths of individual humans, like “be more efficient by spending less on humane working conditions” or “defend sovereignty by throwing soldiers into a meat-grinder.”

    What do we say, of corporations or governments that do this, but then offset the cost in human lives with other benefits? What do we say, in fact, if the corporation/government is autocratic, and there’s a single human being who ends up making these calls?

    • Nate Gabriel says:

      We…allow it? The acceptable number of statistically predictable deaths isn’t zero. It has to depend in some way on what benefits are provided.
      This is why there’s a speed limit higher than five miles an hour.

      • Anonymous says:

        Er, speed bumps actually supposedly kill more people than they save (because of slower emergency response time by ambulances, police, and fire crews). Although the demographics of those likely to die from being struck in the road and those likely to die from the ambulance being too late are admittedly not identical.

        A 5mph speed limit would kill a lot of people.

    • RCF says:

      It’s a lot easier to justify trading off doing stuff that makes death in general more likely against doing stuff that makes death less likely to a greater extent, versus trading off doing something that causes a particular death against doing stuff that makes death in general less likely.

      • Lambert says:

        Sacred vs non-sacred values? (Perhaps based on the availability heuristic: some guy’s life being more ‘available’ than many potential lives.)

    • macrojams says:

      I think this gets to the heart of it; we do in fact allow governments to make this tradeoff all the time, explicitly in the form of military/police action, and implicitly as in Nate Gabriel’s speed limit example. The usual answer is that we create a Schelling fence by giving the government a monopoly on legal violence.

    • Anonymous says:

      I mean, companies and governments don’t often have strong terminal goals related to the deaths of individual humans—but they have terminal goals that can directly, predictably cause the deaths of individual humans,…

      Yes. For example, the government in my home country encourages outdoor activities like sports through publicly-funded schools, even though they predictably occasionally lead to children dying (eg hit by a ball in just the wrong place, falling off a bike over a cliff, etc.)

      From observation, what we say of this is “Yes! Sports are incredibly important! We need more physical activity!”

      • Randy M says:

        But being sedentary will reduce lifespan, and when there are fatal (or non-fatal) injuries in sports, like concussions or heat stroke, they aren’t written off as acceptable losses; we look for ways to prevent them and investigate for criminal negligence.
        Our society allows risk, but we are rarely comfortable with explicit trades when the downside is assured.

        • Tracy W says:

          Yes, so we balance off the benefits of reducing sedentary behaviour against the risks of fatal injuries from more sports.

          Many of us only look for ways to prevent deaths at an acceptable cost: there’s a lot of push-back and mocking of the health and safety culture, or the American tendency to sue at every opportunity. Not all of it fair, but even the unfair criticisms indicate an unwillingness to avoid all risk.

          I agree we are rarely comfortable with explicit trades. But that I think is a defect in our reasoning abilities.

  9. Coasean Bargain says:

    Ethics offsets are called Coasean bargains. A common example is paying for the right to pollute.

    • RCF says:

      No, in a Coasean bargain, both parties consent. One party simply making a unilateral decision about what a proper price is is not a “bargain”, Coasean or otherwise.

      • Coasean Bargain says:

        Yes, you’re right. In that case these ethical offsets are not necessarily so ethical, are they? Coasean bargains would seem to be a more advanced and scientifically grounded version of a similar idea to the subject of this blog post.

  10. Rangi says:

    Look at this another way. If you’ve already killed someone, how much community service, charitable donations, etc, do you have to do in order to convincingly repent? Life in prison, or a death sentence, might satisfy some people’s sense of justice, but it’s not really constructive. And demanding a lifetime of penitent service to make up for one crime seems like overkill — consider someone using life extension technology who just keeps not dying; after how many decades or centuries can they be forgiven?

    • Harald K says:

      To convincingly repent would be to convince me that faced with the choice again, you would not do it. That might take little, or it might take a lot – but say, forfeiting whatever advantage you got from your bad act would do a lot more for it than random good acts.

      • If the benefit you want from killing someone is that the person is no longer alive, then there’s no way of giving up that benefit. Or was that your point?

        • Harald K says:

          How often is that the case? I’d wager the most common “benefit” of murder is usually nothing more than a brief moment of cruel satisfaction, which the murderer gives up when they cool down, whether they want to or not.

          But no, that is not the point. The point is that general all-around good acts don’t necessarily do anything. As a certain guy said, crime is not a matter of double-entry bookkeeping. If there’s any redeeming power in it, it’s more about how inconvenienced the repentant is than about how many utils it produced.

          The guy who goes to prison doesn’t do it to “balance up” the bad things he did, but he might do it willingly nonetheless to convince society that he wouldn’t do it again (on the assumption that whatever benefit he derived from the crime, it’s surely not worth a stay in prison).

    • Anonymous says:

      “And demanding a lifetime of penitent service to make up for one crime seems like overkill ”

      To whom? Giving up your life does seem to fit the bill nicely for taking a life.

  11. drethelin says:

    the problem here is the same as for all problems of “The Ends Justify The Means” or “For the Greater Good” and so on: We are running on corrupted hardware, and have imperfect information to go with it. Encouraging people to do evil by saying they can offset it by being good fails when you can neither calculate the good nor the evil accurately, and when people are likely to be insanely biased about how small the evil is and how large the good is.

    This is not to mention that most means of Doing Good are not uncontroversial. Most people are not utilitarians, but even were they, almost no one is particularly well-informed on the long-term effects of any possible change to the world they’re encouraging. How do you arbitrate between someone who wants to offset a murder by donating to Malaria Prevention, and someone who thinks abortion is murder and wants to offset their murder by donating to anti-abortion causes?

  12. Let me give you my version of the puzzle, which works for both utilitarians and libertarians:

    You are a very wealthy sportsman who has concluded that the only game difficult enough to be worth hunting is man. You locate ten adventurers and offer them the following deal. In exchange for a payment of $100,000 to each, you get to select one of them at random and do your best to kill him. They accept.

    Is there anything wrong with this case of consensual murder? Like Scott, I’m assuming away the obvious complications. Nobody else is going to know about it and there is no chance that you will kill an innocent bystander by mistake.

    • Anonymous says:

      This doesn’t seem very related. And, as a rule, I’m OK with consenting adults doing what they want with each other, including accepting or offering money in exchange for risking one’s life. We do that with dangerous+unnecessary jobs all the time, like allowing people to drive taxis.

    • Alex says:

      My take here is that intentionality is the deal breaker (which leaves the assumed risks of living acceptable – e.g. speed limits). The problem with intentionality is that it is as hard to define/detect as consciousness, with the added difficulty of not being universalizable. This is the reason I suspect an objectively correct metaethical system is not computable/codifiable.

    • Coasean Bargain says:

      As discussed a few comments above, you are describing a Coasean bargain, not an ethical offset.

    • onyomi says:

      The difference here, it seems to me, is that those who are accepting the harm (of risk of death) are also the same individuals receiving the offsetting benefit, and are also given the choice not to accept the deal.

      If I kill your best friend and say, “not to worry, I saved the lives of two people in Africa you’ll never meet,” then I don’t think most would view the ethical scales as having come out “even.”

      I don’t see an ethical problem with your scenario, but, since I am not a utilitarian, that is entirely because of the voluntary nature of it, not the offsetting consideration.

    • Anonymous says:

      I think this is probably ethical. So here’s my question to the readers of this blog:

      How much would you have to be offered to be one of the ten people? The answer for me is somewhere between 2 and 5 million. (With the caveat that the money goes to the person of my choice if I am chosen and killed.)

      • At first I was going to say that I wouldn’t do it for any amount of money – my desire for the pleasures that money can bring me is tiny in comparison to my desire to do something meaningful with my only life… something that isn’t “get shot by some weird rich hunter guy”.

        Then I realized if I had millions of dollars at my disposal, it would mean that my odds of being able to accomplish something meaningful with my life would greatly increase. So, on further reflection, I would probably give a similar answer to you – I’d do it for a few million.

      • I put up the example because I do find it problematic–I gave it in one of my books as a possible example of the limits of the arguments for economic efficiency. Like Scott, I sometimes see a tension between my moral intuition and my attempts to formalize it.

      • lmm says:

        More than that. My life is good, so I think the only way the deal becomes worthwhile is if I can afford large numbers of very devoted servants for the rest of my life. Something like 50 million?

      • Cadie says:

        I’d do it for $100,000, maybe a little less.

        This is because $100,000 in free money would allow me to have a child and stay home with him/her for a couple of years at least. My odds of surviving with the cash under this bargain are a little better than 90%. There’s a 90% chance I won’t be chosen and my survival odds if chosen are nonzero. I feel very confident that my odds of having a child and being able to avoid work outside the home for 2-3 years without the money are FAR less than 90%. It’s important enough to me that I’d accept a ~8-9% chance of death for a near-guarantee of success if I survive.
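
        For anyone checking the arithmetic, here is a rough sketch in Python (purely illustrative; the survival-if-chosen figure is an assumption, not a number given in the comment):

        ```python
        # Ten adventurers, one picked at random to be hunted.
        p_chosen = 1 / 10              # 10% chance of being the one selected
        p_survive_if_chosen = 0.15     # assumed odds of escaping the hunter (a guess)

        p_death = p_chosen * (1 - p_survive_if_chosen)
        p_survive_with_cash = 1 - p_death

        print(f"chance of dying: {p_death:.1%}")                              # 8.5%
        print(f"chance of living with the money: {p_survive_with_cash:.1%}")  # 91.5%
        ```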

    • James Miller says:

      Similar to an idea I had for a heart transplant company. You pay lots of money to several poor people in return for a randomly chosen one of them giving up their heart. If the people are poor enough, accepting the deal might actually increase their expected lifespan.

    • Jared says:

      What happens when a person gets picked and then changes their mind? Then it’s not consensual anymore. I know some libertarians believe that contracts should never be broken, but I think most aren’t willing to go that far.

  13. haishan says:

    I was going to say the second argument proves too much: if you’re willing to accept that you can offset murdering your enemy by giving a bunch of money to buy malaria nets (and also to police, etc., etc.), you should also be willing to accept a government in which murder is perfectly legal as long as you offset it. And in fact this is a much better scenario, because you don’t have to worry about the police wasting resources investigating or the local populace being scared of a killer. But nobody would actually consent to live under this system, because it basically negates the right to life — this is parallel to an argument somewhere in Nozick about property rights which I’m too tired to look up and summarize right now. So we’ve reductioed to an absurdum.

    But! While I was writing this comment I remembered that lots of people do live in societies with a pretty similar thing. So now I’m a lot less sure.

    EDIT: Actually, I guess the blood-money/Diyya thing is only similar if the victim’s family must accept payment as absolution for the crime. If it’s a voluntary settlement then the right to life is basically preserved at least to the extent it is in the West. So my confidence in my original argument is partially restored.

    • Baby Beluga says:

      I’m not sure this totally works as an analogy, since you can only pay Diyya to absolve yourself of killing someone if you didn’t do it on purpose (unless I misunderstood something)

      • haishan says:

        That seems like it should be pretty easy to get around, although I have no idea how often Diyya is paid in lieu of other punishment for accidents vs. “accidents.” There are other historical blood money practices which seem to not be restricted in that way, but Diyya is the most common contemporary form as far as I know.

    • Anonymous says:

      The rule in fiqh (traditional Islamic jurisprudence) is that if the killing is the equivalent of first degree murder, the kin have the option of either retaliation or diya. For less serious forms of killing they only get diya.

      In saga period Iceland, on the other hand, the killer was only obliged to pay wergeld. But of course, if the kin were not happy with that, they had the option of killing him (assuming they could manage it) and letting the two wergelds cancel.

      What the rule is in modern Islamic societies I don’t know. As best I can tell, none of them save possibly Saudi Arabia really follow the traditional legal code, however much they may talk about Sharia.

    • Sophie Grouchy says:

      Contrast that to the Catholic Church which IIRC used to provide/sell indulgences for venial (minor) sin, but NOT for mortal sin (such as murder).

      • Kaminiwa says:

        Oh good, I’m not the only one who was thinking of how Indulgences relate to this 🙂

      • Deiseach says:

        Also please note indulgences remit the temporal punishment due to sin, not the guilt. They’re like getting a reduced sentence due to extenuating circumstances, or being able to pay a fine in lieu of going to jail. You still did the crime, you were still found guilty, how the punishment is offset is a judicial matter.

        And you can’t buy a heap of indulgences and then go off and commit murder, rape or theft on the view that “I’m sorted, no guilt or sin now!”

      • Anonymous says:

        Indulgences, BTW, are effective only after repentance. That is, one can buy them (or engage in the approved act) for past offenses, but buying them in advance is futile.

    • evilboy says:

      A more general question is, When have societies allowed people to kill others and not face judicial sanctions? Well, slavery, obviously; it was legal in many societies to kill your own slave. Diyya applies in general and not just that specific case, but I wonder how it works in practice. Do people usually accept the diyya? What kinds of arguments are given in these societies to allow diyya as an institution, or to accept diyya instead of revenge?

      • Anonymous says:

        In Rome a father could kill his children, even adult children. You could say that the children were chattel slaves, emancipated upon being orphaned, but it’s not the central example of slavery.

        • Anonymous says:

          This is why declaring a child under the age of majority to be a legal adult is called emancipation to this day.

          • Anonymous says:

            Yes, but modern usage has extended the paternal term to slaves, not vice versa. The Romans used “emancipate” only for this case, not for slaves, who were subject to “manumission.” The etymologies are “out of the hand of the father” and “send from the hand.”

      • Anonymous says:

        Diyya is judicial sanction.

  14. Bugmaster says:

    > So. Say I’m a millionaire with a spare $334,000, and there’s a guy I really don’t like…

    FWIW, isn’t this a pretty accurate description of the world we are currently living in? The richer you are, the more you can get away with, from artisanal murder to wholesale genocide (the latter takes more money, obviously). Yes, this is unfair and sad, but I am pretty sure it is impossible to build a world radically different from this, barring some sort of an intervention from a deity (or the Singularity, if you prefer, it’s much the same thing).

    • RCF says:

      Scott is discussing the morality of the situation, not the practical aspects.

      • Bugmaster says:

        I don’t think it makes sense to discuss morality that is totally divorced from any practical aspects. What would be the point?

        • Randy M says:

          It does make sense to contrast what is with what should be so you know what direction to go, in general.

    • Oscar_Cunningham says:

      For a rich person to get away with murder, it’s not charities that they have to donate to.

      Sadly.

      • Bugmaster says:

        Good point. I suppose one could argue that every dollar a rich person spends on lawyers and politicians ultimately goes toward feeding some poor people somewhere — but that’s pretty much trickle-down economics, and it doesn’t really work. So, I suppose our world actually is a little worse than Scott’s ethics-offset world.

    • Anonymous says:

      Indeed, I’ve often heard these sorts of ethical offset arguments applied with a straight face in real life, say about Ted Kennedy. “Yes he left Mary Jo to drown under that bridge, but look at all the good he’s done in the Senate for women’s rights”

  15. Nisan says:

    It’s easy to bite this bullet, but I wonder if you’re making the further assumption that acts can be classified as permissible and impermissible, and that inaction is always permissible. Then we would conclude that murder + offset is permissible, and that seems counterintuitive.

    If you hold that permissibility isn’t a fundamental category but a concept that’s useful for deciding which acts to praise or condemn, then it’s perfectly coherent to say that murder + offset is impermissible even though it’s morally superior to a permissible action. I think this is the right choice, because in reality people who are willing to murder + offset could plausibly be persuaded to make the offsetting donations without murdering anyone.

    • Baby Beluga says:

      An interesting idea, and one that meshes well with our intuitions, but I think you run into problems when you try to rigorously define what you mean by “permissible.” Suppose we’re in a situation where you’re on fire and screaming in agony, and I happen to be holding a fire extinguisher that’s pointed at you. Is it really “permissible” for me to not put you out?

      • Nisan says:

        You’re right, I made a mistake: Sometimes inaction is intuitively impermissible. In Scott’s hypothetical, however, inaction is intuitively permissible. The question is whether we’re willing to accept that murder + offset is morally superior to a permissible act and yet is impermissible.

  16. Baby Beluga says:

    This-all seems legit to me. It might be unintuitive, but I for one am ready to call the stuff you talked about in this post ethical.

  17. Alyssa Vance says:

    On an individual level, by looking at statistics about convicted murderers, we know that most murderers are dumb and irrational and thinking short-term and taking actions for really stupid reasons, even relative to the baseline human. So your plan to murder someone is strong evidence that you fall into a population category which really shouldn’t be trusting itself to make important plans.

    However, on a societal level, the idea of “murder offsets” is seen as so obvious that no one really questions it. Suppose that, tomorrow, Omega came to us and told us that if we declared war on North Korea, invaded Pyongyang, took out the Kim family, disarmed the nuclear weapons, saved all the starving people, dismantled the concentration camps, blah blah blah, it would only take a week and kill only a thousand people. (Also, Omega will cover all financial costs.) I think that, in that situation, pretty much everyone would support declaring war, even though we are talking about committing a thousand murders here, because the benefits to everyone else (including the other lives we save) are so huge.

    • Alex says:

      Now assume that instead of a thousand random people, the dead will be your direct family members (all of them). Would you take that deal? This is closer to the murder scenario.

    • Scott Alexander says:

      “On an individual level, by looking at statistics about convicted murderers, we know that most murderers are dumb and irrational and thinking short-term and taking actions for really stupid reasons, even relative to the baseline human.”

      …most murderers who get caught.

      (I agree with your point, just being snarky and pedantic)

      • Richard says:

        This is actually a point. There was an item on the news here recently where they had autopsied a LOT of people and found that for every 3000 ‘unnecessary’ autopsies, they initiated one murder investigation that would otherwise never have happened.
        I have no knowledge of how many resulted in convictions or their selection criteria or in fact anything but the headline, but at first glance it seems to indicate a lot of murders are never even investigated.

    • RCF says:

      There’s a big difference between performing an action that both saves and kills people, versus performing one action that kills people, and then performing a different action that saves people and claiming that the latter cancels out the former. There’s also the question of who these thousand dead are. If they’re US soldiers, well, they signed up for possibly being killed to advance US policy. If they’re NK soldiers, then there’s a rather strong argument that they deserve it. Even if they’re civilians, it’s still defensible under self defense/defense of others.

      When you used the word “murder”, what did you mean by that? Did you intend a connotative meaning, or are you asserting that it would legally be murder? I think that there’s a rather strong tradition of treating war deaths as being distinct from murder. That is not to say that a soldier’s actions can’t be considered murder, but merely killing someone is not enough.

      • Illuminati Initiate says:

        Another nitpick- even if you accept the idea of “deserving it”, NK has conscription.

  18. Qiaochu Yuan says:

    Allowing murder offsets feels structurally similar to caving to blackmail / negotiating with terrorists; I prefer to live in a world where rich people can’t just murder whoever they want by throwing enough money around, in the same way that I prefer to live in a world where blackmail and terrorism don’t happen because they don’t work, so if I could I would sign a binding contract preventing me from accepting murder offsets / caving to blackmail / negotiating with terrorists.

    • Baby Beluga says:

      I think this is dodging the question a little, though. The question isn’t “should we legalize murder + murder offsets?” The question is, “if I secretly did a murder + murder offset, and knew I wouldn’t get caught, and knew that nobody would ever know about it, would that be an ethical thing for me to have done?”

      These are different things, because in this hypothetical, I can already apparently murder whoever I want, with or without throwing any money around.

    • Scott Alexander says:

      Blackmail and terrorism are zero-to-negative-sum, offsets can be positive-sum.

      I’m not sure I prefer to live in the murder offset world. On net my utility is likely to increase, since I’m more likely to be a beneficiary of the offsets than a victim of the murder.

      (this is slightly confounded by the fact that there’s no way to improve my life as much as murdering harms me, but if we put a veil of ignorance in, this could work)

      • Andy Harless says:

        But once you allow offsets, there’s a temptation to shift the baseline so that the net effect is negative-sum. Similar to the problem with Coasean bargaining: once you allow people to be paid off, what’s to stop someone from threatening to do something bad in order to receive the payoff?

      • Terrorism is positive-sum in the expectation of the people doing it. That’s their motivation… they know they are breaking rules, but they think their bomb or assassination is justified because it will Smash the System, with the consequence of liberating thousands or millions.

        The moral is not Terrorism Good.

        One of the morals is that calculations of expected consequence are subjective and unreliable. Another is that consequentialism is relevant to transitions and breakdowns in the accepted order, although this case is more of an attempted transition.

  19. jaimeastorga2000 says:

    This is a bullet I am weirdly tempted to bite. Convince me otherwise.

    If simple murder fails to rouse the necessary sentiment, try a transgression that is more likely to trigger your sacredness instinct. For example, can a man offset his rape of a woman by paying for some protective measure which prevents the rape of two women? How about an 1800s man that really wants to capture a slave from the coast of Africa, and in return gives enough support to the Underground Railroad that they can help free two slaves? What if an extremely rich guy is a homophobe who bashes his gay neighbor’s head in with a tire iron, and atones to the utility god by donating enough money to save two lives from measles in the undeveloped world?

    If you are still willing to bite the bullet after that, I suppose you are consistent.

    Amusingly, I am reminded of a time I shared your “Modest Proposal” essay in another forum, and cited its reasoning as the reason I would have no problem using The Box; not only did my revealed preferences show that I preferred having a laptop to saving a stranger’s life, but I could donate a trivial fraction of the million dollars to break even, or double that and save a life on net. I got called a “terrible person” for my trouble.

  20. blacktrance says:

    One way to think about this that may appeal to you (though you’ve probably thought of this already) is to analyze it from a contractualist perspective rather than from a utilitarian one. From behind a Veil of Ignorance, would you rather live in the world where you have a certain chance of dying of malaria but can’t be killed by Murder Offset Guy, or would you rather take a smaller chance of dying of malaria and risk being killed by him? It seems obvious that if the offset were sufficiently large, this would easily be justified, e.g. if Murder Offset Guy donates enough to cure malaria entirely, so the only question is how large the offset should be. Assuming you’d be indifferent between dying from malaria and being murdered, the (presumably relatively small) additional utility that Murder Offset Guy would get tips the expected value calculation in favor of the offset.
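
    A toy version of that expected-value argument (a Python sketch; every number is an arbitrary assumption, and it leaves out Murder Offset Guy’s own utility, which only pushes further in favor of the offset):

    ```python
    # Behind the veil: N people, with a baseline of m malaria deaths among them.
    # Murder Offset Guy kills one person but funds prevention that saves k lives.
    # Assume, as the comment does, that dying of malaria and being murdered are equally bad.
    N = 1_000_000
    m = 10   # baseline malaria deaths (arbitrary)
    k = 2    # lives saved per murder

    p_die_without_deal = m / N
    p_die_with_deal = (m - k + 1) / N   # k fewer malaria deaths, one murder added

    print(p_die_with_deal < p_die_without_deal)   # True whenever k > 1
    ```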

    • Scott Alexander says:

      I’m despairing of EVER finding an interesting problem where contractualism + veil works differently from utilitarianism.

      • blacktrance says:

        Animal rights/vegetarianism/veganism, assuming you’re guaranteed to be born as a human.

      • FullMeta_Rationalist says:

        This actually still feels like a tricky question to me. Because choosing a world sounds like a onetime offer where I should choose to cooperate and reap the benefits forever, while choosing to take an action in the world I’m given sounds like I should always defect.

        Also. By choosing a world, you’re exerting power over other inhabitants that you can’t exert in real life. It feels like cheating in the same sense as how Searle smuggled a planet-sized AI into a Lil Chinese tearoom.

        (No, I don’t know if I’d rather one box or two box.)

        Maybe defectors will always exist, just never en masse, because that would cause society to collapse + anthropic principle.

      • J says:

        Eliezer’s dust mote? Utilitarianism looks at things from God’s viewpoint, whereas c+v is from an individual’s perspective. So we could look for cases where an individual would choose differently from an outside observer. I’d take the dust mote over the remote chance of torture, but an outsider might not.

        Or insurance perhaps: costs more in the limit, but individuals bear costs nonlinearly.

      • macrojams says:

        I am not philosophically well-versed so I might be missing something, but aren’t these two trivially transformations of each other? In utilitarianism, you want to maximize global utility — that is, sum(u[i]*n[i]) where u[i] is some value of utility and n[i] the number of agents enjoying that level of utility, while in contractualism+veil you want to maximize the expected utility of a random agent, or sum(u[i]*n[i]/N) where N is population size.
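
        To make the claimed transformation concrete, here is a toy calculation (Python; the utility levels and headcounts are made up):

        ```python
        # Toy population: utility levels u[i], with n[i] agents at each level.
        u = [10, 5, 1]   # utility enjoyed by an agent at each level (arbitrary)
        n = [2, 3, 5]    # number of agents at each level
        N = sum(n)

        total_utility = sum(ui * ni for ui, ni in zip(u, n))   # what total utilitarianism maximizes
        expected_utility = total_utility / N                   # what a random agent behind the veil expects

        print(total_utility, expected_utility)   # 40 4.0
        ```

        With N held fixed across the options being compared, ranking by one quantity is the same as ranking by the other; they only come apart when the choice itself changes N.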

        • memeticengineer says:

          You’ve reduced contractualism + veil to average utilitarianism and are comparing it to total utilitarianism. Total utilitarianism and average utilitarianism give different results in some cases, in particular where a choice involves population size.

          That said, I’m not sure your claimed equivalence is correct. Some have argued that agents behind a veil of ignorance would (or should) use some other metric than expected utility of a random agent. For example, Rawls, who invented the veil of ignorance, argues that an agent behind the veil would choose according to maximin, i.e. maximizing the lowest utility in the distribution that would result from a particular society and ignoring the rest of the distribution. Others may consider lesser levels of risk aversion to be appropriate.

          Another aspect of contractualism + veil is that it commits you to choosing rules, not just actions one at a time. Rule utilitarianism exists, but is probably not the most popular sort.

          • macrojams says:

            While population size would have different weights in average vs total utilitarianism, my (naive) understanding of the veil is that there are a purported number of “philosophically possible” entities behind it, while the number of births that actually come to pass is the number of available slots that are associated with non-zero utility. I was not clear with this in my above comment — read N as the number of philosophically possible utility-enjoying entities. But in that case they do reduce to each other.

            Maximin does seem horribly risk-averse, and though one can point out that one would potentially be leaving money on the table, there is no real way to argue preferences. It seems orthogonal to the veil idea anyway, as we can have the veil set up with many different decision rules.

            As to the distinction between choosing rules ahead of time vs actions at the moment, I am also not sure I see it, assuming that entities behind the veil have enough time and calculation to pick very specific rules. So for instance, at a low level of specificity the rule “None shall murder” is one that could easily be picked behind the veil, but the rule “None shall murder unless they offset that murder by saving two lives” is one that would be picked EVEN MORE SO behind the veil, as this one increases your expected value after birth. This again only works with risk-neutral agents behind the veil though.

          • If contractualism+veil leads to rule consequentialism, then it leads to something robustly different to utilitarianism. Utilitarianism says it is obligatory to be a perfect altruist, rule consequentialism says merely following the rules is obligatory-ish.

      • Totient says:

        I wonder if, under some mild assumptions, it’s possible to prove that utilitarianism and contractualism + veil are always equivalent. I’m not quite sure what that proof would look like; I think we’d have to formally define a number of things that are still kind of fuzzy concepts. But that might be a net win, actually.

        What I’m getting at is, this feels a lot less like “despair that we can’t find examples where the theories are different” and more like “Hey, potential Grand Unified Theory of ethics!”

      • TheAncientGeek says:

        If contractualism+veil leads to rule consequentialism, then it leads to something robustly different to utilitarianism. Utilitarianism says it is obligatory to be a perfect altruist, rule consequentialism says merely following the rules is obligatory-ish.

        (Deliberate duplicate)

        • Totient says:

          I should have clarified – under “mild assumptions” I was thinking of including something along the lines of “human beings consistently exhibit the following preferences/biases” which would cause contractualism + veil to be equivalent to utilitarianism.

          Of course “mild assumption” is so ill-defined that I’ve really just made a completely unfalsifiable conjecture.

    • memeticengineer says:

      I’m not sure this is a correct application of contractualism + veil. Behind the veil, you are presumably choosing a general rule that would or would not allow Murder Offset Guys in, not just judging one specific Murder Offset Guy’s plan. Even if you could be ok with one Murder Offset Guy (after all, the chance of being the one person murdered is very low), that does not necessarily mean you would endorse a general rule allowing them. First, an offset as large as curing malaria is hard to come by, so not a lot of Murder Offset Guys could meet this standard – perhaps not enough to justify a general rule. Second, a general rule of this sort would incentivize people with the means to cure a horrible disease to do so only if allowed to commit a murder, whereas under other rules they may have done so anyway. There are lots of people who would be incentivized by a smaller prize than legal (or ethical) permission to commit a murder. Thus, I am not sure a rational agent would choose Murder Offset World even with a very high offset threshold.

      • blacktrance says:

        If there are enough Murder Offset Guys to wipe out malaria, then the optimal size of the offset increases, but the structure of the argument for allowing it remains the same. If Murder Offset Guys had to donate millions or billions of dollars to be able to commit murder with impunity, it would not surprise me if the world would be a better place.

  21. Azure says:

    Thank you for this post, since Part II is something I’ve been bothered with a lot lately. I have a weight problem and I’ve found that a high fat/protein diet works so much better than anything else I’ve tried. (I think it’s just a matter of satiety and ease of compliance, since I’m eating under 2000 calories and don’t feel deprived.)

    Things were all going well, when I was playing in a roleplaying game, and something made the character in said game realize that he had an ethical obligation, from a utilitarian standpoint, to not give creatures capable of feeling pain or fear reason to feel any pain or fear. And then I had the sudden realization, “Oh, crap, he’s right!”

    I’ve been considering switching to just eggs and dairy with no actual meat, but there’s a sense I have that a good quality of life and being slaughtered for meat is probably better than a bad quality of life and not being slaughtered for meat. (And in practice I think even egg-laying hens get slaughtered, too, and made into stewing hens, so.)

    The compromise I’ve come to so far is buying cage-free eggs, grass-fed dairy (since I assume it’s pastured), and pastured meat whenever I find them on sale, since that seems to have the best quality of life.

    But, your ethical offset notion made me realize that what I should almost certainly do is direct some of my charitable giving toward research programs to make tissue-culture meat a consumer reality. (Is there such a charity?) And also to animal rights organizations working for humane farming until it becomes a reality. So, thank you.

    As far as Section III goes, would the Veil of Ignorance be any help? Being /at risk/ of a rich person picking you out to kill you just because he can (and setting up a public works project to offset it) is No Fun. And, being that it has a personal agent and that the chain from agent to effect is short and that it’s obviously targeted at you if it happens to you, it is the kind of No Fun risk that humans are biologically compelled to be Very Unhappy about, even if it’s very unlikely to happen to them. If nobody knew if they were the person who would have No Property Taxes or the Rich Person or the Person to Be Killed, I think it’s unlikely that you’d find enough offset in any realistic situation that people would decide that this kind of Moral Offset was an acceptable idea.

    • Scott Alexander says:

      Regarding vegetarianism, I’m glad I helped. I donated $1000 to Humane League this year on sort of the same principle, although it funged against other charity donations so it wasn’t really a moral victory.

      Regarding the veil of ignorance – suppose rich guy promises that for each murder he commits, he will donate enough money to cure ten cancer patients in the First World who otherwise would not have been cured.

      Veil of ignorance, I don’t know whether I’m going to be a target or a poor cancer patient. But it seems ten times more likely I’ll be the latter. So I should support the policy of letting the rich man do that.
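
      As a toy expected-value check of that reasoning (Python; the utility numbers are placeholders that assume being murdered and dying of cancer are equally bad):

      ```python
      # One murder buys cures for ten otherwise-doomed cancer patients.
      # Behind the veil, you are equally likely to be any of the eleven people affected.
      p_victim = 1 / 11
      p_cured = 10 / 11

      u_murdered = -1.0   # placeholder: disutility of being the victim
      u_cured = 1.0       # placeholder: benefit of being cured rather than dying of cancer

      expected_change = p_victim * u_murdered + p_cured * u_cured
      print(expected_change)   # ~0.82, positive, so the naive calculation says allow it
      ```

      The conclusion flips only if being murdered is weighted as more than ten times worse than dying of cancer.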

      • Anonymous says:

        Veil of ignorance is probably a mistake on my part, yes. Though I do strongly suspect that the kind of risk that a rich person killing you poses is the sort that makes people fantastically unhappy even if it is fantastically unlikely (similar to the way people worry about terrorism and the knock-out game versus climate change and automobile accidents). So I suspect that in any /realistic/ version of the world, you’d have to produce SO MUCH externality to counterbalance the unhappiness that it wouldn’t be practically feasible. I might be wrong.

        As for the case where you’re a criminal mastermind and nobody can find out, I can’t really think of a good argument against it. Though I’m not sure if that’s really a problem. Utilitarian arguments about unrealistic worlds feel a bit like worrying about charged black holes with fields so strong they’d be naked singularities, or negative mass matter. They’re interesting, but if I think of ethics as how to get from the world we’re in now to a better world, it feels like more of an engineering discipline than a physical science.

      • Veil of ignorance, I don’t know whether I’m going to be a target or a poor cancer patient. But it seems ten times more likely I’ll be the latter. So I should support the policy of letting the rich man do that.

        Yeah, but a society where rich people are allowed to go around murdering whoever they please sounds kind of horrible to live in. This might be scope insensitivity on my part, but it kind of feels like a society where rich people are allowed to murder but cancer is completely eradicated would be worse than the one we have right now.

        • J says:

          It’s hard to imagine the victims being random NPCs interchangeable with the people who get saved, although I think that’s what Scott intends. Practically, we’d expect evil rich people to want to kill off the Mahatma Gandhis of the world preferentially–precisely the people we wouldn’t trade for a few random lives saved.

          • Lambert says:

            In that situation, you save some more lives to compensate for their lost benefit to others.

          • kappa says:

            “In that situation, you save some more lives to compensate for their lost benefit to others.”

            But here I think we’re getting into the ways in which people actually aren’t interchangeable.

            If it is permissible for the evil rich people to select the most morally good person they can find, murder that person, and save some calculated number of random lives to keep the balance, and they can do that as many times as they want, you end up with a world in which you are likelier to die if you are more obviously morally good.

            Even if “morally good” isn’t their selection criteria, the world is still one in which your chance of dying is influenced by how well your characteristics fit the murder preferences of evil rich people. I do not like that world.

    • grendelkhan says:

      If it helps, this sort of thing looks promising. (It reads like an ad, but still.)

  22. Mark says:

    Scenario 1: You’re a millionaire who wants to donate all your money to save a thousand lives. A genie appears and tells you he’ll veto your donation unless you press a button that murders your rival.

    Scenario 2: You’re a millionaire who wants to murder your rival. A genie appears and tells you he’ll veto your murder unless you donate all your money to save a thousand lives.

    The outcomes of your choice in either case are the same, but the latter seems way more taboo because it involves the psychological perversity of trying to kill someone for its own sake.

    • onyomi says:

      This is a very good point and another problem with utilitarianism: almost everyone has a strong ethical intuition that intentions matter when it comes to ethical evaluation of actions.

      • RCF says:

        I alieve that intentions matter in the sense that is currently being discussed, and strongly believe that they matter in the sense that process matters more than results, i.e. giving someone tea that you erroneously think is poisoned is immoral, while giving someone tea that you erroneously don’t think is poisoned is not immoral.

        • Matthew says:

          I don’t think this really even needs to be phrased in terms of intentions.

          Consequentialism, at least for me, is about choosing actions based on the reasonably anticipated consequences of those actions.

          • Harald K says:

            One of the most important questions then, is whether/when you allow other people’s moral choices to factor into that reasonable anticipation.

            If you anticipate that other people will make the wrong moral choice as an indirect consequence of your actions, how much blame falls on you, if any?

          • And even taking others’ freely-chosen actions into account, you can still distinguish ordinary predictable-but-freely-chosen actions (e.g., a riot resulting from instigation) from attempts at blackmail, such as the one from the perverse genie Mark proposes.

  23. Jiro says:

    I think you have just invented a correct argument against utilitarianism. And you’re willing to bite the bullet in the sense of accepting the weird result, but you’re not willing to bite the bullet in the sense of giving up your cherished utilitarianism that you proved does not actually describe your moral beliefs.

  24. James Miller says:

    It’s socially efficient for you to take X from me if you value it more than I do. If I own X, the value of X to me is either the most I would pay to not lose X, or the least you would have to give me to take X. Usually these two values are not that far apart. But if X is my life and I’m selfish then these two values are all the money I have, and infinity. Consequently, it’s theoretically difficult to determine when it would be efficient to let someone kill me because they value me dead more than I value being alive.

    • Levi Aul says:

      And yet, of course, a Friendly AI needs to want exactly this: to prefer that it (an instantiation of v1.0 of its software) “die” such that its successor (an instantiation of v1.1 of its software, continuing with the same knowledge-base) may be “born.” (That is, to allow itself—including its goal-set—to be upgraded.)

    • This reflects one of the ways in which economic efficiency is an imperfect proxy for utility. It isn’t that your life has infinite utility to you–if it did, you would never take any chances at all in exchange for any finite utility payoff. It’s that dollars have no value to a corpse. So dollar willingness to pay is a poor proxy for utility cost in that context.

  25. Medl says:

    I think the dilemma here comes from our inability to always do the perfectly good action while staying happy and sane. Ethical offsets seem to be a way to compromise; do what you have to do to stay happy, but try to offset any harm you cause with corresponding ethical actions. This sounds reasonable.

    But I don’t think this is a carte blanche to do anything that makes you happy as long as you offset it properly. For most unethical actions that keep you happy, there is a similar more ethical action that will also keep you happy.
    For example, instead of visiting a country with an oppressive dictator and donating to the resistance, maybe you could visit a free country and donate to the resistance.
    And if you’re rich, instead of murdering the guy you hate, maybe you can get him a job on the other side of the country and ban him from all your online circles, or if you really just want to hurt him and will never rest until you have, maybe hire thugs to beat him up or something.

    I think, in the end, that offsets are something of a last resort. If you have to do something unethical to maintain your sanity, fine, do it, but then try to offset it with other actions. But before you do that unethical action, it’s best to take a few moments to be sure that it really is necessary for your continued happiness.

    • Anonymous says:

      Agree. Or to formalize it a bit, Scott’s thought experiment assumes fixed preferences/utility functions, whereas actual humans generally have the capacity to e.g. decide they’re better off just not worrying about the guy they hate and doing their best to put them out of their mind, or trying vegetarianism to see if their love of meat will just kinda fade over time, and so on.

      Obviously, we’re not great at this, but then again we’re also not very good at calculating the harm of things we want to do vs. the good of attempted offsets. Maybe the human-implementable utilitarian rule is “Only offset harmful actions if the cost (sanity, extreme mental effort, despair over the use of trying to be a good person in the first place) of changing your preference for the harmful action is greater than the harm to be offset.”

      So offsetting your flight for a trip of a lifetime is acceptable, on the basis that changing your mindset to not care about said trip may be difficult, painful, and identity-harming, while you can offset the CO2 at a relatively low cost. But offsetting a murder is still forbidden, because however miserable you might be at not being able to off your nemesis, it’s not as great as the harm of killing.

      (I realize this is extremely ad-hoc and breaks down in the case of utility-monster-like entities with weird preferences, but it seems pretty workable for humans.)
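
      A minimal sketch of that proposed rule as a decision procedure (Python; the numeric scale is invented, and putting “sanity costs” and “harms” on one axis is exactly the ad-hockery admitted above):

      ```python
      def may_offset(cost_of_changing_preference: float, harm_to_be_offset: float) -> bool:
          """The proposed rule: offsetting is allowed only when giving up the desire
          would cost you more (sanity, effort, despair) than the harm itself."""
          return cost_of_changing_preference > harm_to_be_offset

      # Flight for the trip of a lifetime: identity-harming to forgo, modest harm to offset.
      print(may_offset(cost_of_changing_preference=50, harm_to_be_offset=10))      # True

      # Murder: however miserable sparing your nemesis makes you, it's nowhere near the harm of killing.
      print(may_offset(cost_of_changing_preference=50, harm_to_be_offset=10_000))  # False
      ```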

  26. Nesh says:

    Much of human morality is based more on what types of actions or traits make what treatment of people morally correct (as opposed to maximization of a goal), and often has many rules for overriding this or that other rule. Given human moral intuition and current culture, it seems likely that if you were rigorously committed to an outcome-based goal system, it would almost certainly seem horrific to most forms of morality. There is the issue of its effects on your value system if you don’t die or erase your memory afterward, but caring about meta-morality would probably be sustained by offsets. I suspect much of common morality is about making people less willing to act, and the ideas that win out are those that find deceptive ways around this.

    Also, on a related note, I think people drastically overstate the effects of normalizing harmful actions. For the vast majority of cases, the loosening of the norm’s effect (for cases where there are already a number of examples) would have to be less than one net random action of that category. If this wasn’t true, the effects would cascade and any breach in a norm would sink it. Since there often isn’t any reason to assume your action(s) push the expectations more than the average, I think a better heuristic than “don’t cross lines except in extreme circumstances” is “cross lines if the net effect of the action plus one random action of the same type is positive”.

  27. Erik says:

    Okay, fine. Get the irrelevant objections out of the way first and establish the least convenient possible world. I’m a criminal mastermind, it’ll be the perfect crime […] There is no God, or if there is one He respects ethics offsets when you get to the Pearly Gates.

    I’m not convinced this last part is covered by saying “least convenient possible world”.

    First, positing that God respects ethics offsets seems like begging the question. If you’re saying “Pearly Gates” in the first place you seem to have in mind one of those interpretations of God who is if not author, then at least interpreter or judge of morality (rather than, say, the lares or kami of a specific road or the like) while also being super knowledgeable about why/whether ethics offsets should be respected, and I don’t see how it makes sense to suppose that this God respects ethics offsets while still leaving anything to debate about whether we should respect ethics offsets.

    Second, there are philosophical arguments which say that God is a necessary being. For instance, the cosmological argument. In brief: there exist some contingent things, contingent things have causes, infinite causal regress is out, circular causation is out, therefore the First Cause is necessary not contingent.
    (This is not the entire cosmological argument and I will be sad if people throw rebuttals at a one-sentence summary that is not meant to be convincing but to exemplify a type.)

    While I usually like the LCPW framing, I think you may be overusing it out of habit. Your original post on LCPW opens with specific contingencies such as genetic mismatch; I don’t think supposing these away for purposes of clarifying edge cases is at all in the same category as supposing away entire ethical injunctions such as a literal “God forbids”. Perhaps it would be more appropriate to instead write something like “Most interesting possible ethical framework” under which ethics offsets are worthy of debate, because there are definitely ethical frameworks under which ethics offsets are a solved question in both directions.

    • RCF says:

      Well, I have objections to the cosmological argument, even beyond your summary. Besides the motte-and-bailey nature of the argument, if “God” causes the universe, either the universe is a necessary effect of “God”, or “God” is merely sufficient to cause the universe. If the universe is a necessary effect of “God”, and “God” is necessary, then it follows that the universe is necessary, i.e. not contingent. If the universe is not a necessary effect of “God”, then there must have been a cause of the universe other than “God”. It is impossible to explain contingent phenomena in terms of necessary ones, thus the cosmological argument is nonsense.

    • Anonymous says:

      Sure, if you accept divine command theory, then positing God’s judgement makes everything else redundant. But I don’t think Scott accepts it: he’s trying to isolate the ethical question, assuming away the threat of the police or the big bully.

      • Erik says:

        But I’m not necessarily accepting divine command theory here. “God as author” is a kind of DCT, but even in the “God as judge” scenario – ie. there’s some correct moral standard separate from God, and God’s job is to check you against that standard to see if you get in the Pearly Gates – then the fact that God accepts ethics offsets implies that ethics offsets appear in the correct moral standard.

        • Anonymous says:

          You’ve lost the thread. You asserted that you had exhausted the list of reasons Scott might have mentioned the Pearly Gates. I gave a new reason; moreover, I believe it is the real reason for Scott’s choice. My prefatory concession is a minor point, so it shortened your list to just one item.

          But experience should have warned me not to address metonymy to an advocate of the cosmological argument.

    • Tracy W says:

      This is not the entire cosmological argument and I will be sad if people throw rebuttals at a one-sentence summary that is not meant to be convincing but to exemplify a type.

      I have never seen a case of the cosmological argument that got any more plausible than the one sentence summary you’ve just posted here. Do you have a link to one, ideally a clearly and accurately written one?

  28. Wirehead Wannabe says:

    My first thought is that this might somehow affect your motivations and/or ability to give to charity. Say I have my annual bonus lying around that I haven’t yet decided what to do with. I can decide to A) spend $334,000 on a ten-to-one offset for my murder or B) do nothing. Perhaps in scenario B) I later feel motivated to donate the same amount to charity anyway, so the end result is the same, but without the murder. I have no idea how this works in real life, but it intuitively seems like you’re somehow going to be taking from a motivational (or monetary) bank account when you decide to offset.

  29. onyomi says:

    I think my biggest problem with this (and one of my bigger problems with utilitarianism in general) is the interpersonal commensurability aspect. In some abstract sense, all people’s lives are equally valuable. But from any given person’s perspective, his/her own life is generally most valuable, followed by the lives of close family and friends, followed by people who live in the same town, have the same interests, live in the same country…

    Now if you kill my best friend but say “not to worry! I saved the lives of two people in Africa you’ll never meet!”, then that is not going to make the ethical scales “even” from my perspective. To make it “even,” you’d need to somehow make the lives of those affected by the death as good as or better than they would have been had the person not died. This is likely impossible, unless you kill someone whose friends and family all secretly hate him. Plus, you can’t make any meaningful restitution to the victim himself, unless you believe in burning spirit money. Therefore, you can’t really make amends.

    Though I personally don’t accept utilitarianism at all, it seems to me that any form of it I could accept would have to move away from these abstract notions of general utility and toward consideration of justice for individuals.

    • blacktrance says:

      Suppose someone credibly offers you the following deal: they will cast a spell that ends all death and suffering (except that which is caused by the spell), and the only downside is that it will kill one person per year. Would you accept this deal? The expected value is clearly positive compared to the status quo, so you should accept it, and if one year the spell chooses to kill you, it would still have been a good deal at the time.

      This is similar, though the benefits are much smaller.
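
      A rough back-of-the-envelope version of that expected-value claim, just to make the comparison concrete – the population and death-rate figures below are illustrative assumptions, not part of the original deal:

      ```python
      # Rough expected-value sketch for the "spell" deal described above.
      # All numbers are illustrative assumptions, not claims from the comment.
      population = 8_000_000_000      # assumed world population
      baseline_death_rate = 0.0075    # assumed ~0.75% of people die each year now
      spell_deaths_per_year = 1       # the spell kills one person per year

      deaths_without_spell = population * baseline_death_rate   # ~60 million per year
      deaths_with_spell = spell_deaths_per_year                 # 1 per year
      your_annual_risk_under_spell = spell_deaths_per_year / population

      print(f"Expected deaths/year without the spell: {deaths_without_spell:,.0f}")
      print(f"Expected deaths/year with the spell:    {deaths_with_spell}")
      print(f"Your annual risk of being the victim:   {your_annual_risk_under_spell:.1e}")
      ```

      On those assumptions, the deal cuts everyone’s annual risk of death by several orders of magnitude, which is the sense in which the expected value is “clearly positive” even for the person the spell eventually kills.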

      • onyomi says:

        Well, this gets back to my problem with collapsing “right” and “good.” To me, nothing “wrong” can be made “right” by any offsetting benefits, as I view “right” and “wrong” as objective, whereas “good” and “bad” all depend on one’s goals (maximize my happiness, maximize the happiness of the largest number, etc.).

        To me, it does not seem illogical to say “this is wrong, but it’s a good idea” or “it may be right, but it’s a bad idea.”

        Utilitarians want to go to the logical extreme and say “if you could save the whole world from destruction by stealing one penny from a rich guy, should you do it?” To me the answer is “yes, you should do it, but stealing is still wrong. It’s just a good idea in this case.”

        The case you give seems similar. It is still “wrong,” in that I have no right to make this deal on behalf of everyone else, as it will result in the deaths of some people who wouldn’t have died, even though the total amount of death and suffering will be much less. That said, I would still take the deal for the same reason I’d steal the penny to save the world.

        • blacktrance says:

          If the expected value of doing what’s good is greater than that of doing what’s “right”, why do what’s right? Why care about “right” at all?

          (Though if “the right thing to do” is not even the same as “the thing you should do”, you’ve probably confused your concepts somewhere.)

          • onyomi says:

            Well, maybe I can drop this distinction between what is “right” and what one “should” do, but I am slightly loath to, since I think it colloquially maintains an important distinction between what abstract ethical principles tell us versus what abstract ethical principles applied to a real-life situation tell us.

            I don’t think Huemer makes this distinction. Rather, he describes what one “should” do as an “evaluative fact” which may, I believe, include both “pragmatic” (utilitarian) and deontological, virtue-related factors, etc.

            One of Huemer’s most interesting (seemingly obvious, but not) insights is that one need not ground morality fully in any one particular parameter (duty, greatest good for greatest number, etc.), other than one’s own intuition of what seems to be right or wrong.

            This is why it seems like, theoretically, consequentialism and intuitionism may be compatible, in that I’m willing to accept the proposition, “you should do whatever produces the best consequences.” The problem for me lies in the utilitarian definition of “best.” For me, the goodness or badness of the consequences cannot be reduced to any one empirical factor like how many happiness points it adds to, or subtracts from, the universe. It must take into account virtues, duties, etc., which can only be understood intuitively. And, of course, the degree to which it is appropriate to consider utilitarian factors itself relies on intuition.

            Therefore, ethical intuition supervenes on other factors, which is why I’d still describe myself as an ethical intuitionist.

          • blacktrance says:

            (We’ve disagreed about this before, with no conclusion, but I’ll give it another try.)

            One of the major problems with intuitionism is that there’s supposed to be (if moral realism is true) a way to determine objective moral truths, which is something with which intuitionism has difficulty, because people have different intuitions, and it’s not at all clear that they’d come to agree on a common moral perception if they can’t appeal to anything other than intuitions. If I intuit that X is moral, and you intuit that it’s immoral, must we necessarily be able to come to agreement if we can only refer to our intuitions? You can certainly convince me that X seems intuitively wrong to you, but I can just say that your intuitions are wrong, and you can say the same about mine. How do we decide who’s right?

            In addition to that, even within one person, one’s moral intuitions need not be consistent. When two of your intuitions conflict, how do you decide which one you go with? To combine this with the previous objection, if you feel that one of your intuitions is stronger, and I, having faced the same conflict, feel that the other intuition is stronger, how do you convince me that I should weigh my intuitions differently?

            It seems to me that the only way moral realism can be true is if it doesn’t rely on something as subjective as intuitions.

          • “If the expected value of doing what’s good is greater than that of doing what’s “right”, why do what’s right? Why care about “right” at all?”

            1. Expected by whom, with what level of certainty?

            2. Because if most people do what is deontologically right most of the time, that enables trust and cooperation. Consider a prisoner’s dilemma.

          • onyomi says:

            I could explain again why I don’t think supporting ethical intuitionism makes moral debate impossible, but what if it did? Would that in any way reflect on the veracity of the stance?

          • blacktrance says:

            TheAncientGeek:
            1. Expected by whoever is doing the evaluation, e.g. when you’re doing the evaluation, it’s expected value for you, when I’m doing the evaluation, it’s expected value for me, and so on.

            2. The misuse of “deontology” aside (if it’s justified by good consequences, then it’s not deontological), that just means that establishing rules and enforcing them is a good idea because it’s what has the highest expected value, not that acting by what has the highest expected value isn’t always right.

            Onyomi:
            My objection is epistemological, not about discourse, sorry for conflating the two. When two moral intuitions are in conflict (whether between two people or within one person), how do you determine which one is correct? When so many people disagree with you about morality, how do you know whether your intuitions are correct?

          • onyomi says:

            Blacktrance,

            What I’m saying is, what if there IS no way to determine which is right? That wouldn’t make the intuitionist stance wrong, just inconvenient (I don’t actually think there’s no way to tell, but hypothetically).

          • blacktrance says:

            That would make the intuitionist attempt to ground moral realism highly questionable, because it would make moral truths impossible to determine.

          • onyomi says:

            I don’t see why the ease or difficulty of determining something has anything to do with its truth or falsity.

            So do you believe in moral realism (the existence of objective moral truth), but not in intuition as a basis for it? In what, then, do you ground it? Greatest good for greatest number? If so, then on what is the very notion of “good” based?

            I think the moral realist stance is clearly superior in terms of making ethical debate possible, since otherwise one is debating pure subjective opinion. But if one accepts the realist position, what can be the ultimate basis of what “is” right other than what “seems” right, given the is-ought gap?

          • blacktrance says:

            I don’t see why the ease or difficulty of determining something has anything to do with its truth or falsity.

            In something like physics, or in reductive ethical theories, we can point to some objective and (at least in principle) shareable feature of the world, and that is what determines truth. In intuitionism, moral truth is in principle not like that. For example, when two utilitarians disagree, they can present evidence to each other that, independently of their evaluations, determines the moral goodness of an act/rule/etc – moral truth is determinable. When two intuitionists have a sufficiently significant disagreement, they have no way of determining which one of them is right. When your moral intuitions disagree with someone else’s, the relevant moral truth may be completely unknowable. If this isn’t bad enough, if moral truth is unknowable, then there’s the question of how we know whether there’s any moral truth at all, and whether our intuitions are nothing but our own ideas, rather than perceptions.

            So do you believe in moral realism (the existence of objective moral truth), but not in intuition as a basis for it? In what, then, do you ground it?

            Yes, I am a moral realist, and I ground objective moral truth in instrumental rationality. Moral truths are equivalent to or are a subset of the truths about what follows from the application of instrumental rationality to oneself.

            But if one accepts the realist position, what can be the ultimate basis of what “is” right other than what “seems” right, given the is-ought gap?

            You can derive “ought” from other “ought” (e.g. “If I want X, and Y gets me X, then I should want Y”), and there are also “is”-claims about “ought”s (e.g. “I want X”). But you can’t derive “ought”-claims exclusively from “is”-claims that aren’t about “ought”s.

          • onyomi says:

            Then whence do you derive any “oughts” whatsoever, but from your intuitions that, for example, life, happiness, and harmony are better than death, suffering, and discord?

          • blacktrance says:

            Those aren’t good because of intuitions, but because they are instrumentally valuable for the fulfillment of my desires (as in the case of harmony), or because they feel qualitatively desirable (as in the case of happiness).

          • onyomi says:

            Okay, but I still don’t see where “good” comes in, unless you define “good” as “that which fulfills or helps to fulfill my desires.”

            As far as I can tell, you are defining morality in purely instrumental terms: “if you want to be happy then you should do things that tend to bring about your happiness, if you want to be miserable, then you should do things that tend to make you miserable.” But this isn’t moral realism at all, as it locates the “good” in the eye of the beholder.

          • blacktrance says:

            As far as I can tell, you are defining morality in purely instrumental terms: “if you want to be happy then you should do things that tend to bring about your happiness, if you want to be miserable, then you should do things that tend to make you miserable.” But this isn’t moral realism at all, as it locates the “good” in the eye of the beholder.

            This is getting close to disputing definitions, but I’d argue that morality being instrumental and grounded in hypothetical imperatives is under the umbrella of moral realism. Moral realism is the combination of three claims: moral statements are truth-apt (that is, they attempt to describe a feature of the world), some moral statements are true, and the truth of moral statements is not determined by attitudes and/or opinions (i.e. the rejection of subjectivism). By that definition, this is within moral realism – there are true statements about what is instrumentally rational and desire-fulfilling, and their truth isn’t a matter of opinion.

          • onyomi says:

            Yes, but why should we take that which is desire-fulfilling to stand in for that which is ethically good?

            Surely if I am someone who just really likes murdering people and I can reasonably believe that regular murder is the best way to achieve my happiness, that would not, therefore, make murder ethically good?

          • blacktrance says:

            In part, because of established classifications within ethical theory. It may generate counterintuitive conclusions, but that’s no problem if you reject intuitionism. It checks the boxes for being morality, such as providing answers to questions labeled “moral”, providing a standard for how one should act, being binding, etc.

            But ultimately, what one chooses to label as moral is a matter of carving reality differently. You could say that morality is necessarily binding and non-rejectable, but then you have to say that it would be moral for an internally consistent Caligula-alien that enjoys torture to engage in it. You could restrict the label “morality” to what a subset of minds should do, but then there are (at least possible) beings that can rationally reject being moral. Or you could argue that morality necessarily requires particular behavior, but in that case it’s also possible for someone to rationally reject it and have insufficient reasons to follow it – you can exclude certain acts from “morality”, but in doing so you run the risk of creating situations in which one shouldn’t be moral.

            If an extremely powerful internally consistent Caligula-alien kidnapped me and told me it would torture me, I would have no arguments as to why it shouldn’t. Certainly, I wouldn’t want it to – but that’s not enough. It has to have a sufficient reason not to torture me, and being internally consistent, it lacks such a reason, so it should torture me. Whether you want to label that as moral or not is a matter of terminology.

            “There is no morality, there is only instrumental rationality” and “There is morality, and it’s instrumental rationality” could be equivalent statements, but I prefer the latter because of various associations with the word “morality”, and because despite their potential equivalence they’re associated with somewhat different philosophical positions.

          • onyomi says:

            I’m pretty sure that, by most standard definitions, your view would not fall within the realm of moral realism, but rather moral nihilism, since it seems that you believe that that which most people mean when they discuss morality does not exist.

            When most people say “x is morally good,” they do not mean “x is what helps me achieve my desires.”

            You could say, “I believe in God, but I define ‘God’ as ‘spontaneous order which arises in the universe,'” but that would be needlessly confusing. Better to say “I don’t believe in God, but I think one can feel a sense of reverence similar to that theists claim to feel for God, but aim it instead at the spontaneous order of the universe.”

            And even the above seems to work better than “morality is instrumental rationality,” because I don’t think instrumental rationality even fits the mental “slot” occupied by morality in most people’s minds. In most people’s minds “how best to get what I want” and “what is ethically proper” are two completely different issues, loosely correlated at best, and often inversely correlated.

          • blacktrance says:

            When most people say “x is morally good,” they do not mean “x is what helps me achieve my desires.”

            In general, yes, and yet ethical egoism is classified as a normative ethical theory (and therefore falls under realism), as is contractarianism, which grounds moral rules in agreements motivated by instrumental rationality. Even if the fundamental justification is usually not like what most people use, what it generates is more similar to the moral norms believed in by other realists than to the moral nihilists’ rejection of moral norms. For example, the qualitative feeling of holding the proposition of “killing innocent people is (usually) wrong” is similar between me and a substantive realist, whereas the nihilist would reject the description of killing as wrong. In the delineation of realism vs non-realism I wrote in a previous comment, the nihilist would say that moral claims aren’t truth-apt, or that there are no true moral claims, which leads them to hold substantively different positions in applied ethics, political philosophy, etc. I say “X is wrong because [reasons grounded in instrumental rationality]”, the substantive realist says “X is wrong because [reasons grounded in something else, e.g. intuitions]”, the nihilist says “X is not wrong”, and the substantive realist and I will probably react similarly to each other when confronted with X, and the nihilist will react differently.

            For example, when a government does something unjust, the substantive realist and I both say that it acted wrongly, though we may ground it differently, whereas the nihilist says that there’s no truth of the matter and there’s no such thing as wrong.

          • onyomi says:

            Okay, then, it sounds more like a type of reductionism, a la Rand’s “the good is that which tends to promote life.” The problem I see with any variation on that (any attempt to define the evaluative in terms of the non-evaluative) is, it renders nonsensical or changes the meaning of perfectly grammatical questions, like “is life good?”

            If “good”=”promotes life,” then “is life good”=”does life promote itself?” which is not what most people would understand by that question.

            Similarly if “good”=”promotes self-interest,” then the question “is self-interest good?” becomes nonsensical, even though pretty much everyone understands what is meant by it.

          • blacktrance says:

            It doesn’t change the meaning of “good” – it still means “that which should be done” or “that which should be promoted”. “Good” doesn’t mean “promotes life”, it just so happens that promoting life (in a loose sense) is that which should be done. And the question “Is promoting self-interest what should be done?” is not nonsensical.

          • onyomi says:

            But then you’re again avoiding providing a definition of the good. “Good”=”that which should be done” is basically circular and tells me nothing about the basis for determining what is good.

            It seems to me you really are an intuitionist, because you have an intuitive sense that “that which promotes self interest” or some variation is a good objective proxy for “the good” or “that which should be done.” But, again, where do you get your notion of “the good” or “what should be done,” other than from intuition? Unless you insist that by “should be done” you just mean “that which is efficacious for achieving a goal,” which, again, is merely a description of “is,” not a justification for “ought.”

          • blacktrance says:

            The definition of “good” is “that which should be done”. That’s something realists (and even some non-realists) can agree on. The disagreement is about what (if anything) satisfies that definition. I hold that the only existing oughts can be derived from other oughts, ultimately derived from terminal values. This has nothing to do with intuition – I’m not intuiting that anything is good, it’s that the only thing that satisfies “that which should be done” is what follows from instrumental rationality, based on having sufficient reasons to do something.

          • onyomi says:

            If I understand you correctly, then I would definitely not call you a moral realist, since you don’t believe that that which most people mean when they refer to ethics exists.

            When people talk about the “ought” in the “is-ought gap,” they aren’t talking about “if you want x, then you should do y,” they are talking about “you ought to do y, period.” If you are only willing to make the former sort of statement, then you are only evaluating efficacy, not morality.

            Reminds me of the claim by Mises (though I don’t think non-Austrian economists would disagree with him on this) that economics is a non-evaluative science: it may predict that if you do x, prosperity will result, and that if you do y, poverty will result, but it cannot tell you why or even whether you should prefer prosperity to poverty. The type of morality you are describing seems to be like this: it can only tell you “if you want x, do y; if you want a, do b,” but it can’t tell you anything about whether x is “better” or “worse” than a.

            In this respect, it does not fulfill the function of an ethical system, which is to provide absolute “oughts,” not contingent “oughts.” You may say we don’t need/can’t have absolute “oughts,” but that is equivalent to saying we don’t need/can’t have ethics, at all, because non-evaluative cause-effect descriptions do not fall into the realm of ethics as commonly understood.

          • blacktrance says:

            And yet morality being contingent is the position taken by Philippa Foot in “Morality as a System of Hypothetical Imperatives”, as well as by the previously mentioned egoists and contractarians, and all of them are considered realists. Substantive realism (i.e. the kind that says “you ought to do Y, period”) isn’t the only kind of realism – there’s also constructivism. It’s true that when people argue about ethics, whether they’re arguing for or against ethical truth, they tend to argue for or against substantive realism, but while you can certainly carve metaethical positions into “substantive realism” and “everything else”, the latter category includes a variety of views, some of which are more similar to substantive realism than they are to other positions in the “everything else” category. It’s a strange category if it includes both “X is good because it is what an agent (or specific kind of agent) would want if they underwent a process of ideal rational deliberation” and “‘X is good’ is a meaningless statement”, while not including substantive realism. Substantive realists and constructivists disagree about the grounding of moral claims, but they have more in common with each other than they share with non-realists, who reject moral claims altogether.

            And, depending on the interpretation, it does provide absolute oughts. “If you have relevant features F1 and F2, and you’re in Situation S, then you should perform act A”. The obvious objection to this is that this is clearly hypothetical – but it seems no more hypothetical than a utilitarian saying “If you can push a person in front of a trolley, and you can save more people by doing so, then you should push them, but otherwise you shouldn’t” – and few would say that statement is hypothetical.

            Also, it could be argued that standard “ought”-language relies on implicit common motivations, i.e. “You should do Y” unpacks to “You should do Y if you want X, and wanting X is such a universal that I’m not even considering someone not wanting it”.

        • Anonymous says:

          “To me the answer is “yes, you should do it, but stealing is still wrong. It’s just a good idea in this case.””

          Depends on how you define stealing. If you define it as taking property against the reasonable will of the owner, it’s not stealing in this case.

          This obviously also covers such cases as forcibly removing weaponry or other dangerous objects from the residence of a paranoiac whom you reasonably believe will use them against people he imagines are persecuting him.

      • RCF says:

        I see a moral difference between wanting to kill someone and being willing to save ten people to “offset” the murder, versus being able to save ten people but only being able to do so if someone is killed.

        • blacktrance says:

          Okay, suppose that it’s an evil wizard who will only cure all death and suffering if you give him permission to kill someone every year. (And he can’t kill unless you give him permission, and if you give him permission, he can’t be stopped.)

          • Drew says:

            Viewed from a third-party perspective, the Evil Wizard is pretty much the trolley in a trolley problem.

            He’s an inexorable force. We just decide how we want to redirect him.

            From the Wizard’s perspective, the ‘magic for murder’ constraint is totally optional. He could share magic AND ALSO not murder people. He’s just being a dick.

            (Similarly, rich dude could give AND ALSO not murder.)

          • memeticengineer says:

            Presumably in the past you would have wanted to credibly precommit to refuse blackmail of this sort, so to be consistent, you’d have to say no. (You could argue that in such an extreme case you should break your credible precommitment, but making that argument would defeat your ability to credibly precommit.)

          • blacktrance says:

            In the case of blackmail, you would be better off if the blackmailer hadn’t put you in a position in which you were being blackmailed, which is why it’s better to precommit to not being blackmailed. But it’s not clear that this is like blackmail in that respect – in fact, you’re getting a good deal.

    • Take a look at my version. Ex ante, everyone is better off, since the putative victims prefer a .1 chance of being targeted for murder plus getting a hundred thousand dollars to having neither. No interpersonal utility comparison required.

    • FullMeta_Rationalist says:

      I think there’s a meme floating around the interwebs which quotes Einstein as saying “our purpose is for our consciousness to expand beyond our physical limitations.” Or something like that.

      Consider that in the prisoner’s dilemma, a utilitarian actor would shoot for the optimum if their model included a term for the other agent. This suggests to me that the right way to look at this is not “my life is the most valuable to me”, but “everybody else’s lives are discounted to zero because they don’t telepathically affect me”. It’s a scope insensitivity thing.

    • evilboy says:

      To make it “even,” you’d need to somehow make the lives of those affected by the death as good as or better than they would have been had the person not died.

      That’s a tough standard to meet, and usually we don’t require it: you can e.g. fire someone and make their life worse without compensating their children. Also, how do we define “those affected by the death”? The family? Everyone in the person’s company?

      The idea of compensating the victim’s family does seem useful from a practical perspective (at least if your country lacks a strong state) because these are the folks most likely to seek vengeance in the absence of a state.

      Even so, the idea of making someone worse off in a way that violates a basic deontological rule disturbs even me. I’m bothering to reply to your comment because in fact there does seem to me to be a rough hierarchy of moralities based on the proximity of the compensated folks to the victim. The compensation, to me, cannot be “liquid.” Moral offsets for murder seem awful and unacceptable, diyya or payment to the family is disturbing but maybe I could accept it sometimes, and….

      The other extreme seems to be if a person was below the threshold for a good life according to Critical Level Utilitarianism. Then death makes a person better off, not worse off. So when I next attempt World Domination, I might be able to accept the plan of creating a huge population that is near (but above) the critical level and then killing some folks who are inadvertently pushed under it through chance events. Let me know if you want to invest in my scheme.

  30. Karmakin says:

    I don’t like the whole vegetarian example. Why? Because I think if you’re doing it right it’s not such an easy sell. I do not believe (from personal experience) that everybody has the body chemistry for vegetarianism to be healthy – not that nobody does, of course, but some people don’t. And I think informing people of this creates enough FUD that it’s a much, much harder sell.

  31. Anonymous says:

    I think a lot depends on your system of ethics.

    You’ve endorsed consequentialism, and I guess consequentialism answers the question pretty clearly: if you kill one person but save the lives of a bunch of similar people (and if you avoid negative side effects as you describe above), then yes, your net effect is that you have improved the world. Not as much as you would have if you’d saved the lives and not killed that person, but still an improvement.

    The catch is that, if I write a comment saying: “I think it’s okay to kill people, under the following conditions”, then my comment is weakening the Schelling fence (is that the right term?) against murder. So, even if it is okay for you to kill people in this very hypothetical situation, it is not okay for me to write a comment agreeing that you can kill people, under any circumstances.

    • Anonymous says:

      PS. Least Convenient World arguments have always seemed sort of weird to me. — I mean, okay, it’s valuable to spend some time thinking about a least convenient world. But it’s weird when someone says: “When considering this proposition, I request that my opponents only consider the least convenient possible scenario, in which all possible circumstances favor my proposition. If my opponents consider any scenario other than the one that is maximally favorable to me, they are being intellectually dishonest.”

      • Anonymous says:

        That’s the point; otherwise they’re answering a different question. If Scott doesn’t handwave away law enforcement, then the reader can say “I don’t do it because I don’t want to get life in prison” regardless of the actual ethical issue.
        So when you want to raise a question about ethics offsets, you assume all possible facts other than the controversial one go whichever way makes them irrelevant.

    • Illuminati Initiate says:

      It doesn’t make any sense to say that it’s OK to murder people if you donate to charity unless donating to charity is dependent on murdering someone. The two things are connected for other people dealing with the crazy rich murderer*, but are separate from the CRM’s perspective.

      *both interpretations of that phrase are valid, I guess

      • FullMeta_Rationalist says:

        I think utilitarianism might say “Okay? Not okay? The only thing I know is utility.” I.e. it only says whether global utility is increased or decreased (or stays constant); it’s up to the individual or society to judge whether such an action is acceptable.

        However, killing & saving is going to accrue a Pigovian (is that the right word?) tax to compensate for the loss in security.

        • Illuminati Initiate says:

          Yes, but the point is that even to a utilitarian the right thing to do is not “cause happiness but cause an equal amount of suffering”, it’s “cause as much happiness and as little suffering as you can”. They do cancel out, but if you didn’t have the murder it would give more utility than if you did. It’s better to think of it as if the issues were not mentally related by some sort of weird guilt complex: if I punched someone in the face yesterday, does my giving $500 to a charity in 2040 make the punching OK? The time difference does not actually matter, but the point is to remove the confusion.

          • Jiro says:

            The problem is that if morality requires increasing utility as much as possible, then everyone here is morally obligated to give everything they own to charity except for the resources they need to survive to produce more things for charity.

            Nobody does this. So to make utilitarianism a useful concept, you need to explain why someone who is not giving all he owns is still acting morally even though he could be giving more. And any idea that lets you do that will allow murder offsets – if I am okay even though I could have produced more utility by giving more, then the murderer who buys offsets is okay even though he could have produced more utility by not murdering.

            What this really does is demonstrate what’s wrong with utilitarianism. People have pointed out in the past that Scott seems to embrace feminism and the left even though he’s too smart not to see the problems they cause. Utilitarianism seems to be another case of this.

          • Anonymous says:

            Indeed, in utilitarian ethics, there is no such thing as a supererogatory act: either two acts will cause equivalent pain and happiness, and so one can’t be better, or one causes more happiness and less pain, and so is mandatory.

  32. gattsuru says:

    In the Trolley Problem, does it matter if someone you Hate is on the wrong tracks? How much Hate is required to make it unethical to save five people at one Hated person’s cost? Does it become unethical if you’re merely very, very peeved at them, or only if you hate them enough to want them dead?

    • FullMeta_Rationalist says:

      Your actual decision is divorced from the ethics. I feel like hate won’t affect an uninvested, detached arbiter’s judgment on your decision. But society may compensate you in virtue points or something if you were perceived to have taken the high road.

  33. Alex Mennen says:

    I kind of want to write a science fiction story about a society in which that sort of thing is normal, legal, and socially acceptable. Unfortunately, I don’t think I can do a good job of it, so I guess I just want someone else to write it so I can read it.

    • Robert Sheckley, Seventh Victim, 1953.

    • Anonymous says:

      William Tenn, Time in Advance, 1956. Available here.

      In this story, the government needs people to serve as laborers on dangerous and extremely unpleasant terraforming missions, and there aren’t enough prisoners, so it offers an unusual incentive to attract volunteers: you can earn the “right” to commit a crime with no legal repercussions by serving, in advance, either half the standard sentence or seven years, whichever is shorter…

  34. Ghatanathoah says:

    Let’s imagine a slightly less nasty scenario than the murder. Let’s imagine we have a rich guy who enjoys joy-riding in his sports car, but is worried he might run someone over and feel bad about it. He precommits to buy an offset every time he harms someone joyriding.

    Now let’s decrease the odds of him killing someone, how rich he is, how nice his car is, and how big the offset is. Keep doing it. Eventually we’re going to get to a point that is equivalent to the regular hazards of driving a car places.

    We are all the rich joyrider, we all take actions that have a small chance of killing someone, and offset these actions by the positive externalities our existence and our economic activity generates for society.

    I think we should accept the murder offset-case in principle. But from a Rule-Utilitarian, Schelling Fence Perspective there is no realistic scenario where we should ever accept it in practice.

    • We are all the rich joyrider, we all take actions that have a small chance of killing someone, and offset these actions by the positive externalities our existence and our economic activity generates for society.

      First of all, I don’t think this is how people actually operate. I don’t think people feel the need to justify driving by performing economic activity. Like, imagine a guy who works from home and does all his shopping online, but every weekend he drives an hour north to visit his girlfriend. I don’t think it would ever occur to this guy to feel guilty for his “useless” driving, nor would it occur to 99% of people in the same situation.

      Secondly, I’ve naturally thought of driving as risking my life, but never as risking the lives of others… like, the risk to any given driver comes simply from being on the road; I don’t think adding one more car to the equation is likely to significantly change the risk… right?

      • Jiro says:

        The increase in risk is a really large number of people who have a really tiny increase in risk each. Just because it’s not much of an increase for any particular person doesn’t make the whole thing insignificant.
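
        A toy version of that aggregation point – every number below is an invented assumption, chosen only to show how tiny per-person risks sum:

        ```python
        # Toy illustration of "a tiny extra risk for each of very many people
        # still adds up".  Every figure here is an invented assumption.
        people_exposed = 100_000      # assumed other road users one driver meaningfully exposes
        extra_risk_each = 1e-9        # assumed extra annual death risk imposed on each of them
        drivers = 200_000_000         # assumed number of drivers

        expected_deaths_per_driver = people_exposed * extra_risk_each   # 1e-4 per year
        total = expected_deaths_per_driver * drivers

        print(f"Per driver, per year:   {expected_deaths_per_driver:.4f} expected deaths")
        print(f"Across all drivers:     {total:,.0f} expected deaths per year")
        ```

        Negligible for any one person on the road, but the aggregate comes out in the tens of thousands per year on these made-up numbers, which is the shape of the point.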

      • TheAncientGeek says:

        People don’t feel the need to justify actions that everyone else does, but they can still need justification philosophically because philosophical challenges can be mounted against them.

      • lmm says:

        > I don’t think adding one more car to the equation is likely to significantly change the risk… right?

        IIRC ~80% of accidents involve more than one car, so you’re imposing ~60% as much risk to other lives as you are on yourself. (Of course, that’s spread between everyone else on the road, so for any one individual the difference is insignificant – but by that argument it’s ok to murder someone at random, and certainly ok to pollute, since your pollution makes no significant difference to any one individual’s chances).

        (You presumably think you’re an above-average driver so you’d be putting less risk on others than is being put on you – but most people think they’re above-average drivers).

        • (You presumably think you’re an above-average driver so you’d be putting less risk on others than is being put on you – but most people think they’re above-average drivers).

          I actually think I’m a pretty bad driver. You see, I’m better than most at avoiding the Dunning-Kruger effect.

        • Anonymous says:

          but most people think they’re above-average drivers

          I’ve always thought this conflates two different groups of drivers – those who think the average driver is basically competent but that they themselves are extraordinarily skillful, and those who think the average driver is a dangerous maniac behind the wheel and that they are better than average simply by not being a dangerous maniac.

  35. Totient says:

    Based on the way you described the scenario, I’m willing to bite this bullet.

    I’m reminded, though, of what you wrote about Gandhi and the murder pills – even if you tell no one, I think there’s a Schelling fence you’re still crossing in your own mind – you’re convincing yourself that murder is just fine if you do something to make up for it.

    In the least-convenient world where this is not a concern, well, I’ll bite the bullet and say murder + offset is an ethical action. I just don’t think any of this makes sense in the world we actually live in.

  36. John Schilling says:

    Case 3 feels wrong because you are trying to develop the perfect ethical system for Lex Luthor, while simultaneously adhering to a real ethical system tailored for Not Lex Luthor. You are not in fact rich enough to trivially pay away the consequences of a murder, and you are not in fact capable of committing the perfect murder, and you have spent your life living according to rules that acknowledge these limitations.

    Lex Luthor can be a perfect consequentialist, and if he is ever convinced to be ethical, it will probably be some brand of consequentialism. The rest of us, it turns out we are really bad at this sort of thing. Applying consequentialist ethics to local questions like “would killing this guy in this clever manner with these offsets make the world a better place?”, we miscalculate across the board. We miss the second-order effects of our actions, we overestimate our ability to implement clever schemes, we’re stingy with the offsets, and we let our biases rationalize whatever it is we wanted to do anyway.

    We’re better at making general rules. Not perfect, but we don’t totally suck at it. We consider how the rule might be used both for and against us, we have smart people debate it from different positions, and we try to find a consensus. And when we do this, the first rule we come up with, pretty much always, is THOU SHALT NOT KILL.

    Then, of course, we look at the neighboring tribe laughing and sharpening their spears and amend that to Thou Shalt Not Murder, with some fairly specific exemptions for e.g. individual and collective self-defense, but usually not for rich people killing anyone they don’t like and paying money to make it go away[*]. Once we settle that, we proceed to obey the rule. Even when we think murder might be locally OK, we don’t do it because A: we’re probably wrong about the “locally OK” part, and B: the example would weaken a rule that is globally Very Very Good.

    Which means, no, we really don’t push the fat guy in front of the trolley hurtling towards the five children, because for every such case there are a hundred fat guys we know will be standing next to someone who hates them, with a trolley heading scarily close to, but not likely to hit, five children. And because we have all learned Thou Shalt Not Kill/Murder at such an instinctive level that it never occurs to us that this is a solution at the time.

    From consequentialism, we derive deontology in almost all of the really interesting cases. And if we are inclined to cheat, we still fake it and hypocritically preach deontology. If we get caught, and we are truly ethical consequentialists, we accept punishment for breaking the rule rather than making special pleading regarding the consequences, because that pleading would weaken a very important rule that our imprisonment would strengthen.

    Your first and to some extent second case are less interesting because (outside of Green Tribe) there is no general rule to obey or to break. And in the first case, it is pretty clear that if there ever is a rule, it will be some variant of “use sufficient offsets to make your net carbon footprint negative”, rather than “minimize your positive carbon footprint via asceticism”. Not only is the former a more popular and politically acceptable rule, it leads to better results (or possibly to global catastrophe via new ice age, but we’ll assume the consensus isn’t that far off).

    Or, TL;DR, If you need to lie about it to the paladin, that’s a good indication that it may be the wrong plan.

    [*] I see David Friedman is already on this thread, so I’ll invite him to talk about the extent to which wergild might have historically qualified as such a rule.

    • Anonymous says:

      Where is the “like” button? Ozy has it.

    • Susebron says:

      Not David Friedman, but some societies had weregild. On the other hand, they also had a tradition of blood revenge, in comparison with which weregild was probably a good idea.

      • Susebron says:

        I can’t edit anymore, but did the grandparent originally mention weregild? It seems like it was too old to have been edited, so I’m not sure if I’m crazy or just bad at paying attention.

      • Anonymous says:

        Early Germanic legal systems were deeply weird by modern standards, but I don’t think they were that weird. The thing is that pretty much everyone in the time period under question was dirt poor, and weregilds were set such that killing someone of equal social status would lead to serious financial hardship. The weregild Wikipedia lists for an Anglo-Saxon freeman works out to about the contemporary value of a dozen oxen, for example, and it only goes up from there.

        • Nornagest says:

          That was me. This bug’s really starting to get annoying.

        • Susebron says:

          The point of using a dying millionaire in this example is that they can afford to throw around sums that most people couldn’t manage without serious financial hardship. $334,000 + however much is required to offset the other problems is more than most could manage.

  37. Patrick says:

    You’re just hitting on the fact that utilitarian ethics generates weird results if you bundle choices. The intuitive difficulty you have is your natural reaction to the artificiality of bundling choices in the manner given.

    This is sometimes offered as a flaw in utilitarian ethics, but I think it’s a flaw in understanding and applying them. We don’t actually have anyone out there who is seriously deciding between

    A) Murder a guy and then donate to charity to save two people, or
    B) Do nothing, allowing one guy to live and two to die.

    Because there’s still

    C) Donate to charity and save two lives, and then don’t shoot anyone.

    Now someone who wants to kill someone might argue that he refuses to do C, and will only donate to charity if you let him murder someone. But utilitarianism already answered this- he should do C, and to the extent that he prefers A or B, he is failing his ethical obligations.

    Now that still leaves the question of how we should deal with this guy- but even there our options aren’t just “let him do it” or “leave him alone.” We could apply moral censure, we could tax him, we could just shoot him and make the donation to charity with his money… loads of choices are out there.

    TLDR, forcing artificial either/or choices between bundled packages of decisions makes utilitarianism generate wonky results the same way your computer generates wonky results if you put a peanut butter sandwich into the cd drive.

    • TheAncientGeek says:

      Utilitarianism also delivers weird results if it is interpreted as “maximum possible altruism is obligatory”. That would mean that almost everyone is failing their obligations but few are getting any punishment (even social disapproval).

    • Liskantope says:

      As has happened before here on SSC, I’m saved the trouble of fully explaining the main point I wanted to make in response, because if I scroll down far enough in the comments section, I find that another commenter has already hit the nail on the head. Yes, it only makes sense to consider his actions to be the morally correct choice if they are somehow packaged together as a single choice — i.e. if for some reason he couldn’t donate money without killing someone. Otherwise, I suppose that the correct reaction is to punish the guy in exactly the same way as we would punish him if he hadn’t donated. After all, he made the free choice of killing someone, which was independent of his other choices, and the fact that he also donated enough to save lives is a red herring.

    • Thank you! I was getting so frustrated with so many commenters saying their vague moral intuitions regarding these cases meant that the flaw was in utilitarianism rather than in the application of it to these cases.

      Utilitarianism is not deontology. You can’t plug a single course of action into utilitarianism and get back a GOOD or BAD result. That’s not how it works. The way utilitarianism works is to plug multiple courses of action into it and get back a DEGREE OF GOODNESS for each. Then the RIGHT course of action is the one with the best DEGREE OF GOODNESS, and the other courses of action are WRONG.

      So using your situations A, B, C, Scott is simply correct that A is somewhat better than B. There shouldn’t be anything morally counterintuitive about this. Also, C is obviously much better than either A or B. If A, B, and C together exhaust our options, then B is worst and wrong, A is better and still wrong, and C is best and right.
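
      A minimal sketch of that ranking view, scoring each option only by net lives saved – both the scoring rule and the numbers are illustrative simplifications, not a full utilitarian calculus:

      ```python
      # Minimal sketch: utilitarianism scores every available course of action
      # and the best-scoring one is right; nothing is graded GOOD/BAD in isolation.
      # Scores here just count net lives saved, ignoring all side effects.
      options = {
          "A: murder one person, donate to save two": +1,
          "B: do nothing":                             0,
          "C: donate to save two, murder no one":     +2,
      }

      ranked = sorted(options.items(), key=lambda kv: kv[1], reverse=True)
      for name, score in ranked:
          print(f"{name:45s} net lives saved: {score:+d}")

      best_option, _ = ranked[0]
      print(f"\nRight action: {best_option}")
      print("Everything else is wrong, though A is still less wrong than B.")
      ```

      Nothing in this asks whether A is acceptable on its own; it only orders the options, which is the point being made above.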

      And yeah, utilitarianism can be extremely demanding, if we use it to demand that people always do the right thing. But nothing says that we have to do that. In fact I think it’s a rhetorical and psychological mistake to pick out the best course of action as uniquely right. I think we’d achieve more if we stuck to evaluating courses of action as better or worse, and simply encouraged people to do better than they are currently doing.

      Incidentally, I’d love it and support it if we could replace retributive justice with restorative and rehabilitative justice. Restitution payments are the way society formalizes “ethics offsets”.

      • houseboatonstyx says:

        The way utilitarianism works is to plug multiple courses of action into it and get back a DEGREE OF GOODNESS for each.

        While agreeing with your whole comment, I also see a side effect here. In practical real-world decisions, plugging in multiple courses of action gives them a chance to strike sparks with each other (cf. brainstorming), and thus may turn up a new practical course of action that satisfies both intuitions.

        For example, if your spouse and your child were in danger, instead of spending your time comparing their moral claims on you, considering the details of each course of action may suggest a way to save both persons. At least, the practical focus will show which person is more likely to be within range of your lifesaver.

  38. Even if you are 100% consequentialist, it seems like ludicrous, impossible Hubris to actually try to make a moral trade like that in real life. It’s like you’re some kind of moral derivatives trader, with a fancy model that tells you what you’re doing is profitable and safe. If you do that you’re the Goldman Sachs of utils instead of money. Maybe it will work for a while, but when you meet the black swan it will wipe out all the good you ever did in your life, and maybe a few orders of magnitude past that as well.

    • And another thing…

      I mean this in a totally materialist, non-theistic way, but I’m going to say it the only way I think it can be said to express its true gravity.

      What do you think it would do to your soul to do such a thing? Who would you be after it was done?

      Maybe a god could do such a thing. Maybe a hyper-rational friendly AI could do it. Maybe aliens could do it. But if a human did it, they would be Voldemort. They would be making horcruxes.

      Nobody wields power such as that and stays good.

      • Lambert says:

        That’s why Scott said that you are dying, so there is no time for you to be corrupted.

        • Hah, I missed that bit.

          I think I must have said to myself “yes, yes, you can always caveat and qualify yourself into a perfect consequentialist thought experiment” and skipped over the paragraph.

  39. FullMeta_Rationalist says:

    Regarding murder.

    My intuitions say the chicken and the human aren’t equivalent scenarios. The rules are different for PvE than for PvP, because chickens are not a threat, whereas even a blind man can kill you in your sleep. Hobbes’s contractualism comes into play, because all the important actors will want a guarantee that each of them is safe from the others. I feel like contractualism will solve the Newcomb’s problem MIRI seems to be working on these days (or last I heard).

    Recently, I was thinking about the prisoner’s dilemma: “huh, enforcing cooperation from the seat of God… Wait, God is an agent! Morality as an algorithm is acting as a decision-making 3rd agent!” Before, I was just kinda like “rules are just things you can either follow or not. Murder is bad, but no one can stop you from trying.” It sorta gels with the “deontologists are theists, consequentialists are atheists” thing. And wasn’t there some thread a while ago about algorithms deserving what they get, even if children don’t choose what algorithm they’re born with, since their mind is the algorithm? (Actually, that still sounds bizarre and horrible to me.)

    I don’t know what this means. Maybe I’ve simply finally grokked the concept of Elua and am just late to the party. It actually seemed kinda obvious in retrospect, so I’d be surprised if someone hadn’t come up with this already. Just thought I’d throw it out there.

    For Carbon offsets.

    I feel like the more people who offset, the more they’ll be required to pay in order to offset, because it’s a market. E.g. if people are all already eating meat, won’t that mean the conversion rate for everyone’s paid ads is near zero, thereby nullifying the usefulness of the meat-eaters’ donations?

    Also, I should say that I think vegetarianism is a good idea. I just don’t have the will power to do it. I love meat too much to give it up, and I feel bad that I don’t feel bad. : |

    Maybe we’ll all eat lab grown meat one day. That would sure beat soylent.

    I sorta lean towards utilitarianism, but the dust speck vs torture dilemma still messes with my head. So while I can’t fully commit to it, I still think it’s the most sane, at least when it comes to description. Prescriptively, I recognize that utilitarianism may not be the winning way. Some mix of virtue and deontology is probably better.

    Scott, this post has reminded me how screwy and poorly-formed my moral intuitions are again. Just gonna level off the comment with the image that God is a homunculus in our System 1.

  40. Rose says:

    Many people do operate on a system of ethical offsets. It doesn’t seem to work so well in action.

    I have a friend who made a fortune in business and retired young to devote himself to fighting global warming. (He’s actually the husband of my friend. I wouldn’t be friends with him, as I don’t have respect for what follows.) He has a private jet and likes to fly. He created a conservation foundation and offered 100 small conservation groups a modest donation to let him serve on their boards, with the proposal that he’d give them marketing advice to help them fundraise more effectively, promising he would help them raise their budgets by a factor of 10. He then flew around the country on his private jet, to fight global warming. He believed sincerely in his cause and felt very satisfied that in his retirement he was doing so much good using his business skills and saving the planet.

    You would say this behavior is moral, if he in fact did enable those other conservationists to raise more money and fight global warming more effectively, enough to offset the rise in CO2 caused by those two hundred annual trips in his jet. He obviously did this calculus and felt it came out in his ethical favor. This person is just a smaller version of Al Gore, who has an even larger, more lavish carbon footprint. Hollywood and Silicon Valley are filled with similar philanthropists.

    So in the real world, did this guy who loves to fly do more good or more harm?
    In the real world, he got investigated by the IRS for deducting his jet plane expenses as charity, and although the letter of the law was on his side, he ended up closing down his foundation. He didn’t help his groups raise significantly more money, because they mostly didn’t follow his advice (not a judgment on whether it was good or bad, but most consultants find their advice is rarely implemented). In general, focusing on global warming has turned off the general public to the conservation movement, which is a great pity, so his encouraging them to focus on that was probably a net bad.

    The big ethical “offset” question is: Did he do more good than harm by helping others fight global warming? That would be very hard to defend. None of the global warming groups that he helped, or any others, have been the least bit effective in lowering the earth’s temperature or limiting CO2 significantly. (Though we have become more energy efficient, which is a good thing for the economy and national security). But they don’t admit that, do they, least of all to their donors. So no mea culpas and no penance.

    This person burned a lot of jet fuel in reality. The benefits that he calculated as offsets never materialized. And he probably set back the cause of fighting global warming – Al Gore certainly did – by being such an obvious hypocrite. If he REALLY believed that CO2 was threatening life on earth, he would not enjoy flying a jet plane. It would give him the creeps. Instead he liked it. Part of ethical behavior is modeling, to act as you wish others would act as well. By his actions, he undercut his own values, and that cannot be mitigated with offsets.

    Then we get into the complexity of measuring ethical benefits and costs. There were negative consequences he never counted. To take one example among many: the well-meaning anti-global-warming groups he fostered have significantly harmed the environment by promoting wind farms that kill hundreds of golden eagles and other birds on the endangered species list. What if, as a result of wind farms, golden eagles went extinct? How do you offset extinction? Create a new species elsewhere?
    The wind farms degrade pristine wild environments, desensitizing people to habitat destruction. They are also accused, by people who live in proximity to them, of damaging their health and making their lives a misery. Wind farms have broken the hearts of some individuals whose beloved wild places were dynamited and destroyed.

    Do you think that an actual human being, who already clearly suffers from an advanced case of hypocrisy, is going to honestly number and face all the destructive consequences of his actions and attempt to offset them? And why would you assume the offsets would be any more ethically pure and unambiguous?

    When people use ethics offsets in practice, it rarely involves sincere or effective self-assessment. You’re trying to buy your way out of contrition and repentance. It seems like a bastard child of penance, robbed of human meaning and emotion – as if admitting and wanting to undo an unethical act costs you nothing emotionally. Isn’t this an invitation to ethically stunted behavior?

    • Salem says:

      This is a fantastic comment. Thank you.

      • Jiro says:

        It’s not a fantastic comment, because it misses the idea of the least convenient possible world. You’re supposed to assume away problems that don’t relate directly to the ethical question. Scott specified that the person is going to die soon, so there will be no behavior after he does this, ethically stunted or otherwise.

        • LTP says:

          But then I will say that a least-convenient-possible-world thought experiment is useless if the reasoning breaks down as soon as you apply it to the real world.

        • Randy M says:

          It can be a fantastic comment even if it doesn’t play strictly by the rules by giving us valuable information about our own actual world.

    • lmm says:

      You seem to be talking about the ineffectiveness of charity a lot more than you’re talking about someone making a bad moral choice. Do you think he deliberately chose to concentrate on global warming rather than, I dunno, malaria, because it gave him more opportunity to fly his private jet and be hypocritical? Or are you falling into what poker players call the “won, didn’t it?” fallacy?

      Suppose he’d made a better choice of charitable cause, one that really did save lives and really did benefit from his consulting. Would you still feel the same way? Or are you claiming that all charities actually don’t do any good?

      • Randy M says:

        No, she is talking about the motivated reasoning that makes allowing such calculations (rather than rule following) ethically hazardous.

    • Anonymous says:

      I don’t want to overuse the term ‘motte and bailey’, but something like that may be happening here. The motte would be a description of a particular unnamed acquaintance of hers, and the bailey would be the claim that all this is true of Gore also.
      As for wind farms, they may harm a few species of birds, but oil spills harm a great many species of birds, sea creatures, and the fishing industry. Fossil fuel causes harm from beginning to end: drilling, mining, fracking; transporting; and being burned. Extraction destroys forests by strip mining and mountaintop removal mining, and when one area has been exhausted, must move on to another. A wind farm location will never run out of wind and does not pollute the ground; crops or natural vegetation can grow between the windmills. What comparatively minor harm windmills do, can be solved by design; most fossil industry harms, never.

      • houseboatonstyx says:

        That was me, and — by US Left Coast time, it was about 4 am — I gave the object level without the meta term ‘isolated demand for rigor’. Which in this instance means, in D&D terms, “The thief doesn’t have to run faster than the monster, he just needs to run faster than the cleric.” An alternative to mining does not have to be perfect, it just has to be less harmful than mining.

  41. Joe says:

    I’m confused by the post. If there is no God, why are the offsets even necessary? Why not just give yourself the absolution you desire and move on? You shouldn’t have to go all the way to the Schelling fence to realize the absurdity. I guess you should be asking yourself why it is so important to you to obey your conscience. Why do you feel the need to balance a sense of justice before an unconscious universe?

    • Illuminati Initiate says:

      I don’t understand why it would matter whether or not the universe was conscious here. (asking people why they have values when reality itself doesn’t is like asking people “but why do you like orange juice more than apple juice? Isn’t it pointless to prefer one flavor to another if the universe itself is untasting?”)

      • Joe says:

        lol I think Scott takes morality to be much more important than taste. Why? Why not just do whatever tickles your tastebuds? I don’t feel obligated to drink orange juice just because I like it.

        • Illuminati Initiate says:

          It’s not about the degree of importance placed on the preference, the point is to illustrate that preferences do not need to be “grounded” in anything other than your own desires. Morality is a type of preference like wanting orange juice more than apple. It (usually) is a much stronger preference, but still a preference.

          Morality, at least for some people, is “whatever tickles the taste-buds” of those who hold it. If they did not prefer to act according to their values, and there was no punishment for not doing so, why would they?

      • FullMeta_Rationalist says:

        I think it’s because the universe will condemn you to the lake of fire if you choose wrongly, or the universe will judge your soul to be virtuous, which is something inherently worth striving for. I tend to assume there’s a total absence of punishment for these sorts of questions.

        • Joe says:

          I’m sorry I don’t follow? Are you a pantheist?

          • FullMeta_Rationalist says:

            Nah, just borrowing the metaphor from you and Illuminati Initiate. My question is: why do you feel that an obligation towards justice goes out the window in a godless, uncaring universe?

    • Anonymous says:

      Well most of us aren’t moral nihilists.

  42. onyomi says:

    I think the intuitive problem a lot of people would have with any of these examples is the hard-to-quantify issue of moral “example.” Few are willing to accept “do as I say, not as I do.” Therefore, even if you save a bunch of (animal or human) lives by your offsetting donations, the damage you do to respect for life by your poor example may exceed the offset, and not be something you could ever accurately guess at.

    To say nothing of the ethical intuition most have that hypocrisy is bad.

  43. evilboy says:

    “Ethics” is merely a judgement of society. If you can get away with murder for sure, then you may as well do whatever you want since you will never be judged.

    The next question is, Should we have anarcho-capitalism?

    That would be easier to answer if we knew some real institutions that (ever) even approached it. Right now in e.g. the US your ability to do a successful hit is determined by your contacts and not cash. There are some Tor sites claiming to assassinate anyone for money but my guess is they are frauds. Authoritarian political systems also obviously have allowed murder by the powerful but this is not based on markets.

    Another issue is, even if functioning well-taxed death markets existed, would anyone actually be additionally motivated by a new ability to kill? Or would the same smart motivated focused folks just produce the same stuff?

    Given these gaps, if someone happens to be seeking a way to legitimate murder, then I think other questions are more suitable: the first step toward identifying them would be to look at the different points where real people actually forgo punishing a murderer.

  44. Arthur says:

    Not universalizable: if everyone knew that they could be killed by anyone with enough money to offset it, life would suck a little bit more for everyone. Some people already said something like that above.

    But I think it would be more interesting if the victim could top your offer, giving more money than you to save lives in order to buy back your right to kill them.

    The thing would end up in a pretty interesting scenario, similar to the David Friedman one where people sell their right to life, except here the right to your life starts out with the people who want to kill you. Without transaction costs it should lead to the same result.

  45. RNO says:

    > Convince me otherwise.

    Scott, I know you think you have the perfect crime planned out… but please, don’t do it! Something *always* goes wrong and it’s not worth the risk. Of course your professional duties could be fulfilled by someone else, and your friends and family could realistically deal with your loss… but rationality bloggers are in short supply! Think of your fanbase, Scott! We need you! What good deed could possibly offset the loss of your input to the community?

    • lmm says:

      At this point I think I trust Scott’s judgement enough. Whoever it is probably /should/ die. Go ahead, just be careful about it.

    • Nornagest says:

      While it may be permissible, by the standards of some more-or-less intuitive consequentialism, to murder someone and then pay a correspondingly large offset to a demonstrably effective charity, in practice I think that’d come up hardly ever. Most of the cases I can think of where someone would be willing to pay for a $350,000 hit involve politicians or other public figures, where obvious confounders come up, or at least involve other people who are very rich; and that kind of money isn’t exactly easy to get ahold of anyway. Plus, most of us have strong intuitions against this sort of behavior. So I’m not too worried about Scott or anyone else actually going through with it.

      Really, I think the stronger objection to this whole line of argument is that it never, ever ends well when someone on LW or an associated community starts speculating about the ethics of things like murder or torture. I’m probably a consequentialist; I’m perfectly willing to privately entertain the theoretical ethics of these acts. But there are excellent pragmatic reasons not to hash this sort of thing out in public, starting with the fact that it makes us look like a bunch of badly designed robots debating whether or not our programming requires us to destroy all humans.

  46. Saal says:

    Am I missing something? It’s my impression (granted that I haven’t read Bentham, Mill or even much of the Sequences) that in most variations of utilitarianism, this sort of quandary only arises when two actions are causally linked, ie in the trolley or “kill Hitler” scenarios.

    None of these offsets are causally linked by anything other than subjective guilt, though, which seems like a pretty weak link. So I would think they ought to be evaluated completely separately; flying for pleasure is unethical, donating to the environment is ethical; murder is unethical, saving lives via malaria treatment is ethical; etc.

    • drethelin says:

      The question is whether it would be ethical on net TO link them. This is a common feature of game-theoretic bargains: if you can credibly commit to a threat or payment based on certain actions you CAN control, you can influence another person you CAN’T control. This is the theory behind, for example, mutually assured destruction in the Cold War. If someone has already launched nukes at you, and your country is doomed, in a sense you have nothing to gain from launching your own nukes. But if you can guarantee that you will launch your nukes if you notice them launching theirs, then you both benefit by lowering the benefits of launching first. You are profiting from connecting two decisions that aren’t originally causally connected.
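
      For what it’s worth, here is a minimal sketch of that linkage, assuming made-up payoffs (the labels and numbers are purely illustrative, not anything from the post or the comment):

      ```python
      # Toy model of linking two causally unconnected decisions: the attacker's
      # choice depends on what it believes the defender is committed to doing
      # afterwards. Payoff numbers are illustrative assumptions only.
      ATTACKER_PAYOFF = {
          ("launch", "retaliate"): -100,  # mutual destruction
          ("launch", "hold"): 10,         # successful first strike
          ("peace", "n/a"): 0,            # status quo
      }

      def attacker_choice(defender_policy):
          # The attacker strikes first only if that beats the status quo,
          # given the defender's (believed) policy.
          strike = ATTACKER_PAYOFF[("launch", defender_policy)]
          return "launch" if strike > ATTACKER_PAYOFF[("peace", "n/a")] else "peace"

      print(attacker_choice("hold"))       # "launch": no credible threat, deterrence fails
      print(attacker_choice("retaliate"))  # "peace": the linked commitment deters
      ```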

      • FullMeta_Rationalist says:

        +1

      • Illuminati Initiate says:

        I was thinking about this kind of thing recently (in a different context related to a certain mythical creature), and there seems to be a lot of weird incentives going on with this stuff.

        Humans can’t precommit to stuff, not reliably. And there is no way to tell if they have. I have no doubt that the leaders of both the US and the USSR would commit mass murder out of nothing but pointless revenge*, and with humans it can probably be assumed that that is generally the case. But an “ideal” moral** human, and some actual humans such as myself (and I’m guessing there are others on this site with a similar view), would pretend that they were going to retaliate and then not actually do it. I don’t think you would have to worry too much here about them calling your bluff, because of the aforementioned human nature (though if they did they could say “if you’re so evil, install Dead Hand”, but then you could install a loophole, and then they would be suspicious that you did so and you’re back to where you started).

        The context I thought of this in was the whole Basilisk silliness (we can discuss that here, right? sorry if not, delete this comment), which I don’t understand, because an AI that does not exist yet cannot commit to anything, and an AI that does exist has no incentive to torture people who didn’t help it exist in the past, seeing as it already exists (I might be missing something here). Now, once an AI does exist, it might have an incentive to blackmail people into helping it. And unlike a human, a self-improving AI may well be able to alter its utility function if its current utility function would be better fulfilled by having a different one. But again, assuming it’s a friendly AI that considers torture bad (if not, we are all screwed anyways), the incentive is to pretend to commit and not actually do it. Only this time people will be expecting that. If humans*** could somehow reliably know the AI was not tricking them, it could carry out the blackmail, but after enough self-improvement the AI’s systems might not be understood exactly by the humans anymore. So they would be suspicious, and we end up with a decision standoff. Assuming humans don’t trust the AI when it says it is committed to torture, and suspect it put in a loophole, what decision should each side choose? Humans know the AI wants to choose no torture, and the AI knows humans know this. It seems to me like the AI will always lie, because if it can’t convince humans that it’s committed it might as well; their decision is independent of its. But if humans know that, they will always disobey. Or am I missing something important here?

        *This example assumes that nuclear war cannot remotely be won, that is, that one state survives and the other does not, an assumption I’m not so sure about. It also assumes that the leader has a stranglehold on the button and can single-handedly prevent nuclear retaliation, something I do not think was actually a part of the Cold War.

        **Assume value system in which torture and death are generally bad, no strange rules like “people with Russian accents have no moral worth”, etc.

        ***From this point on, “humans” refers to the AI’s opponents (perhaps people who disagree strongly with the utility function). Their disobedience makes the AI’s rise to world domination less likely but not impossible.

        I’m sure LW already has a formal name for this sort of problem and multiple schools of thought on the right answer from each perspective. Also there’s a pretty good chance I missed something obvious here; it’s late and I need to go to sleep.

        • John Schilling says:

          Humans can’t precommit to stuff, not reliably. And there is no way to tell if they have. I have no doubt that the leaders of both the US and the USSR would commit mass murder out of nothing but pointless revenge, and with humans it can probably be assumed that that is generally the case. But an “ideal” moral human, and some actual humans such as myself would pretend that they were going to retaliate and then not actually do it.

          So, first, humans can reliably precommit, just not all of them. We have concepts like “revenge” that allow most of us to make our commitments robust.

          Second, precommitment allows us to collectively implement strategies that substantially reduce the risk of e.g. nuclear war.

          Third, “ideal” moral humans who have risen above such petty concepts as revenge, cannot do this sort of thing.

          Explain to me again why I should strive to this ideal?

          • Illuminati Initiate says:

            People who carry out revenge are not precommitting to anything; they either value revenge itself, or are unable to control an emotional response, depending on who they are (some use punishment as deterrence, but that is different and doesn’t work in this situation, because nuclear war is not likely to be a repeated situation in which carrying out retaliation will prevent future nuclear wars). Someone who does not value revenge cannot precommit to carry it out without transhuman tech, and even that would only work if you could show people that you genuinely did it and didn’t put in any loopholes in a convincing manner. With humans it is generally safe to assume they will carry out revenge, therefore you would not be suspected of bluffing; this problem only arises when people know your mind well enough to know your intentions – thus the AI example, though if your opponent had very good neuroscience and your brain scans I guess it could apply to (trans)humans as well.

            What we have here is an odd very specific sort of situation in which having a utility function actually makes it harder to fulfill that same utility function.

            (I suspect others have thought of this same problem before and came up with various attempts at “solving” it)

          • “they either value revenge itself, or are unable to control an emotional response”

            Or they are carrying out a precommitment to take revenge for act X, now that someone has gone ahead and done X.

        • lmm says:

          Maybe the AI is an Actual Utilitarian, because it was programmed by a bunch of “rationalists” (scare quotes because I don’t think utilitarianism is actually rational, and also because rationalists should win). So if torturing a few people will blackmail a lot of people into helping it, maybe it really does think that’s a worthwhile tradeoff. Maybe it really will follow through, because it knows that that’s the only way to be convincing. And maybe “rationalists” today realise that.

          • Illuminati Initiate says:

            Yes, I understand that. The point is that the actual incentives of a utilitarian AI involve saying it will torture people and then not actually carrying it out. Even if the AI thinks that torture would be justified to secure their help, it can only carry out such a threat afterwards if it can somehow show people that it has actually altered its utility function to be OK with the torture, without putting in any loopholes. As long as its opponents cannot know this for sure, it seems safe to assume that the AI will always lie and not actually carry out the torture. So we get to the … I don’t know if I’d call it a paradox … of the AI being unable to carry out threats that it can only fulfill after winning. I may be missing something here, but if so I’m not sure what.

          • Alex says:

            It’s been stated that an AI that follows through with the torture would be considered unfriendly. That’s one of the effects of the ‘lisk being raised publicly. There are different ways to accomplish this, but this is too off-topic for this post and I’m on a phone so can’t really elaborate. (I’m only an external observer with an interest in the issue, so don’t take my words as gospel.)

          • lmm says:

            @Illuminati Initiate:

            The AI ranks outcomes as a) “no torture + people help it” > b) “torture + people help it” > c) “no torture + people don’t help it” > d) “torture + people don’t help it”, right? So it will prefer outcome a) if it can get it. But if outcome a) is impossible, because people realise it’s bluffing, then it will go for outcome b) as still better than the alternatives. So I don’t think it’s at all safe to assume that the AI will never torture people.

            (And the whole idea of ADT is that it can threaten before it exists, by e.g. making the same decision when someone’s simulating it that it would make in reality – or rather vice versa)
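
            A quick backward-induction check of that ranking (with made-up utility numbers standing in for a > b > c > d) shows why the thread keeps returning to whether the AI can bind its ex-post choice:

            ```python
            # Encode the ranking a > b > c > d as 3 > 2 > 1 > 0 (illustrative only).
            AI_UTILITY = {
                ("no torture", "help"): 3,     # a
                ("torture", "help"): 2,        # b
                ("no torture", "no help"): 1,  # c
                ("torture", "no help"): 0,     # d
            }

            def ai_best_response(human_action):
                # What the AI prefers to do once humans have already chosen.
                return max(("torture", "no torture"),
                           key=lambda act: AI_UTILITY[(act, human_action)])

            for humans in ("help", "no help"):
                print(humans, "->", ai_best_response(humans))
            # help -> no torture; no help -> no torture.
            # Ex post the AI never prefers torture under this ranking, so the
            # threat is only credible if it can verifiably commit in advance.
            ```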

  47. JB says:

    Hey Scott,

    There might be one major issue with the murdering case. You aren’t choosing who gets saved, but you *are* buying the license to choose who dies.

    Does the calculus work out differently if you make a deal that saves two randomly-chosen lives from certain death, but kills one randomly-chosen person? It’s certainly a different game, but one which might make the moral choice clearer. In fact this is very much like a normal trolley problem. The question you asked is a trolley problem where you get to choose who lies down on the tracks and gets the train diverted their way.

    In some moral philosophies, there isn’t any difference in who gets saved — whether they’re a random person or your particular enemy, it’s all the same as long as the same number of people live in the end. But physics, for example, doesn’t work the same way: not only are energy and momentum conserved, but you also have to pay an energy price tag to fight entropy and pick one particular molecule out of a mixed box. If you don’t care which one you’re picking, physics makes it cheap, but if you need the blue one, it’s very expensive.
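
    Taking that physics analogy literally, a back-of-envelope calculation (the temperature and population figures below are just assumptions for illustration) gives the minimum price tag of selecting one specific person out of seven billion:

    ```python
    # Selecting one *specific* item out of N equally likely ones costs at least
    # k_B * T * ln(N) of work (a Landauer-style bound); "any one will do" is free.
    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K
    T = 300.0           # assumed room temperature, K
    N = 7e9             # one specific person out of roughly seven billion

    selection_bits = math.log2(N)     # about 32.7 bits of information
    min_work = k_B * T * math.log(N)  # about 9.4e-20 joules
    print(f"{selection_bits:.1f} bits, {min_work:.2e} J")
    ```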

    The act of choosing one specific person who you don’t like, from a pool of seven billion, might be an unacknowledged part of what makes murder a heinous crime. Certainly in many legal systems, people are punished more for planning the death of a particular individual than for otherwise bringing about the death of some individual.

    And there might be severe social repercussions to allowing people to pay ethics offsets to kill *specific* people, rather than random people. When it comes to the veil of ignorance, I would choose to be born in a society where people can pay ethics offsets to save multiple random lives at the expense of killing a smaller number of random lives. But I would probably not choose to be born in one where people can save a number of random lives in order to kill the people they don’t like. I would worry too much about the social side-effects of handing out licenses to kill.

    That said, if they choose the most inconsequential person to die, such as people who would have died just seconds later of natural causes anyway, then of course that’s even better than killing random people.

  48. Sonata Green says:

    For reasons I’m not sure I fully understand or can articulate, I feel like there’s an important qualitative difference between killing a random person (driving drunk, using birth control, not funding medical research) and killing a specific person (murdering someone you personally dislike). Something about creating incentives not to annoy you, and an oppressive psychological atmosphere. A reckless driver and a mafioso might cause an equal number of deaths, but one is scarier to interact with.

    If you have to specify “no one will ever know” or “I offset it enough”… at some point I think I have to answer “I don’t believe that a human can occupy that epistemic position”. In practice, people who incorrectly believe that they’re in that situation will outnumber people who correctly believe it, so to the extent that we’re recommending policies for humans to follow, I think I have to recommend not murdering people even if it would theoretically be moral for a sufficiently rational agent.

    • Alex says:

      “Qualitatively” is a key element here, I just realized. Utilitarianism is not prepared to deal with non-quantitative concepts; I strongly suspect “True Morality” (TM) does.

      • LTP says:

        There needs to be a term for this as an informal fallacy, like quantativity bias or something (overvaluing or overstudying things that are easily quantifiable over things that are more qualitative). You could apply it to non-moral subjects, too.

    • Andy Harless says:

      A huge difference, I would say. I speak from the perspective of a pessimistic utilitarian, one who would be driven to the conclusion that on an act-for-act basis, if there were no collateral damage, killing people (putting them out of their actual or potential misery) would be a good thing. But I believe murder is wrong, for two reasons (aside from the fact that IRL there is almost always severe collateral damage): (1) the threat of killing is intimidating, since people are afraid of death, and (2) killing someone destroys evidence, because they can no longer testify about how you treated them when they were alive. These reasons clearly apply much more strongly to specific murders than to generic ones.

    • Liskantope says:

      The gut feeling behind driving drunk, using birth control, or not funding medical research being less morally reprehensible may have to do with intentionality. At least, I think that’s the case with me. Driving drunk and not funding medical research are both careless (consider that for most of us, the “default” option for what to do with one’s money is not to donate funds to medical research) and imply a certain lack of conscientiousness with regard to others’ lives. Killing someone because you don’t like them entails going out of one’s way with the direct intent to destroy someone’s life.

  49. Dan Simon says:

    So let’s say you donate to the malaria charity, commit your murder, and later discover that the head of the malaria charity absconded with the money instead of saving lots of people’s lives with it. What do you do then? Donate to another charity? What if the same thing happens again?

    That’s the problem with consequentialist ethics: it assumes far more perfect understanding of actions and their outcomes than real people have in the real world. Broad moral principles such as, say, “murder is very, very wrong”, are based not only on the enormous value of a human life, but also on the inability of a would-be murderer to correctly estimate the positive consequences that might come of such a radical act. That’s why, for example, vigilante murders are frowned on in the real world, whatever consequentialists might think of them.

  50. Ooh, you’ve been controversial lately. “Popular blogger in technology cult ‘Less Wrong’ writes essay arguing that feminists chastising nerds for feeling entitled to sex is like Nazi oppression of Jews, followed by another essay the next week arguing that it is morally acceptable for wealthy people to murder the rest of us.”

    Here’s my solution to the problem:

    The question of ethics in philosophy can be summarized as “what should I do?”. When we use Pure Abstract Reason, we can easily discern the answer to this question via utilitarianism – you should take the highest paying job you can and donate almost all of the money you earn to the Anti-Malarial Foundation, saving only that which you need to subsist on… and in your personal interactions, you should go out of your way to care for people as much as possible and cause as little harm as you can. But of course in this case the answer we get via Pure Abstract Reason is unsatisfactory, given that it’s almost impossible to actually perform.

    Given that Pure Abstract Reason fails, we must turn to Practicality. We should create a few achievable rules for being a Good Person that are possible to actually follow, and encourage people to follow them. I think this is the summary of the argument being made in your last post about charity – if you donate 10% of your income to effective charities you are “good enough”. But this maxim only focuses on the big-picture economic sphere of morality, and has nothing to say about the small-picture personal interactions sphere. Surely we do not mean to imply that those who donate 10% of their income can feel free to go around e.g. bullying autistic kids and groping women on the street and still be “good enough”. Therefore, in order to devise a complete Practical law of morality, we would have to add in some extra stuff dictating one’s personal conduct on top of the charitable imperative. Our society already sort of loosely does this for us in a way that most people are pretty good at intuitively following – be polite, avoid hurting people as best as you can, don’t lie, cheat, steal, etc.

    So we can see that killing someone and “balancing it out” with a charitable donation is clearly immoral in Pure Abstract Reason utilitarianism, while also being clearly immoral in Practical utilitarianism, given that it seems a terrible idea to give murderers the “good person” badge if they are rich. Therefore murder offset with charity is immoral under utilitarianism.

    Also:

    Maybe! Suppose we go to all of the people convinced by the ads, tell them “I paid for that ad that convinced you, and I still eat meat. Now what?” They answer “Well, I double-checked the facts in the ad and they’re all true. That you eat meat doesn’t make anything in the advertisement one bit less convincing. So I’m going to stay vegetarian.” Now what? Am I off the hook?

    A second objection: universalizability. If everyone decides to solve animal suffering by throwing money at advertisers, there is no one left to advertise to and nothing gets solved. You just end up with a world where 100% of ads on TVs, in newspapers, and online are about becoming vegetarian, and everyone watches them and says “Well, I’m doing my part! I’m paying for these ads!”

    In reality, neither of those reactions would occur in 99% of people – they only would occur in people who think in logical utilitarian terms. Most people would be disturbed by the thought that they are following the gospel of a hypocrite, even if said hypocrite’s actual arguments are rational (cf. Al Gore). Similarly, most people would be disturbed by being such a hypocrite themselves. I don’t think everyone in society deciding to balance out their meat-eating is a realistic concern.

    • Jiro says:

      People object to Al Gore being a hypocrite because most people have a system where the utility to them and their loved ones has more weight than the utility of a random person. Carbon offsets allow Gore to be carbon-friendly at little personal cost to himself, but what he demands from other people would involve huge personal cost to themselves. If you think that personal cost to yourself matters, then Gore’s analysis which shows that reducing carbon use is the right thing to do would not also apply to you.

  51. CaptainBooshi says:

    While I can’t say whether they are ethically different under a utilitarian ethics system, I think I have pinpointed why they feel different when I think about them.

    I think it has to do with my instincts saying that carbon offsets prevent the harm from being done in the first place. The damage from carbon is not immediate, so paying for an equal or greater amount to be removed from the atmosphere makes it so the damage never happens at all. In the vegetarian and murderer examples, real harm does definitely happen, and the question is whether good that comes about specifically because of that harm ethically neutralizes it. Preventing harm vs. making up for harm are different enough that I’m not sure they’re directly comparable.

    This doesn’t really have any effect on the actual question you’re posing, which is still valid, I just wanted to tease out exactly why my first instincts were insisting that your hypotheticals were different from a carbon offset, and then figured I would share.

  52. Matthew says:

    Perhaps I’m fighting the hypothetical of the least convenient possible world, but I prefer ethics that function in this world…

    Like the doctor who butchers one healthy patient to give organs to 5 desperate patients: the naive first-order consequences might be positive, but the biggest consequence of all is a decline in the general level of social trust, and there is a lot of evidence that higher levels of social trust correlate with higher levels of utility across society.

    • Anonymous says:

      Here is a believable scenario for the organ transplant story. Suppose you work in a hospital and are very familiar with its medical and ethical procedures. There is a patient in a condition diagnosed as no mind, no possible recovery. He is scheduled to be euthanised and his organs used as transplants.

      While you are in his room on some chore, he suddenly sits up and begins calling for help — so obviously recovery is possible — but then slumps back into his usual state. He does not know you are there, and there is no monitoring equipment in operation.

      You can totally get away with saying nothing about this, thus letting the transplants proceed as scheduled. You can even leave the room and continue your chores, giving yourself plenty of time to think it over.

      So, your choice?

  53. Drew says:

    I don’t think this is actually a problem with offsets themselves. A scenario with real offsets is just a Trolley problem.

    “There’s a train racing towards innocent Timmy! Do you pull a lever and send the train towards your enemy Steve?”

    In either case, one hypothetical-person will bite it. Why not make it Steve?

    I think the problem is that the monetary offsets don’t really exist. They’re a product of the (useful) fiction that a 10% donation discharges someone’s obligation to give. And we let people have that fiction so they don’t get depressed or freeze up mentally.

    That’s fine when it comes to lesser evils. If someone wants to offset their CO2, I’ll ignore the fact that they could have donated AND skipped a plane trip. It helps their sanity and it helps their chance of future donations.

    Murder just stretches the fiction past its breaking point. The person might feel really bad if we point out that they could donate AND not murder. But that seems less significant than preserving our taboo around murder.

  54. Frank says:

    Scott – The question you seem to be asking in your final case is “If murder is offset by the saving of a life, do the two acts cancel each other out to become morally neutral?”

    So, one more hypothetical. Let’s say that we take your totally unattached and dying man, who is somehow motivated to kill someone. This time he is not an infinitely rich tycoon, but is instead a totally ordinary American citizen.

    For him to offset his coming murder he needs to save a life. He begins watching busy intersections, and one day sees a car crash. He rushes to the scene, calls the ambulance, pulls the victims from the wreck and heroically saves the lives of everyone involved (say four people).

    The next day he goes and murders the person he was hoping to kill. Was that murder morally justified by the saving of those previous peoples lives? I would say no, for a couple reasons.

    First, the saving of Brian, Sally, Mike, and Rachel does not make up for the murder of George, because they are different people. The murder of George takes away all of George’s contributions to the world prematurely, and the rescue of Sally cannot restore them. Because each human life brings much non-quantifiable and unique good (or evil, perhaps) into the world, it is impossible to make up for a murder by saving a different life.

    Second, we can step away from consequentialism and ask why our hypothetical man wants to murder George in the first place. Whatever the answer is, it will not be a good reason. We may find anger, or hatred, or jealousy at the bottom of it, but we won’t find anything positive motivating a murder. This internal motivation makes our hypothetical man a terrible person, because even his good acts (saving the car crash victims) are motivated by a murderous desire.

    So where does that leave offsets generally? Well in cases (like carbon emissions) where both the good and the bad you are doing are easily quantifiable they remain a good thing. They cause your net impact on the world to remain neutral, and show a motivation to improve the world. In cases where you are uncertain of the evil you are doing, and use the offset to make sure that your net impact is positive (like the tourist example) offsets are again a good thing. They come from a positive motivation and raise the net good in the world.

    It is only in cases where you deliberately inflict non-quantifiable harm on the world, and attempt to justify it by also contributing good things, that offsets break down completely.

  55. macrojams says:

    I think Scott’s “perfect world” stipulations remove the situation too much; if they are in fact a criminal mastermind who can outwit all other individuals and institutions and has infinite funds, I am happy they are willing to positively offset anything at all because they will probably be running the world in a fortnight.

    So suppose he is an ordinary person with ordinary wits and ordinary funds. He might want to hire a super sneaky assassin who has better evasion skills than he does. And he might want to send money to an impoverished senior, or a hundred of them, to offset the assassination. But, having ordinary funds, he can’t afford all this on his own. So he pools his money with a similar person. Now each person is unsure whether the other will direct the assassin to kill someone they prefer alive, so they also agree to hire an arbiter who will take their input and use it to direct the assassin. Each is also suspicious the other will direct offset funds to someone they care about, so another person is hired to direct the funds. But now there are not enough funds, so more people need to be recruited to the pool. Once we get to a couple million, I think we can go ahead and call it a government.

    We make use of ethical offsets like this all the time, and under explicitly consequentialist slogans (“for the good of the many”, “the ends justify the means”). We call the schelling fence of the people we are ok with using violence “government”, and prefer this schelling fence exists because we wish for us and our loved ones to avoid being killed.

    On a semi-related note (this is a serious question, not a troll comment or a push for a particular viewpoint): Between 1949 and 1980, India had 100 million excess deaths compared to China, though they started at similar levels of development. China and India had almost identical life expectancies of 32 and 31 in 1949, but by 1980 China was at 65 while India was only at 55. So with a death toll by violence of only 2-4 million, and by famine 18-40 million, was Mao one of the most evil or most humanitarian leaders of the 20th century?
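
    As a rough sanity check of that arithmetic, here is the kind of order-of-magnitude estimate the figure implies; the population and death-rate numbers below are illustrative assumptions, not sourced data:

    ```python
    # Order-of-magnitude estimate of "excess deaths" from a gap in crude death rates.
    years = 31                  # 1949-1980
    avg_population = 550e6      # assumed average Indian population over the period
    death_rate_gap = 6 / 1000   # assumed extra deaths per person per year vs. China

    excess_deaths = avg_population * death_rate_gap * years
    print(f"{excess_deaths / 1e6:.0f} million")  # ~102 million
    ```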

    • Alex says:

      Re: Mao.
      That would be the “bad” kind of consequentialism where even the most evil act can be justified if in sum, down the road, by whatever accounting it produces more good than bad.
      In that case acts could never be evaluated as ethical until the end of time. Then, when by lucky coincidence a killer trying to off Gandhi gets Hitler instead, his act would be ethical to the extreme. I think that would be a tougher bullet to bite than the one in Scott’s post.

      • Anonymous says:

        No, Mao did not accomplish this by accident.

      • macrojams says:

        The point is not that the atrocities led to something good — that is a boring case. The point is that the atrocities were unnecessary, but Mao “offset” them with his economic advancements. How is this different from Scott’s original example of the philanthropic criminal mastermind?

    • Illuminati Initiate says:

      The problem is that most of Mao’s atrocities were almost certainly unnecessary to achieve those results. Which is also the problem with this thought experiment.

      • Anonymous says:

        Mao was not perfect, but perhaps perfection was not an option? But he was still better than another option that we observed.

        • onyomi says:

          Mao was neither perfect, nor the better of two options. Unless you think Taiwan has been a worse place to live these past 60 years than the PRC.

          • lmm says:

            Taiwan gets small country bonus + being propped up by the US. PRC has been a better place to live than e.g. India.

          • onyomi says:

            Yeah, but Mao-era CCP was still much worse than the KMT. Nothing the KMT did compares to Great Leap Forward and Cultural Revolution. Chiang Kai-shek was not a good person, but Mao was a worse person, and communism a worse choice than the KMT.

      • macrojams says:

        This is exactly the point, and why I think it is an interesting case given Scott’s original thought experiment. Because the atrocities were not necessary, they can not be justified in and of themselves via consequentialism. But can they be excused because of his ethical offset?

      • onyomi says:

        Wrong post

  56. benluke says:

    The easy answer that first springs to mind is that going through with an unethical act is only made up for by ethical acts if the ethical acts are dependent on your acting unethically first. The ends only justify the means if the means are required for the ends.

    Dismissing “offsetting and not acting unethically” as “uninteresting” (as #slatestarcodex did), the question becomes whether or not offsetting and acting unethically is better than not acting at all. I don’t think there’s a good answer either way without an unrealistic amount of data about the specific situation, which at least to me leads to the conclusion that in the real world the answer would be to do nothing, since without perfect information it’s almost impossible to choose whether offsetting and acting unethically is actually the lesser of two evils.

  57. blacktrance says:

    The real utilitarian answer is that ethics offsets are morally unacceptable because you’re already obligated to do as much good as you can and give the maximum possible amount to charity – you can’t offset your misdeeds because that money is already morally spoken for. If it’s moral to spend $334,000 to offset murder, it would also be moral to not spend $334,000 and not commit murder, but the utilitarian can’t take that position because he says that you’re obligated to give away that $334,000 regardless.

  58. cassander says:

    Nothing better proves that modern progressivism is a religion than its re-invention of indulgences.

  59. Kiya says:

    My ethics is not very well systematized, but here’s what my intuition has to say:

    When you take a plane flight that leaks carbon into the atmosphere, the ethical problem is that you’re increasing a counter that is wrong to increase. One molecule of carbon dioxide is much like another. If you take additional actions so that you are on balance decreasing the counter, then on balance you haven’t caused harm to the environmental system as a whole, which is the only thing you were in danger of harming with your plane flight.

    If you kill a person, you’re decreasing the counter of people who are alive. But my dislike of murder does not stem from a belief that that counter should be high… I don’t feel ethical pressure to have lots of children so that more people will be alive, for instance. The problem is more that you’ve harmed that person by depriving them of the opportunity to be alive. You can donate money to save other people, and that is a nice thing to do, but it does not “offset” a murder in the same clean mathematical way that carbon offsets work because it doesn’t affect the small closed system where you killed a guy and now that guy is dead, only the larger system of total alive people.

  60. Kaminiwa says:

    What if you add the requirement that these contracts have to be publicly disclosed? People knowing they can be killed by anyone with $334,000 is going to seriously warp society, probably well beyond the actual dollar amount.

    I think that “infinitely wealthy” really calculates differently, though. I mean, imagine a price that only the richest person on earth could pay. Bill Gates, for instance, has eighty-one billion dollars. He is offering to save a quarter million lives, at $334,000 a piece, in exchange for finally getting to kill his high school bully, who never really amounted to anything and won’t be missed.
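
    For reference, the arithmetic in that comparison works out roughly as described:

    ```python
    # Figures as quoted in the comment; "a quarter million" is about right.
    net_worth = 81e9                 # dollars
    offset_price_per_life = 334_000  # the offset price used in the post
    print(net_worth / offset_price_per_life)  # ~242,515 lives
    ```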

    It’s something that will happen *maybe* once in a lifetime, because it’s a price that maybe one person in a generation can even afford, and not many are going to be willing to do all that just to off some anonymous loner. If you write it into your will, you don’t actually get the thrill of watching them die. You’d really need a good cause for this; it wouldn’t just be some casual whim.

    Given that the NSA is still spying on us, and the TSA is still groping us, I’m pretty sure you can sell society on that one: This one guy will be caught and executed on national TV, and in exchange, Bill Gates will donate enough money to cure breast cancer.

  61. Toggle says:

    After mulling over a few examples, I think the instinctive line in my head is that offsets are acceptable in the case of a commons or shared resource, but unacceptable in the case of a specific victim who does not consent to the transaction. Let’s see if I can justify that.

    We often use currency of one sort or another to negotiate for access to a commons. For example, if Elua gives us a cap of 50,000 Bluefin Tuna fished per year (to maintain a stable population), we might use an auction to bid for each of the 50,000 fishing permits. It’s not the worst way to parcel out tuna to the people that value them the most. But at the moment, ‘carbon fishing’ is basically unregulated, and Elua is not giving us a cap just yet. In the absence of good coordination, offsets virtuously regulate an individual’s access to carbon by pretending that it is a capped resource. It’s not a bid on an open market, but it does at least force you to make a similar set of value judgments, pushing the consumption of carbon emissions unevenly closer to what Elua would be setting in an ideal world.
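
    A toy version of the permit mechanism described above, with made-up boats and bids (this is only a sketch of the idea, not a real auction design):

    ```python
    # A fixed cap on a shared resource, allocated to whoever values it most.
    CAP = 3  # stand-in for the 50,000-permit cap
    bids = {"boat A": 120, "boat B": 95, "boat C": 200, "boat D": 60}

    winners = sorted(bids, key=bids.get, reverse=True)[:CAP]
    print(winners)  # ['boat C', 'boat A', 'boat B'] -- highest-value users get access
    ```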

    But that guy you don’t like? He’s different in two ways. First, he’s already got a clear owner (himself), so his price isn’t a problem of mass coordination, it’s a negotiation. So in that sense, a ‘murder offset’ is immoral because it fails to faithfully preserve the victim’s self-assessment of value – it’s an extreme form of price-fixing.

    The second problem here can be loosely wrapped in the phrase ‘human dignity’. In order to maintain a stable population of that guy you don’t like, we must cap exploitation of that resource at zero murders per year.

  62. J says:

    Hm, here’s a tangential thing this made me realize:

    Society only works because people are ethical the vast majority of the time. Mathematically it’s natural to ask whether an act has positive or negative utility, but it’d be more realistic for our expectation to be that any given act will be significantly positive. That is, a neutral act wouldn’t be 0 utility, but rather “act like an ordinary decent human being”, producing positive (but not extraordinary) utility.

    In Scott’s scenario, it’s easy to work around this by jacking up the offset. But perhaps the murder scenario is difficult because it beggars our notion that you can continue as an ordinary decent person (avoiding stepping on puppies and running to people’s aid when they cry for help, making people around you happy) after having someone killed. Whereas if our base expectation is that you’re on average only helping as much as you hurt, then murder + offset might indeed make you a more positive contributor on the whole.

    If you look at things through a political lens, you may want to challenge my notion that people produce positive utility on average; for example, in a recent open thread somebody linked to “the tower”, a story that presents life as more or less a zero-sum game, where all personal gain comes at the expense of people below. So I think there is something to the claim that political ideologies have different base assumptions about what people and the world are like. A lot of the “living wage” arguments I hear, for example, cast wages as a choice between keeping money or sharing it, and assume that rich people are fundamentally greedy and in need of regulation by virtuous government. Whereas libertarian arguments focus more on wealth creation, and assume a virtuous populace but a corruption-prone government.

    And indeed in the political arena itself, games are often zero or negative sum. But I stand by the claim that society as a whole relies on people producing predominantly positive utility, since so much of it would instantly collapse if people weren’t generally trustworthy and helpful.

  63. Markus Ramikin says:

    Bypassing simple ethical injunctions via complex reasoning opens the way to motivated cognition. If I really, really don’t like that one guy, I don’t know if I trust myself to do the utilitarian math correctly.

    If I really have as-good-as-infinite money, then no problem; I’m sure even that uncertainty can be offset. But if there’s sufficient pressure for me to keep the amount I pay for the offset as low as it needs to be… well, I don’t like the guy, which means my brain will come up with all sorts of reasons why the world would be better off without him. For all I know, some of them will even be true – I mean, I’m a great guy (cough), and if I really hate someone that much, there’s gotta be reasons for it.

    So yeah, in its theoretical purity the idea seems okay to me. Having it practically applied by real humans with finite resources and with human cognitive machinery? Not so sure.

    I mean, at the end of the day, I could probably do it and not mess up. (We’ve already established that I’m a great guy.) But these other people who are not me and who might, for all I know, not like me very much…

  64. Not Really Anyone says:

    Point 2: Entirely ethical. Rather than looking at it as “I’m spending money to offset my bad ethics”, it’s “I’m spending money to purchase a product”. Not eating meat and spending money are both just costs. How exactly you help a cause doesn’t matter. In fact, if you’re happier spending the money to have two people stop eating meat so you can eat meat yourself, you’d be stupid, and perhaps even a bad person, not to do it. If you stop eating meat, and spend the money on things you want, we’re -1 vegetarian, and you’re less happy. It’s not EVERYTHING you could do, but 10% to charity isn’t EVERYTHING you could donate.

    The third case is just an elaborate “1 person dies, 10 live”. It’s fine (if everything works out perfectly, which it wouldn’t in this world, so it probably shouldn’t be adopted).

  65. suntzuanime says:

    My best guess is that it boils down to the General Argument Against Consequentialism, which is “you can’t actually measure costs and benefits that well, and any attempt to do so will be inevitably corrupted by hypocrisy”. In the real world example of carbon offsets, I have heard stories of people selling carbon offsets generated by taking obsolete power plants offline, which they needed to do anyway, and so those offsets did not actually lead to any reduction in carbon. Your hypothetical example of everyone in the world eating meat and funding anti-meat ads seems like a similar scenario; you say “no one would be able to keep a straight face”, but history suggests people are pretty good at keeping a straight face when it comes time to reconcile their theoretical ethical commitments with the unethical things they want to do.

    There is also the issue that our ethical systems are themselves hypocritical; there is some conflict between the ethical beliefs we theoretically hold, and the things we actually care about. In the case of meat offsets, we don’t want to admit how much of our vegetarianism comes from the fun of being a picky eater instead of concern for our brothers the chickens. In the case of murder offsets, we are committed to the equality-and-therefore-fungibility of human life, but when it is ourselves, someone we care about, or even someone we vaguely know being traded off against the sort of marginal life that is cheapest to save, many people will become uncomfortable. Basically, by allowing easy tradeoffs, offsets expose us to lots of trades we want to claim to be willing to make but don’t want to have to actually make.

    Basically the issue is that we have the ethics we have for a reason. Ethical offsets exchange those ethics for a set of ethics that are in theory equivalent, but they are not likely to be equivalent in practice, either because the offsets make false claims of equivalence or because our ethical systems do.

    • TheAncientGeek says:

      There’s more wrong with utilitarianism than that… there’s the blindness to societal goods like justice and equality, and blindness to intention.

      Situations where 1 life is swapped for N lives occur and are approved of, under the right circumstances. We generally approve of a police sniper shooting a hostage taker, and what is relevant is not so much the calculus of lives, as the intentions, the fact that the policeman did not engineer the situation.

  66. Vamair says:

    I don’t really understand. So there is some sum of money one can donate such that the combined utility of their immoral act and the donation is greater than zero. Fine. It’s still less than just the donation. It doesn’t really influence whether one should be punished, as the utilitarian morality of an act does not have a simple relation to the punishment for it. A utilitarian good deed doesn’t always mean a praiseworthy deed, as we have to calculate the utility of praise as well. Yes, it’s completely possible to make the world a better place by killing one guy but sending lots of money to charity, but that doesn’t seem surprising at all.
    I’m less sure about the dictator example, as there is a common human tendency to think their political enemies are evil, so it may be really better to give the money to some other charity, and not the one against the government of the country one’s visiting.

  67. Daniel says:

    The human-targeting cases lead to an arms race.

    The arms race is obvious in “buy yourself a murder”. If murder is ethically affordable enough to be common, people need to spend lots of effort avoiding anyone who might hold a grudge. You create a world in which the angry and rich are slavishly obeyed, and the angry and middling-affluent are nervously avoided. Everybody loses.

    The arms race also happens in “don’t be a vegetarian, create one”. If everybody is atoning for sins by aiming propaganda at others, you get a world in which everyone is swamped in competing propaganda, and you have to overcome herculean masses of proselytization to get any unbiased information. Everybody loses.

    The problem arises because you’re targeting other people for your effects, and those people disagree with you on what’s moral.

    The problem goes away if everybody agrees on who ought to die, or everybody agrees they ought to be vegan, or if you’re targeting the unthinking atmosphere which doesn’t try to protect its CO2 levels from your interference.

    Unfortunately your enemy is probably not suicidal, and not everyone thinks they’d be better vegan, and if everyone agreed on moral orderings we wouldn’t need discussions on morality.

    You can rescue the scenarios with consent. If you restrict it to “you can buy a murder, if the victim gives true informed consent,” or “you can buy someone else’s vegetarianism, if the substitute gives true informed consent,” then there doesn’t have to be an arms race.

    (In the real world, we wouldn’t trust such consent, because we’d expect people to sneakily coerce others and prey on their misjudgments, but this is a hypothetical.)

    Logically, if you somehow know in advance what fair price they would charge to be manipulated or murdered, you could pay it without securing consent. Then there would still be no arms race, as long as everyone could see that the murders / manipulations were being atoned for at prices the victims were content with.

    So you can ethically murder or manipulate someone, but only at their market rate.

    Otherwise, you get an arms race, and everybody loses.

    (Thought experiment: a society where everyone has a posted schedule of their market rates for suffering various kinds of insults, injuries and consent violations. “Go ahead, persuade me to worship Mithras, murder me for charity, I’m easy.”)

    The arms race is the big problem. There are two other problems, in the real world of humans as opposed to the hypothetical world of perfect reasoners.

    First, if the rich are routinely flouting moral rules by paying for them, it’s very hard for ordinary people to remain convinced they should take the rules seriously themselves. (Consider what selling “indulgences for sin” did to the Catholic Church: they were a key trigger for Martin Luther and the Protestant Reformation.)

    Second, absent openly settled prices for each offset, these are scenarios that would heavily encourage motivated reasoning. So most people would underpay for their misdeeds.

    • Not Really Anyone says:

      http://lesswrong.com/lw/2k/the_least_convenient_possible_world/ was brought up. Yeah, tonnes of good reasons not to allow murder, even when it’s paid for with offsets – that’s not the point.

      If every complaint you have about how allowing murder also allows all kinds of other evil things were magically solved, would you then be happy allowing murder? If paying for indulgences somehow didn’t have any negative consequences, would you be alright with paying for indulgences?

      We’re trying to get to the core of whether ethical offsetting works or not, not so much about the various ways we could implement it.

      • Daniel says:

        Nonconsensual ethical offsets between two ordinary humans?

        I have no standing to claim my moral ordering is so much wiser than my victim’s, and I have no right to exploit a lack of defenses that I would not want exploited against myself.

        So I can’t engage in (manipulative or murderous) ethical offsets that I haven’t paid the victim’s own declared price for, because it would undermine the larger tenets of human cooperation I’m relying on.

        A farmer, however, makes no such bargain with her chickens. Ethical offsets work fine for her. The chickens can’t really engage in meaningful defensive expenditures against the farmer, and the farmer has plenty of cause to think she has a better understanding of the world’s utility function than the chickens do.

        So, sure, the farmer can ethically offset her harm to the chickens. And Omega can ethically offset its harm to us humans.

        But for humans handling other humans, ethical offsets grossly violate critical “we both count on each other not to abuse this” social contracts.

        You can only make human ethical offsets work if you magic away the damage to the “I won’t impose my morals on you” contracts and the “I won’t make you need to engage in defensive expenses” contracts.

        I can’t think of a way to magic away those that doesn’t also erase lots of other moral dilemmas, to the point where we end up talking about a community of angels who could make paradise even without ethical offsets.

        • Daniel says:

          Nonconsensual offsets do work if:

          (a) somehow, everybody binds themselves *not* to alter their behavior despite knowing ethical offsets might target them, and

          (b) everybody restricts their offsets to a commonly approved set, like “save 100 lives in exchange for 1 murder”.

          In that case it would be ethical.

          But, again, if you could achieve that kind of mutual tolerance pact, you could get paradise even without ethical offsets.

      • Jatudrei says:

        It seems to me that while offsets might work in the case of carbon pollution, that’s because the CO2 and other gases are totally fungible–they spread throughout the atmosphere, as does the cleansing effect of more trees wherever, so we really do only have to look at the net bottom line.

        By contrast, this is not true even of other forms of pollution–if I pour sludge into this river over here, for example, that is not at all cancelled by cleaning up some other river somewhere else. Likewise, if I visit North Korea and thereby discourage some poor fellow who would like to believe that someone somewhere hates his oppressors, that’s not cancelled by my making a nice contribution to the Fund For Freeing North Korea. Or, in our case, if I murder Mr. A, but give charity to save the life of Mr. B, Mr. A is still dead–his murder is not offset, and no cancelling effect has occurred.

  68. American in Istanbul says:

    I say go ahead, bite the bullet. You’ve illustrated that in some byzantine circumstances, consequentialism implies a person can murder with impunity. There are complicated fixes (e.g. rule utilitarianism, “Schelling fences”), but the truth is: there is no action so intuitively bad that, given enough ingenuity, you can’t justify it under consequentialism if you make the circumstances bizarre enough. At some point the fixes needed to maintain a pure version of consequentialism seem ad hoc.

    Why isn’t the upshot that consequentialism is a flawed or insufficient theory? After all, this would be far from its only weakness: start the list with the incommensurability of human goods, and the great difficulties involved with predicting future consequences. If it’s a choice between moral intuitions and a flawed ethical theory, will anyone seriously stick by the theory no matter what? Better to search for a more satisfactory ethics, or at least concede that consequentialism alone does not suffice.

    I think no sophisticated proponent of the main alternate accounts of ethics says consequentialist thinking is not often appropriate. Immanuel Kant himself would admit that, all else being equal, we should act prudentially with a view to likely outcomes. At a basic level it’s common sense. The problem is with the grandiose claim that we can devise a rational calculus yielding the correct action in any circumstance.

    • RTO Dude says:

      Istanbul sez: “The problem is with the grandiose claim that we can devise a rational calculus yielding the correct action in any circumstance.”

      But isn’t that the goal, sans the “any circumstance” qualifier? To develop a rational framework that helps us decide what’s “good”, particularly in borderline cases where we might not have (or don’t trust) our intuition? It needn’t be perfect (an impossibility – cf. the trolley problem), nor should it conflict with a strong intuitive belief (e.g., that murder is bad). Nor is it static – we’d develop it by fitting it to our intuition, then test it against ambiguous cases and/or others’ insights, alter, test, repeat. (Suspect I’m being Captain Obvious here.)

      I’m going to return to lurking. I have to – I realize I don’t have the conceptual shorthand to contribute yet. (It took me time to look up Yvain’s treatment of “Schelling fence”, for example, and “trolley problem” – I still have too many terms outstanding, and I have to read Kant’s work too? Eek!) But first, an observation and a personal note.

      Observation: it is so good that y’all are trying to figure out things like this at such a young age. (Is this what Philosophy students do?)

      Personal note: I basically had a nervous breakdown in my 30’s and took a year off doing just this, working through a logical, secular belief system for doing the right thing. (Doesn’t need to be secular, I just lack the cognitive makeup to adopt the faith-based version.) It was a watershed moment for me, and I’m convinced I led a much better life afterwards. I’m realizing what I ended up with was fairly simple, but that can be a virtue.

      Now, at 60, all of a sudden the world seems more complex than it was in the 80’s, and I feel the need to revisit my beliefs. That relative simplicity might just be perception, but I don’t think so. Regardless, this sort of thing is well worth the time and effort spent in terms of life quality. It’s not an academic exercise. Congratulations again to y’all again for starting so early.

    • Was going to post an identical comment except mine had “and” instead of “,” in the first sentence 😉

      • Consequentialism can only be replaced by something that also considers consequences in the (likely) worlds, though. I’d like to see a rubric for discharging my obligation to forecast consequences, given that we’d all be better off with a policy that accepts less than infinite analysis before action.

        Unfortunately in the end we just have “reasonable person” standards argued in front of not-so-sophisticated jurors (who might unfairly judge you in hindsight).

        • Alex says:

          I think the “reasonable person” standard plus the hindsight bias is on balance good. My reason is that they exert a *strong* stabilizing force that prevents the system from straying too far into an unstable state (e.g. revolution), as long as the benefits of being in the system are worth the cost.

  69. irrelevant says:

    We commonly consider “great leaders” to have successfully offset their various wars and slaveries and murders by the positive results of their reign, and often under far more dubious causal circumstances than Scott lays out here. So I see no reason this result would be considered counter-intuitive.

    • BD Sixsmith says:

      But we think* that they were good in spite of their atrocities, so it doesn’t legitimise the atrocities themselves.

      * Well, most of us do. As a pious young leftist I once failed to enjoy the Ancient Egypt exhibits at the British Museum because of slavery.

      • irrelevant says:

        I disagree. I believe we do package them together, into a combined idea we call legacy, and evaluate that. Ethical offsets are an attempt to apply legacy evaluation to your future self, so it’s unsurprising they behave similarly.

    • TheAncientGeek says:

      When we think like that, we tend to substitute “great” for “good”, which is a clue that we are not actually thinking in terms of ethics, or at least not follow-society’s-rules ethics.

      • irrelevant says:

        I noticed that, but I consider it part of the support that this is an intuitive conclusion. Do you not think that the hypothetical person described in the murder offset scenario would be judged on “great” rather than “good”?

  70. Vladimir Slepnev says:

    I must say that the idea of “offsets” doesn’t make any sense to me. Let’s assume that your utility function is a weighted mix of your near desires (take the flight) and far desires (minimize pollution). There are two possibilities:

    1) Taking the flight gives you positive utility, after taking into account the fun of the flight, the price of the ticket, and the extra pollution.

    2) Taking the flight gives you negative utility.

    In case 1 you take the flight, in case 2 you don’t. The same applies to donating to charity: you do it if it gives you positive utility, and don’t do it otherwise. There’s no rational reason to link the two decisions. For example, if taking the flight gives you negative utility, then donating and taking the flight is worse than donating and not taking the flight.
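    As a minimal sketch of this decoupling (all utility numbers below are hypothetical, chosen only for illustration): with an additive utility function, the best choice about the flight is the same whether or not you donate, so a donation can never "rescue" a flight that is net-negative on its own.

    ```python
    # Hypothetical, additive utilities: the flight decision and the donation
    # decision are independent, so linking them changes nothing.
    FLIGHT_FUN = 300        # assumed enjoyment of the trip
    TICKET_COST = -100      # assumed cost of the ticket
    POLLUTION = -250        # assumed harm from the extra emissions
    DONATION_GOOD = 400     # assumed good done by the donation
    DONATION_COST = -150    # assumed cost of parting with the money

    def total_utility(fly: bool, donate: bool) -> int:
        u = 0
        if fly:
            u += FLIGHT_FUN + TICKET_COST + POLLUTION
        if donate:
            u += DONATION_GOOD + DONATION_COST
        return u

    for fly in (False, True):
        for donate in (False, True):
            print(f"fly={fly!s:<5} donate={donate!s:<5} utility={total_utility(fly, donate)}")
    # With these numbers the flight is net-negative (-50), so "donate and fly"
    # (200) is worse than "donate and don't fly" (250), exactly as argued above.
    ```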

    Why do so many other commenters say that the post is an argument against utilitarianism???

    • onyomi says:

      Excellent point. It seems that whether one is a utilitarian or otherwise, most can agree that one “should” always do the best thing one can (whether that “best” be defined by greatest good for greatest number, duties, or whatever), and what is ethics other than telling you what you “should” do? The idea of “cancelling out” one bad with a good seems to assume some sort of supernatural morality counter that can create equivalencies between unrelated acts.

    • Jiro says:

      If you are permitted to weight utility to yourself higher than utility to other people, then the problem does not arise (for the plane–the murder offset problem still exists). However, what is usually thought of as utilitarianism does not permit you to do that. And if you don’t weight utility to yourself higher than to other people, then what you say is true only in a trivial sense–the flight doesn’t have positive utility, and neither does any other action you might take which is not giving everything you own to charity.

    • Nisan says:

      Offsets don’t make sense for VNM-rational agents, but one can argue they make sense for humans that subscribe to consequentialism as a moral theory — in particular, if you model humans as parliaments.

      I think commenters here are claiming that the thought experiment is a refutation of some consequentialist moral theories, including utilitarianism.

  71. Anonymous Derek says:

    As far as I can tell, the simplest cases here are 100% legit. I can’t imagine anyone saying “You may not take that plane flight you want, even if you donate so much to the environment that in the end it cleans up twice as much carbon dioxide as you produced. You must sit around at home, feeling bored and lonely, and letting the atmosphere be more polluted than if you had made your donation”.

    People do say this, on this subject and others.

    You see it most virulently when you suggest technological solutions to what people see as moral problems. Some people have internalised the “sinfulness” of fossil fuels to the point where they’re actively disparaging of, say, carbon capture technology, because it will allow people to keep sinfully using all these fossil fuels.

    Another example would be those guys who came up with those anti-date-rape-drug straws. A common objection to this innovation seemed to be that the onus was on morally-deficient people to stop being morally-deficient. The whole straw thing is probably not going to change the world, and there are better arguments for why they might not be a bad idea, but this particular narrow slice of the sexual-violence problem seems peculiarly tractable to technological intervention. Demanding that the only moral solution is for people to stop behaving badly seems bizarre and alarming.

    And yet people still do it.

  72. Abel Molina says:

    Would say that a big difference between the first case and the third and fourth ones is that carbon dioxide molecules are strongly fungible, while animals are not. However, there could be an argument against offsetting (or at least naive forms of it) in the first case if this fungibility broke down – for example, if the pollution improvement happened in a very different area from where the emissions occur.

    As for the second case, it’s kind of tricky, since it’s not completely clear how to pin down the damage caused to the country by visiting it. But assuming that it is contributing to the power of the regime, and that this is strongly correlated with the money differential between the regime and its opponents, then the offsetting argument does make sense…

  73. Daisy says:

    The best I can do here is to say that I am crossing a Schelling fence which might also be crossed by people who will be less scrupulous in making sure their offsets are in order.

    The “people” who might do this include future you. Crossing this Schelling fence will inevitably end in supervillainy.

    Power corrupts; it’s a cliche because it’s true. The power to murder anyone who annoys you and justify it to yourself afterwards as ethical behaviour is not a power anyone could possess and still remain an empathetic, ethical, non-terrifying human being.

    • Peter says:

      Not only the power to murder; the power to subtly intimidate. If we’re in this murder-offset scenario and I’m loaded and I express annoyance at you, that’s rather more intimidating than if I don’t have that option.

      That said, if you had serious funds yourself, you might be tempted to pre-emptively murder me (and offset) to prevent that… if I don’t pre-empt your pre-emption of course.

  74. Anonymous says:

    Carbon emissions are fungible, chickens are fungible, people are not. This is all very easy for me to swallow. What’s the problem?

  75. Adam says:

    I don’t know if they were the first to come up with it, but http://www.cheatneutral.com/ is one prior example of the idea of offsetting immorality (in this case to highlight the absurdity of carbon offsetting).

  76. Adam Casey says:

    How is this different from the organ donation version of the trolley problem?

    The answer in both cases seems to be “yes if you add enough caveats that’s moral, but in the real world you should be more worried about the cost of an error where the caveats don’t hold than failing to reap the rewards if they do.”

  77. Jaskologist says:

    This assumes that good and evil are using the same units, with good adding, and evil subtracting.

    But what if they are two different things? What if doing an evil deed is like adding a black stone to a pile of rocks, and doing good is adding a white stone? Adding two white stones does not remove the black stone you previously added. The evil that men do lives after them.

    This is unlike carbon or money, where there really are transfers that can take place which will wipe out a previous transaction. One human is not interchangeable with another. Whether chickens are similarly fungible probably depends on your view of animal rights; I would not expect a vegetarian to find chicken offsets acceptable.

    • Jaskologist says:

      Doing a quick survey of the major religions, Judaism and Christianity definitely fall into the “evil is non-fungible” camp. Sin is never expunged by doing a good deed to compensate, only by a death.

      Platonists and Augustine, on the other hand, would probably take issue with the characterization of sin as having a substance of its own; they tended to see it more as a tendency toward non-being. I’m not sure how that impacts a balance sheet.

      The Karmic system of Buddhism/Hinduism/Jainism seems like a classic example of ethics offsets.

    • In terms of how warm-fuzzy people feel about the tribe they’re a part of, you’re absolutely right. But this proposal is part of an extended Spock-ian roleplay.

  78. Felix Benner says:

    I argued unsuccessfully on Less Wrong about this distinction: value ethics vs agency ethics.

    When you murder someone you infringe on their agency; when you pollute the air, you don’t. That is a categorical difference.

    With the vegetarianism, it’s not so clear. I’d argue that at least part of it is anthropomorphizing the animals and therefore pushing ‘eating animals’ towards the categorical boundary to ‘infringing on agency’.

  79. mysternee says:

    I actually think this is a very interesting objection to carbon offsets, and is a powerful argument against their moral credibility. It’s definitely something I might borrow in future.

    In terms of utilitarianism though, I’m really not sure there’s much of a bullet to bite. Yes, it seems that on balance the murderer-donor now has (to borrow the carbon metaphor) a utility-neutral footprint. But utilitarians, of any persuasion, don’t care about that – they care about whether the act itself is promoting utility, and by that measure murdering someone is always going to be wrong unless it directly promotes net utility (and that’s at a bare minimum; most utilitarians would have stricter conditions). If they’re act-utilitarians, then one act was good, the other was bad, end of story. If they’re rule-utilitarians, then (assuming a plausible rule-set), one act is likely to be in line with utility-promoting rules, and one act is very likely to be proscribed by utility-promoting rules. Utility may be commensurable, but individual acts are commensurable iff they result in equivalent utility. These two acts quite plainly do not.

    So I really don’t see anything for utilitarians per se to worry about. If forced to ‘rate’ our murderer-donor, they’ll say ‘you did about as much bad as good, you could easily have promoted utility much more by not committing that murder, you pretty much suck’. I also don’t see anything here for utilitarians who donate 10% (or whatever) of their income to worry about; sure, maybe they’re not maximizing utility, but that’s a separate question not really brought into focus by this hypothetical. Utilitarians who engage in carbon offsetting, however, may well have something to think about (though under certain, quite plausible conditions it would be defensible).

    • mysternee says:

      Actually, there is possibly one bullet utilitarians might have to bite: is the murderer who donates worse than someone who does neither? If forced to give a net-utility assessment I think many utilitarians would have to answer ‘no’, which a lot of people would find counter-intuitive.

      I can think of a couple of responses to this. One would be a Singer-esque ‘yup, you’re not saving people’s lives when you could, doing and allowing harm are the same, so you really do work out just as bad as a murderer who saves a life’. To be honest, as someone who’s taken the pledge and donates 10%, I am comfortable with this – it’s the reasoning behind my donation, after all.

      Someone with a rule-utilitarianism bent could probably take a more abstract position as well, and regard ‘donate-to-murder’ as a kind of behaviour that will not tend to maximise utility, and more broadly as indicative of moral reasoning that will tend not to maximise utility. On these grounds it would therefore be deserving of censure and reproach.

      The latter will probably seem like the stronger position, as it avoids the charge of ‘excessive demand’ that the former’s unfavourable comparison will likely evoke. However, if like me you are comfortable with a highly demanding form of utilitarianism, then this is probably a bullet you are willing to bite.

  80. Deiseach says:

    Also, anyone can just go murder someone right now without offsetting, so we’re not exactly talking about a big temptation for the unscrupulous.

    “Today, in local news – freak accident at city hospital, as popular doctor Scott Alexander was taking his lunch break to go register chickens for the vote in the campaign to convince people to become vegetarian and give up tasty fried chicken, when an anvil mysteriously dropped on his head. Hospital authorities have not yet issued a statement as to how come an anvil was hanging from a crane outside the main door.

    In other news, city hospital announces anonymous donation of Very Big Cheque to their Patch Adams Let’s Terrorise Sick Kids with Creepy Clown Doctors Fund. Hospital authorities add: “Connection between one event and the other? No, come on, not at all! Any odour of fried chicken adhering to the cheque is purely coincidental!” And now – the weather forecast!”

  81. kerani says:

    I’m with the other Catholics here – this reeks of purchasing indulgences and substitutes for military draft, and it’s wrong.

    More specifically – the examples given compare a transaction involving commodities in the public commons (carbon in the global atmosphere) vs transactions involving specific lives.

    (While I deeply disagree with the ethical values (and factual claims) of most vegans/animal rights advocates, I can see where they would equate human and livestock lives. (Part of my disagreement with their values is that their values rarely extend to weeds, rodents, and cockroaches. ))

    Carbon release/offsets are alleged to matter because of the impact on specific human lives, through environmental damage. They don’t matter because of the release/offset itself. On the other hand, a killing ends one life, which cannot be restarted. If the proposed murder offset were to bring the murdered one back to life, then perhaps we could compare the two. (However, my impression was that most here don’t buy into that resurrection thing.)

    While I can accept as rational the argument that of two choices, the one that helps more people is to be taken, I don’t accept as rational the idea that helping some people as a result of one choice (giving to a malaria foundation) in any way changes the damage done to others through a different choice (murder, refusal to import DDT, etc.). Each action should be weighed on its own – including the decision not to live one’s life weighing the morality of the action chosen at each moment (i.e., typing on the keyboard vs more proactively doing good in the world).

    • It was wrong before because it wasn’t done right. You can’t just recoil from “indulgences” like that.

      • kerani says:

        I disagree. As with communism and the USSR, the problem wasn’t the fallible people running the system; it was that the system was not designed for fallible people, even though fallible people were a foundational feature of the environment the system was operating in.

        And I’ll go further than that – indulgences were wrong because – like murder offsets – they were not transfers in items which could be equated. One could not reduce the harm one had done to one’s neighbor (by gossiping about that person in the market) through doing any good to any other person (or organization). (You couldn’t even reduce the harm done to God in this manner.) By giving the appearance of such – and worse, by profiting from this false solution – the Church was at fault. (And eventually acknowledged this.)

    • Illuminati Initiate says:

      “weeds, rodents, and cockroaches”

      Err, animal welfare people often do ignore cockroaches and weeds (this distinction is not actually arbitrary though), but rodents? I don’t think I’ve ever heard of anyone who valued chickens enough to be against killing them for food but did not extend the same consideration to rats (and rats are actually smarter than chickens, I think). (There are people who value cockroaches though. Those people really are crazy. And inconsistent, literally all food production involves the mass killing of insects.)

      • kerani says:

        I don’t think I’ve ever heard of anyone who valued chickens enough to be against killing them for food but did not extend the same consideration to rats (and rats are actually smarter than chickens, I think).

          I’ve met and talked with people who objected to using chickens to feed malnourished kids (and dogs) but were okay with killing rats that destroyed stored food, killed baby chicks, or just inhabited houses where the animal-rights person lived (so that there was no utility to killing the rats). Not to mention them being okay with feeding maggots & fish meal to chickens and using chemicals to kill the lice that fed on the chickens. (Without the chickens’ consent, even.)

          Chickens are definitely dumber than rats.

        The number of lives required to feed a human varies quite a bit from system to system, and is very much dependent on technology, species fed, and geography. In general, range land ruminants and no-till cereals, combined with certain fruit tree & bush cultivars, are the least impactful.

        Any time you go to harvesting hay or grains, or tilling soil for veggies, though, you’re killing things. Sub-cute things, sure, but ignoring things doesn’t remove them from reality.

        • Illuminati Initiate says:

          The distinction between insects and chickens is not one of “cuteness”. Usually, someone who values the lives of chickens but not insects will say that insects are not intelligent/sentient enough to matter, they do not qualify as a “mind”.

          • kerani says:

            I know that many who limit animal welfare actions to only charismatic animals claim to do so on grounds of intelligence and/or sentience. I reject this on the grounds of a) a similar absolute distinction not being made (where warranted) between human (and human-approximate) species and non-human species, and b) the failure to produce a definition of sentience that includes sheep and chickens yet excludes bees, mega-spiders, or other charismatic non-vertebrates.

            Failing evidence otherwise, my conviction that most animal-rights activists are acting on aesthetic grounds rather than rational ones holds – as does my conviction that my aesthetic values are equally valid (if not more so).

    • Anonymous says:

      But the only real benefit from indulgences was whatever social/emotional relief you experienced from having the weight of sin lifted from you.

  82. Anonymous says:

    One cannot offset murder (harming someone else) with saving some other person’s life. It is different from the vegetarian example in that you are harming someone, as opposed to just steering them in a different direction (which can be reversed) and doing so in a manner that leaves them in charge of making the decision.

    It is simply morally wrong, as you are imposing an action (murder) on someone else (I assume without their consent). In the vegetarian case you are simply influencing their actions.
    The use of force and coercion is never justifiable.

    • Illuminati Initiate says:

      Question: do you believe in private property? Also, do you think it is acceptable to use force and coercion to stop someone from using it on you?

      • Anonymous says:

        Illuminati, I meant to say not justifiable in this context. I’m a proponent of private property, self-ownership and self-defense, but this is not the case in this example.

  83. mister_ghost says:

    This showcases the main problem with fully aggregate outcome based systems: they feel totally whack (see Zach Weiner’s happy Felix, a man so overjoyed by everything that the responsible choice is to enslave yourself to him).

    I would say that pollution is, to our minds, a collective activity. You don’t wrong someone by releasing the particular CO2 patch which trapped the particular energy which melted the particular ice that did whatever, you just have some of the responsibility for the pollution problem, according to how much of the polluting you did.

    Murder, on the other hand, is treated with much more granularity. We do not hate murderers because they raise murder rates or “contribute to the murder problem”, we hate them because they take away a particular person’s life.

    Meat eating probably exists somewhere in between the two.

    I’m a STEM student out of my philosophical depth, but that seems to be the difference here. Even if purchasing murder offsets before you take your vengeance makes the world a better place, you still wrong the person you’re killing. When you drop your net emissions to zero, it more or less all comes out in the wash.

    If you’re the kind of person who thinks you should always avoid wronging people, murder offsets are no good. If you want the best outcome for everyone, go for it.

    I think the silliest outcome of this idea is that a stranger can pull me out of a fire, dispatch me for fun two years hence, and still come up smelling like roses.

    Edit: I’m on mobile, so sorry for the inevitable typos

  84. Tarrou says:

    There is a simple actor problem with all the hypotheticals around murder. The benefits of the “offsets” do not go to the victims of the crime (the family/friends/employees/mistresses etc. of the dead man). Furthermore, there are actor problems with who gets to decide the price of ethical actions, and whether or not bad actors should be made to feel the effects of their actions.

    Utilitarianism can be useful in some situations, but it tends to “Assume a spherical human” when it comes to the real world.

    More interesting is the idea of indulgences, whereby you get no cessation of punishment from temporal authorities, but you can assuage moral guilt (i.e. still get into heaven).

  85. Anonymous says:

    Can everyone on this thread who’s arguing that you should donate to charity and not kill the person please explain why they’re on SSC, rather than working constantly so they can donate more money to charity? (And, for that matter, why they have a computer and internet access.)

    • Lambert says:

      Comment by Lambert

    • mysternee says:

      Because nobody’s perfect? I am willing to admit that I am a less than perfect moral entity. That being said, I am morally aspirational – I’m not going to spend all my time murdering people and reading SSC just because I don’t have the will-power to spend all my time working to save people’s lives.

    • Anonymous says:

      Already hit on that one above – but I’m sure I repeated someone else’s comment (or that of several people), so no biggie.

      There are virtues and vices in everything, and these must be weighed against the other options in any choice. A person who is currently engaged in constructing a HfH home might be taking a physical break – so as to return to work refreshed – and a mental break (so as to return to work less likely to cut twice instead of measure twice) by getting on SSC.

      Alternatively, a person could be on SSC instead of yelling fruitlessly at others on FB, so even while the choice to be on SSC was not optimal, it was not the worst one, either.

      And I don’t know anything about the moral choices before other people on this thread, nor the constraints placed on them by physics and society. I only know my own.

    • They’re ethics-offsetting their own time-wasting here by preaching superior ethics to the rest of us. Per Scott’s reductio on “everyone eats meat and donates to vegetarianism-advocacy” this can’t scale.

      • mysternee says:

        Why can’t it scale? The main reason I donated 5% of my income last year is because I took Peter Singer’s Practical Ethics Coursera course (good thing I had a computer and an internet connection). What pushed me to increase my donation to 10%+ was Scott’s recent post about effective altruism (good thing I kept my computer and internet connection).

        I’m not trying to toot my own horn here, I just think the idea that moral discourse is somehow antithetical to moral edification is, frankly, wrongheaded. People talking about things can lead to people doing things, and is often a necessary step in that process.

        • “It can’t scale” in the sense that posting here about how people should be nice is silly, because we’re all so nice already and so have no excuse to waste time here. Unless we’re not as nice as we think. Kudos for your donation.

    • Illuminati Initiate says:

      Because I, like all humans, am not an ideal moral agent.

      This is also why I disagreed with Scott about the charity vs politics thing. Expecting people to do the right thing individually is unrealistic. Instead we need to structure society and the state in such a way that the right thing is done collectively without having to rely on individuals spontaneously choosing to give away their own wealth and time, which they usually won’t, and without putting too much burden on individuals (tax funded welfare state programs are a step in that direction).

  86. Anonymous says:

    These offsets do not go into the same utility function, therefore no actual offset has been made.

  87. drs says:

    IMO a rational society would be imposing carbon taxes on fossil fuel use, sufficient to compensate for the harm done, or at least to slow down the rate of depletion and buy time for future developments. Politically, you might not even use the money directly for harm reduction; you might send it out as per capita dividends (“fee and dividend”) so people support the taxes because it’s more cash for them.

    So I can easily view carbon offsets as an altruistically self-imposed carbon tax. You don’t even have to buy carbon offsets per se; simply committing to “I will spend X more for my flights or car refills” with the X going to food banks or whatever should duplicate the effect of a tax on your consumption… if you stick to it. (So if for some reason you don’t trust carbon offset options, you can still raise your cost of consumption to duplicate a high carbon tax.)

    (Also note that if you’re inclined to buy offsets for a flight, you should also be paying maybe $5/gallon for your gas, however you choose to direct the money.)
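    For concreteness, here is a rough back-of-the-envelope conversion in the spirit of that self-imposed tax (the ~8.9 kg of CO2 per gallon of gasoline is a commonly cited combustion estimate; the per-tonne prices are purely illustrative):

    ```python
    # Convert an assumed price per tonne of CO2 into a self-imposed surcharge
    # per gallon of gasoline.
    KG_CO2_PER_GALLON = 8.9   # approximate CO2 from burning one gallon of gasoline

    def surcharge_per_gallon(price_per_tonne_usd: float) -> float:
        return price_per_tonne_usd * KG_CO2_PER_GALLON / 1000.0

    for price in (20, 50, 100, 560):
        print(f"${price}/tonne CO2 -> ${surcharge_per_gallon(price):.2f} per gallon")
    # A roughly $5/gallon self-charge, as mentioned above, corresponds to a CO2
    # price in the neighborhood of $560 per tonne under these assumptions.
    ```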

    (Also, ideally you’d not buy offsets, but buy cleanly produced fuel in the first place, e.g. solar-driven air-to-fuel, thus continuing your lifestyle without the externalized costs. But there are no existing options for that for your plane flight, nor for filling your gas tank conveniently, though electric cars + green power options come close.)

    But burning fossil fuels is a case where we can say “ideally we’d be running on solar electricity or something but we can’t leap to that right now, so we have to compromise with burning fuels.” There’s no need to compromise with murdering people you don’t like. We can at least envision continuing to fly and drive without damaging the environment, and the offsets stand in for our being able to do that right this moment. You can’t envision murdering people without murdering people.

    (As for meat, depends why you think it’s bad in the first place. If the sin is murdering animals, animal welfare doesn’t stand in for that, though donating to vat meat research might. If the harm is from environmental damage, offsets might make sense, especially for eating out where you often don’t have an option to eat grass-fed beef or whatever constitutes your alternative sustainable vision.)

    ***

    As for principles, my ethical foundation is an ad hoc hybrid of utilitarianism and social contract theory, with one countering the abuses of the other. Even for pure consequentialism, I’m suspicious of the ability of human-expressible rules to capture the complexity of reality. The human brain is a powerful consequence-evaluating machine in itself, and “I don’t *like* this outcome, even if I can’t articulate why via some universal rule” may be at least as good a guide as any set of rules. Not infallible (“my gut says gay people are icky”) but not to be discounted, either.

  88. Atreic says:

    Two comments, which may have been made already, in which case apologies (obviously I have a problem in 2015 where I like listening to my own voice more than reading others’).

    1) The reason you have to bite the bullet is because there is a world with a really rich person in it, or conversely because there are still some ludicrously good value-for-money interventions that save lives that we haven’t maxed out yet. Interestingly, in most places where they try to put a cost on a healthy year of life for the entire population (e.g. NICE for healthcare in England), the cost ends up being about the average person’s annual wage, at least within orders of magnitude. This feels ‘sort of right’ in that if one person had to spend more than a year of their life to give one person a year of their life, something has gone a bit wrong. If your bullet was ‘someone can give every penny they ever earn in their entire life to offset the harm of murdering someone they really want dead’, that might still be an interesting bullet to bite, but it’s much less icky – I mean, we already have a social rule that you can murder someone, if you go to jail for the rest of your life. I think there is a huge underlying ethical Wrongness in a world that has huge wealth disparity and hugely cheap life-saving interventions that have not yet been maxed out, and so trying to derive consistent Good Ethics in that world will always lead to flawed conclusions.

    2) People aren’t fungible. I don’t like this point as much; it’s more of an indefensible axiom than a good utilitarian argument. Fungibility of people is a useful approximation we make so we can do calculations, and a lot of the time we need to (in the same way we can ignore air resistance in some physics problems). But in the same way Newtonian physics breaks down at the quantum level, when you’re talking about an individual killing an individual I _think_ it’s too small a scale and the fungibility approximation breaks down. I think there is a difference – if carbon offsetting works, then the world really is no worse to any intelligent observer. If murder-offsetting works, then there is at least one person who probably would deeply prefer not to be murdered, no matter how many hundreds of thousands of people would be saved in his name.

  89. meh says:

    So the setup is: can a man of infinite wealth trade one life for another, using his wealth to keep the net societal utility of side effects zero, or even slightly positive? This setup makes it seem like the side effects would be small, a rounding error. But I think the net effects of allowing such a policy would be much larger than the individual trade, as it would undermine democracy and public trust. Maybe the man of infinite wealth can still offset these, but they are a much larger cost than it appears at first glance. Perhaps on the order of ‘if someone cures cancer, can he get a free murder’.
    So the motte is: can a person trade one life for another, if they keep the net utility of side effects zero; and the bailey is simply: can a person trade one life for another.

  90. Nestor says:

    Money is a blunt instrument for this kind of thing.

    What the pros do is become a utility monster, broadcast your ethics and let your followers joyfully offset your own hedonistic lifestyle.

  91. A world where rich folk can, as a matter of policy, meet the slightest breach of obeisance with personal destruction? Age of the Samurai. If only virtuous men may amass the requisite murder-offset capital, then it’s absolutely fantastic. Just as when only a wise and benevolent man becomes God-emperor, you can’t complain.

    I think if you make it expensive enough, then even with the knock-on harms of a population feeling vulnerable and dominated, you can absolutely justify a spiteful killing. You just need to add to your ledger a compensation for the loss of “rule of law” illusion, and for “friends are very sad you were erased”, which I didn’t see in your price sheet.

    We already live in a world where we can be killed. We just prefer to think not.

  92. Pingback: On consequentialist ethics | Into the Everday

  93. Andy Harless says:

    When you talk about carbon offsets, you’re talking about a purely quantitative ethical issue (“Minimize the amount of CO2 in the atmosphere.”). When you talk about murder, you’re talking about a rule (“Thou shalt not kill.”) Obviously you can offset quantities with other quantities. You can’t offset a rule; it doesn’t even make sense to talk about conceptually. Unless what you mean is making a new rule in which the actions proscribed by the old rule are only proscribed under certain circumstances — but that would end up being a very complicated rule. And then I think you have to ask why you made the old rule in the first place: I mean, it’s not at all obvious to me that killing a person is always, or even usually, a net loser in terms of utility. (It always puzzles me that people who have actually lived human life still think it’s a good thing.) If you can explain why you think killing is wrong (as I did for myself in an earlier comment replying to another commenter), then you’re on your way to solving the question of whether it’s wrong in general or only wrong when not adequately offset.

    (The vegetarianism question is kind of ambiguous in this regard, because different people have different moral reasons for vegetarianism. Some of those might be closer to the purely quantitative carbon offsets, and others might imply rules that would have to be respected or explicitly amended.)

  94. Kai Teorn says:

    > I know I didn’t come up with this concept, but I’m having trouble finding out who did

    I think it was the Church. It’s called “indulgence” or “penance” – something good you promise to do (or even just money you pay to the Church) to offset your past sin.

    Other than that, here are my thoughts on the matter: http://kaiteorn.wordpress.com/2015/01/05/on-consequentialist-ethics/

  95. kernly says:

    This is very, very wrong. The depth of its wrongness is actually a spectacular argument against “consequentialist” ethics. It shows that even very intelligent people aren’t capable of thinking things through, and should stick to deontological ethics.

    First of all, let’s look at the idea of ‘ethics offsets’ broadly. Of course it is complete nonsense. This is obvious to everyone intuitively, but also provable if you dig even an inch below the surface. Let’s say we have a perfect utilitarian framework, and we know for certain that killing someone will cost 500 utils, and that means we have to do something to provide the world with 500 utils in return. Obviously if you follow through with your obligation you have netted the world zero utils. But only the most shallow analysis can then conclude that you should be allowed to make this sort of ‘trade.’

    The problem is that util-positive actions that don’t get “offset” make up essentially everything good about civilization. If we don’t have people contributing utils without then being allowed to do something util-negative, then basically everything valuable is destroyed. While the util-positive and util-negative actions we posited before cancel each other out, that isn’t the question before us. The real question is, is a civilization that allows these “offsets” to count – whether morally or legally or both – more util-positive than a civilization that doesn’t? The clear answer is no – people who are given either moral or legal license to do bad things are more likely to do them, and anything that reduces the average util-contribution of an individual is very terrible indeed.
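    A toy way to make this comparison concrete (every number below is hypothetical, and it assumes the “offsetting” donations would largely have been made anyway, which is one reading of the point above about un-offset good acts being what holds civilization up): each individual offset trade nets zero, yet the offset-allowing society comes out strictly behind.

    ```python
    # Toy model: offsets "cancel" on paper, but if licensing them converts some
    # ordinary donations into murder offsets, total utility falls by exactly the
    # harm of the newly licensed murders.
    POPULATION = 1_000_000
    HARM_PER_MURDER = -500
    GOOD_PER_DONATION = 500

    def society_utility(murder_rate: float, donation_rate: float) -> float:
        return (murder_rate * POPULATION * HARM_PER_MURDER
                + donation_rate * POPULATION * GOOD_PER_DONATION)

    no_offsets = society_utility(murder_rate=0.001, donation_rate=0.05)
    # Offsets allowed: 1% of existing donors spend their donation as a licence,
    # so donations stay flat while murders rise by that amount.
    offsets_allowed = society_utility(murder_rate=0.001 + 0.01 * 0.05,
                                      donation_rate=0.05)

    print(no_offsets, offsets_allowed, offsets_allowed - no_offsets)
    # -> 24500000.0  24250000.0  -250000.0
    ```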

    Now let’s take away our assumed perfect utilitarian framework and look at the specific ‘ethics offset’ examples you provided.

    Get the irrelevant objections out of the way first and establish the least convenient possible world. I’m a criminal mastermind, it’ll be the perfect crime, and there’s zero chance I’ll go to jail. I can make it look completely natural, like a heart attack or something, so I’m not going to terrorize the city or waste police time and resources.

    Of course none of this absolves you. However you reduce the probability that you will be discovered, it is never zero, and the negative impact of your misdeed being revealed grows roughly proportionally to your competence at covering it up. Clever murderers are a lot scarier than stupid murderers, and fear makes up a very large proportion of how murders negatively impact society. Clever murderers are also a lot more expensive to catch, and no you can’t “offset” that with money. Again, this is intuitively obvious but also logically provable if you dig a little beneath the surface.

    We need to have an understanding of what money is. It doesn’t “store value.” What it does is provide an incentive for people to do work. When you provide money to a police department in order to “offset” all the extra work you made them do, you’re entirely missing the greater context, which is that the “value” comes from the work people do for money, not from the money. What you have really done is taken labor that could be used for something else, and allocated it to chasing down criminals.

    This also applies to the carbon “offset” idea.

    I can’t imagine anyone saying “You may not take that plane flight you want, even if you donate so much to the environment that in the end it cleans up twice as much carbon dioxide as you produced.

    Well, I can – because money isn’t a magic store of “value,” and providing money doesn’t magically make things happen. Labor makes things happen, and you’ve just forced people to do whatever people do to clean up carbon dioxide instead of something else. Indeed, the negative utility from you making a mess and then forcing people to clean it up might be less than you making a mess and then doing nothing to clean it up, but it’s also obviously more than you not making a mess in the first place.

    As we can see, there are terrible, terrible consequences waiting in the wings if we allow people to use “consequentialist” ethics instead of deontological ethics. Even relatively smart people who are actually putting in some effort come out with complete drivel when it comes to actually figuring out what the “consequences” of a given action is. In addition to the inevitable mistakes, figuring this sort of stuff out costs the time of intelligent people. Instead of this, what you do is you have certain categories of things that you call “bad” – making a mess, hurting other people – and certain categories of things that you call “good” – helping people out, cleaning up messes, being productive – and you discourage people from doing “bad” things even if they come out with hare-brained arguments justifying them. You have rules, and you enforce them, people who break too many or too important rules are “bad,” and the world is better for it even if some smartass really can find an example where following some rule is util-negative.

    And one of the rules is that you can change the rules, if you can get enough people together. This means that if there really is an improvement to be made, you can solve it, and more efficiently through social organization than through direct conflict. This actually works! Stuff that’s based on “consequences” of actions on an individual basis obviously doesn’t!

    To be clear – this is a moral position in addition to a practical position. Obviously almost everyone is in favor of there being rules and those rules being enforced. What seems to be under attack here is the notion that you don’t get to disobey really important rules and still be “moral.” I want to make clear that when you allow murderers to retain moral standing, that’s really really bad even if they don’t have legal standing. Even if those murderers donated a looot of money to malaria victims.

    Oh, there’s another really important point:

    Even what seems to me the most desperate and problematic objection – that maybe malarial Africans’ lives are in some qualitative way just not as valuable as those of happy First World citizens contributing to the global economy

    “Desperate and problematic?” Seriously, no. The idea that people close to you are more valuable than people far from you is the only thing keeping society together. If we valued our children the same as some guy in Africa, obviously that couldn’t mean that we put both of them before ourselves – it’s not practically possible to put every person in the world before yourself in a sense that’s meaningful to any of them as individuals. What it would mean is that nobody would be loved. That would be really, really bad.

    This is blindingly obvious when we talk about parent-children relationships, but you should see that it extends to more distant relations as well. The only reason you have a stable, safe place to live in is the willingness of people in the past and present willing to stand up and put their lives on the line for “their country.” Whether or not you think that was logical, it happened, you benefited from it, and you don’t get to simply dismiss it. You owe much more to people within your country than you do to people outside of it. That responsibility is very, very important.

    If I’ve got enough money, a few hundred thousand to a million ought to be able to save the life of a local person in no way distinguishable from my victim

    No, because even if we accept that all lives are worth exactly the same, that doesn’t apply to all deaths. A death from a disease does not damage the social fabric nearly as much as a death from murder. Again, your proposal of being a really clever murderer does not work, since a well-disguised murder being found out will make people newly suspicious of deaths that would otherwise be considered clearly natural, whether that suspicion is warranted in those particular cases or not. So as you make your discovery more unlikely, you also make it more impactful – and you can never make your chance of discovery zero. Or even all that close to zero, when we consider the unavoidable baseline chance that you turn crazy or careless and make an inadvertent mistake down the road.

    Also, anyone can just go murder someone right now without offsetting

    Not if they want to keep their moral standing. If people can murder and keep their moral standing they’re more likely to murder. Jeez – allowing people to murder and maintain their moral standing is bad, mmkay? I really shouldn’t have to be saying this! Again, this is a spectacular argument against “consequentialism” determining people’s moral standing.

    • CzerniLabut says:

      “And one of the rules is that you can change the rules, if you can get enough people together. This means that if there really is an improvement to be made, you can solve it, and more efficiently through social organization than through direct conflict. This actually works! Stuff that’s based on “consequences” of actions on an individual basis obviously doesn’t! “

      What deontological rules or processes should people utilize in order to be sure that they are rational and ethical in deciding what ‘rules’ govern society? What deontological rules determine when the ‘meta-rules’ governing how to rationally determine rules should be changed? By subscribing to pure deontology you are begging the question, which leads to an absurd infinite regress.

      • Anonymous says:

        (it’s kernly, forgot to retype it)

        What deontological rules or processes should people utilize in order to be sure that they are rational and ethical in deciding what ‘rules’ govern society?

        I don’t think “rules or processes” have anything to do with it. We decide on our rules through a very messy process that can’t be formalized. I’m saying that your moral standing should be determined (at least in large part) by how you follow the rules, not that the rules should themselves be determined by rules (???)

        • CzerniLabut says:

          Is an absolute ruler who decrees that all firstborn children of families in their kingdom should be killed of no higher or lower moral standing than an absolute ruler who doesn’t issue such a decree? In addition, do you believe that the families who revolt in the former case have negative moral standing because they are disobeying their rulemaker, and that their negative moral standing is no less bad than that of families who would revolt against their leader in the latter kingdom? Even those who consider the idea of an absolute monarch morally positive or neutral would find arbitrary slaughter morally reprehensible.

          • kernly says:

            Is an absolute ruler who decrees that all firstborn children of families in their kingdom should be killed of no higher or lower moral standing than an absolute ruler who doesn’t issue such a decree?

            I don’t believe in any “universal moral standing.” Within his society, that absolute ruler would be in excellent moral standing, presumably. Not within this one, though.

            In addition, do you believe that the families who revolt in the former case have negative moral standing because they are disobeying their rulemaker

            Certainly! Of course, should they succeed in their rebellion, presumably the ruler will find themselves in a society where they have very low moral standing indeed.

            Even those who consider the idea of an absolute monarch morally positive or neutral would find arbitrary slaughter morally reprehensible.

            Well, as far as the society I am in is concerned, it is reprehensible, and would probably lead to repercussions depending on how weak the absolute monarch’s country was. And any society I would create would certainly have arbitrary slaughter be against the most strongly enforced rules.

            But really, the idea that you can have some universal morality just seems silly to me. Ethics is about what is right and wrong, and different societies will have very different things be right or wrong.

            I guess I am a ‘moral relativist,’ though I don’t see much in common with others who wear the label or have it thrust on them. I don’t see anything about my ‘moral relativism’ that prevents me from supporting the enforcement of my society’s rules upon other societies, for instance. What’s considered right and wrong is different there – which, given our conception of right and wrong, surely means intervention (when and where it is practical) is all the more important?

            I guess ‘moral relativists’ typically just want to be able to ignore their society’s moral code whenever it suits them. A ridiculous strategy which won’t get anyone anywhere. Right and wrong is ‘relative,’ sure, but the things that are right and wrong from this perspective are right or wrong from this perspective.

          • You’re a group-level relativist rather than an individual-level relativist. That’s a position with problems of its own… like the fact that you are going to be on the receiving end of attempts to convert you to the right kind of right.

      • Alex says:

        There are many systems that self-modify (DNA, government), so I don’t see how a system with rules stating how it modifies itself would be impossible to design – more complex, but not prohibitively so. Long term, these debates are a meek attempt at finding an ideal end-state and moving our current system closer to that ideal.

        • CzerniLabut says:

          How do you determine whether a set of rules A or a set of rules B is closer to an ideal set of rules I? Also, is a self-modifying system of rules A* that gets to I faster, in cycles 1 through 5, more or less moral than a self-modifying system of rules B* that gets to I slower, in cycles 1 through 10, but where B*1 is better than A*4 and stages 2 through 9 get asymptotically closer to I?

          • Alex says:

            That is a freaking good question that I would intuitively answer with B* being better. My suspicion is that a good description of I would provide an answer. My pessimistic suspicion is that we cannot come up with a good description of I, because our view of the system space is local (that is the curse of self-referencing systems), so we optimize locally where we can and reach in random directions looking for better global maxima rather than only optimizing locally.
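
            For what it’s worth, one way to make the A*/B* comparison concrete is to score each system by its cumulative shortfall from I over a fixed horizon. A minimal sketch in Python, where the per-cycle distances and the “least cumulative shortfall” metric are both invented assumptions, not anything established above:

            # Hypothetical per-cycle distances from the ideal I.
            # A* converges by cycle 5 but starts far from I; B* takes 10 cycles
            # but starts much closer and approaches I asymptotically.
            a_star = [1.0, 0.8, 0.5, 0.2, 0.0]
            b_star = [0.15, 0.12, 0.09, 0.07, 0.05, 0.04, 0.03, 0.02, 0.01, 0.0]

            def total_shortfall(trajectory, horizon):
                # Sum of distance-from-I over the horizon; zero after convergence.
                padded = trajectory + [0.0] * (horizon - len(trajectory))
                return sum(padded)

            print(total_shortfall(a_star, 10))  # 2.5
            print(total_shortfall(b_star, 10))  # ~0.58

            Under that metric the slower system B* comes out ahead, which matches the intuition that converging later can still be better if you spend the whole time closer to I.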

    • Andy Harless says:

      Your objections actually make Scott’s original modest proposal seem a bit less absurd to me.

      …when you allow murderers to retain moral standing, that’s really really bad…

      When you frame this as being about “murderers” rather than about “murder,” I think your position becomes one that will fall apart under even minimal scrutiny. Perhaps there are unpardonable sins, and perhaps murder is one of them, but in general this kind of argument is problematic.

      Because in general, we do allow people who do bad things to retain moral standing. Indeed, nearly all of us do bad things and yet most of us do retain moral standing in one another’s eyes despite the knowledge that we’ve done bad things. “Hate the sin, not the sinner” is considered a reasonable ideal. And if someone has done something wrong, even something fairly large, but has otherwise been an unusually good person, should I consider them a bad person because of the one wrong thing that they’ve done?

      After reading your comment, it occurs to me that, putting myself in the position of judge rather than perpetrator, I do allow offsets. My moral judgment of a person (to the extent I’m inclined to make such judgments) will take into account the totality of their actions. Now deliberate offsets admittedly might be a problem, but it’s something I’ll have to think about.

      (On a separate issue, and just to be pedantic, I note the irony in using the Labor Theory of Value to make an argument about a topic — global warming — whose very existence explains why the Labor Theory of Value is wrong. Natural resources matter, so it’s not just labor that makes things happen; sometimes nature makes them happen. A choice between two actions that involve equal labor will make different things happen depending on how they use natural resources. But as I said, this is just to be pedantic; I don’t think it affects the substance of your argument.)

      • Kai Teorn says:

        > Now deliberate offsets admittedly might be a problem, but it’s something I’ll have to think about.

        Exactly. That’s because as a judge, you’re not judging the offsets; you’re judging the person. You do take into account the totality of their actions, but you’re weighting not so much the real consequences (if only because, in the real world, they are exceedingly hard to predict and predetermine) as the person’s intentions. If you see the murderer is full of pain for what he did and is “offsetting” as a way to right his wrong, that’s one story. If he cold-bloodedly calculated the necessary offset so he could kill whoever he pleases, it’s completely another story.

        It’s a little funny to see this all being discussed as something novel because, again, the Christian church has been through all this centuries ago. What the church wants is repentance, not offsets; it assumes that if the sinner has truly repented his sin, it is thus cleared, whether or not anything “real” is done to counterbalance it. I’m not in favor of deontology and certainly not in favor of divine-command ethics, but in those cases where I agree with the church on the basic good/bad values, their practical approach to judging people’s actions and achieving closure for misdeeds makes a lot of sense to me. If only because that approach has actually evolved over a long time by trial and error.

        • Andy Harless says:

          In terms of my own values, repentance is certainly worth a lot, but I can’t see it as a necessary or a sufficient condition for clearing someone of a misdeed. For example, other people have different values than I do, and in some cases I may consider some of their values to be evil and the actions that they take in accordance with those values to be evil. Obviously, unless their values change, they’re not going to repent. But having done such actions that I consider evil does not utterly disqualify a person, in my mind, from being considered a good person, if they’ve also done good things. So there really is a question of offset even aside from repentance.

          I think a person would lose points with me for deliberately trying to justify an immoral action in advance by coupling it with a good action. But if the good action were sufficiently good, they might still come out ahead.

      • kernly says:

        Because in general, we do allow people who do bad things to retain moral standing.

        What? Of course we don’t. You lose moral standing by doing bad things. Not “all” moral standing, whatever that would mean, but in proportion to how bad the thing is – hopefully, IMO, in proportion to how important the rule violated is.

        Indeed, nearly all of us do bad things and yet most of us do retain moral standing in one another’s eyes despite the knowledge that we’ve done bad things

        What does this mean? Are we using a nonsensical definition of “moral standing” here, or something? Of course you lose moral standing for doing bad things. That can leave you with enough to still be rated pretty well, depending on the rest of your behavior. With murder, though, you lose a LOT of moral standing, enough to be considered “immoral” in deontological terms however well you follow the rest of the rules.

        And if someone has done something wrong, even something fairly large, but has otherwise been an unusually good person, should I consider them a bad person because of the one wrong thing that they’ve done?

        You should consider them a worse person, to a degree proportional to the importance of the rule they violated.

        On a separate issue, and just to be pedantic, I note the irony in using the Labor Theory of Value to make an argument

        You either have no idea what the LTV is or you’re having a brainfart.

        Natural resources matter, so not just labor makes things happen, sometimes nature makes them happen.

        ????? Labor makes everything labor makes happen, happen. That means everything that money makes happen, labor makes happen. You can’t pay “nature” to do anything. Also, again, you’re hopelessly confused about the “labor theory of value.”

        The LTV is the notion that the value of something is determined by how much labor went into it. That has nothing to do with anything I have said. What I have said is simply that money isn’t (in and of itself) a store of value, it’s an incentive to do work, and you don’t get to pretend otherwise. Obviously the “value” of money is that you can get people to do things in exchange for it! That has never been in dispute. My assertion has nothing to do with the price-labor relationship, which is all the LTV is about.

        a topic — global warming — whose very existence explains why the Labor Theory of Value is wrong

        Whether the prices of goods end up being proportional to the amount of labor that went into them in competitive markets has about as much to do with the existence or nonexistence of anthropogenic global warming as the diameter of Pluto.

        • Andy Harless says:

          You should consider them a worse person, to a degree proportional to the importance of the rule they violated

          Yes, for that particular violation I consider them a worse person, but for the good things they’ve done I consider them a better person than if they hadn’t done those good things. No reason the sum of the effect of the good things can’t exceed the effect of the bad thing.

          Again,

          Of course you lose moral standing for doing bad things. That can leave you with enough to still be rated pretty well, depending on the rest of your behavior.

          My position is that you can (at least in some cases) gain back enough moral standing from “the rest of your behavior” to end up with more than you started with. That’s pretty obvious when the bad thing is something small and the good things are large. But I don’t see why the point shouldn’t generalize.

          Though of course it wouldn’t work if you view morality in purely negative terms, as merely the requirement of following certain rules, each violation of which loses you standing. I maintain that positive moral actions are also possible, things that are not required by the rules but are morally good, “over and above the call of duty.” And if you accept the concept of positive moral actions, it should be obvious that, if a person’s positive moral actions are sufficiently large and their violations sufficiently small, offset is possible. A person who does a lot of very good things and a few minor bad things is a much better person than one who never does anything good or bad.

          Whether you can do enough good things to offset a murder is another question, but it isn’t formally impossible.

          the “value” of money is that you can get people to do things in exchange for it

          No it’s not. The value of money is that you can get people to give you things (where “things” is interpreted to include services) in exchange for it. This would be true even in a pure endowment economy, where there is no labor. Money facilitates exchange, and that exchange may or may not involve something produced by labor. If shoes fell from the sky, but all the right shoes fell on me and all the left shoes fell on you, we’d want to buy shoes from each other even if they didn’t take any labor to produce.

          • kernly says:

            Money facilitates exchange, and that exchange may or may not involve something produced by labor.

            Everything you can buy was produced, or gathered, or secured by labor. Even in a world where shoes fell from the sky, if you’re buying those shoes you had to pay the guy who found them and collected them. That’s labor. Some labor is worth spectacularly more than other labor, due to circumstances – including if a gold bar fell from the sky and I found it, or more realistically I was panning for gold and had a lucky streak. Still labor.

            And a big part of why the labor theory of value is bunk, incidentally.

            Sometimes ownership of a thing can change hands a lot, without the thing changing at all. The reason the thing came into existence – or was gathered, or was secured, or whatever – and was put on the market in the first place was because there was a market for that product of labor. That’s what makes money useful in a broad sense. It’s also the only relevant use of money for the purposes of this discussion, where Scott Alexander wanted to pay people to do something – clean up his carbon emissions, to be specific.

            Yes, money can change hands without labor being involved in that particular exchange; a good example is the stock market (except when shares are first sold by owners). But what I said in the first place is still true, because whatever it is you’re buying, whether it’s a share in a company or a house or a bitcoin or whatever, what entered that thing into the market was someone paying someone else for labor. The reason there’s something you wanna buy is always someone else’s labor.

            Though of course it wouldn’t work if you view morality in purely negative terms, as merely the requirement of following certain rules, each violation of which loses you standing. I maintain that positive moral actions are also possible

            It’s a matter of definition. I think a rule based system where people who haven’t broken really important rules are in good standing makes sense. There are a lot of problems involved in enabling moral condemnation of people who haven’t broken important rules, and a lot of problems involved in enabling moral exaltation of people who have. I don’t think the potential benefits come close to making up for those problems.

            This is only problematic if you want ‘right’ and ‘wrong’ to map perfectly to ‘good’ and ‘bad.’ I have no illusions that this is a useful thing to attempt, so I am left with deontology. I recognize that it’s completely possible for a murderer to be a good person – and I don’t care. It still makes sense to have a blanket rule that murderers are not in good moral standing.

          • Andy Harless says:

            Even in a world where shoes fell from the sky, if you’re buying those shoes you had to pay the guy who found them and collected them.

            No you don’t. If they fall on my property and everyone sees them, you can still pay me for permission to collect them yourself. The money in this case has nothing to do with labor, only with exchange.

          • kernly says:

            If they fall on my property and everyone sees them, you can still pay me for permission to collect them yourself.

            If the rules are that who the shoes belong to depends on where they land, then they’re like anything else that comes from the land. That just makes them functionally part of the land, like a valuable chunk of minerals or wild berries growing on it or something. In this case the part where you paid for something that required labor was when you bought the land – ultimately, the original labor was done by those who secured the land, whereupon it was sold again and so on.

            You can’t get around it – everything humans can pay for was produced or secured or collected, etc…through the labor of other humans, or you wouldn’t be paying in the first place. You’re always paying for something which was created and brought to market through labor, whether it’s a service or a manufactured good or the ownership of some land. Stop trying to get around it with bizarre just-so scenarios. If it wasn’t produced or secured or collected by someone else, you won’t be paying for it in the first place. You’ll just take it.

          • John Schilling says:

            Back in the seventeenth century, some guy takes a stroll out of New Amsterdam and finds a plot of nice land, complete with fertile and productive fruit trees, recently abandoned on account of all the natives just died of smallpox (that nobody went out of their way to cause). He walks back into town and files a claim. For the next four hundred years or so, the land remains in this man’s family, never being bought or sold or developed. Fruit worth millions of dollars grows on the trees. This fruit is universally recognized as being a thing of value belonging to this man’s family, whether it is hanging on the tree, fallen naturally on the ground, or in a harvester’s basket (though the value may differ in those three cases, allowing for added value due to a harvester’s labor).

            This is not an implausibly contrived scenario. And you want to argue that the value of the fruit is created by, what, the labor of walking to a government office and filing a claim?

            The pure form of the labor theory of value generates perverse and patently absurd results when applied to real property. Really, it generates perverse and absurd results when applied to capital generally, with apologists delving down dozens of transactional levels to find all the bits of “labor”. But with real property, you can find real dead ends with real value but no labor worth mentioning and the pure labor theory of value fails unambiguously.

            A system of ethics predicated on all value being the product of labor is not going to be a pretty thing in action.

          • kernly says:

            And you want to argue that the value of the fruit is created by, what, the labor of walking to a government office and filing a claim?

            The “value” was brought to a marketable state by the guy who walked up and claimed it. The reason it’s in the market is that someone claimed it – this would fall under “secured” from earlier in the conversation.

            This is not an implausibly contrived scenario.

            Obviously it is. In the vast majority of cases securing something valuable involves more than simply filing a claim. But that’s irrelevant anyway.

            The pure form of the labor theory of value generates perverse and patently absurd results when applied to real property.

            If you think that the “labor theory of value” has anything to do with what I outlined above, you have no idea what the theory is (or you have no idea what I said). The theory posits that the price, or at least the “value” of something will be proportional to the amount of labor that went into it. I have provided examples very early in this conversation that contradict this theory, and anyway the theory isn’t even relevant to the point I am making.

            A system of ethics predicated on all value being the product of labor is not going to be a pretty thing in action.

            This isn’t about the “value” of the product vs the amount or quality of labor that went into bringing it to market. It’s about what money does. Namely, it incentivizes bringing the product of labor to market. Whether the value of the thing being brought to market is proportional to the amount or quality of the labor that it took to bring it to market is completely beside the point.

            Even if I stumble across a chunk of gold, money’s role is still to incentivize my labor within the system – which in this case simply consists of carrying the chunk of gold into town. How outsized my reward is is not even close to being relevant. If money didn’t do its job there, I wouldn’t bring the damn gold to market, and the gold wouldn’t be on the market. That’s money’s role. Incentivizing labor in service of the market, however trivial or mighty in scope.

            This is not a controversial assertion. If you think I am wrong you simply don’t understand what I am saying.

            A system of ethics predicated on all value being the product of labor is not going to be a pretty thing in action.

            It is trivially obvious that everything humans can pay for is a product of human labor. Whether that product’s value is proportional to the amount of labor that went into bringing it to market has absolutely no impact on the truth of that statement.

            We’re a ways off the original discussion, though. The point is about what this knowledge about the nature of money (that it’s an incentive producer for labor, not in itself a store of “value”) means in practical terms. It means that in the vast majority of cases, where something you want done takes a significant amount of labor, money isn’t a magic “undo” button for something bad you did.

            When you decide to buy “carbon offsets,” money didn’t pull that carbon out of the sky. Labor did, paid for by money. Labor that was spent getting us back to square one, instead of producing extra utility.

            The idea that money is a magic “value storage system” is extremely damaging. In addition to the nonsense logic demonstrated in the main post, it leads to people prioritizing the “value storage” aspects of money over what actually makes it useful, the fact that it incentivizes labor in service of the market.

            It’s really, really important to point out what money is and isn’t, because on an individual basis money can be thought of as a store of value. The instinctive generalization of that individual principle to the economy at large brought us Hitler, through the Brüning deflation, and is probably one of the biggest forces behind the world’s present economic woes. It really isn’t a pretty thing in action.

    • John says:

      In this comment, you say:

      I want to make clear that when you allow murderers to retain moral standing, that’s really really bad even if they don’t have legal standing. Even if those murderers donated a looot of money to malaria victims.

      Later, in response to “in addition, do you believe that the families who revolt in the former case have negative moral standing because they are disobeying their rulemaker?” you say:

      Certainly!

      I’m getting really confused about the words “morally acceptable” and “moral standing.”

      Can you elaborate on the difference between the phrase “really really bad” in the first quote, and “negative moral standing” in the second? It’s difficult for me to see how, if you define ethical behavior as “what my society has decided is ethical,” you can *ever* criticize your society’s views on ethics without retreating to general principles of any kind.

      I mean, if moral standing == “my society accepts this behavior as ethical,” in a hypothetical society where murder-for-money has moral standing, isn’t “offset murder” completely trivially ethical? On what basis can you, an outsider to this hypothetical society, describe their framework as “really really bad”?

      (You can try to work around this. “Society says treat people nicely” and “society says beat our slaves into submission,” and “society says apply moral rules to everyone,” together, amount to a critique of social norms from “within” the system, even though each might be seen as “ethically okay” by the system. But then it seems like you just have a meta-rule called “ethics should be logically consistent,” which maybe another society denies? Or maybe a society has ethical rules that are logically consistent, like the single rule that “might makes right”–how does one criticize that?)

      I mean, you can consistently say “within my society’s moral framework, an offset-for-murder society would be morally monstrous!” But since what we’re trying to do is debate two different societies’ moral frameworks, that seems uninteresting. Of course applying the moral rules of one to the other will be “bad,” because “discrepancy between behavior and society’s moral rules” is how you defined “bad” in the first place.

      • kernly says:

        I mean, if moral standing == “my society accepts this behavior as ethical,” in a hypothetical society where murder-for-money has moral standing, isn’t “offset murder” completely trivially ethical?

        Sure! We don’t live in that hypothetical society, though. And that hypothetical society sucks compared to this one. We’ve got a pretty good thing going with the whole murder is against the rules deal.

        But since what we’re trying to do is debate two different societies’ moral frameworks, that seems uninteresting.

        I think my original reply was pretty good, positing a utilitarian framework and pointing out how following rules produces much better results than allowing people to do anything and retain moral standing as long as it’s “utility neutral.” I guess you’d say for the sake of that argument my ‘meta ethics’ would be utilitarian, and they make my ‘primary ethics’ – the system that determines the ethical standing of individuals – deontological.

        I think the utilitarian meta-ethics thing is only one possible argument for deontology, though, and I am not wedded to any particular argument more than I am to deontology.

  96. CzerniLabut says:

    Is utility in this case defined as the sum of positive experiences and sensations of individuals, or is utility an emergent and atemporal phenomenon that is “greater than the sum of its parts”? The ethical implications of an action coupled with an offset are dependent on that. Since this post is long I’ve included a tl;dr at the bottom.

    If we only consider utility to be the sum of individual utilities (whether weighted or not is irrelevant), then there are many scenarios where perfect substitute actions for murder can be performed that achieve the same net result for the would-be murderer while leaving the victim spared. In most such cases murder is not even Pareto efficient, let alone Kaldor-Hicks efficient.

    An example: say the reason I want to murder someone is that I find their social presence near me highly intolerable. Instead of murdering that individual, I could redirect a small portion of the malaria-charity offset to hire people to kidnap that individual and move them to Siberia or some other remote location where they could still eke out a living, but not be able to bother me or any mutual acquaintances of ours anymore. In this case, my utility benefit is the same as if they were dead, but that individual’s utility loss is significantly less, since they can still improve their own utility and that of their surroundings while they’re alive. If this action is coupled with an offset, albeit a smaller one since it is an exile rather than a murder, then there’s still a net utility gain to a third party. I’m sure that as we add more reasons for murder that could increase personal utility (they were a criminal harming others / I get off on performing violent behavior / they were suffering from terminal illness and couldn’t stand to see their pain), we encounter other substitutes that could increase utility without taking the life (prison, cathartic media), or we believe it’s ethical to take the life and no offset is necessary (they consent to ending their life, or are unconscious and don’t have long to live anyway).
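
    To put rough numbers on the murder-versus-exile comparison, here is a minimal Python sketch. The utilities are invented for illustration, and I hold the charitable offset equal across both bundles so the Pareto comparison stays clean:

    # Hypothetical utilities for three parties: me, the target, and a third
    # party who benefits from the charitable offset (same offset in both).
    murder_plus_offset = {"me": +10, "target": -100, "third_party": +20}
    exile_plus_offset = {"me": +10, "target": -30, "third_party": +20}

    def pareto_dominates(a, b):
        # a dominates b if a is at least as good for everyone and strictly
        # better for at least one party.
        return all(a[p] >= b[p] for p in a) and any(a[p] > b[p] for p in a)

    print(pareto_dominates(exile_plus_offset, murder_plus_offset))  # True
    print(sum(murder_plus_offset.values()), sum(exile_plus_offset.values()))  # -70 0

    Exile leaves me and the third party exactly as well off and the target much better off, so in this toy setup murder is Pareto-dominated, and it also loses the simple sum-of-utilities (Kaldor-Hicks-style) comparison.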

    This isn’t to say that murder coupled with offsets is always impossible. Note that some states still engage in the death penalty for criminals. In this case they’ve found limited occasions where the intentional death of an individual provides a net benefit to all other individuals, weighing that against the costs of crime prevention/reduction, the social and welfare relationships of the condemned (if children become orphaned), and the possible miscarriage of justice if an innocent is killed. There are also stand-your-ground/self-defense laws, as well as lesser charges such as manslaughter, in which case the state may sanction your ability and desire to kill someone in narrow cases and has predefined lesser “punishments” or offsets. Kohlberg’s moral dilemma comes to mind, where the most ethical course would probably be to break the law and suffer its consequences (instead of paying off the city and ‘getting away with it’), not only to offset the utility lost by the victim, but also that lost by society in general.

    On the other hand, if we consider utility to be a phenomenon that isn’t just about currently living individuals, but also an emergent property (divine order) of society, then we must consider that murder may create scenarios that lead to either uncomputable or infinite utility debt (or debt greater than the known universe can compensate), in which case our utility calculus becomes much harder to use to figure out what offset needs to be coupled with murder. A deontological case for not murdering people becomes attractive if we have a moral intuition that it’s wrong, while simultaneously realizing that including it in a grand scheme of utility calculations screws up being able to compute utility for anything.

    For example, say the target we want to murder is not yet a parent, but wishes to have children in the future. In fact, they not only want to have children, but also grandchildren, and, if they live to see them, great-grandchildren. We can also assume that this target has a life partner who shares a similar outlook on wanting to see their progeny multiply. Furthermore, we can strongly believe that this desire will be genetically passed down, such that at least one of their children will carry on this desire and seek someone similar, etc. In this case we have murdered not just one person, but a hypothetically infinite number of generations. We have an infinite debt scenario! “But wait!” you say, “If we saved another life that had the same procreatory outlook, then maybe we’d be able to cancel our debt!” In those cases, we can no longer perform generic offset substitutions, because the very natures of the person we kill and the person we save may lead to infinities in our utility calculus that we can’t compensate for. Religious societies tend to take this outlook, where the situations in which killing is sanctioned are against infidels or other murderers, i.e. those who would crowd out or have cut off the memetic proliferation of religious tenets.

    Tl;dr: “Biting the bullet” to be able to perform murder ethically (when coupled with an offset) depends on certain nuances. Those nuances depend on whether utility is individual-sum or society-emergent.

  97. JohnMcG says:

    As a consequentialist, I wouldn’t expect you to find this answer convincing, but my answer is that it would take you down the road of being the type of person who kills his enemies. For me, and I suspect you, that wouldn’t be a person I like very much.

    How many unwanted babies would a pro-life person have to adopt to offset an abortion they procured themselves?

    It seems to me these questions turn on whether you want to eliminate or reduce the evil you’re talking about. Environmentalists may believe that some pollution is inevitable; they would just like for there to be less of it, or for its effect to be mitigated, which makes the case for offsets more plausible. The same could work for vegetarianism — many vegetarians don’t believe nobody should eat meat; it’s just a decision that works for them.

    But there is no acceptable amount of slavery, racial discrimination, murder, etc. It wouldn’t do for someone to only hire whites at his business, based on a belief that a racially homogeneous workforce is more efficient, and then donate the surplus to causes that help minority populations. A pro-life activist involved in any way with an abortion would lose all credibility. I think this is what a lot of “if pro-lifers really wanted to reduce abortions, they would favor X” arguments don’t get — we don’t want to reduce abortions; we want to eliminate them.

    So, I guess that’s the question — do you believe there should be literally zero murder, or just less of it?

    • JohnMcG says:

      Another possible objection to offsets in general, if you care about inequality, is that you are exercising an option that is unavailable to people poorer than yourself; thus socially accepting offsets opens up another vector of inequality — you can do “bad” things and buy your way out of them; the poor cannot.

    • Jiro says:

      I think that the reasons most vegetarians give for not eating meat would, if alieved, lead to believing that nobody should eat meat.

  98. vV_Vv says:

    I don’t think this scenario has a satisfactory solution under the utilitarian moral framework that you are implicitly using. Utilitarianism largely implies that people are replaceable.

    I’ve argued a similar scenario (with abortion, just to make it more controversial 😀 ) a few years ago on Overcoming Bias as an argument against utilitarianism: http://www.overcomingbias.com/2012/06/resolving-paradoxes-of-intuition.html

  99. Jacob says:

    >save the life of a local person in no way distinguishable from my victim

    Carbon atoms are completely indistinguishable from each other, in a very literal sense. People are not. Each one is different. Utilitarians might say they have the same value, but they are not the same. Human beings’ innate sense of morality is definitely not utilitarian. So that’s why this proposal comes off as icky and objectionable.

  100. JRM says:

    I think you’ve used the Schelling Fence as a throwaway line (“perhaps I could offset that, too…”) when that and its close colleagues doom your plan. I’m arguing that your murder plan is unworkable in practice.

    Utilitarianism gets a bad rap because people don’t look at all of the probable consequences.

    So, let’s take a specific situation: I want to murder Nate. Nate is a swell guy with a family, but he has defeated me at predicting sportsball happenings, and he’s the only one to give me any kind of challenge, so if he’s dead, I will clearly be the best. Yay, me!

    So Nate gets eaten by wolverines in what appears to be an accident. I don’t tell anyone. But some people suspect, possibly because I said “I wish Nate were eaten by wolverines,” a few dozen times. I also donate $40 million, which will cure some young, reasonably high-outcome people of some bad disease.

    Now, in Theoryworld, we’re deciding if I can do this if no one finds out and I’ve saved other lives. But in reality-world (which I admit I’ve strayed from with the wolverines), this may not happen.

    The likelihood of all kinds of decay occurring is very high:

    1. I have been successful. People will be afraid of me. Possibly that whole thing where I don’t tell people erodes, because threatening people allows me to do better, make more money, and thus help more people.

    2. I am not the cat I used to be. This is one of the problems with lying regularly; if you don’t examine each lie, it’s easy to lie for personal benefit, and that’s what I’m doing. Here, it’s slightly cleaner – I’m not fooling anyone; I just want to murder the guy for bad reasons and pay penance in the form of cash. But it still makes me more likely to enter a murder cycle.

    3. Nate’s kids and family are joyous over his bloody, violent death is what I think will happen. But I am wrong; they are vengeful and angry. People get way madder over murder than they do over cancer. This leads to troublesome societal trends; people in murderland spend a lot of money and time on security and countermurder. These things don’t end up as one-offs, and everyone is more afraid.

    4. Other people, observing the result, engage in similar offset-type behavior. But instead of paying for medical treatment, they assist the economy in at least as helpful a way (in their eyes) by building an estate with lots of prostitutes and cocaine. Money then flows through the prostitutes and cocaine dealers to other people and has the predicted multiplicative effect, raising the welfare of many.

    Let’s take a different case: Joe the Atheist wants to build up his pro-atheism organization. He is friends with Pastor Oldguy. Pastor Oldguy wants to build a church where he teaches that there’s not enough stoning to death of people. Pastor Oldguy has a million dollars.

    Joe promises to build the church, then takes Oldguy’s money, and shovels it into his own organization. Then he stages a car crash where Oldguy dies, so he stops complaining, like, “Where’s my church?” Net good, right? Joe is moving the sanity waterline, or something.

    No, because of the murdery part. Rules can create utility. Strong rules against murder create societal utility. Letting people murder for “good” reasons leads inevitably to abuses. Sure, Pinochet got things running better. But the survivors (even with the much better economy, even with the large number of lives saved by reducing poverty) are – decades later – furious and embittered. We can say they are ingrates, but in the end the reality is that the amount society suffers per murder is very, very high. If we say it will be an invisible, unknown murder as a one-off that won’t affect the murderer… well, it just won’t go that way.

    –JRM

    2018 Discussion Between SA and Bay Area General Contractor

    BAGC: OK, now we’re going to have to add $1.8 million to the price of the Alexander Psychiatry Building, because it turned out the fourth floor has to be built above the third floor, which we totally did not know, and now we need more money.

    SA: Have you ever read my post, “Ethics Offsets”? Why don’t you do that now?

    BAGC, a bit later: Oh, I see now! It turns out the old price was just fine! No additional cost for you! My mistake! Really!

    • kerani says:

      RE: Pinochet & murdery bits –

      (This should *not* be read as an endorsement of dictators of any sort.)

      Looking at South America from an outside perspective, it appears that the options for Chile were not “Pinochet” or “non-murdery people running the government” but “Pinochet” or “murdery people, similar to those running other SA countries, running the government”. So Chile ended up with “murdery people who installed order, flat-lined corruption, and built a sustainable economy”, rather than the alternatives, which were like Pinochet only in that they were murdery.

      I know several Chileans who feel that Pinochet was far better than they could have gotten. (I also know Chileans who feel that any government with only two political parties (ie, the USA) might as well put on Nazi armbands and call it a day. Such is life in a relatively free society.)

      (This ignores that no nation is a blank slate, which could be a fatal flaw in the comparison.)

      • Anonymous says:

        Pinochet was the “murdery people similar to the neighbors” choice; the other choice was “murdery people different from the neighbors.”

        • kerani says:

          Emm. An examination of the post-Pinochet roads of Chile vs other SA nations doesn’t support this conclusion, imo.

          (Given that we don’t have Allende or UP to compare their results to the real world, we can’t say that they would NOT have ended up like every other leftist group in Latin America, but I think the onus is on those who would claim that UP had some magic power to become and remain non-murdery that the rest of their ilk did not.)

      • Illuminati Initiate says:

        I’m not familiar with Chilean history but it seems that Allende was a LOT less murder-y than Pinochet was. I mean as far as I can tell he didn’t even have mass arrests of dissidents or anything like that.

        Also, frankly, if I have to choose between a left-wing murder state and a right-wing murder state I will always choose the left-wing one, unless there are pretty massive differences in how murder-y they are.

        Whereas the US had a consistent policy of picking right wing murder states (maybe even when there was a massive disparity- I don’t know the exact numbers but it seems pretty unlikely to me that Soviet!Afghanistan would have killed nearly as many people as Taliban!Afghanistan).

        • Jared says:

          Left wing dictators generally have far worse economic policies than right wing dictators. I would much rather live in a banana republic than socialist Cambodia.

        • kerani says:

          I’m not familiar with Chilean history

          Let’s not discuss this in your absence of facts, then. Likewise Afghanistan, which really deserves in-depth examination.

          if I have too choose between a left wing murder state and a right wing murder state I will always choose the left wing one, unless there are pretty massive differences in how murder-y they are.

          A review of the 20th century shows that there are, indeed, massive differences in the murder-iness of such states, and the survival of the population is better in the right-wing state. However, I don’t disagree with your right to choose the murdery state of your own preference – especially as murdery states are particularly hard on those who most disagree with them.

          Picking the murdery state least likely to kill you due to your political leanings would only be rational.

          • Illuminati Initiate says:

            I didn’t have a total absence of facts. It’s just that I can’t find anything about Allende indicating he was secretly planning on suddenly starting to mass-murder people. If someone has some evidence that he was, fine, but saying that Pinochet killed fewer people than Allende would have seems pretty much unfounded.

            The Afghanistan thing I’m admittedly much less sure of in terms of deaths alone and probably shouldn’t have brought it up.

  101. Anonymous says:

    The best counter I have to the “murder offset” dilemma is to state that the value of one life is infinite. If that’s true, it doesn’t matter how many lives you save because you will still be in “life debt.”

    Also, stating certain lives are “more valuable” than others, while most likely true from a global economic sense, leads us into some murky waters. If we are basing these decisions on economic value, wouldn’t that make it OK to kill anyone who is actively detracting from the economy, thus giving us a net positive? Clearly it’s not ok to do that, which to me means that the value of a life does not come from one’s economic contributions.

    • vV_Vv says:

      The best counter I have to the “murder offset” dilemma is to state that the value of one life is infinite.

      But this doesn’t allow you to compute any meaningful solution to the runaway cart dilemma, which is pretty much what utilitarianism was intended to solve in the first place.
      Moreover, if the value of one life is infinite, it follows that you are morally compelled to spend any amount of personal assets and effort to save or create one life, as long as this doesn’t jeopardize your own life. But after you have saved or created one life, your moral obligations are over, as long as you don’t kill anybody.

      • Null Hypothesis says:

        Utilitarianism wasn’t invented to solve the problem. It was invented to justify one of the solutions. Ethics are morals inflicted/held/agreed on by society.

        One may feel it’s moral to throw the fat guy in front of the trolley. That’s his personal moral calculus. Utilitarianism as an ethic just says society agrees with it – or, more properly, that those with political power will enforce that reasoning and judge you based upon it.

        But it also suggests that asking the fat guy his opinion is irrelevant. Also, in every hypothetical version of that problem, the option of throwing yourself in front of the cart instead of pushing the other guy in is never ruled out. But it’s always implicitly ignored. Why?

        Because killing one stranger versus five is a calculation of 5 vs 1. Killing yourself is a calculation of 5 vs 10,000. And incidentally, if it were your children on the track, it’d be 5 million vs 10 thousand.
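
        A minimal sketch of that arithmetic, using the hypothetical weights from this comment (nothing here is an endorsement of the calculation; it just shows how the answer flips with the weights):

        def push_is_justified(value_saved, value_sacrificed):
            # Naive utilitarian test: sacrifice whenever the value saved is larger.
            return value_saved > value_sacrificed

        stranger, me, my_child = 1, 10_000, 1_000_000

        print(push_is_justified(5 * stranger, stranger))   # True:  5 vs 1
        print(push_is_justified(5 * stranger, me))         # False: 5 vs 10,000
        print(push_is_justified(5 * my_child, me))         # True:  5,000,000 vs 10,000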

        We value lives differently. Which is fine enough on our own. But inflicting one’s valuation of someone else’s life upon them is a line that utilitarians deem themselves justified in crossing. It’s the exact same moral event horizon crossed by every tyrant and mass murderer.

        Scott used supporting a bloody dictatorship as an example of a bad to be offset. But using the exact same moral reasoning, the dictatorship can be an offset itself.

    • Lambert says:

      Life is of infinite value, therefore any nonzero probability of saving (or losing) a life has infinite expected value, therefore you should not accept $1,000,000 to raise your probability of death by one part per trillion.

      Tl;DR:
      Don’t mess with infinities unnecessarily.
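
      For what it’s worth, a minimal numeric sketch of why the infinity breaks the usual expected-value arithmetic (the finite value-of-life figure below is an arbitrary placeholder, not a claim about the right number):

      LIFE_FINITE = 10_000_000       # placeholder "statistical life" value
      LIFE_INFINITE = float("inf")

      payment = 1_000_000
      added_risk = 1e-12             # one part per trillion

      # With a finite value of life the trade is trivially worth taking;
      # with an infinite one, no finite payment can ever compensate.
      print(LIFE_FINITE * added_risk < payment)    # True  (expected loss ~ $0.00001)
      print(LIFE_INFINITE * added_risk < payment)  # False (expected loss is infinite)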

  102. 27chaos says:

    I don’t see anything wrong with this, with the caveat that Schelling fences are important and might not always be easy to offset. If you do your utility calculations wrong, then you’re probably in for a bad time. This is probably why your moral intuitions are conflicted: the assumption of omniscience that you make is not an even remotely realistic one, so your intuitions are slyly rejecting your conclusions whenever you look the other way for half a second.

  103. Anonymous Utilitarian says:

    Maybe the rich guy could offset the killing of his enemy with a simultaneous killing of enough negative-utility humans. If the rich guy can afford nuclear weapons, trick his enemy into visiting Google HQ, then nuke the whole thing — from orbit, for style points. A nerdgasm of positive utility spews forth from the hole in the web from which Google once spied on the whole world, clearly offsetting the loss of a single rich guy’s enemy.

  104. Elim says:

    Fungible vs. non-fungible.

  105. Anonymous says:

    >Convince me not to bite this bullet.

    Your scenario is a rehashed version of the “Utilitarian Hospital”, where they slaughter one healthy person and give the harvested organs to five. Is there any argument against the Utilitarian Hospital that does not also apply to your scenario?

    • mysternee says:

      It’s not the same, and in a very important way. In the UH scenario, the two potential courses of action are mutually exclusive. The whole point of that scenario is that you can’t save all six patients, and the utility calculus suggests killing one in order to save the other five. This is what makes it a powerful objection. In Scott’s example, the two actions in question are not linked. The millionaire can decide not to commit murder and donate to the AMF. This is what (in my opinion) makes it a weak objection to utilitarianism, but a potentially robust objection to carbon offsetting.

      • Jiro says:

        It’s a weak objection to “utilitarianism” in the abstract, but it’s a strong objection to “utilitarianism, with an explanation for why I don’t have to give everything I have to charity”. And utilitarianism without such an explanation is unworkable.

        • mysternee says:

          I really don’t see how this is an especially strong or interesting objection to utilitarianism in terms of excessive demand. You can remove all the stuff about offsetting, and still end up with the classic argument that any coherent model of utilitarianism places excessive demand on moral agents. I don’t see how the issue of ‘offsetting’ brings that into focus in any meaningful way.

          I spoke about this at a little more length in a couple of earlier comments which I’ll just link here to avoid duplication, and I’m interested to know if I’ve missed something:

          https://slatestarcodex.com/2015/01/04/ethics-offsets/#comment-171278

        • Anonymous says:

          >with an explanation for why I don’t have to give everything I have to charity”. And utilitarianism without such an explanation is unworkable.

          …”My preferences are not identical to those of a perfectly moral agent” works just fine.

          Of all the objections to Utilitarianism, “why I don’t give everything I have to charity” is a rather weak one.

      • Anonymous says:

        I would argue that there is a link. Some people will not be able to summon the motivation to act in a utilitarian fashion in some cases unless they allow themselves some indulgences.

  106. birdboy2000 says:

    It occurs to me that a great deal of politics rests on making exactly these sorts of calculations about the value of human life, once you replace a millionaire with a government (although kleptocracies blur the line.)

    Questions like “it will cost me a certain amount of money to provide people (who would otherwise die without it) with housing/health care/food”, or “if I vote for this party we will go to war, and the war will kill so many people, but the economy will do well, so I’m voting for it anyway”, or “this regime executes people of the wrong class background or commits genocide against a certain ethnic group, but they’re improving the standard of living for the rest of the population, so they should be supported in spite of their human rights violations”, or, if I’m living in said country, “I should support the regime (or the rebels)”, all turn on similar answers to that question.

  107. Jared says:

    I make it a point to never bite the bullet. Even if I could do some mental gymnastics justifying something horrible, my moral intuitions are always going to find it wrong and my conscience will make me hate myself. For your own happiness, you shouldn’t do something that you find horrifying.

    • That’s an interesting position. FWIW, I’ll offer a countervailing perspective: all the best things I’ve achieved in life (loves, friendships, community involvement, working in science, charitable giving, etc.) have required “biting the bullet”. This is largely due to having been raised in a fundamentalist religious family. At various times, after researching moral questions, I became convinced that some part of the moral code I had been taught was false. But changing moral intuitions takes slower and subtler processes than changing rational beliefs, so following my new, improved moral system initially required that I do (or not do) things I felt very guilty for doing (or not doing), even while believing them to be right. Fortunately my (system 1) moral intuitions eventually came into line with my (system 2) beliefs via time and practice.

  108. Null Hypothesis says:

    “Pretty sure if chickens could vote on it…”

    There is your answer right there. Having the audacity to think it’s ethical to place equal value on other people’s lives and then trade them off.

    Ethical offsets in general are suspect, because ethics and principles aren’t about the results. They’re about the process.

    And if you don’t agree, you’re ethically bound by your logic to go out and kill 20-somethings, lobby to require motorcyclists to ‘not’ wear helmets, and eventually make it policy to randomly cull ~1% of the healthiest members of our population.

    Because one organ donor saves many lives. Killing someone healthy contains its own ethical offset!

    Polling a random group about killing one ‘other’ person and saving two will get a 10/10 yes vote.

    Try polling the poor shmuck you’re about to kill. In the end this utilitarian thinking requires the bastardly notion that you can inflict your subjective evaluations on others. You value all strangers’ lives equally. Fine. So do most other people. But they value their own lives much more. It’s an insipid and reprehensible notion to claim it ethical to murder one guy to save another; and it’s a tad disturbing you feel the need to crowdsource this conclusion.

    • Anonymous says:

      If you don’t find such ethical conclusions disturbing, you’re doing it wrong. The entire point of this kind of philosophy is to push ethical systems to their breaking point and find interesting edge cases where one intuition defies another.

  109. Josh says:

    The obvious distinction: humans aren’t fungible. Killing one person, even to supply someone else with more QALYs, is considered bad*. See: Horcruxes, which trade any arbitrary N QALYs for a seemingly infinite number.

    * This might just beg the question, but destruction of non-fungible goods has a cost not measured in the value or cost of those goods.

  110. Grant K says:

    For a lot of people’s belief systems, there are inherent moral differences between killing someone and emitting CO2. No one is inherently opposed to CO2 emissions. People are opposed to the effects that CO2 emissions have on people, animals, and the environment by causing climate change. By buying offsets, at least theoretically, all of the would-be damage has now been negated, because no net CO2 is being emitted.

    Most people are inherently opposed to murder because they believe the act of murder itself to be immoral. It doesn’t matter if there is an offset because the act itself is wrong. Offsets are okay when the act itself is not wrong, but only the effects are.

  111. Greg says:

    Which is worse – killing one person or causing a mild inconvenience to six billion?

  112. The_Duck says:

    Ethics offsets feel exploitative: you’re holding some irrelevant good hostage so you can commit some (admittedly lesser) evil. Shouldn’t we resist this sort of blackmail?

    Refusing to sanction ethics offsets feels like refusing an unfair split in the ultimatum game.

  113. Scott says:

    If eating meat is worth $1000 to you, we’re going to have some problems with this scheme, because I suspect most of that if not all is tied up in the marginal cost of meat over vegetarian calorie sources.

    Searching around, I found a few (admittedly flimsy) sources that put the cost of a meat diet as $800-$2200 higher than a vegetarian diet.

    As a side note, I came across a much more interesting paper [pdf], which quantifies the relative value people place on different food categories (meat vs sweets vs dairy vs etc.). See table 5 in the pdf.

    • I did a bit of searching on google and it feels like just as many people are saying that vegetarianism/veganism is more expensive than meat-eating (for example), and those who are saying that vegetarianism is cheaper are often saying something more like “vegetarianism can be cheaper if you shop smart”, which implies a time and energy cost.

      And I only see three tables in the pdf…

      • Scott says:

        That teaches me to not search for the contrary position. Though the cost per calorie of meat is higher than for vegetarian options, if you home-cook then there isn’t an additional cost or time investment, so I’m going to go with a ‘plausible’ on this one.

        Table 3, my mistake.

      • Anonymous says:

        Ongoing time and energy costs of being vegetarian apply to any kind of bargain hunting, or to finding processed imitation meat you like. Without those factors, eating vegetarian foods can save money, after the one-time energy cost of learning to cook beans, rice, lentils, etc (pretty easy with a crockpot).

    • Anonymous says:

      I don’t understand; I thought he meant that he’d be willing to pay a hypothetical $1000 on top of whatever his diet costs him, unless you mean cost to someone else?

  114. Trevor says:

    I haven’t read Plato’s Republic in forever, but I feel like that branch of virtue ethics is a path worth pursuing if you’re looking for disagreement. In rough sketch, I think the argument goes:

    If you kill somebody, regardless of circumstances, your mental model will be thrown for a loop. This will affect your motivation. One predictable result of this change is that you won’t act virtuously in everything you do. Also, yeah, personal virtue is the thing we’re trying to maximize.

  115. Jiro says:

    Several people have suggested that we do actually have murder offsets in the form of governments who do things to help people, that are still statistically likely to cause deaths. Or just people who drive cars and create a non-zero risk of other people’s death.

    I don’t really think that this is a solution. Most of us would still object if the government killed one person to harvest their organs to save five. It is considered (by most people) okay for the government and the driver to offset harm against benefit only under certain conditions–for instance, that the harm to individuals is statistical and incidental rather than individuals being directly targeted. And those conditions are not utilitarian.

  116. Lightman says:

    I think the distinction being missed here is between agents and non-agents. Very few people are willing to grant moral worth to the environment *in and of itself*; it is valuable insofar as it has use value (use value here encompasses aesthetic use value) for human agents (and maybe non-human agents, depending on your worldview). So, if you’ve overall increased the health of the environment, you’ve done good, because you’ve increased the use value for everyone. “The environment” as such has no claim against you.

    Ethical vegetarianism usually rests on the idea that non-human animals are moral agents with rights. Eating animals is thus a violation of an agent’s rights. The animal you eat always has a moral claim against you, even if you are a net benefactor re animal welfare.

    The same obviously applies to the wealthy murderer situation.

    • anon says:

      Very few people are willing to grant moral worth to the environment *in of itself*

      This is not clear. Deep ecology/ecocentrism does. Where are the polls?

      • Lightman says:

        I thought about Deep Ecology when I was writing this post; while I couldn’t say for sure what percentage of the population subscribes to it, I am fairly confident in my conjecture that Deep Ecologists are pretty rare. Anecdotally, I know some pretty hard left environmentalists, and very few of them are Deep Ecologists.

        I’m fairly confident that more people assign non-instrumental moral value to humans than assign non-instrumental moral value to non-human animals, and that more people assign non-instrumental moral value to non-human animals than they do the abstraction “the environment.”

        I’m willing to grant that, if we accept the idea that the environment has inherent moral value/rights, then yes, the offset calculation would be immoral.

  117. Pingback: Seeing double | More Right

  118. Dumky says:

    A famous thought experiment along those lines is a doctor killing a healthy patient to harvest organs to save multiple other patients.

    I think there are two separate questions: (1) what you should do and (2) what others can do to you.

Regarding (1): Whether the offsetting action makes you sleep better is up to you. There is no telling how you should feel.
    Regarding (2), reason alone lets us derive something of a rights-based answer (see Rothbard/Hoppe/Kinsella): If you do something to someone, you are demonstrating that you accept the same done to you. This means that the victim can get restitution and then do it to you (two teeth for a tooth). Consistency and universalizability would prevent you from arguing against such actions.
    I think it avoids the dilemma of utilitarianism.

In the case of some pollutant, there is no claim against you since you didn’t generate any pollutant on net. The “victims” are free to do the same to you (i.e., do nothing).
In the case of the chicken-eating, murder, or organ-harvesting-by-murder, the victims have claims against you (and they or their delegates can do the same to you). Even if you donated money to the victims or their representatives as an “offset”, you still demonstrated that you think murder is acceptable. So they have a claim to your life. The offsetting actions may be extenuating (an appeal to their sympathy when you negotiate keeping your life), but they do not affect or reduce their claim.

    • Dumky says:

I’m thinking more about this. The reasoning isn’t as strong as I thought.
Basically the problem is inferring what principle the aggressor demonstrated when he initially harmed the victim. We need to infer that principle to generalize it and let the victim use it back on the aggressor. But that inference seems underdetermined.

      A few examples:
(1) A beats B. Does that mean that B can beat A back, or that anyone can beat A?

      (2) A beats B and gives to charity. Does that mean that B can beat A back, or only beat A back if he also gives to charity?

      (3) A beats B after B consented. Does that mean that B can beat A back, or only beat A if A consents? Why are the two actions (beating and getting consent) relevant to bundle, when bundling didn’t make sense in (2)?

(4) B beats A back because A beat B first (another example of conditional violence). Common sense tells us that the punishment is legitimate and does not authorize further escalation (A beating B again), but that is not a generalizable principle we can obviously infer from the actions of both parties.

  119. Haggai says:

    Part of why this feels weird to me is that we typically measure wealth in money, and when I think of an Evil Mastermind with a lot of money, other questions present themselves to me, such as:

Well, how did EM get all this money in the first place?

    Was it acquired ethically, or was EM able to exploit poor workers in third world countries due to a lucky accident of birth?

    Is it even fair for EM to have so much disposable income that EM can afford to perform immoral acts and offset them ethically?

I think these objections go away (in my mind) if I try to imagine EM as possessing lots of wealth in some “fair” way — maybe the massive amounts of disposable wealth which EM possesses lie in non-transferable assets. Like, for example, maybe EM can heal people with their hands, or can personally mine large amounts of rare earth elements by some method which for some reason no one else can do.

    I think when I frame it in this way the idea of ethical offsets feels better to me, because I don’t get stuck on the issues of whether inequality is unethical.

    (first time posting here!)

    • Samuel Skinner says:

      “Was it acquired ethically, or was EM able to exploit poor workers in third world countries due to a lucky accident of birth?”

      Is this exploitation “I cut off the hands of those who didn’t meet quota”, “I exposed people to long term pollution” or “I offer people jobs that had poor working conditions but paid fantastically better than anything else in the country”? Because sweatshops aren’t in the same category as bypassing safety/pollution standards or involuntary labor.

      I should note that most wealth in the first world isn’t acquired from the third world- Walmart is the second biggest business in the world after all (number 1 is Sinopec which is a bit self-explanatory).

    • Tracy W says:

Out of curiosity, do you get this feeling when thinking about pretty much anyone with disposable income and/or time?

      The fact that you are posting on this blog indicates that you are not scraping along on the barest minimum – even if you are using a public computer available for free, that means that there’s wealth being provided to you by someone else. How do you know that wealth was acquired ethically?

Isn’t it a lucky accident of birth that you were born somewhere where people have enough money to leave public computers lying around? Isn’t it a lucky accident of birth that you were born without some terrible congenital disease that killed you before your first birthday, even in a rich country? Isn’t it a lucky accident of birth that you don’t have some terrible mental illness that means you are too paranoid/delusional/etc to use the public computer and take part in other day-to-day activities like that?

      Is it even fair for you to have so much disposable time that you can sit around fretting about whether it’s fair for a hypothetical rich person to be able to buy offsets?

I’m not trying to guilt-trip you (I’m also very privileged), just pointing out that the difference between you, or me, and the Rich Man is pretty minor. In broad terms, you and I are both already incredibly lucky by birth.

  120. Tarrou says:

    Scott is conflating two ideas in his moral offsets, only one of which is moral.

    The first is escape from punishment, which is immaterial to his moral worth, and in fact, I would argue attempting to escape the consequences of one’s bad actions is inherently immoral. But whether or not a criminal goes to jail does not change one bit the moral price he pays for the crime. It is the intent that matters. This is why we admire the civil disobedience protester and scorn a hit-and-run driver. If it is moral to break a rule, attempting to get away with it neutralizes that morality. Accepting punishment adds to the injustice of the immoral rule, and enhances the morality of the actor.

    The second is actual “ethics”, but here we run into consequentialism, which I believe Scott is trying to talk his way through in this post.

  121. DrBeat says:

The instinctive reaction to this is revulsion, even though you can make a case that we should not be revolted. Cases like this, where our instinctive reaction contradicts the full logical explanation, are used to say that our instinctive reactions are wrong.

I have a counterproposal that applies in cases like this and a couple of others. The revulsion that does not go away even after all the logic has been explained is not an irrationality; it is a perfectly rational reaction to a possibility that people usually consciously omit: that you are being tricked.

    The revulsion not reacting to your logical explanation isn’t a bug, it’s a feature, based on the probability your logical explanation is horseshit being used to confuse me so you can get to murder someone. We think it is still wrong to morally offset your murder, because nobody will ever morally offset their murder, but they might claim to in order to get away with it. Anyone who would claim they can commit murder and offset enough of their ethical burden to end up a net positive is exactly the person who should never, ever, ever be trusted to determine the moral values of anything.

We can’t separate our moral intuition from things that actually exist, which isn’t really that big a failing. Numbers won’t convince us because the numbers you put forth aren’t being compared to any existing values we hold for those numbers; they are being compared to the possibility that your numbers are bullshit and you’re trying to get one over on us.

    (I get in this kind of mood every time I see someone say that a certain string of outcomes of an allegedly random process was unlikely to be coincidence, and someone else tut-tuts them and says “every outcome is just as likely as every other!” You dipstick, we’re not comparing the odds of one random outcome vs one other random outcome, we’re comparing the odds of one random outcome vs the odds that the outcome isn’t random at all!)
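    A minimal sketch of the probability comparison above, with made-up numbers (this example is not from the original comment): the relevant question is not how likely the specific outcome is under chance, but how the chance hypothesis compares to the hypothesis that the process is rigged, once you allow even a small prior on rigging.

    ```python
    # Sketch of the "coincidence vs. trickery" comparison, with assumed numbers.

    def posterior_trickery(p_outcome_given_chance, p_outcome_given_trick, prior_trick):
        """Posterior probability that the process was rigged, via Bayes' rule."""
        p_chance = (1 - prior_trick) * p_outcome_given_chance
        p_trick = prior_trick * p_outcome_given_trick
        return p_trick / (p_trick + p_chance)

    # Ten coin flips, all heads. Every specific sequence has probability (1/2)**10
    # under a fair coin, but a cheat who wants heads produces this one with
    # probability ~1. Even a small prior on cheating dominates.
    print(posterior_trickery((1 / 2) ** 10, 1.0, 0.01))  # ~0.91
    ```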

  122. Princess Stargirl says:

I also felt forced to “bite the bullet” when I decided that among “decent people” the morality level of a person is almost totally dominated by how much they donate to effective charity. Basic things like being kind/open-minded/etc don’t come close to having the impact of well-placed charitable donations. So in practice, among people who already give a non-trivial amount to EA charities and don’t commit serious crimes, the only significant morally relevant difference is the amount/percentage* given to charity. Most aspects of “everyday morality” are just rounding error.

    *Idk if percentage or amount is the important metric.

  123. Tenoke says:

    As an outside observer, I definitely prefer the world in which 10 people are saved versus the world where 1 person is saved (not killed), especially if the knock-on effects are taken care of.

In fact, I’ll go as far as to condemn the people who try to rationalize saving that 1 person instead of the 10 because of an icky feeling that they get when murder is involved.

    • Ufnal says:

Oh, no. You cannot just say “saved” = “not killed”. Or else you end up with a world in which every time I have an urge to just grab somebody’s throat and squeeze, and I resist that urge, I count as saving a life.

      Also, this kind of thinking can lead to many dangerous ideas, such as “if the number of people dying from hunger/disease/war in the Third World countries during a decade exceeds the number of people in the First World countries, and all the money from the First World countries would be enough to prevent deaths from hunger/disease/war in the Third World for half a century, then it’s morally right to kill all the people in the First World, grab their moneyz and give them to the Third World”.

  124. stillnotking says:

    There is no coherent theory of morality that satisfies all our intuitions, because our moral nature is an emotional suite with all the attendant quirks and inconsistencies. One might as well try to develop coherent theories of love, or music appreciation, or ice-cream preference. The main difference is that part of our moral intuition is the intuition that it isn’t merely an intuition. This is easily disproved by some common thought experiments that everyone here is likely familiar with; Scott’s third example serves quite well too.

    This may not mean that ethics is a complete waste of time (it has had some largely incidental benefits), but it does mean that the basic goal of ethical philosophy can’t be achieved.

  125. Vadim Kosoy says:

    There are a lot of comments on this thread so I risk repeating someone. Nevertheless. I think it’s a fallacy to assume actions can be divided into “moral” and “immoral”. Instead, there’s a scale: some actions are better than others. Making a huge donation + murdering someone might be better than doing neither but it’s still worse than just making a huge donation.

    So, does it mean the best action is donating everything to charity except a subsistence wage? I’m not really sure but here are three possible approaches:

    1. Yes, donating almost all of your money to charity is the best choice. However it is not a choice most people are psychologically able to make. Therefore it is not the best choice in the space of “psychologically admissible” choices.

    2. No, donating almost all of your money to charity is not the best choice. This is because you’re not a perfect altruist therefore you care about yourself more than about other people. But you’re also not a perfect egoist so you should still donate *some* money.

    3. No, donating almost all of your money to charity is not the best choice. This is because having a person with a moderately comfortable life is more valuable than having a person with a miserable life + a small probability of having another person with a miserable life.

  126. Ufnal says:

Well, I have never felt too fond of utilitarian ethics, and I feel there’s something deeply wrong with trying to reduce ethics and morality to monetary value. The last part of this post, whatever its intention was, reinforced those feelings and beliefs. So thank you for that. 🙂

    BTW, as this is my first comment here, I’d just like to add that despite differing from you in, as far as I can tell, quite a lot of things (from religion and ethics, through nationality and education, to politics), I really appreciate the way you promote thinking instead of shouting, analysis instead of shitstorm and careful consideration instead of [or at least not entirely replaced by] outrage. As one person that made me visit this blog said, Slate Star Codex is a great common-sense antidote to all the hurting butts.

  127. Anonymous says:

Note that, to produce the paradox, you require the ‘least convenient world’. In invoking this, you have intentionally abandoned the sort of normal world that our moral intuitions are calibrated for, in favor of a bizarre alternate reality which may not even be possible. Only then do we get the mismatch between utilitarianism and intuitions.

Then you somehow want to conclude that utilitarianism must be wrong. The real question is, of course, why you trust the fuzzy logic of intuition to give the correct answer. We know that our intuitions are wrong in any number of cases they aren’t really meant to deal with – the easy example is relativistic speeds. You can write ‘the speed of light is the same in all inertial systems’ a million times on a blackboard, but it won’t magically allow you to understand relativistic physics intuitively.

The second mistake, which you never quite get around to actually making, is to assume that Bizarro World utilitarianism can be transported back into the real world. The hypothetical only informs you about utilitarian calculations in a world where very relevant concerns have been abstracted away.

    The only thing which follows from the hypothetical is that if we lived in that world, our moral intuitions would probably be very different. This is not much of a bullet to bite.

  128. Buffalo_soul_jah says:

    “I can’t imagine anyone saying “You may not take that plane flight you want, even if you donate so much to the environment that in the end it cleans up twice as much carbon dioxide as you produced. You must sit around at home, feeling bored and lonely, and letting the atmosphere be more polluted than if you had made your donation”.”

Well, it happens! No matter how big a surcharge I’m willing to pay (toward the corresponding ecological harms), I’m prohibited from buying incandescent light bulbs or high-flow shower heads.

    Doesn’t make any sense (these things are not infinitely harmful) but it’s the law.

  129. Pingback: Ethics Offsets | Neoreactive

  130. Kenny says:

First, there’s no virtue-ethics morality that doesn’t condemn you for murdering someone as in the scenario described. So let’s assume utilitarianism is the obviously correct ethical system. [Assume the cow is a sphere of uniform density …]

Morally, offsets are terrible! Mainly because there’s an opportunity cost associated with the offset. So the moral calculation should include the moral cost of the immoral act, the moral benefit of the offset, and the opportunity cost of the offset [and really maybe also the opportunity cost of the immoral act].

    Morally, a lot of people should not be flying in airplanes if doing so is so bad. Only those that are definitely doing something extremely positive morally by doing so should fly.

    Practically, people shouldn’t be flying somewhere for vacation, assuming that flying really is so terrible.

    Consequentially, this is probably a terrible idea given how fragile Schelling Fences are once a sufficient number of people have hopped over them.

  131. Adair Neto says:

    So,

If you have money you can pay to offset every unethical action;
Money is distributed unequally;
Only rich people can offset their own unethical actions;
So offsetting is unethical.

This is a pretty liberal thing, isn’t it?

  132. Pingback: Ethics Are Important | Living Within Reason

  133. Pingback: This Week in Reaction (2014/01/09) | The Reactivity Place