A Series Of Unprincipled Exceptions

Meeting with a large group of effective altruists can be a philosophically disconcerting experience, and my recent meetup with the Stanford Effective Altruist Club was no exception.

Buck forced me to pay attention to an argument I’ve been carefully avoiding. Most people intuitively believe that animals have non-zero moral value; it’s worse to torture a dog than to not do that. Most people also believe their moral value is some function of the animal’s complexity and intelligence which leaves them less morally important than humans but not infinitely less morally important than humans. Most people then conclude that probably the welfare of animals is moderately important in the same way the welfare of various other demographic groups like elderly people or Norwegians is moderately important – one more thing to plug into the moral calculus.

In reality it’s pretty hard to come up with a way of valuing animals that makes this work. If it takes a thousand chickens to have the moral weight of one human, the importance of chicken suffering alone is probably within an order of magnitude of all human suffering. You would need to set your weights remarkably precisely for the values of global animal suffering and global human suffering to even be in the same ballpark. Barring that amazing coincidence, either you shouldn’t care about animals at all or they should totally swamp every other concern. Most sets of otherwise reasonable premises suggest the “totally swamp every other concern” branch.
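
Here’s a minimal back-of-envelope sketch of that sensitivity; every number in it is a made-up placeholder rather than a real welfare estimate, and the only point is how much the comparison swings with the exchange rate you pick.

```python
# Illustrative only: the head-counts and per-individual suffering scores are
# made-up placeholders, not real estimates. The point is how sharply the
# totals swing with the assumed human-to-chicken exchange rate.
HUMANS = 8e9                # assumed number of humans
CHICKENS = 20e9             # assumed number of farmed chickens alive at a time
HUMAN_SUFFERING = 0.02      # assumed average suffering per human (arbitrary units)
CHICKEN_SUFFERING = 0.8     # assumed average suffering per farmed chicken (same units)

total_human = HUMANS * HUMAN_SUFFERING

for chickens_per_human in (10, 1_000, 100_000):
    # Convert chicken suffering into "human-equivalent" units at this exchange rate
    total_chicken = CHICKENS * CHICKEN_SUFFERING / chickens_per_human
    print(f"1 human = {chickens_per_human:>6} chickens: "
          f"animal/human total ≈ {total_chicken / total_human:.3f}")
```

With these particular placeholders the two totals land within an order of magnitude of each other only around the thousand-chickens-per-human mark; nudge the exchange rate an order of magnitude either way and one side dwarfs the other.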

So if you’re actually an effective altruist, the sort of person who wants your do-gooding to do the most good per unit resource, you should be focusing entirely on animal-related charities and totally ignoring humans (except insofar as human actions affect animals; worrying about x-risk is probably still okay).

I acknowledged the argument was very convincing, but told Buck that I was basically going to safe-word out of that level of utilitarian reasoning, for the sake of my sanity.

Buck pointed out that this shouldn’t be too scary, given that many utilitarians have already had to go through a similar process. Peter Singer talks about widening circles of concern. First you move from total selfishness to an understanding that your friends and family are people just like you and need to be treated with respect and understanding. Then you go from just your friends and family to everyone in your community. Then you go from just your community to all humanity. Then you go from just humanity to all animals.

By the time most people figure out what they’re doing they already accept at least friends, family, and community. But going from “just my community” to “also foreigners” is a difficult step that’s kind of at the heart of the effective altruism movement. In the same way that allowing animals into the circle of concern totally pushes out the value of all humans, allowing starving Third World people into the circle of concern totally pushes out most First World charities like art museums and school music programs and holiday food drives. This is a scary discovery and most people shy away from it. Effective altruists are the people who are selected for not having shied away from it. So why shy away from doing the same with animals?

It’s a good question. After thinking about it for a while, I think my answer is that I never actually completed the process of widening my circles of concern and neither has anybody else, and because I’m thinking about this one in an abstract intellectual way I’m imagining actually completing it, which would be much scarier than the incomplete things I’ve done before.

Like, although I acknowledge my friends and family as important people whom I should try to help, in reality I don’t treat them as quite as important as myself. If my brother asked me for money, I’d lend it to him, but I wouldn’t give him exactly half my money no-strings-attached on the grounds that he is exactly as important to me as I am.

Likewise, although I acknowledge strangers as important people whom I should try to help, in reality I don’t treat them as quite as important as my friends. We all raised a lot of money to help Multi when she was in a bad situation, but there are thousands of other people in the exact same bad situation and we’re not putting nearly as much effort into them.

You can try to justify this in terms of “well, I know myself better than I know my brother, and I know Multi better than I know strangers, so I’m more effective at helping me and Multi, so I’m just rationally doing the things that would have the most impact”. But I think if I bothered to dream up some thought experiment where that wasn’t true, I would prefer to help me and Multi to my brother and random strangers even after that factor had been controlled away.

This doesn’t come as a surprise to me and I’m not sorry. But…well…I guess my worry about the animal charity thing wasn’t that I was inconsistent, so much as that I was being meta-inconsistent; that is, I didn’t even have a consistent set of rules for deciding whether I was going to want to be consistent or not.

And now I think I might have a consistent policy of allowing some of my resources into each new circle of concern while also holding back the rest for the sake of my sanity. Thus my endorsement of Giving What We Can’s principle that you should donate at least 10% of your income to charity, but then feel okay about not donating more if you don’t want to. I am allowed to balance resources devoted to sanity versus morality and decide how much of what I have I want to send into each new circle of concern – without denying that the circle exists.

I think that armed with this idea I am willing to accept Buck’s argument about animal welfare being more important than human welfare, insofar as this means I should donate some resources to animal welfare without necessarily having to give up caring about human welfare completely. I don’t think I can make a principled defense of doing this. But I think I can claim I’m being unprincipled in a meta-consistent and effectively sanity-protecting way.

644 Responses to A Series Of Unprincipled Exceptions

  1. “Most people intuitively believe that animals have non-zero moral value; it’s worse to torture a dog than to not do that. Most people also believe their moral value is some function of the animal’s complexity and intelligence which leaves them less morally important than humans but not infinitely less morally important than humans. ”

    I don’t think so. Most people don’t think torturing a dumb dog is more acceptable than torturing a smart dog, or that torturing a dumb human is more acceptable than torturing a smart one.

    Most people think this is about pain — straightforwardly, that you should not increase the net total of pain in the world.

    (Of course, almost everybody thinks that vivisecting a Drosophila is better than vivisecting a dog. In this case, there is doubt as to whether the fly feels pain at all.)

    Geeks seem to find it necessary to take a detour through “intelligent and complex”, presumably because they see talk about pain as the head of a slippery slope that leads to talk of qualia, and then zombies, and then dualism.

    • Groober says:

      ‘Most people think this is about pain — straightforwardly, that you should not increase the net total of pain in the world.’

      Do they? I guess when asked, most meat-eaters I’ve spoken to have said that they don’t want the animals to suffer, and that it’s alright to eat them if they’re treated humanely.

      Do people in general have an aversion to causing insects pain (contentious though the idea of insect pain may be)?

      If people have an aversion to causing pain, then the most effective altruistic step would definitely lie in helping animals.

      • > Do people in general have an aversion to causing insects pain (contentious though the idea of insect pain may be)?

        What do you think would happen if it were definitively proven that insects or fish feel pain?

        • bellisaurius says:

          It might not matter. I could argue some level of self defense for insect killings.

          • Which is to say, they would acquire the same status as other non-human animals. Which is to say, information about pain makes a difference separately from information about intelligence.

      • fubarobfusco says:

        “He was the kind of kid who pulled the wings off flies.”

        Even if insects don’t experience pain, there’s an expectation that the sort of person who does cruelty-like things to insects is more likely to do actual cruelty to other animals including humans.

        This probably contaminates our reasoning about insect suffering — we don’t want our utilitarian reasoning about insects to lead us to erroneously conclude that “he pulls the wings off flies” isn’t evidence for his general cruelty.

        • John Schilling says:

          For a strange definition of “contaminates” that translates as “transforms from a pedantic thought experiment to a useful tool”.

      • Paul Brinkley says:

        I can’t help but think that Jonathan Haidt would have a great deal to say about this.

    • I totally agree that most people probably don’t hold intelligence up as a marker for intrinsic moral worth. If you pin down the average person on the issue they’d probably see it as purely instrumental. I also agree that the STEM/geek crowd sees it differently. I’m sure it’s absolutely nothing to do with socially maximising the worth of their best attribute. *cough* *cough* (actually I consider myself partially a part of this crowd and I’m not actually in as much disagreement as this comment may indicate)

      One thing that I think is inconsistent about the pain approach is that it fairly obviously implies a vastly reduced human status, more so than what even most animal rights folks hold to. I’m not aware of any evidence that points to animals experiencing less pain than humans. If anything, I’d guess the opposite, because the human ability to plan and make long-term decisions is something that would evolutionarily require going against the pain mechanism, and we might speculate pain may be a slightly less overwhelming sensation in humans. Under this model it’s not even 1-1000 chickens as Scott describes it, it’s more like 1 to 1. Not that inconvenient consequences make things wrong, but I think it’s true that people virtually never apply the pain/suffering standard consistently.

      I think you’ve read my stuff and know where I stand, but I’m curious, are you a dualist? You had previously argued that consciousness and physicalism were compatible concepts, so I sort of assumed you were a physicalist.

      • Dual aspect neutral monist.

      • I totally agree that most people probably don’t hold intelligence up as a marker for intrinsic moral worth.

        What is moral worth? That’s a wishy-washy term. Intelligence is a pretty good marker for being able to figure things out – a skill that seems pretty important for the development of modern civilization.

        • Peter says:

          I suspect that the intelligence that people go for in intelligent non-human animals doesn’t map wonderfully onto geek traits. When I think about the things that make people say that, for example, a Border Collie is an intelligent breed of dog, they don’t feel particularly “geek” to me.

          Traits like alertness, responsiveness, curiosity, sociability, memory, self-awareness – I suppose all of those suggest that an animal has a richer experience of the world. Alternatively that they’re more “agenty” and thus worthy of some sort of respect – also we can form deeper, more reciprocal relationships with them.

          • Paul Torek says:

            “Agenty” is correct

          • Drea says:

            Really? To me, Border Collies feel absolutely like the geeks of the dog world. They are geeks about control of a herd. Are they more social than Labs? Don’t think so.

            Not that I think it matters. Agreed that IQ ≠ moral worth, etc.

        • Morality may have several competing definitions, but if you take the time to look into moral philosophy you’ll find there are pretty darn thorough attempts to iron out an extremely precise meaning. In fact, not being thorough is probably one of the only things philosophy can never be accused of. I’ve talked to TheAncientGeek before and he’s a fairly good philosopher, so I’m pretty certain he follows my meaning perfectly well.

          Also, intelligence isn’t as unambiguous as you think. There are fierce debates about its meaning. In the end the two aren’t in any conflict; they’re apples and oranges.

      • Max says:

        > the pain approach … implies a vastly reduced human status, more so than what even most animal rights folks hold to

        In my experience, most animal rights folks are not utilitarians, they come at it as social justice advocates and view the fight for animal liberation as intersectional with other social justice issues for humans. Most of us are not really thinking about it in terms of trying to add up the amount of pain suffered and solve that problem first.

        • Peter says:

          My impression was that “animal rights” _as such_ tended not to go in for utilitarianism, whereas welfarists were more likely to do so. My other impression was that big-name philosophers concerned about animals tended to be utilitarians (Bentham setting the ball rolling), but that activists in general tended not to subscribe to philosophical schools.

          (Also, my understanding of “intersectional” has just broken. I thought that intersectionality was due to nonlinear effects of two identities – that the experiences of black women aren’t the same as (experiences of white women + experiences of black men – experiences of white men).)

          • Max says:

            I would call myself a supporter of animal rights and also a utilitarian. In practice legal rights are the most effective way I can think of to protect animals against the majority of abuses they currently suffer. Similarly, a utilitarian can support human legal and moral rights because even though violating these rights can possibly be justified in absolute terms, respect for these rights leads to better outcomes on average over long periods of time (citation needed).

            > activists in general tended not to subscribe to philosophical schools.

            Generally correct in my experience. Most people by default have some sort of deontological stance if they’re not actively thinking about their philosophical stance regarding harm. And of the ones who do think about it, many come out even more deontological (e.g. Gary Francione).

            Re: intersectionality. When I hear most people discuss intersectionality, it means that the source of injustice against animals is ultimately the same source that causes injustice against black people and women, an oppressive system that prevents minorities from obtaining equal rights and opportunities. Your comment can still be true in this framework; it just means that the deeper in the minority sector your group is, the more injustices you are likely to face.

        • blacktrance says:

          Yes. I’ve seen animal liberationists clash with animal welfarists and call Peter Singer a speciesist (!) for adhering to a framework in which some meat-eating could be justified in theory. To them, “raise chickens humanely and kill them quickly to minimize suffering” is the equivalent of “raise humans humanely and kill them quickly”.

          • Douglas Knight says:

            While it’s true that they are not utilitarians, a more relevant point is that they have no clue what he actually says, like virtually all of his critics.

      • Ghatanathoah says:

        The best argument I’ve seen in favor of intelligence making you more morally valuable is the Singerian one. If you are intelligent it is possible to have preferences about the future and your self, since you are intelligent enough to understand what the future is and to have a sense of self. This means that you have two different sources of moral value, your ability to feel pleasure and pain, and your ability to have preferences.

        Less intelligent animals, by contrast, only have the ability to feel. They only have one source of moral significance.

        I think the Singerian approach has a lot going for it. For one thing it is binary: what matters is not how intelligent you are, but whether you can have preferences at all. A stupid person’s preferences for the future are just as valuable as a genius’s, which seems to fit well with our moral intuitions. Once you pass the threshold where you are smart enough to have preferences, you get some extra moral value.

        The Singerian approach also explains why we tend to consider killing an animal less of a crime than killing a human. When you kill a human you are destroying their sense of self and future plans. When you kill an animal you are not.

        But the Singerian approach also leaves room for considering the suffering of animals as morally important. They do feel pleasure and pain, even if they don’t have preferences, and that definitely counts for a lot.

        • John Schilling says:

          And why most people consider killing an animal less of a crime than torturing an animal, so long as it is done quickly and painlessly.

        • Andi says:

          How can you say animals don’t have preferences or plans?

        • Paul Brinkley says:

          Hmm. What about the relative morality of killing a criminal person, vs. killing an innocent animal?

          Or, if we wish to avoid bringing the matter of the death penalty into it: what about punishing a criminal person, vs. punishing an innocent animal? (It’s hard for me to imagine an adequate equivalence for this, though.)

          My angle here: you speak of killing a person as killing their plans. Well then, I think, what if their plans are evil…

          • Irrelevant says:

            I disagree that “innocent animal” is a meaningful categorization.

          • Ghatanathoah says:

            If their plans are evil, sure, go ahead and kill them. If someone has plans to cause suffering and thwart other people’s plans, then the harm of killing their plans will probably be outweighed by the harm you prevent them from doing to other people’s plans. It’s a standard utilitarian calculation, only done with preferences and plans instead of pleasure and pain.

            Also, I think plans and preferences that are actively sadistic and malicious (as opposed to merely selfish) should maybe not be accorded the same moral value as other types of plans and preferences. I’m less sure about this than I am about the utilitarian calculation argument though.

        • Panflutist says:

          If you are intelligent it is possible to have preferences about the future and your self, since you are intelligent enough to understand what the future is and to have a sense of self. This means that you have two different sources of moral value, your ability to feel pleasure and pain, and your ability to have preferences.

          I acknowledge the first sentence, but the second does not follow. I would argue (with the usual futility of arguing ethics) that a preference consists of a proposition on the one hand and the hedonic impact of that proposition’s apparent truth or falsehood on the other. The moral value that you see in preferences, then, comes from their hedonic impact, not the truth of their propositions.

          • I can see how the pain versus pleasure axis could be morally relevant, but it can’t be the whole story, because individual hedonism isn’t widely seen as particularly moral.

          • houseboatonstyx says:

            individual hedonism isn’t widely seen as particularly moral

            Bouncing off to a parallel here….

            It’s often said that suffering is worse for, or is only possible to, a being that has ‘self-consciousness’ or the sense of ‘somebody there’. If so — that’s not symmetrical with pleasure/joy. The joy we feel [Typical Mind Warning] at a classical concert, or wonderful painting, or even a sunset is often described as ‘completely losing oneself’. Instead of ‘wow, this is me here to see this’, we’re using our extra processing to grok the details in the music or whatever.

            So, again, lack of ‘sentience/self-consciousness’ may be a feature in animals, and presence of it may be a bug in humans. It makes bad moments worse … and good moments worse, too.

            There are Buddhist writings that see it that way, and some of Lewis’s too. Hm, remember Snoopy on the doghouse in sunglasses, “Here’s Joe Cool blah blah”? Okay, I’m conflating consciousness/self-consciousness/self-image. But the truth is in there somewhere.

          • Panflutist says:

            @TheAncientGeek: Indeed individual hedonism isn’t the whole story, at least in my book. Collective hedonism is (or may be). While valuing collective well-being can be viewed as a preference, it is an especially basic one that obviates the need for additional preferences.

            I have a chip on my shoulder regarding this which makes me want to emphasize that it is one thing to recognize that hedonism (the default, necessarily selfish kind) doesn’t cover everything, and quite another to open the floodgates to any old preference and declare human values to be conceptually complex. Which I’m not saying you do.

      • “Under this model it’s not even 1-1000 chickens as Scott describes it, it’s more like 1 to 1.”

        That doesn’t apply to meat eating unless it involves pain.

        • I agree, but I think that’s exactly the case for the overwhelming majority of meat eaten today, in which I’m fairly sure there is quite a bit of suffering. So in this case, it seems to imply 1-1 under this model. Vat-grown meat might be different, so that’s interesting.

    • Shenpen says:

      I really want to understand what we mean by morality here. Most people will FEEL bad about torturing a dog, dumb or not, but “I feel bad about it, it violates my sense of compassion, therefore wrong” is not an argument. What else is there?

      • Rowan says:

        What else is there depends on one’s particular theory of morality, but average people don’t seem to have specific moral systems, they just have feelings which suffer from the problem you describe and the various problems that follow from that.

        • Marc Whipple says:

          As the noted philosopher T. Doom once observed, “People have no grasp of what they do.”

          Incidentally, he made a pretty good argument that in fact the “hero” of the story was the bad guy, which is another argument for the basic premise.

      • “I feel bad about it, it violates my sense of compassion, therefore wrong” is not an argument.

        On the contrary, not only is that an argument, it is in an important sense the only argument, in that all more abstract accounts of ethics and morality derive from intuitive judgements of this sort.

        (I’m not entirely convinced myself of the previous argument, but it needed to be made.)

        • Shenpen says:

          I know at least two that don’t: social contract theory and divine command theory.

          • fubarobfusco says:

            The fact that we recognize a claim as being a “moral” claim at all — as opposed to a claim about epistemology, mathematics, or fishing — has to do with its relevance to moral intuitions.

          • RCF says:

            A moral theory is a moral theory only if people believe that it is immoral to act in a manner contrary to it. One has to first accept the proposition that it is immoral to act contrary to DCT, for DCT to have moral standing.

      • Douglas Knight says:

        I don’t know about dogs, but most people in the history of the human race enjoyed torturing cats.

        • Nornagest says:

          Citation needed? I remember Steven Pinker’s cat-burning example too, which I assume is what you’re talking about, but there’s a long road between “popular in France sometime prior to the 1800s” and “most people in the history of the human race”.

          • Douglas Knight says:

            If you have read so little history that you think I got it from Pinker, you should read history in general, and not worry about specific things like animal welfare.

          • Nornagest says:

            I assumed you got it from Pinker because that’s usually the correct assumption when someone in this crowd uses a dramatic example that I remember from Pinker, not because of any history I have or haven’t read. If someone here mentions the Yanomami, for example, what’s more likely: that they’ve read Better Angels, or that they’ve read Napoleon Chagnon’s ethnography or any of the various anthropological responses to it?

            I notice, however, a distinct lack of citations. Might want to do something about that.

          • @Douglas Knight: consider answering the question instead of displaying your superiority. In fact, I’d like to know too.

          • Korobeiniki says:

            Be all of this as it may, I would be interested in a source.

          • cypher says:

            @Douglas Knight

            You had the opportunity to give a real answer instead of a put-down, but you chose not to take it. (And certainly, if “enough” history is enough to hold your belief, then the large majority of even college educated professionals do not have “enough” history.)

            People on the internet don’t know if you’re lying, and using a put-down instead of providing a real answer can be used to hide lying. It’s also a status play.

            With that in mind, I am still curious. Citation?

          • Douglas Knight says:

            Nornagest, maybe Pinker’s book marks the point at which it is pointless to discuss the history of violence in this community. But it seems to me that most mentions of Yanomamo on lesswrong predate Pinker.

          • Airgap says:

            Whether or not Douglas is playing nice: I am aware of recreational cat burning, and I’ve never read Pinker.

          • Nornagest says:

            This just won’t die, will it?

            Fine, forget about Pinker. My main objection was always to “most people in the history of the human race”, and I’ve still seen no good evidence for that — though now I’m aware of several more creative animal tortures than I was yesterday.

        • Deiseach says:

          I haven’t read Pinker either, and I’d be interested to know where you get the idea that MOST people in the ENTIRE HISTORY of humanity enjoyed torturing CATS (as distinct from either ‘animals in general’ or other animals available to them in their environment – for example, Australians have not been able to torture cats until around 400 years ago, since that was when the domestic cat arrived there. So unless you are going to maintain that every single aboriginal Australian was perfectly free from cruelty and never tortured a koala or similar animal in the absence of cats to torture, you’ll have to account for exceptions to your cat-torturing claims).

          I mean, if we’re going to throw history around, the Egyptians immediately stand out as one civilisation that honoured, valued and even venerated cats.

        • Gerry Quinn says:

          Cats are domesticated all over the world – and realistically, domestic cats are of limited usefulness. If most people liked torturing them, it seems unlikely that the remaining few would have domesticated them.

          • Jaskologist says:

            Cats are extremely useful in any situation involving food storage, which would have been most of human history. I don’t have grain silos, but my cat is still making himself useful by eliminating the (many) mice who want to take up residence in the house.

            Also, there’s a good case to be made that cats domesticated themselves.

      • Mary says:

        Well, an obvious one that in fact confers no moral status on the animal is that by allowing animals to be tortured you are blunting your sensitivity and compassion and thus are more likely to tolerate humans being tortured.

        That, in fact, was Lewis Carroll’s chief objection to vivisection.

      • Andi says:

        There’s always the fact that the dog is a being and suffers whether we recognize it or not?

        • Irrelevant says:

          It is? It does? Those are opinions, not facts, and they’re opinions based on, respectively, an unusual definition of “being” and an unprovable assertion about the nature of suffering.

          • Berna says:

            I don’t think defining a ‘being’ as ‘a living thing’ is so unusual…

          • wysinwyg says:

            All claims about the physical world are inferences from sense data. This means that all facts are inferences from sense data.

            Andi is inferring from sense data that dogs are beings and capable of suffering. That is a fairly reasonable inference all things considered; certainly no more of an “opinion” or an “unprovable assertion” than the inference that other humans are beings and capable of suffering.

            We don’t have to make that assumption, but without it I’m not sure what the purpose of engaging in moral reasoning would be in the first place.

          • Irrelevant says:

            Berna: Err, no, it’s highly unusual. “Living being” is a meaningful phrase precisely because living does not imply being-hood. As I’ve said elsewhere on the page, I’m open to the probability that there is no difference in kind between human and animal minds, but not to blithe claims that dogs have Dasein.

            wysinwyg: If it weren’t clear, I am taking issue with Andi’s attempt to declare this a non-argument.

          • wysinwyg says:

            If it weren’t clear, I am taking issue with Andi’s attempt to declare this a non-argument.

            If it wasn’t clear, I am pointing out that this is indeed a non-argument. There is no argument to be made.

            Either you infer from the behavior of humans that they have internal lives much like yours, or you conclude that there is no basis on which to make that inference and assume they are meat robots. Pretty much the exact same situation pertains for dogs.

            If you have made a determination on this one way or the other, there is literally no evidence that could be marshaled to make you change your mind. The consciousness or lack thereof of other animals is not observable.

            If Andi makes the inference that dogs are beings* and suffer, and you conclude that this is “opinion” and “unprovable assertion” then you may very well be right. But intellectual honesty demands that you conclude the same about your inference that dogs are not beings and do not suffer since you have exactly the same amount of evidence for that.**

            based on…an unusual definition of “being” and an unprovable assertion about the nature of suffering

            This inference of yours seems unjustified since Andi gave you no information about how Andi made the determination that dogs are beings who suffer. I find it far more likely that Andi made that determination on the basis of the observed behaviors of dogs rather than any definition of “being”, unusual or not, or any “unprovable notions about the nature of suffering”.

            *Your argument that dogs aren’t beings seems like a kind of boring, purely semantic, Humpty Dumpty-type argument to me. Introducing “dasein” into the conversation as having a meaning distinct from “existence” but which you don’t explicitly give makes the semantic problem a little murkier rather than helping. Also, dogs are indeed “beings” by most common-sense definitions of the word “being” so I think you might be the one with the unusual definition here.

            **Actually, less because dogs certainly behave as though they are beings that suffer. If that’s enough evidence to infer consciousness in humans, I don’t see why the same inference shouldn’t apply to dogs.

          • Irrelevant says:

            Your argument that dogs aren’t beings seems like a kind of boring, purely semantic, Humpty Dumpty-type argument to me. Introducing “dasein” into the conversation as having a meaning distinct from “existence” but which you don’t explicitly give makes the semantic problem a little murkier rather than helping. Also, dogs are indeed “beings” by most common-sense definitions of the word “being” so I think you might be the one with the unusual definition here.

            I have not argued that dogs are not beings. I argued that you cannot call dogs beings without significant clarification of what you mean by being, because the definitions of that term range from applying to bacteria-on-up but carrying no intuitive moral weight to carrying immense moral weight but maybe not applying to anything that actually exists. (The standard usage in this sort of moral context encompasses humans and hypothetical human-equivalent things like aliens and angels and intelligent robots, not cats, dogs, and bugs.)

            Dasein as used by Heidegger necessitates awareness of the inessentiality of one’s own existence to the world, and I brought it up as a colorable definition of being that trivially excludes dogs.

    • Jaskologist says:

      Most people think this is about pain — straightforwardly, that you should not increase the net total of pain in the world.

      I don’t think most people actually believe this. Most people are not utilitarian maximizers trying to chart all of morality along a single axis.

    • Arthur B. says:

      I don’t think it’s about pain either, I think for animals it boils down to phylogenetic distance, with a few exceptions (dolphins). I think a majority of people would rather hurt a crow than a squirrel.

      • Marc Whipple says:

        Crows are smart. Squirrels are stupid and nasty: basically, they’re rats with cute fluffy tails. I’d give a crow something to eat while I was working on my Squirrel-o-pult.

        Although to be fair, I’ve eaten every squirrel I’ve killed, which is not true of every rat and mouse ditto.

        • anon says:

          Which was his point. Squirrels are cute from a distance, crows are not, hence the gut feeling is to spare the squirrel.

          • DrBeat says:

            My gut feeling is also to spare the crow, because they are smart and do cute things like bring shiny objects as gifts to people who feed them regularly.

      • Tracy W says:

        I learnt a great term for this in my environmental economics class: charismatic megafauna.

      • nydwracu says:

        Crows are generally seen as sinister. Would people rather hurt a robin or an opossum?

        (This is also an exaggerated example: opossums are nasty little fuckers, the Rambos on PCP of the suburban animal world. If you’re lucky enough never to have seen one try to kill your cat, replace it with some similarly uncharismatic animal that’s closer to humans than dinosaurs are. Raccoons?)

        I think it comes down to charisma. If you know crows are smart, you’d rather hurt the squirrel; if you think crows are bad omens, you’d rather hurt the crow.

    • g says:

      It’s certainly possible that there’s some of that going on — or, more likely I think, that because geeks are smart they find the idea that intelligence is important attractive wherever it comes up.

      But at least as far as explicitly expressed opinions go, the usual position seems to be more like this: “Below some level of complexity/intelligence, suffering either happens much less or matters much less. The falloff starts somewhere around human-infant level, and gets to near-zero maybe somewhere around rats. There’s certainly no difference to speak of between an exceptionally unintelligent adult human and an exceptionally intelligent one, in this respect.”

      • The argument behind the argument is that there is a thing called suffering which matters, and which is different from pain which doesn’t matter.

        • Kevin C. says:

          That pain and suffering are different seems to be pretty well established by the existence of pain asymbolia. (Whether this difference is morally significant, OTOH…)

          [This was supposed to be in reply to TheAncientGeek above; not sure why it didn’t thread.]

          • Airgap says:

            Except that people usually use the word “pain” to mean suffering, and “C-fibers firing*” to mean pain.

            *Or something.

            [This was also supposed in reply. Shit, wordpress, get it together.]

      • g says:

        Huh. I intended that to be a reply to someone else’s comment … which I now, several hours after posting it, can’t find. (Is it possible that I didn’t screw up, it was a reply to someone else’s comment, that comment got deleted for some reason, and when comments are deleted their replies end up looking like top-level comments? Or am I just going crazy? If so, perhaps someone can recommend a good psychiatrist around these parts.)

    • Marc Whipple says:

      One of the reasons I disapprove of torture is that I think it is “beneath” us as creatures possessing free will and judgment. (Of course, I am also sympathetic to the argument that we don’t have either and we are just moist robots, so what the Hell do I know?) In my opinion, it makes us “worthier” sapients to make the choice not to harm other beings unnecessarily. (Note that the quotes indicate totally subjective value judgment-bearing words.)

      So in that context it’s really not important whether it’s a person, a dog, or a cicada, and avoiding its suffering is not the point: the point is that I shouldn’t cause suffering when I don’t need to.

    • aerdeap says:

      People don’t care about the amount of pain in the universe, they care about human beings enjoying creating pain for its own sake. I don’t think most people care about how pain is caused to an animal as it’s slaughtered, for instance, but they do care if someone gets off on torturing a dog because someone who’s cruel to animals is likely to also be cruel to his fellow man.

  2. Carinthium says:

    As a non-believer in morality in general, I can avoid inconsistencies of this sort. Given that most rationalist believers in morality redefine morality as human intuitions, I can simply refuse to do so.

    Maybe rationalists in general could get around this part of the problem that way? I agree that there are other consistency problems, but a major one is solved by instead doing what one wants whether selfish or selfless.

    • Do What Thou Wilt fails to solve a number of practical problems. Ethics is practical reason.

      • rsj says:

        “Ethics is practical reason.”

        I think the OP was pointing out that it might not be such a good idea to try to construct an elaborate meta-set of rules that can be applied in all cases, and then feel bad that you don’t apply them to all cases, or that they end up being inconsistent.

        I.e. in geometry there is the notion of “charts” or “maps”, by which a piece of a complicated surface can be distorted to be a flat square. The chart is a good approximation for a portion of the surface, but not for the whole surface. It is never an exact representation of any portion of the surface. It’s a simplification by making something curved (possibly in multiple other dimensions) nice and flat. The surface is described by a collection of such charts, together with transition functions obeying a consistency condition. Once you have such a collection of local charts, together with transition functions, you have all the information about the shape and can always find the shortest distance(s) between two points. Well, almost always.
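
        To make the chart picture concrete, here is a minimal toy version (a standard textbook construction, with the specific charts chosen purely for illustration): the unit circle covered by two stereographic charts, glued on their overlap by a transition function.

        ```python
        import math

        # Toy illustration of the "charts" picture: cover the unit circle with two
        # stereographic charts (one missing the north pole, one missing the south
        # pole). On the overlap they are glued by the transition function u -> 1/u,
        # and the "consistency condition" is just that this gluing checks out.

        def chart_north(x, y):
            return x / (1 - y)   # defined everywhere except the north pole (0, 1)

        def chart_south(x, y):
            return x / (1 + y)   # defined everywhere except the south pole (0, -1)

        def transition(u):
            return 1.0 / u       # transition function between the two charts

        # Spot-check the consistency condition at a few points (avoiding both poles)
        for theta in (0.3, 1.0, 2.5, 4.0):
            x, y = math.cos(theta), math.sin(theta)
            assert abs(chart_south(x, y) - transition(chart_north(x, y))) < 1e-9
        ```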

        I think of reality as a bit like that, except we don’t have all of the charts, and we only have rough approximations to our transition functions (so of course they don’t follow the consistency rules). But that’s how it is.

        In other words, the world is complex, and we don’t understand the consequences of our actions all that well. We have very limited cognitive capacity, so it can be helpful to make a stylized model (e.g. throw away almost all the information) when deciding to choose A or B. But if we use a different model, then we may get a different answer, and if we try to apply some models to a question, then the results are not solvable at all, even within the model.

        Nor is it possible to come up with some meta-algorithm to generate models on the spot, as any such algorithm would need to be more complex than what is already too hard for us to understand.

        So give up on all of that.

        The value of the model is to save cognitive resources by giving roughly similar answers to roughly similar situations. That way you can do other things than constantly worrying about what to do.

        However, roughly similar in one model is not roughly similar in another. Different people can disagree on what to do. That’s also OK. Some decisions will seem inconsistent to a third party (who has only partial knowledge of my brain, of the world, etc). That’s also good. If you want to argue with the third party by arguing that your model is better than the one used by the third party, that’s valid. Maybe you’ll win. In this way, we can try to improve our models over time by debating with others and trying to understand the models a bit better. So I guess philosophy can advance. But you’ll never win absolutely — e.g. you don’t know that choice A is actually better in some model-independent sense. Nor will you ever achieve consistency in your model-making machine, regardless of how wise you become or how much you refine each of your models.

        And when you encounter a situation, you may have a dozen or so models that you can pull off the shelf and apply, and the choice you make will depend on which model you happened to dust off. For example, there have been studies in which students were presented with two baskets of goods and asked which basket they prefer. The results were that the students found it very difficult and stressful to decide in some cases, and would often give inconsistent answers, e.g. preferring A to B and then preferring B to C and then preferring C to A. So that is how we work, and consistency isn’t that important when compared with saving precious cognitive resources. Moreover, inconsistency is also robust because it means that we might discover new information. It also makes us more interesting, as people.

        The purpose of the model is not to save you from yourself, or to give you some kind of knowledge that your actions are consistent with your overall values, but to make your life easier by giving you a chance to run on auto-pilot for that particular choice so that you can enjoy life and be happy rather than constantly worrying about what to do when you get hungry.

        So it seems really masochistic to be stressed about possible inconsistencies. I use the “don’t hurt animals” model when thinking about kicking a dog. That you might use it when thinking about what to eat is OK, too. But when I have dinner, I’m not using the “Don’t hurt animals” model, I’m using the “What is healthy for me to eat” model, and I may be using other models such as “How much should I spend right now”. Saying that I am not being consistent by not applying the “don’t hurt animals” model to answer every possible question is a bit silly. Of course I would not apply it to answer every question because if I did that, I would make some really stupid decisions and be very self-destructive in the process. It’s a good thing that humans don’t work that way, it’s not something that we should feel guilty about.

        • Interesting post. So, you’re pointing out how models frequently come into conflict with our moral intuitions later down the track. If you’ll forgive my being contrary, if I understand correctly there isn’t a huge jump from your post to “morality is intellectually really difficult, so don’t try too hard?” It seems to me there’s a difference between a model used to explain, or perhaps justify or rationalise, our intuitions (that unsurprisingly fails to remain totally consistent with our fickle intuitions over time) and an authentic approach to discover and pursue the true nature of morality?

          • rsj says:

            I’m pointing out that our beliefs and thinking patterns *are* just models that can be shown to be (and will all inevitably be) inconsistent down the track, therefore they should not be judged by consistency criteria alone, but by their usefulness vis-a-vis the current stock of models already in our heads.

            “It seems to me there’s a difference between a model used to explain, or perhaps justify or rationalise, our intuitions (that unsurprisingly fails to remain totally consistent with our fickle intuitions over time) and an authentic approach to discover and pursue the true nature of morality?”

            The assumption here is that people will fundamentally act in an amoral way unless they stick to some model. Sort of like man is naturally evil, until a teacher comes along and teaches him how to be good. I have a lot of problems with that view.

            I would change that and say that the teacher comes along and helps him get from point A to point B with less injury and a greater chance of success than if no teacher came along.

            I don’t believe that there is some true authentic morality, waiting to be discovered as the result of rational contemplation.

            If you want to think of ethics as an approximation of Truth/Good, but one that is necessarily approximate (and therefore inconsistent) due to our lack of cognitive capacity, then you can do that. I don’t think you need to, though.

            I wanted to give the example that just as a set of charts that are approximations to the surface will necessarily fail consistency checks, yet still be useful for navigation, so our attempts at formulating ethics based on our values will also necessarily be inconsistent, yet still useful for whatever teleological goals you may have, whether a spiritual understanding of Truth or a more simple goal such as “I want to be proud of my life”.

          • > I’m pointing out that our beliefs and thinking patterns *are* just models that can be shown to be (and will all inevitably be) inconsistent down the track, therefore they should not be judged by consistency criteria alone, but by their usefulness vis-a-vis the current stock of models already in our heads.

            So all beliefs are models? And should be judged on usefulness. Of course to judge usefulness we need a criterion, which is a belief, which is a model, which also should be judged on usefulness…. ah.

            I understand your point that models are probably always imperfect. But in a practical sense usefulness looks awfully like motivated reasoning. Basically people do whatever they can get away with and choose a moral “model” to suit after the fact.

            You say you don’t think models are necessary, but we live in a very complex world. It doesn’t seem unreasonable to expect an authentic effort to do the right thing might also be difficult and complex, when living in such a complex world. I guess we could speculate, other points aside, that a lack of models in an intelligent person is possibly a heuristic for a lack of trying?

        • I can see the point about not tying yourself into knots about having the perfect systems, but amoralism seems an exaggerated response.

        • Carinthium says:

          For what it’s worth, this isn’t what I was saying. But it’s a very interesting argument, and I’ll be following it anyway.

      • Marc Whipple says:

        Why does everybody always leave off the first part of that?

        • Nornagest says:

          I’m more familiar with the version from Thelema, Aleister Crowley’s weird-assed religion, which in full reads “Do what thou wilt shall be the whole of the law. Love is the law; love under will.”. The Thelemic conception of Will is something rather more complex than whim or even considered opinion, though, and in fact bears a more-than-passing resemblance to Eliezer’s idea of coherent extrapolated volition; so the quote, in that context, is not describing anything remotely like amoralism.

          • Mary says:

            An obvious rip-off from St. Augustine.

            “Love God and do whatever you please: for the soul trained in love to God will do nothing to offend the One who is Beloved.”

          • fubarobfusco says:

            By way of Rabelais, for what it’s worth.

      • Carinthium says:

        Then let me explain my substitute model. People have moral desires, but those moral desires are classified as another type of ‘want’. It is also a Want that often contradicts our Coherent Extrapolated Volition, as the morality we believe in on an intuitive level is something we know rationally doesn’t exist.

        People already have decision mechanisms for when a decision is entirely selfish (e.g. what to have for breakfast). A person following my philosophy would apply that decision mechanism to everything, including so-called moral decisions.

        They would take into account long term consequences and think rationally about how to achieve wants, yes, but this could be done separately from a belief in moral truth.

        ———————-

        Against an Eliezer-style thinker who tries to redefine morality as the moral intuitions, I ask the question: Why should we prioritise Moral wants over other wants? Especially given that the intuitive belief in Right and Wrong corresponds on an emotional level to a non-existent thing?

        Against a contractualist: I agree there are circumstances in which a contractualist-style morality makes sense as a treaty between individuals. But given society will not change regardless of our personal actions, modern life is not one of them.

        • “as the morality we believe in on an intuitive level is something we know rationally doesn’t exist.”

          I don’t see what that means. For a constructivist or contractarian, a system of morality exists inasmuch as people follow it.

          “They would take into account long term consequences and think rationally about how to achieve wants, yes, but this could be done separately from a belief in moral truth.”

          I don’t follow that either. Are you saying that no matter how long your time preference or wide your circle of concern, it all still counts as selfishness?

          “Why should we prioritise Moral wants over other wants?”

          Because morality doesn’t work otherwise. From a contractarian/functional role perspective, I need to know that other people are going to hold up their end of the bargain. And I won’t know that if they have other priorities which could override morality. Therefore, morality has to be the number one priority.

          Ask yourself what is a legitimate excuse for not behaving morally.

          “But given society will not change regardless of our personal actions, modern life is not one of them.”

          Again, I don’t follow. Societies change, so why wouldn’t modern society? Is society changing a condition of entering into any moral contract?

          I can see how some people might not want to join some contracts, but modern societies are inclusive and tolerant by design…the main demurrers tend to be people who don’t like tolerance.

          • Carinthium says:

            1: That’s because modern people have redefined what it means for morality to exist. As we’re all nominalists here, I’m pointing out that this is fundamentally a redefinition we have no need to make.

            2: You don’t seem to understand, so I’ll try again.

            What I said at a basic level was that a person ‘should’ use their emotions to decide what they want independent of moral considerations. But I didn’t want people to misinterpret this as denying a role to rationality.

            It is emotion which decides what a person treats as ‘good’ or ‘bad’ even if nothing is objectively such. But rationality is necessary to determine how to maximise the ‘good’ even once you’ve defined what ‘good’ means, as is long term planning.

            3: See my refutation of contractualism below.

            4:
            By my understanding, a contractualist uses reasoning similar to why a Rule Utilitarian would commit to obey a law which lacks utility.

            Namely: If we all obey these norms, we all benefit. Therefore I should commit to obey them for selfish reasons, even if I lose some benefits that way.

            This works in abstract because everybody reasons the same way in abstract. But in reality, they don’t! Even if all of Slate Star Codex became amoral, we would still reap the benefits of civil society because there are far too few of us to make a dent in the cultural norms that hold it up. We’d have to worry about being caught, but that’s what rational calculation is for.

            To use an analogy: If you know that no matter what you do in a Prisoner’s Dilemma the other guy won’t defect, you certainly should. Society might try to punish us, but almost nobody will start breaking rules just because we do.

          • blacktrance says:

            There are some rules, which if broken, would lead to negative consequences for us. This is especially the case for the most important rules, e.g. those against murder. It’s not that other people will start murdering if we do, it’s that they’ll come after us to enforce the “no murder” rule.

          • Carinthium says:

            Fear of being punished isn’t contractualist morality, but pragmatism.

            That particular worry can best be dealt with by amoralist calculation, which is far better at determining where your own self-interest lies.

          • blacktrance says:

            Following your self-interest is egoism, which is different from amoralism, since it makes moral claims. Besides, what you call “amoralist calculation” is what grounds contractarianism.

          • Carinthium says:

            The case for calling it amoralism rather than egoism is that, although I don’t like it, I have to concede that ‘morality’ is a coherent model when defined based on human instincts, but I am stressing that it should be defied in order to properly pursue self-interest, by keeping moral instincts from playing an inappropriately great role.

            We’ve had this discussion before of course, but I still maintain that a contractualist identity has a greater risk of being irrationally moral than an amoralist one.

          • blacktrance says:

            The term “amoralism” implies that there’s no correct way of acting at all, which isn’t what you seem to be arguing. If pursuing self-interest is the right thing to do, then we can make positive claims about what one should do in order to fulfill that goal. But if amoralism is true, there is no “right thing to do”. The egoist can say that someone is acting wrongly because they’re not pursuing their self-interest, but the amoralist can’t.

            For example, if you ask the egoist whether there is a right to not be killed, the egoist will (hopefully) say yes. The amoralist will say that there is no such thing as rights.

          • Carinthium says:

            I never believed in human rights, for what it’s worth.

            It isn’t strictly the case that anything is the ‘right’ thing to do, including pursuing self-interest. To say it is would contradict my views, as I recommend pursuing wants whether selfless or selfish.

            But although the idea of objective ‘should’ is a human weakness I constantly try to dodge, it doesn’t actually exist. The reason to pursue wants is because, by definition, they are things we want.

          • blacktrance says:

            Do one’s wants (and what can be derived from them) not objectively exist? We can make truth-apt statements about whether someone wants something, whether something else follows from that, how that interacts with others’ wants, etc. That’s a reasonable basis for “should” statements.

          • TheAncientGeek says:

            @Carinthium

            “1: That’s because modern people have redefined what it means for morality to exist. As we’re all nominalists here, I’m pointing out that this is fundamentally a redefinition we have no need to make.”

            We could say that “moral realism is false”, and then it would be clear that we are denying that morality exists in the nominalist sense.

            “4: By my understanding, a contractualist uses reasoning similar to why a Rule Utilitarian would commit to obey a law which lacks utility. Namely: If we all obey these norms, we all benefit. Therefore I should commit to obey them for selfish reasons, even if I lose some benefits that way.”

            Ok, but I think it’s important that contractualism is only a selfish choice for some values of selfish…some people wouldn’t want to take the losses. Contractualism =/= egoism.

            “This works in the abstract because everybody reasons the same way in the abstract. But in reality, they don’t! Even if all of Slate Star Codex became amoral, we would still reap the benefits of civil society, because there are far too few of us to make a dent in the cultural norms that hold it up. We’d have to worry about being caught, but that’s what rational calculation is for. To use an analogy: If you know that no matter what you do in a Prisoner’s Dilemma the other guy won’t defect, you certainly should. Society might try to punish us, but almost nobody will start breaking rules just because we do.”

            But in reality it does work, in a good-enough way, because there isn’t widespread defection (or at least not unless the social order breaks down). There’s a nonzero level of defection, but so what? You seem to be defining working as working perfectly.

            I’m still not clear on what you are saying, because there are so many ways of defining working… in theory, in practice, descriptively, normatively, psychologically…

    • I’m curious, what motivates you to spend time advocating amorality? There doesn’t seem to be much to be gained from your advocacy (you could be off doing stuff that is far more hedonistic). Is it an emotional or intuitive motivation? Or is it instrumental to some goal?

      • There’s a personal gain to free riding, but it disappears if others free ride, so it doesn’t make sense to advocate it.

      • It’s hard to define morality, whereas intelligence is easier to define and has practical applications. The trolley car thought experiment is an example of the conundrum of trying to define morality.

      • Irrelevant says:

        what motivates you to spend time [arguing on the internet]?

        Transient hedonism.

      • Carinthium says:

        Let me clarify. I advocate amorality because the probability that the opinions of people on Slate Star Codex will affect me personally is insignificant. Too many free riders would be a problem, but given how large the world is I can afford it.

        I am a genuine amoralist, who advocates amoralism because I am bored and can’t think of anything more fun to do.

        • Esquire says:

          People who make the argument you are responding to, I think, hugely overestimate the amount of “moral reasoning” that drives everyday decisions for everyone. I very much doubt that any non-amoralists here could articulate a moral theory they take seriously which suggests that posting here is optimal for them either. We are all kind of playing around, and it is fun to be right and/or smart.

          I am baffled that amoralism is not more popular as a philosophy. Especially given that 98% of purported moral reasoning we encounter in the wild is blatantly rationalised and self-serving.

          • Carinthium says:

            For a non-rationalist, it is completely understandable to me that they aren’t amoralists. I can also easily see closet amoralists who refuse to advocate it. For a lot of rationalists, however, I’d call their morality a rationality failure.

            To be fair, a person with a deontological theory, or in some cases even a virtue-ethical one, might argue that they have the liberty to post here regardless, and so I wouldn’t blame them, even though I would consider your argument applicable to the utilitarian posters.

            That being said, for the most part I agree with you.

          • Non-cognitivism is a moral theory. Well, metaethical anyway.

            Objections to moral persuasion, exhortation, etc., aren’t objections to moral action, and therefore don’t weigh in favour of amoralism.

          • Carinthium says:

            The problem with non-cognitivism is that most people’s behaviour is ‘moral’ enough to make it clear that they believe on some level in right and wrong. In many things they are very inconsistent, yes, but as a description of their psychological state it is clearly the case.

            I don’t want to deny that fact about the human psyche. I also like the fact that my amoralism implicitly emphasises not taking into account moral factors. So I go for amoralism.

          • Emp says:

            Plenty of people are amoral but see no practical benefit in declaring or admitting that this is so.

            On topic, the only rational conclusion for non-vegan types (and really even for vegans, if you start accounting for the field mice, pests, etc. killed to service vegan diets, which are far more numerous than cows) is that they are being massively inconsistent. I on the other hand am quite happy saying that animal life has literally zero moral value. I actually like animals, but I don’t see anything wrong with farming them and killing them for food (or any other purpose for that matter).

    • Deiseach says:

      Congratulations, you are a perfectly consistent animal-torturer. Which gives the rest of us cause for suspicion that you might be a perfectly consistent human-torturer, which means your enjoyment of liberty and the pursuit of happiness (e.g. torturing animals) may be curtailed. At the very least, you run the risk of going to court on charges of animal cruelty and being fined.

      A little less consistency might contribute more to your overall chances of happiness 🙂

      • nydwracu says:

        An excellent argument against torturing animals [or even signaling willingness to torture animals] that doesn’t reference morality at all. Which side are you on again?

  3. Salt says:

    It’d be times like this, or really whenever anybody says anything like “moral calculus”, that I’d question the basic coherency of Utilitarianism. That this argument somehow means something, let alone is found convincing by somebody, is just amazing to me.

    • Rowan says:

      I similarly find most anti-utilitarianism arguments just amazing, it feels like they’re debating whether numbers can be added to each other, or something similarly obvious.

      • anodognosic says:

        The point where I start to question utilitarianism is where it clashes against Stoic virtue ethics–in taking both to their utmost conclusions.

        Stoic virtue ethics first: Stoicism says that the only good is in using your own will to do good for others. Your own happiness or satisfaction is completely out of your hands, and so should not factor into any ethical consideration. This eventually runs into the obvious problem that if a person’s happiness or satisfaction isn’t a good, but only their will is, and that will is completely beyond anyone’s ability to influence, then you can never actually do anyone any good.

        Utilitarianism seems to me to run into the same problem from the other side. It assumes that the good is a sort of passive state or experience of utility. This glosses over the goodness of action, of overcoming adversity, of being a capable and strong person. Utilitarianism seems, eventually, to lead to the creation of a species composed exclusively of insufferably spoiled children.

        I can’t really choose between the two, and this is the reason I can’t fully call myself either a virtue ethicist or a utilitarian. Luckily, this end-state is entirely hypothetical at least at the moment, and both virtue ethics and utilitarianism, as well as deontology, are useful tools for dealing with the world we have. To tie it into Scott’s point: I’d rather be inconsistent in my metaethics than fill it with unprincipled exceptions.

        • highly effective people says:

          part of the reason you can’t choose is because your idea of stoicism has been completely inverted.

          the central idea* of stoicism is that the only ‘good’ is what makes you better and the only ‘bad’ is what makes you worse, and because only the intellect can make itself worse or better** all good or evil is in thought. since most things are neither good nor bad until we assign meaning to them achieving eudaimonia (the good life / happiness) is a matter of being indifferent to sensations about the external world (apatheia) and directing oneself through reason to perform one’s purpose to the utmost. outwardly this is similar to the cardinal virtue of courage/fortitude or amor fati.

          doing good for others comes from the idea that the purpose of man is to live for one another. but it’s hardly central and in fact it’s easy to replace it with a more appropriate telos such as self cultivation.

          *in personal ethics anyway. personally i’d say the idea of Logos / divine reason is more central but then again marcus aurelius makes a convincing point that you should still act virtuously even without it
          **as in your ability to fulfill your purpose as a human being not in terms of how many Hz your brain gets. thought i should get that out of the way

          • anodognosic says:

            OK, I admit I was sloppy in my formulation, but let me try to express this core insight: consequentialism and virtue ethics constantly and necessarily chase each other’s tails. Each locus of goodness by itself is insufficient. To be the best you can be, you have to look outwards. To do the most good, you have to look at not only being the best you can be, but also making others the best they can be.

            Each by itself is insufficient. But they are also not perfectly complementary; the unresolved tension remains between good-as-virtuous and good-as-content. Excessive emphasis on one will point to the other.

      • Salt says:

        Oh, is that what Utilitarians are doing? Then by all means, add your numbers. I don’t know how you think you’re going to get a system of ethics out of that, but okay.

        • MicaiahC says:

          Are you actually looking to understand someone else’s point of view, or do you just wish us to know that we’re weird?

          If the former, VNM utility is how, essentially saying “If you like some things more than others, if you are consistent in doing so, if you can add things, and if the things you care about don’t always affect each other, then it is possible to construct a utility function.”

          VNM utility isn’t the be-all and end-all, and surely there is room to disagree about whether you can accept the axioms, but I don’t see why people should stop using it as a model.
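
          For reference, a compressed sketch of what the theorem says (a standard textbook statement, with lotteries L, M, N and a preference relation ⪰):

          ```latex
          % von Neumann–Morgenstern: if a preference relation \succeq over lotteries satisfies
          % the four axioms below, it can be represented by an expected-utility function.
          \begin{align*}
          &\text{Completeness:}  && L \succeq M \ \text{or}\ M \succeq L\\
          &\text{Transitivity:}  && L \succeq M \ \text{and}\ M \succeq N \implies L \succeq N\\
          &\text{Continuity:}    && L \succeq M \succeq N \implies \exists\, p \in [0,1]:\ pL + (1-p)N \sim M\\
          &\text{Independence:}  && L \succeq M \implies pL + (1-p)N \succeq pM + (1-p)N \quad \forall N,\ p \in (0,1]
          \end{align*}
          % Then there is a utility function u, unique up to positive affine transformation,
          % with L \succeq M  iff  E[u(L)] >= E[u(M)].
          ```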

          If it’s us being weird, I would say that’s pretty salty of you.

          • Nornagest says:

            Note however that the VNM axioms probably don’t hold for people’s ethical intuitions.

          • Douglas Knight says:

            Micaiah, no, there really isn’t any room to reject the VNM axioms. If you’re a consequentialist and you believe in probability, you really have no choice but to accept them.

            Nornagest, yes, people’s intuitions are incoherent. The VNM axioms are a guide to find examples. But, ultimately, people’s intuitions are not consistent with very simple axioms, like completeness.

          • Salt says:

            Well, I did choose Salt as my name.

            My question is more fundamental than “how can you add the numbers?”

            My question is more like, why should I assume those numbers actually mean anything? Me “liking” things doesn’t mean those things are good. Maybe I want to maximize my utility where utility is in terms of axe murders.

            And even if I’m talking about something like just pure unfiltered happiness, what is one value of happiness equivalent to? What is the metric for measurement, and how did anybody find it?

            And if we’re just doing a sort of vague “well obviously X is more than Y, so more X is better than more Y”, then I don’t see why this theorem actually helps me out any more than just basic thinking.

            I mean, it works with money because 1 is equivalent to 1 dollar or whatever, and I can point to a dollar, but 1 being equivalent to 1 Utility sounds silly.

          • MicaiahC says:

            Douglas, I am aware and mostly agree with the idea that VNM axioms are an ideal (I say “mostly” because I haven’t actually read up on it and understood it deeply, but intuitively it seems correct and no one criticizing it has brought up any decisive counterevidence, sans Fakey’s vehement insistence on circular human values).

            I was trying to say that some of the axioms may disagree with human intuition (mostly consistency). Also, I’m uncertain if there’s a generalization that allows / gives rise to some time dependence.

            To Salt: I think you need to understand utilitarianism better before you start judging it. For one, the VNM axioms linked give a way in which you can start comparing different things. For another, AFAIK VNM utility gives you a way to compare things without necessarily giving you a way to give “objective” rankings, so talking about things like “what is 1 utility” is a flat-out error in that context. Yes, I am aware it is common LW parlance to refer to utilons. No, it is still shorthand for the concept being described.

          • Douglas Knight says:

            Micaiah, sure, but if you fail consistency, it’s silly to say that you’re failing the VNM axioms. This isn’t “room to disagree about whether you can accept the axioms,” but disagreement about whether you have preferences at all.

          • MicaiahC says:

            I agree with the “preferences at all” thing. I personally suspect the reason why people react differently has to do with their internal state changing over time (e.g. a time-varying utility function).

            I guess some people just think the lack of a time variable in VNM utility means that it doesn’t talk about human preference.

          • Douglas Knight says:

            When people reject the vNM axioms and are asked to pick one to reject, they usually pick continuity. But they are wrong.

            Of course people are dynamically inconsistent, but people usually agree that is an error. But let’s leave aside people’s preferences changing from today to tomorrow. Let’s ask: do people have preferences today? The answer to that is no: people will change their minds simply because you asked them about their preferences. People don’t have time-varying preferences because they don’t ever have preferences.

      • Raiden Worley says:

        I think a proper criticism of utilitarianism isn’t that “adding numbers is bad” or anything like that. I think it’s more like “Why should I attempt to maximize global utility? Shouldn’t I just maximize my own?”

    • Justin says:

      Maybe part of utilitarianism is recognizing that adhering to a framework of pure mathematical calculation to determine something as emotionally charged as altruism is, at the very least, a model with pretty firm limitations.

      If helping others gives an emotional payoff, then giving in a way that maximizes that emotional payoff makes the giving easier, therefore increases the chances that you will continue to and even increase your level of giving.

      If you try to ignore the human component and robotically donate only to those charities that score highest on your (almost entirely arbitrary) mathematical model of need and good, then you may well lower the total amount of good you can achieve by making the process less rewarding.

      Utilitarianism is fine, but most people I know that subscribe to it don’t really expand the scope enough, in my opinion. Emotional resonance has real effects and should not be discounted in the “moral calculus.”

      • Randy M says:

        “Maybe part of utilitarianism is recognizing that adhering to a framework of pure mathematical calculation to determine something as emotionally charged as altruism is, at the very least, a model with pretty firm limitations.”

        I don’t think so. I think utilitarianism pretty much holds that the only reason you can’t always use a dry mathematical approach is a lack of complete information, which admittedly is ever present, but human emotions, while perhaps the *goal* of the whole thing in some preference utilitarian versions, are never tools to be used in getting there.

        • Justin says:

          That is one interpretation. If you accept that utility is inherently subjective, then it becomes clearer that mathematical weighting is just a shortcut that removes one degree of arbitrary weighting and replaces it with another, albeit one that is easier for multiple parties to negotiate and agree upon.

          Also, how are human emotions never a tool to use? That is a very narrow definition of utilitarianism. It may be common, but it is by no means the only one. A more natural definition of utilitarianism, in my opinion, would be to use whatever tools bring about the best results, regardless of how comfortable you are with them.

          The problem in my opinion is that most people who are attracted to utilitarianism tend to be extremely logical thinkers, and therefore are extremely uncomfortable with giving emotion inherent value. It leads to a bit of a paradox where they insist upon accepting moral conclusions that make them uncomfortable, but are unwilling to use tools that make them uncomfortable to reach their stated moral objectives.

          Why wouldn’t a true utilitarian use every possible tool to achieve maximal utility?

      • Jiro says:

        If you try to ignore the human component and robotically donate only to those charities that score highest on your (almost entirely arbitrary) mathematical model of need and good, then you may well lower the total amount of good you can achieve by making the process less rewarding.

        This implies that you can do the most good by self-modifying to make it feel more rewarding. In fact, it makes self-modifying to make it feel rewarding take precedence over any other action towards doing good.

        • Justin says:

          That assumes that self-modifying is practically possible. It may well be that there are some cases where it isn’t fully possible or a realistic goal. It seems like in some cases the best approach is to be pragmatic about what is and isn’t possible, at least if the goal is to do the most possible good.

          Otherwise it seems like the goal is moral purity rather than actual utility, which doesn’t sound like utilitarianism to me.

    • Josh says:

      I mean, the cynical explanation is that believing in something as silly as utilitarianism is a costly signal indicating tribe membership in the rationalist community…

      • Peter says:

        A better-but-still-cynical interpretation is to say that believing in utilitarianism is a way of signalling that you’re too clever to be taken in by the silly objections[1] to utilitarianism, and that you’re above needing the approbation of the hoi polloi[2] who are.

        [1] The ambiguity over whether this implies the non-existence of non-silly objections is not exactly intentional, but welcome.
        [2] yes, I know…

      • RCF says:

        Is believing in it less costly for rationalists? If not, how is it a signal?

      • Airgap says:

        Surely you’re not suggesting that getting the formula for Solomonoff’s universal prior tattooed on my chest was unnecessary, are you? Because they made me pay extra for the Ed Hardy font.

      • nydwracu says:

        The other cynical explanation is that the belief in utilitarianism that’s common in the rationalist tribe is the result of the rationalist tribe attracting the sorts of people who are likely to believe in utilitarianism.

        (This is an instance of the general counterclaim to the general claim that yours is an instance of. The existence of an incentive structure doesn’t prove that people respond rationally to incentives — it could just be that people semirandomly explore the parts of thingspace that are locally available and the incentive structures reward the ones who are lucky enough that they were already doing that. Of course, ~geographic restrictions on the available thingspace can look like incentive structures, and then there’s got to be some sort of feedback effect in both success and drifting into a community…)

    • Airgap says:

      I use the term “Moral Calculus” to refer to the rocks I throw at defectors in iterated prisoner’s dilemmas.

  4. > In reality it’s pretty hard to come up with way of valuing animals that makes this work. 

    Assuming utilitarianism.

    Deontologists can have a rule against animal cruelty, and another rule that human life overrides the first rule, if they want to permit medical vivisection. They have simplified the problem by replacing measure with order.

    It is not obligatory on deontologists to widen their circle, or maximize all value, but if they want to, they can try, and it can be considered good. Deontologists have a concept of the supererogatory.

    • J says:

      And virtue ethicists can point out that torturing a dog is wrong not because of what it does to the dog, but because of what it says about you to be the sort of person who would enjoy torturing a dog.

      • Both these things are true, though I think usefulness is a suspect criterion for asserting correctness. I do sometimes notice that non-consequentialists are usually flexible enough to adjust the rules, or the perception of their own virtue, to suit their intuitive agenda. Consequences seem to me to be less prone to that, at least in comparison to pure reason or intuition used for rules or virtues. I think practically speaking a combination of rules, virtues and consequences is least vulnerable to repulsive outcomes and societies, but again I don’t suggest this is a personal reason for adopting one or the other. I personally like the idea of virtues and the odd rule being instrumentally good because they predictably achieve good outcomes 🙂

        • Yes. Consequentialists are partially deontologists in practice, because they must use heuristics to guess at outcomes, and most deontologists are partly consequentialists because they allow exceptions.

          We hybrid theorists formalize the arrangement.

      • Peter says:

        Kant uses a pretty similar argument. I’ve heard the term “virtue theory” used to mean “things pretty similar to virtue ethics, but grounded in some other framework such as utilitarianism or Kantianism”, reserving “virtue ethics” for theories that take virtue as being what grounds morality. It seems lots of people do, or at least nod to, virtue theory, if not always under that name. As well as Kant I’ve seen Mill and Sidgwick make pretty strong statements along that line.

    • Marc Whipple says:

      And if you are a poor peasant who needs to stage bear-baiting or dogfights to make a little extra money to feed your family?

  5. AD says:

    So… ten percent of animal liberation. Vegetarianism?

  6. Cheers says:

    What’s the best response of the vegan EAs been to Katja Grace’s calculation of the value of avoiding meat meals?

    Personally, I think the sort of unfriendly vegan advocacy practiced by DxE, PETA, etc. is a bit of a dead end… yes, you’ll convince some reflective and thoughtful people to switch to veganism, but you’ll convince just as many to modify their values so meat-eating is no longer a concern for them, and this is harder to measure (cc “for every animal you don’t eat I’ll eat two” shirt). (The analogy might be vitriolic tumblr-style social justice advocacy, which makes as many enemies as converts, vs the friendly Jackie Robinson style social justice advocacy that actually works.) The difficulty of converting even those in the EA movement, who are selected for unusually high altruism and reflectiveness, should demonstrate how hopeless the task is for the population at large. At the very least I would wait until we as a species have reached the point where effective altruist values have saturated, since that’s a smaller jump.

    I’m much more optimistic about non-vegan ways of improving animal welfare: passing laws, breeding happier livestock, artificial meat, feeding livestock animals opiates for their entire lives, etc. (Even these causes seem like a much lower priority than existential risk, though.)

    I don’t think you’re necessarily being especially unprincipled, by the way. Ultimately you’re a human and you have human values, where you regard yourself as more important than your kin, who are more important than your kind, who are more important than living beings in general. Morality is what you want. I became a more effective altruist when I stopped pretending my values were something they weren’t (it was causing a lot of cognitive dissonance, depression, etc. that was making me less productive).

    Selfishness and EA aren’t as incompatible as is commonly assumed: EA gives you a purpose in life, which is great for happiness, and doing good is great for your morale in the long run (Seligman devotes an entire chapter to this idea in his book Learned Optimism; Dan Dennett says “the secret of happiness is to find something more important than you are and dedicate your life to it”).

    And selfishly working to improve your happiness is key to improving your productivity. If you feel terribly guilty about not doing EA and you’re not doing anything about it, your brain is simply malfunctioning and you aren’t helping anyone. If you’re in this situation, think of EA as a psychological engineering problem, not a moral obligation.

    • Harald K says:

      I’m slightly disappointed that your link about breeding happier livestock didn’t lead to this classic Douglas Adams character.

      There are paradoxes involved in diagnosing your own brain as malfunctioning, and letting others convince you that your brain is malfunctioning sounds especially problematic. Just one of the many things Descartes considered that he doesn’t get credit for when he’s lined up as a straw man for later philosophers…

    • Shenpen says:

      The first thing that puzzles me about veganism is that it is okay to avoid killing animals, but why not eat them if someone else killed them already? Simply generating demand for killing is not personal responsibility. Responsibility still lies with the killer.

      One level deeper, I think this is about how we think about causality. You cannot cause another human to do a thing. If he has free will, then you didn’t cause his action; if he doesn’t have free will, then you don’t have it either, and so you don’t cause anything.

      Thought experiment: is a rich man offering a $1M cash prize for killing a random human evil? I would say no: a choice cannot cause another human to choose to do something. It is still better not to do so, because it increases the chance of such a choice, but he is still not responsible.

      Of course then it depends on what you mean by morality. To me morality is not minimizing the chance other people suffer, but minimizing the chance that I am bad. Keeping my hands clean, so to speak.

      • Whatever Happened to Anonymous says:

        >The first thing that puzzles me about veganism is that it is okay to avoid killing animals, but why not eat them if someone else killed them already?

        It’s not simply OK, it’s mandatory!

      • Rowan says:

        That sounds like a moral argument for compatibilism more than anything else. “Either it’s possible to have free will while also having your choices caused by external forces, or someone who puts a bounty on an innocent person’s head is doing nothing immoral”

      • Godzillarissa says:

        So basically, you’re saying that hiring a hitman is not evil?

        I’d argue that if you set up an incentive-structure that you know will lead to evil behavior (whatever that is), you’re just as evil as the people executing the foreseeable actions.

        • Shenpen says:

            Does this suggest that we can multiply evil, or responsibility? If killing a man is 100 units of evil, is 100 people voting on it not 1 each, but 100 each?

            My intuition – I don’t see really rational arguments either way – is that it is fixed: if one kill is 100 units of evil and the hitman already got 98, hiring him is only 2.
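
            To make the two pictures concrete (toy numbers and made-up function names only):

            ```python
            # Two toy blame-attribution schemes for a killing worth 100 "units of evil"
            # (purely illustrative; the numbers and scheme names are invented for this sketch).

            def conserved_blame(total, weights):
                """The 'fixed total' intuition: the evil is divided among the participants."""
                s = sum(weights)
                return [total * w / s for w in weights]

            def multiplied_blame(total, participants):
                """The 'evil multiplies' intuition: each participant bears the full total."""
                return [total] * participants

            print(conserved_blame(100, [98, 2]))  # hitman 98.0, client 2.0
            print(multiplied_blame(100, 2))       # [100, 100]
            ```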

          • Godzillarissa says:

            That’s 2 units of evil too many for my liking, but YMMV.

            As for linear growth of evil, I don’t think it’s that easy. Actually, I’m not really sure this should be discussed on such an abstract level, like, ever. It’s a highly complex thing and as such, intellectual debate will probably get it wrong more often than right.

            Or at least that’s how little faith I have in my own abilities discussing this topic. Smarter people might find a way…

          • Jaskologist says:

            This is an interesting question. There’s certainly no principle of conservation of evil. Human intuition falls on the side of “evil can be multiplied” and that is reflected in every justice system I’m aware of. If you conspire with 4 other people to murder someone, you don’t get 1/5 of a life sentence.

          • Paul Brinkley says:

            Shenpen, if you start speculating here on how one might calculate the root of evil, I’m going to have to accuse you of irrationality.

          • Harald K says:

            The evil is not in the outcome, but in the intent.

            The millionaire wishes someone ill. He offers a reward, and look, the hitman wishes someone ill too. Thus, evil multiplied by two – it may seem.

            But what Christian ethics say, which really isn’t popular today, is that the evil was really there all along. The millionaire didn’t change anything for the hitman’s moral balance. If you would murder when offered a million dollar bounty, you’re just as evil whether you ever get that opportunity or not.

          • Tom says:

            But Paul, I am not evil.

      • anodognosic says:

        My problem with that view of morality is primarily practical. The market is *excellent* at outsourcing morally fraught actions so that the person benefiting from them is not confronted with the indirect negative effects of their actions.

        Setting aside that I perceive not caring about indirect effects of your actions as monstrous, there’s also the fact that indirect effects can easily come back to bite you in the ass, at least in the aggregate. For instance, it is entirely plausible, and indeed probable, that your moral model may lead humanity off an ecological cliff that might eventually lead to our extinction or the end of civilization. Of course, you may also not care about that either, in which case we are sufficiently different that I don’t think we have much else to productively discuss on the issue.

        • Shenpen says:

          Fine. I think that is the point where I have to decide if I am a consequentialist or not. Currently, not much. From my childhood, I am used to moral judgements not of consequences, but of persons. Doing X makes you a bad person, kind of thing. Of course, at some level the reason why it makes you so is the consequences, but still the heuristic I derived from it is that the primary goal is to avoid this kind of blame, direct smear, bad karma, disapproval, judgement, social exclusion, whatever, not to avoid harm to others.

          Well, maybe that is an argument for parenting strategies of never telling “you are bad” but only telling “what you did has bad consequences”. However, children will be exposed to that anyhow (the media loves making villains, for example), and if we don’t try the you-are-bad game with children, then they will ask back “and why should I care if it has bad consequences for others?” and at that point we can only hope they have a strong sense of compassion.

      • Harald K says:

        “is a rich man offering $1M cash prize for killing a random human evil?”

        Hell yes. But the evil act he does is not killing.

        It’s true, anyone can choose to go for the prize or not, so the evil act of the killing itself should be credited to them. But if you wish someone else ill and try to bring it about, that is evil in itself. Regardless of whether you seek the outcome by proxy. Even if the millionaire was fooled, and paid out to someone who only made it look like a murder (a la the Silk Road case), he is still as morally responsible as he would be if he got what he wanted.

      • Tracy W says:

        >One level deeper is I think this is how we think about causality. You cannot cause another human to do a thing.

        I think our thinking about causality is deeply confused. Basically it’s a concept we need to make sense of the world, but the world is awfully complex.

        E.g., let’s take fire. There are three things needed for fire: fuel, oxygen and heat. From H&S training: take away any one of those things and you can make the fire go out.

        So, what causes a forest fire? Obviously forest fires only happen when there’s oxygen around, but it seems unhelpful to say that a forest fire was caused by oxygen because there’s always oxygen around forests. So we generally don’t attribute a forest fire to oxygen. Instead we say “well the wood was dry because of the drought and then some camper didn’t put out their ashes properly and that caused the fire”.

        But if you have, say, a lab setup where someone’s working with highly flammable materials, so they are doing their job in a vacuum so it can’t catch fire, but some fault lets oxygen rush in, then it’s much more useful to say that oxygen caused the fire.

        And it’s not just fire. Everything is part of a very complex causal web.

        So, when someone says the rich man caused the killing, they’re not saying that that’s the only thing that caused the killing, they’re just saying something like that “in this situation, the rich man offering the money was the relevant change that led to the killing.” And I say “something like” because my wording isn’t really capturing how we use causality.

        Let’s not go into all the difficulties of how do we attribute causality, as opposed to random coincidence.

        • Harald K says:

          Many years ago I read a book called “Power, Freedom and Voting”. There one Matthew Braham had an essay called “Social Power and Social Causation: Towards a Formal Synthesis” which I thought both hilarious and thought-provoking. It had a bewildering array of different scenarios with assassins in a desert, where who was responsible could differ under various interpretations of accountability. I remember two:

          Example: there are two assassins, and A shot the victim but B poisoned the water bottle so he would have died anyway. Who’s responsible for the victim’s death?

          Another: The two assassins put the victim in the trunk of a car and push it off a cliff. They both pushed, but assassin A is strong enough to push the car off the cliff on his own, whereas B isn’t. Who’s responsible for the victim’s death?

          I could have sworn the thing was available for free on the internet, but if it was it apparently isn’t any longer.

          • Jiro says:

            This is another tree in a forest definition problem. Consider the poisoning problem. If “responsible for” means “it would have happened later or not at all, if it wasn’t for the actions of person X”, then neither A nor B is responsible for the death. If “responsible for” means “it would have happened later or not at all, if it wasn’t for the presence of at least one action from a set of actions by people, and X’s action is in that set”, then both A and B are responsible.
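
            A toy sketch of the two readings, applied to the poisoned-bottle scenario (the function and action names are hypothetical, just to make the definitions concrete):

            ```python
            # Two readings of "responsible for", from the comment above (illustrative only).
            from itertools import combinations

            ALL_ACTIONS = {"A_shoots", "B_poisons_bottle"}

            def outcome(actions):
                # The victim dies if either assassin's action occurs.
                return "dies" if actions else "lives"

            def but_for(action):
                """Reading 1: X is responsible iff removing X's action alone changes the outcome."""
                return outcome(ALL_ACTIONS) != outcome(ALL_ACTIONS - {action})

            def in_difference_making_set(action):
                """Reading 2: X is responsible iff X's action is in some set of actions
                whose joint removal changes the outcome."""
                others = list(ALL_ACTIONS - {action})
                candidate_sets = [{action, *combo} for r in range(len(others) + 1)
                                  for combo in combinations(others, r)]
                return any(outcome(ALL_ACTIONS) != outcome(ALL_ACTIONS - s) for s in candidate_sets)

            for a in sorted(ALL_ACTIONS):
                print(a, but_for(a), in_difference_making_set(a))
            # Reading 1: neither assassin is responsible. Reading 2: both are.
            ```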

          • Harald K says:

            The point of Braham’s essay wasn’t to give answers, but more to give names, build a taxonomy of the various things we can mean by saying that A is a cause of B. So that we can more easily talk and reason about it.

            I did manage to find another essay by the same author. Not quite as many amusing assassins-in-the-desert scenarios, but still a good intro to the ideas.

          • Tracy W says:

            I don’t know if it’s that particular essay I read, but that sort of complexity is why I qualified my explanation of what someone is saying when they say the rich man caused the death by offering a reward as “something like”.

            I note that the essay subtitle started “Towards…”, not “A …”. Causality is not easily formalisable.

  7. Nestor says:

    As I discussed in the comments a while back, I think most people have a fuzzy idea about animal potential where if you give a chicken the life of a “proper” chicken then you have fulfilled a part of an implied transaction where now you’re entitled to eat that chicken, who has lived a fulfilling chickeny life and has reached its objective by being eaten by you.

    Battery farmed chickens aren’t fulfilled because… uh, they look icky. It’s a clear conflation of ethics and aesthetics.

    Personally I was a lot happier when I realized my ethics were largely aesthetic based and thus subject to change according to my circumstances (I would never eat a puppy, but don’t hold starving post-apocalyptic me to that)

    • Shenpen says:

      This is nicely generalizable to most values. It is very, very hard to find ultimate values in a godless universe, because a pleasure signal in a wet computer cannot be that meaningful.

      I am an atheist, but basically this is why I miss having a god who could decide terminal values by decree: “yes, you must consider the pain signals in every brain as meaningful, because I made them and I sort of care about them and I decide what is a terminal value”. OK, that would be easy. I think meaning can only come from an ultimate decision. So we have none.

      Lacking that, aesthetics is a terminal value. “Dying in the defense of a fortress is good because it is heroic!” “Why?” “Because the heroic is beautiful!” “And?” “No and. That is the terminal value.”

      • Illuminati Initiate says:

        Why is a “pleasure signal in a wet computer” meaningless, but God’s decrees meaningful?

        • Shenpen says:

          Meaning = intent. The meaning of a sentence is what people intend to express with it.

            Terminal values are hard. I suspect they are especially hard for people who lean towards depression like myself: we do not really see even our own life as “what cool shiny thing do I want to do?” but more like “OK, well, what must I do again, what do I owe to others again, for the discharging of what duty must I get out of bed again?”

            Let’s not even use the word god, as it has too many historical connotations. Let’s just say: if we live in a simulated universe, and if the maker of the simulation gave us some commands as terminal values (what to optimize for, like reducing suffering signals in other brains, or maximizing paperclips), that would be just nifty! Both because our existence itself would be based on the very arbitrary intent that gave us the orders, and because we could derive all our duties from it. It would be like “Why must I get out of bed again? To cure that man of illness. Why do I care? The Simulation Maker told us we must care about stuff like that. OK. Sigh. Let’s do it.”

          And now you probably know why back then in the bronze age similarly depression-leaning people invented actual religion.

            Perhaps the algorithm of the pleasure signal feels different from the inside for people who have it a lot, I don’t know.

            At any rate, it is hard to understand why a pain signal in a very different mind matters if we set feelings like compassion aside. If we simulate a brain and make a special kind of signal in it that makes it refrain from things that are wrong for the fitness of the organism, and if we label that signal “pain”, is it now wrong to cause it? If we label it “niap”, is it better? You know what I am asking, right? What exactly gives a moral dimension to the brain signal that says don’t do this, it does not help your evolutionary fitness, which we call suffering or pain?

            Why can I ignore signals like “hey, nice weather” in the mind of a horse, but I have a duty to not ignore signals like “ouch, that hurts”? The Divine / Simulation Maker command theory answer (“because I said so, arbitrarily, and I made you and the horse too, so I own both of you and I decide what you do, so shut up, daddy speaking here, from authority”) is a satisfactory one, but what else is satisfactory if you happen to be an atheist and not believe much in living in a simulation either?

          • Salem says:

            Meaning = intent. The meaning of a sentence is what people intend to express with it.

            Meaning = understanding. A sentence has a range of meanings determined by what a reasonable observer would have understood from it, in the context in which the sentence was expressed.

          • J says:

            Meaning = understanding. A sentence has a range of meanings determined by what a reasonable observer would have understood from it, in the context in which the sentence was expressed

            No; see Paul Grice.

          • You’re basically thinking of morality as divine command theory. If you think of it as something where you are in the driving seat, which is about your preferences, then why not have a preference to end unnecessary suffering?

      • Harald K says:

        I saw that consequentialism FAQ, and got as far as point two, “Morality must live in the world”, and what they mean by that. They’ve got it exactly backwards! The meaning of the world does not reside in the world; it cannot come from inside the world.

        Purpose comes from intent. I can pick up a rock for cracking nuts, call it a nutcracker, and now it’s true that the purpose of this rock (for me) is to crack nuts. But I cannot pick up myself in the same manner and give myself a purpose. If I did not already have a purpose, I couldn’t meaningfully call one possible purpose good or another bad, they’d be random.

        I suppose I could let another human (maybe Kim Il Sung?) or group of humans (maybe my parents? Maybe the White Race?) tell me what my purpose is, but that would quite literally be deifying them, putting them on an order of being above myself. This feels horribly wrong.

        It’s interesting to meet an atheist who actually agrees with me on these things, that never happened before. I can clearly see that thinking like this: “I really need to believe a God exists for anything to have meaning” does not automatically lead to “I believe a God exists”. But strangely, I never had that problem myself, despite also having depression issues and having consistent trouble convincing myself to hope for very many other things.

        • Shenpen says:

          I think I have the kind of personality type that religious people tend to have; I just simply don’t believe it is true. E.g. I don’t think humanity is so perfect, or having a personality such a good thing, that an omnipotent being should be in any way anthropomorphic or personal; it would be easier to convince me to believe in some very, very impersonal creative force of love like The Force in Star Wars or Chi in Chinese tradition.

          By that personality type I mean a certain sense of the tragic, a certain sense of futility and vanity of certain efforts. I don’t know if it is minor depression or mature wisdom or complete bull.

          Perhaps the whole problem is no problem for people who have more self-worth, they may just think “Well I want things, and I use myself as a tool to get them.” For me it does not work because I don’t think I am important, hence I don’t think my desires are important, so kind of why bother?

          And I don’t know if it is laudable small-ego modesty or really some kind of a depression.

    • Potentially this could become a problem when the cannibal down the road decides you’d be aesthetically nice on his dinner plate? Full acceptance of intuition seems like a pretty dicey approach, excuse the pun.

      • Nestor says:

        This seems like the theist arguing with the atheist about the convenience of an afterlife. Sure it would be nice if there was, but believing in god isn’t going to make it so.

        Likewise, my ethics or lack thereof aren’t going to have an effect on the cannibal down the road once the apocalypse hits. But, given my adoption of flexible ethics, I’m more likely to be the one eating him, or at least be expecting him to try to eat me, because I’m comfortable with the idea of people’s operating parameters being related to the environment. That doesn’t strike me as a handicap.

        • Ah, touché, I concede my point was fallacious rhetoric. Inconvenient consequences matter not to the truth. However…

          > Personally I was a lot happier when I realized my ethics were largely aesthetic

          …perhaps you have formulated a good refutation for your own moral reasoning? 🙂

          • Nestor says:

            I don’t know if it’s so much a moral reasoning, rather a drastic simplification, as the act of cutting the Gordian knot.

            “Happy” probably doesn’t describe it, it’s more the relief of a solved problem.

            Unburdened?

          • Fair enough, you’re not literally arguing for this position, just reporting that you hold to it. But wouldn’t you agree your expression of unburdening looks a lot like an argument in support of this moral aestheticism, given it also includes attempts to undermine competing approaches? I will definitely not mention anything beginning with M or B. 🙂

    • You can argue that giving life and taking it away is zero-sum, provided the animal doesn’t suffer too much in between. That makes the continued objection to factory farming look quite objective.

    • Marc Whipple says:

      How does it factor in that the majority of farmed animals would never have existed at all, let alone enjoyed a life free of predators, hunger and disease, if it were not for the ultimate purpose of their being harvested for human use?

      This carries pretty far: for instance, while many vegan activists despise hunters, Ducks Unlimited is probably the single biggest factor in improving conditions for migratory waterfowl (and by extension, all the creatures that share their habitat.) People give money to DU because they like to hunt ducks. Banning duck hunting would cause significant damage to the quality of life for a lot more ducks than will ever be shot and killed by hunters.

      • RCF says:

        But how far does this carry? What if an antebellum Southerner were to say “Well, the slaves wouldn’t be alive in the first place if it weren’t for us”?

        • Marc Whipple says:

          I guess the answer depends on whether you think people are equivalent to ducks.

        • nydwracu says:

          “Well, the slaves wouldn’t be in America if not for us, and [approximate graph of percentage over time] of their descendants wouldn’t be in America if not for us, and the people who took the place of the slaves in Africa since we provided some part of the continent with a temporary escape from the Malthusian ceiling wouldn’t have existed if not for us, and the African slave-traders wouldn’t have had as much wealth if not for us so who knows what effect that might have had on the economy…”

        • Irrelevant says:

          “If we hadn’t spent centuries killing the families of murderers, just think how much higher the genetic proclivity towards murder would be.”

  8. bellisaurius says:

    If the point of this is to ensure that the altruism is effective, I’m missing how donating money to an animal charity is going to help the chicken. If the money goes towards putting an end to meat eating, those chickens don’t exist anyway. It seems counterproductive to go from ‘raise you until you’re tasty’ to ‘effectively committing genocide’ (assuming that egg laying and milking violate the animal’s dignity)

    • Utilitarian vegans believe that chickens have net negative lives so it is better for them not to exist.

      • bellisaurius says:

        Ouch. That’s gotta leave a metaphysical mark.

      • Godzillarissa says:

        I’d say that should be fairly obvious to everyone that’s ever seen an industrial laying hen. How so many people can just not accept that their lives are net negative is beyond me.

        • Irrelevant says:

          Arguing that lives can hit net negatives is controversial at best, let alone arguing that it should be intuitively obvious to anyone that looks.

          • Godzillarissa says:

            It seems I have a rather extreme position on this, then, but I can’t understand how one could think that the latest PETA shocker video showed a life worth living.

          • Irrelevant says:

            There are plenty of people whose lives I don’t consider worth living. But I’m not them, and you’re not a chicken, and you can’t be a chicken to an even greater degree than I can’t be a Calcutta beggar. That’s not a valid method of moral reasoning.

          • Godzillarissa says:

            Irrelevant – I fail to see where this is going, and where it’s coming from, even.

            I stated my mind and didn’t try to convince anyone of the righteousness of my claims when you pointed out the typical-minding I did. Neither did I call for the mercy killing of all factory chickens, so what is the point here?

            (And as an aside, I do not accept your nondescript rules of conduct regarding moral reasoning.)

          • Irrelevant says:

            I’m saying that “PETA’s latest shock video of chickens shows a life not worth living” unpacks to “PETA’s latest shock video of chickens shows a life that I, a human with a knowledge of my own life, would rather die than live.” And that this observation is quite true, but also completely morally uninformative.

          • Godzillarissa says:

            Irrelevant – okay, then. Since I didn’t claim to be morally informative, let’s just leave it at that.

        • keranih says:

          If it’s that obvious, explain it. With counter examples of how living the life of a “non-industrial” laying hen – with both upsides and downsides – is obviously better. (Remember to include dead animals in your calculus.)

          • Panflutist says:

            I can’t hope to convince you of the net negativity of an industrial chicken’s life, so I won’t try. However, I will say that the quality of non-industrial life has no bearing on whether industrial life is worth living. Although I can imagine (expensive) worthwhile non-industrial but still domesticated chicken lives, I would expect lives in the wild to be net negative on average.

      • Airgap says:

        I believe that utilitarian vegans have net negative lives. The fact that they struggle to live after I shoot them proves nothing.

    • Sus says:

      That’s basically the Repugnant Conclusion but for chickens. If we reject it in humans, there’s no reason to apply it to chickens.

      • Douglas Knight says:

        No, it is not remotely like the repugnant conclusion.

        It is a hypothesis that leads into the repugnant conclusion. Sure, if you reject the hypothesis that more lives are better than fewer lives, then you reject the repugnant conclusion. But there are many other reasons to reject it.

        And if your true rejection of the repugnant conclusion is that you don’t value lives, you’ve thrown out the baby with the bathwater.

  9. (For context: I am currently an omnivore.)

    I’m curious as to how you reply to the Godwin angle of this. That is, if animal suffering really is as bad as is being suggested, then eating meat would be significantly worse than supporting slavery or Nazi death camps. I don’t really see a way out of this other than to bite the bullet and either cut the people of those time periods some slack or give up meat. I tend towards the former, though I’ll probably give up meat once I’m living on my own.

    (I don’t really have an argument to make here per se, just voicing something that’s been on my mind.)

    • Whatever Happened to Anonymous says:

      According to some images I’ve seen on FB (granted, probably not representative of EA vegans), some people claim to consider current factory farming roughly equal to the Holocaust. The fact that these people are still able to function within the society of a top-20 meat-eating country (probably higher when adjusting for socio-economic status) tells me they probably don’t realize the full implications of this claim.

      • Godzillarissa says:

        Morrissey… although the fact that he and many others haven’t burned down any factories yet supports your last sentence.

        • Marc Whipple says:

          Speaking of, is anybody else constantly amazed when they consider how seldom anybody tries to assassinate authority figures?

          I mean, there are people who honestly believe $CONTROVERSIALFIGURE is in league with the Adversary. You can buy a Barrett cheap, and everybody goes outside sometime. (Coming soon: Substitute “bomb-carrying RAV” for “Barrett.” People are so lazy.) Yet they never take action. Even people killing abortionists is vanishingly rare, though there are hundreds of thousands if not millions of people who claim to believe that they are mass murderers.

          I can only conclude that many people don’t actually believe the things they think they do.

          • Godzillarissa says:

            A possible explanation I can think of is that people these days feel like they don’t really have any power anymore, so they act like they could do something radical and scream it from the rooftops.

            It’s just a facade, though, so they’ll never make it true. Unless they snap and really do burn down a leather factory or kill someone.

          • Most people, if they found out that their neighbor was a serial killer, would turn the information over to the police rather than just kill the person themselves. They have a strong preference for working through accepted social channels for resolving these problems, and this preference may be even stronger than the preference against letting the neighbor kill people.

            Voting for the other party and protesting outside clinics are both socially-accepted ways of attempting to change the outcome, much like turning your neighbor over to the police. The number of people willing to go vigilante in any of these situations is very small.

          • John Schilling says:

            1. Assassinations that do not depend on dumb luck are generally very hard to pull off, and for reasons that go far beyond the cost of the rifle. For that matter, most actually successful assassinations involve pistols, not rifles – yes, everybody goes outside sometime, but if they don’t tell you when and where in advance, well, you’re going to look awfully conspicuous hanging around in public with that Barrett. If they do tell you in advance, same problem, because the security detail checked out the site before the target ever stepped out of the limo.

            2. Anyone worth assassinating, you may be able to kill them but you won’t get away with it. The positive utility of living in a Germany ruled by Goering rather than Hitler, is small compared to the negative utility of living in Dachau rather than Berlin.

            3. Humans are social animals, and do not generally sacrifice themselves in isolation for abstract principles. There are millions of people with the same inclination as you to see $ADVERSARY dead; if they can’t arrange at least a few co-conspirators to provide moral support, why should you be the one to take the fall?

            If they can arrange a few co-conspirators to back you up, odds are one of them is a police informant.

          • In practice, socialization works?

          • Marc Whipple says:

            Those are all good points and I’m not surprised people don’t carry out assassinations often. But you’d think there’d be a lot more than we hear about. Part of that, of course, is that the authorities have reasons (most good, a few bad) to keep such things quiet. But out of three hundred million, you’d think we’d see more outliers. That’s all I meant.

          • Irrelevant says:

            Looking at things historically, yes, I am also amazed by how few senators, etc. get murdered. I assume it’s the result of generations of peaceful transfer of power.

          • John Schilling says:

            Under what circumstances would it ever be reasonable for a potential assassin to expect any net benefit from conducting a political assassination? Or even the assassin’s family or friends?

            A ruthlessly homicidal egalitarian pure consequentialist individual might do that, but the closest I’ve ever come to seeing one of those was a fossilized footprint and a few bone fragments, tucked away behind the living-history displays of Homo Economicus and the New Soviet Man.

          • Irrelevant says:

            Not sure. I would suggest finding the conditions under which it’s possible to pay people to be suicide bombers, and working backwards.

          • Marc Whipple says:

            I’ll toss off a couple:

            1) If the assassin sincerely believed that God would prefer a state of affairs where the target was dead, and thus hoped to prove to God their sincerity, piousness, bravery, etc.

            2) If the assassin sincerely believed that their families, friends, people of similar beliefs, etc, were threatened by the particular target, and if the target died, the threat would be lessened, even if they themselves were captured or killed.

            3) If the assassin sincerely believed that there was a reasonable possibility they would escape punishment, the number of possible incentives goes way up. Heck, they might buy short options on global financial indexes, since high-profile political assassinations are likely to push markets down.

          • John Schilling says:

            Marc: #1, I don’t think there’s a major religion whose scripture could be read in isolation as a divine command to kill a specific politician, so that leaves you with churches that explicitly tell their followers to assassinate local political leaders. For obvious reasons, this isn’t a survival trait in churches, and I don’t know of any current examples in the United States or Western Europe.

            #2, good for driving the murder of e.g. an abusive uncle. Doesn’t work for politicians in a rule-of-law state because they just get replaced by their party’s designated backup politician, implementing approximately the same policies, and the assassin’s family gets the full “what sort of family bred this vile assassin?” treatment from the police.

            #3, when has a citizen of a rule-of-law state ever gotten away with assassinating one of his state’s political leaders? Christer Pettersson, maybe, but that was blind luck and he’s not in any other way a poster boy for rational political assassination.

            One would have to be extremely stupid or deranged to believe any of these things were at all likely, and stupid/deranged is not a good predictor for successful assassinations – or even for getting close enough to take a shot.

            Irrelevant: You can’t pay people to be suicide bombers. And they don’t do it on their own. Some people can be induced to become suicide bombers, but this requires nearly total control over their social environment for a prolonged period, which basically brings us back to the sort of churches that we basically don’t have in the United States and which would be shut down or infiltrated by the police if they did exist.

          • Marc Whipple says:

            On #1: It’s not important that you don’t believe that. It’s whether other people, especially people with guns, could believe that. History says that this occurs from time to time. Again, I’m not surprised that it’s rare. I’m surprised that it’s so rare given the very large population such people could be drawn from.

            On #2, perusing the news will provide many examples of individual authority figures who are obviously acting far outside their authority due to personal issues. Such people often make life very difficult for individuals, but those individuals very, very rarely fight back, no matter how egregious the actions. Eliminating such a person would be, in some sense, rational, and one would not expect the next person to fill the office to be just as bad. But it rarely happens. I’m not surprised that Carl Drega was an outlier: I’m surprised that he was such an outlier.

            On #3: Nobody ever started a war expecting to lose. 🙂 I’m sure we all know extremely smart people who believe something dumb. All it takes is one of those with the right dumb belief.

            Also, you absolutely can pay people to be suicide bombers. Granted, cold-calling is unlikely to work, but offering someone money for their family in exchange for becoming a martyr has been observed to be effective. You don’t even have to pay them with money: you can offer them an indulgence. Worked for many of the 9/11 attackers.

          • Deiseach says:

            Speaking of, is anybody else constantly amazed when they consider how seldom anybody tries to assassinate authority figures?

            “It is terrible to contemplate how few politicians are hanged”. G.K. Chesterton 🙂

            Re: your point about the killing of abortionists – possibly because, like the demand “If you really believe abortion is murder, why don’t you campaign for women who have abortions to be arrested and tried?”, it’s not a genuine query about consistency of philosophy, it’s a “gotcha!” propaganda point: ‘See? The mask is off the anti-choicers! They don’t care about babies or ‘pro-life’, they hate women and want to punish them for having sex!’

            Also, if I think murder is wrong, I don’t think you can commit murder even to prevent murder. You may not do evil, even that good may come of it.

            Romans 3:8 “And not rather (as we are slandered, and as some affirm that we say) let us do evil, that there may come good? whose damnation is just.”

          • Samuel Skinner says:

            “it’s not a genuine query about consistency of philosophy, ”

            Why can’t it be both? It’s as valid as pointing out the hypocrisy of the left fighting against free trade.

            “Also, if I think murder is wrong, I don’t think you can commit murder even to prevent murder. You may not do evil, even that good may come of it.”

            The issue is that not all those against abortion hold that view. You’d expect atheist utilitarians who are against abortion to be more likely (per capita) to kill doctors than religious deontologists, but that isn’t what we observe.

          • nydwracu says:

            Doesn’t work for politicians in a rule-of-law state because they just get replaced by their party’s designated backup politician, implementing approximately the same policies

            This is exactly why.

            You can’t kill an institution by killing one person in it. You can’t even dent it. If you kill an abortionist, another abortionist will take his place. If you kill ten abortionists, another ten abortionists will take their place. If you kill a hundred abortionists, maybe that starts to scare people off and only fifty abortionists take their place and then people have to travel further to get an abortion.

            It turns out that people do try to assassinate people who they think are making a personal difference. (The Great Men of History, if you like, as opposed to the Eichmanns.) If you buy the consensus narrative around JFK’s assassination*, Lee Harvey Oswald probably shot JFK over Cuba. And how many attempts were there to kill Hitler?

            Sometimes the assassinations even get covered up. There’s obviously no good way to tell how many of these there are, other than pointing to ones where they didn’t do a good enough job at covering it up. Who killed James Forrestal?** And why?

            * Which has the advantage of explaining why Lee Harvey Oswald shot JFK, but the disadvantage of being utterly at a loss about all the other weird shit. Maybe some of it can be chalked up to paranoia, but does it have an explanation for why Jack Ruby shot LHO? If so, I’ve never heard it. The others can explain everything else, but not, as far as I know, why it was LHO who shot JFK.

            ** Some people would give Huey Long as another example. There was certainly a motive for someone or some group (the same that probably killed Forrestal, maybe) to kill Long, but the guy who did it also had a motive. So the consensus is probably right about Long.

        • Jacob Schmidt says:

          I don’t think that’s valid.

          We know there were people opposed to the Nazis living in Nazi Germany. I think it’s wrong to say they weren’t really opposed or didn’t understand the implications of their opposition because they didn’t try to burn down death camps. Burning down death camps is a good way to get a lot of people, including yourself, killed, and it’s dubious how helpful it would really be.

          Although burning down a farm or meat-packing plant now probably wouldn’t lead to one’s death, I sincerely doubt it would save anybody. The industry is massive, and it is fluid. One factory down means people will buy from another factory; the animals that factory would have used will go to another factory. The outcome probably won’t be “A significant number of animals will live a happy life to old age rather than be slaughtered.”

        • Jaskologist says:

          Just War Theory seems relevant here, specifically the “probability of success” criterion of Jus ad bellum.

    • Alex Richard says:

      Amusingly enough, there was a similar discussion at the meetup- given that Hitler was a committed vegetarian, would it have been better for him to have won WWII, if he then stamped out factory farming?

      • RTWWII says:

        I hate to be pedantic, but Hitler was not a committed vegetarian; he adopted a mostly vegetarian diet in an attempt to control his stomach issues & flatulence (some speculate IBS), but is chronicled in both Speer’s writings & “Hitler’s Table Talk” as occasionally eating meat.

        I realise this does not impact the thought experiment.

        • Alex Richard says:

          Per wikipedia, that seems to be wrong. See in particular:

          “An extended chapter of our talk was devoted by the Führer to the vegetarian question. He believes more than ever that meat-eating is harmful to humanity. Of course he knows that during the war we cannot completely upset our food system. After the war, however, he intends to tackle this problem also.” -Goebbels

        • nydwracu says:

          What is with WW2-era leaders and odd dietary habits? Mussolini drank three quarts of milk a day, and Churchill matched him in whiskey.

          (As far as I know, Stalin’s only unusual habit was a lot of Georgian wine. How about FDR?)

    • Baby Beluga says:

      Why not both? ^_^

      In seriousness, though, I think it’s a good idea to cut people from the past a LOT of slack. It’s really, really hard to go against the crowd; it’s often hard even to notice that there is a crowd going in a particular direction, and that one could be going in a different direction. I think that almost everyone is “good” in the sense of “thinks they are doing the right thing,” and I think the same is true of people from the past, even Nazis or slave-owners or whatever you prefer to put there.

      I want people from the future to accept and forgive me for who I was, rather than use me as a bogeyman because of whatever monstrous act I’m committing now that I may or may not be aware of. That’s not to say that I shouldn’t try to stop doing those acts when I figure out what they are; just that I think a healthy dose of charity is good when you’re moving across time periods.

    • Wrong Species says:

      It’s both. I could make the argument that meat eaters are worse than slave owners. Slave owners made a lot of money off slavery and were financially devastated when it ended. What’s my excuse for still eating meat, that it tastes good? Of course, I don’t really believe that 99% of people around today are evil, so it makes me sympathize more with our supposedly malicious ancestors.

      • Jiro says:

        You are a human being, so increasing utility for yourself (i.e. “it tastes good”) is legitimate.

        • Wrong Species says:

          Are my taste buds so important that they offset any suffering that an animal goes through? You are going to have a hard time convincing me of that. If someone believes that torturing animals for the fun of it is wrong (which I do), then I don’t know how they can possibly justify eating meat in our contemporary society.

          • Jiro says:

            If you can offset slavery with the gain to slave owners, you can offset animal eating with the gain to the animal eaters. The fact that this gain is smaller is of no import because what you are trying to offset is also smaller.

          • Irrelevant says:

            Slavery is almost necessarily negative-sum. Wouldn’t have to be involuntary otherwise.

            (“Almost necessarily” because you can contrive some odd situations where it might not be. Enslaving the naturally depressed and joyful to serve the people with more easily changed emotional states, maybe.)

          • houseboatonstyx says:

            I don’t know how they can possibly justify eating meat in our contemporary society

            Here is one way. In my current state of health, money, lifestyle, family health and preferences, etc, I don’t have time or facilities to do vegetarianism (I did it for about 20 years) — so my/our performance in other areas would be reduced. By almost any calculation, fewer animals suffer for my diet than suffer whenever an acre of rainforest is degraded. The degradation of that acre will last much longer than my lifetime, causing suffering for whatever generations of animals manage to survive there, or to survive by crowding nearby acres.

            By continuing to eat what’s convenient to us, we have time=money to give directly to forest preservation.

            Or, for a possibly larger effect, to give to clean energy research, which will save such acres as well as helping other animals and humans to breathe better.

            Looking at wider consequences, this is more efficient.

    • Ghatanathoah says:

      I think we already cut people from those time periods a lot of slack. We executed the Nazis who masterminded the Holocaust, but we didn’t execute every single German who contributed to it in some way. Most of them didn’t even have to do jail time.

      • Wuwuwu says:

        Every single German who contributed to it in some way? Where does that end?
        It’s absurd to think that every soldier who obeys murderous orders is a murderer. We all know that the great majority of the population of any state embroiled in a total war would carry out a murderous order, knowing that if they refused, they’d be personally punished and someone else would carry out the murder anyway. I’d be surprised if anybody among the crews of the allied bombers who were ordered to drop nukes on Japan, or to firebomb civilian districts in Japan or Germany, refused on moral grounds. But I don’t think that they, the bomber crews, should be punished.
        Where I live, the SS took hundreds of random innocent civilians and killed them as a reprisal for some partisan action. So you can imagine how much I like the Nazis. But the people who carried out that atrocity were obeying orders, so I don’t hate *them*. Seriously, what would you expect them to have done? I’m not saying that disobeying and deserting wasn’t the right thing to do. But it takes huge balls and you can’t expect most people to have such guts.
        Now imagine if we were to punish every single American who “in some way” contributed to nuking Japan. It would never end.

    • Airgap says:

      It’s “Godwin”

  10. I’m surprised that I’ve not encountered anyone who explicitly uses the same idea I do: diminishing marginal returns of moral value. Two chickens are not twice as important as one chicken. At the limit, the sum of all chicken moral value is still far below the moral value of even a few humans. And yet, reducing the suffering of chickens remains a good thing.
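    One way to make this concrete, purely as an illustration with arbitrary constants rather than anything I’d commit to, is a saturating value function: every extra chicken adds something, but the total can never climb past a fixed cap.

    ```python
    import math

    def chicken_value(n, cap=2.0, scale=1000.0):
        """Illustrative total moral value of n chickens.

        Strictly increasing in n, but bounded above by `cap` (here standing in
        for "a couple of human lives"). Both constants are placeholders, not
        claims about the real exchange rate.
        """
        return cap * (1 - math.exp(-n / scale))

    print(chicken_value(1))        # value of one chicken
    print(chicken_value(2))        # slightly less than twice one chicken (concavity)
    print(chicken_value(10**12))   # every chicken together: still roughly the cap
    ```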

    • But is that how you actually feel, or are you just using it as a patch to avoid an undesirable conclusion? When I think about a chicken being in pain and then imagine two chickens being in pain, I don’t at all feel that I care about the second chicken’s suffering less because there already existed a suffering chicken.

      Do humans also have diminishing marginal moral value? Is there some maximum value so that for some value of X there is no moral difference between X humans being tortured and X times a trillion humans being tortured?

      • James Picone says:

        Isn’t something like that necessary to avoid the Repugnant Conclusion argument against utilitarianism?

        • Jeff Kaufman says:

          Another way to handle the repugnant conclusion is simply to accept it. It’s not actually something I find repugnant. When we say “lives barely worth living” we’re talking about people living lives of joy and suffering where on balance there’s more joy than suffering. This isn’t ideal, more joy would be great, but the idea that these lives wouldn’t, in very large aggregate, count as something very good is strange to me.

          • I think the problem is not so much that those lives are worthless, but rather that we would be obligated to move toward that situation.

          • Ghatanathoah says:

            I don’t find it strange at all. Most people seem to consider the mere addition of more people to be a bad thing at least some of the time. If you want evidence, look at how much trouble and effort people go to avoid pregnancy. They forgo a lot of pleasure and sometimes even inflict pain upon themselves in order to avoid adding more people to the world.

            This suggests to me that people accept the Sadistic Conclusion at least some of the time. The only difference between what people do in real life, and the Sadistic Conclusion, is that in the SC you create one new life and concentrate all the suffering in it, whereas in real life everyone subtracts a small amount from their own utility.

            I think the reason the SC sounds unreasonable is that most people have different gut feelings about creating a new suffering life versus spreading the cost against the whole population, even though the cost in pain and forgone pleasure is the same.

        • I think you can avoid the Repugnant Conclusion without saying human lives have diminishing marginal value. But I think the framing of deciding what you value so you can avoid conclusions you don’t like is the wrong way to think about the problem.

          Let’s say Alice is a utilitarian, but really wants to kill Bob. If she reasons “standard utilitarianism leads to the I Should Not Kill Bob conclusion, so to avoid that I will use a utility function that disvalues all suffering and death except for Bob’s”, she has made a mistake in moral reasoning, right?

          • James Picone says:

            There’s an element of metaethical virtuousness here – obviously Alice is being a jerk, but I think the line of thought “That way of aggregating utility leads to conclusion X in situation Y, and that doesn’t seem right to me” is relevant sometimes – the obvious extreme example is something like “Uh, as far as I can tell if you aggregate the utilities like that murdering babies is fine. I think you have made a mistake”, and the Repugnant Conclusion (and similarly, utility monster arguments) are similarly serious flaws in utility aggregation.

            I’m not sure what the principled distinction between that and “That way of aggregating utility doesn’t let me murder Bob” is, though.

          • shemtealeaf says:

            I don’t think it’s really about avoiding conclusions you don’t like personally, so much as it’s about formulating a system of ethics that lines up with what you intuitively feel is ethical. I can simultaneously want to kill Bob and also feel that it’s unethical to do so. However, if I really do feel that killing Bob is the ‘right’ thing to do, then I probably should not adopt a system of ethics that doesn’t allow me to kill Bob.

        • Peter says:

          Part of the problem with the Repugnant Conclusion, at least with the examples produced by Parfit, is that they require things to scale in an unrealistic manner. Parfit states this, and it doesn’t bother him. For me, I have no problem at all with the notion that our intuitions give screwy answers for grossly unrealistic cases; that said, I think Parfit has specific ideas about meta-ethics that require him to consider such cases.

          • RCF says:

            I don’t see how saying “the inconsistency doesn’t show except in unrealistic cases” is a valid counterargument. If there is in fact an inconsistency, then things must break down somewhere between reality and the unrealistic case, and you still have the question of where it breaks down, and how, and how to reconcile the difference.

        • Ghatanathoah says:

          Isn’t something like that necessary to avoid the Repugnant Conclusion argument against utilitarianism?

          Even if it is, you could avoid the Repugnant Conclusion by placing less value in more happy people and animals existing. You don’t have to also place less disvalue in more tortured people and animals existing. Morality doesn’t have to be symmetrical.

      • Whatever Happened to Anonymous says:

        I think there’s some truth to this, though. Would we feel the Holocaust was half as horrifying if it were only 3 million Jews? Is the Armenian genocide 1/6 as bad? Does a murderer of 10 people elicit more rage than one of 9?

        These are all about feels, yes. But so is Scott’s post: most people “feel” bad about animal suffering, and if we interpret this as assigning moral value, then the strength of said feels can be an indicator of moral intuitions.

        • Godzillarissa says:

          I think one way to argue for a diminishing scale (or curve) is that the mere existence of a holocaust has a negative effect on society as a whole (say the loss of compassion, to a degree) that would not grow in a linear fashion if more people get killed.

          Although that wouldn’t be a diminishing curve, but one that rises exponentially the closer you get to the NOW-its-a-holocaust(-and-we-lose-aforementioned-compassion) mark and then diminishes.

          In reality, though, I’d expect every curve (when it comes to morality) is a superposition of underlying linear, exponential and other strange curves that aren’t necessarily the same to two people.

          • RCF says:

            I suppose there’s also the effect on family members. Let’s say killing someone gets -10 points for the harm that killing them does to that person, and -1 points for each family member/friend that grieves their death. Then looking at a married couple, killing one of them gets you -11 points, but killing both gets you -20 points, not -22.

        • Tracy W says:

          Isn’t part of this, though, just that it’s very hard/impossible to imagine very large numbers?

          • Peter says:

            The other Scott A has an interesting essay on things including this. There’s a few paragraphs towards the end – search for “Whence the cowering before big numbers, then?” which are particularly relevant.

            Wild speculation; I’d be interested to see whether there’s a substantial correlation between how vivid people’s mental representations of large numbers are, and how intuitively appealing or unappealing they find utilitarianism.

      • roystgnr says:

        Introspection is hard, but when I imagine two chickens being in pain, there’s *something* going on akin to when I imagine both halves of a fissioning Ebborian being in pain. I bet the difference between a billion suffering chickens and a billion-plus-one would be even less. On the other hand, I think my marginal utility would start to increase again as we got to “all but two chickens are suffering” and risked moving to “all but one chicken is suffering”. I negatively value suffering chickens and positively value happy chickens, but the per-chicken magnitude of each value seems to depend on how “unique” they are.

        This may all be a failure on my part to appreciate the true individuality and complexity of chickenity.

      • >Do humans also have diminishing marginal moral value?

        Yes.

        >Is there some maximum value so that for some value of X there is no moral difference between X humans being tortured and X times a trillion humans being tortured?

        No. That’s not how limits work. At all points, another person suffering is worse, just not as big a difference as the last person was.

        Would you agree that it is worse to kill a million people when the world population is 5 million than when the world population is 6 billion?
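
        To put the “that’s not how limits work” point in toy numbers (the constants below are entirely made up, only to show the shape of the claim): the marginal disvalue of one more sufferer shrinks toward zero but never reaches it, so the total is bounded above without there ever being an X where adding more victims makes no difference.

        ```python
        import math

        def disvalue(n, cap=1_000_000.0, scale=1_000_000.0):
            """Illustrative bounded disvalue of n sufferers (arbitrary constants)."""
            return cap * (1 - math.exp(-n / scale))

        # Marginal disvalue of one additional sufferer: always positive, always shrinking.
        for n in (10, 10_000, 10_000_000):
            print(n, disvalue(n + 1) - disvalue(n))

        # Mathematically, disvalue(X * 10**12) > disvalue(X) for every X,
        # even though the gap shrinks as X grows.
        print(disvalue(1000) < disvalue(1000 * 10**12))  # True
        ```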

    • Jeff Kaufman says:

      The standard argument against diminishing returns on moral value is that it breaks separability. It’s weird to say that the moral value of one life would be affected by lives that are completely separate from it.

      (More: see Parfit’s Reasons and Persons.)

      • Ghatanathoah says:

        I’m willing to bite the bullet regarding separability in certain ways. I agree that it’s weird to say that the moral value of an existing person’s preference-satisfaction/pleasure/pain could be affected by lives that are completely separate from it. But it doesn’t seem that weird to me to place a different value on that life coming into existence in the first place, depending on what other, separate lives already exist. That’s one reason I reject the Repugnant Conclusion.

        Of course, the discussion we are currently having is about the suffering of currently existing humans and animals. My particular form of bullet-biting doesn’t affect that argument one bit.

      • Charlie says:

        The standard reply is that non-separability is totally okay because the moral value is actually a fact about the thoughts in your head about chickens – it is not an intrinsic property of the chicken, like its mass. If chicken mass ever behaves non-separably, then I’ll be worried.

    • Forgive me, but this appears to be a rationalisation of one intuitive position rather than an attempt to discover the actual moral thing to do?

        • anodognosic says:

          Intuitive positions matter.

          • Anon says:

            Because intuition says so.

          • anodognosic says:

            Would you follow an airtight-seeming moral system off a cliff?

          • Peter says:

            Intuitions are not infallible oracles.

            Intuitive positions conflict. Intuitive positions are unstable. Intuitive positions can be manipulated by interventions as trivial as switching the order in which you present two questions about contrived moral dilemmas.

            In spite of that, intuition is indispensable. Reason can give us if-A-then-B but (arguably – I think Kant might have something to say about this) only intuition can give us the A. It can also give us “not B” and “reason is vitally important”.

            This, I suppose, is where something like reflective equilibrium comes in. My thought is – ponder questions in the right way, and you can get your intuitions to shift.

      • Illuminati Initiate says:

        I think there are two ways people generally look at constructing moral utility functions*.

        The first way sees there as being an “objectively” correct morality that they are trying to determine… somehow, and approximate as best as possible. Under this view it makes no sense to alter your utility function to fit your intuitions; if anything, you should be doing the opposite.

        The second way sees morality as being essentially their own (arbitrary) preferences, which can be represented by a utility function, and sees attempting to formalize this utility function as useful both so that they can have an idea of what to do when conflicted personally, and so they can communicate their values to others or “others” to carry them out for them (most importantly when the “other” is an AGI under construction). They don’t know what the mathematical representation of their morality is, so they are trying to approximate it as best as possible.

        When people seem to be altering their “stated” utility function at will to match their “intuitions”, then they are probably in the second category.

        • Josh says:

          That’s a good distinction, and either way, utilitarianism / utility functions seems like a poor approach.

          From an “objective” standpoint, utilitarianism puts you in the unpalatable position of trying to explain why one pattern of molecules is objectively worse than another.

          From a putting-my-intuitions-on-a-rational-basis perspective, “utility functions” don’t seem to be how the mind actually works. A much better theoretical construct for explaining our morality is “we evolved instinctual reactions guided by game-theoretic principles”, which suggests that our intuitions have to do with maintaining relationships with the other players, rather than with consequences in the outside world. This viewpoint has absolutely no problem explaining why we feel animal cruelty is wrong (bad signaling) but don’t in practice put them as moral equals with humans (because we don’t have a reciprocal relationship where we expect them to respond to our treatment of them in a way that has meaningful consequences for us).

          • MicaiahC says:

            > From an “objective” standpoint, utilitarianism puts you in the unpalatable position of trying to explain why one pattern of molecules is objectively worse than another.

            Um, unless your morality has no contact with the real world or has nothing to say about moral dilemmas, I cannot think of a moral system which doesn’t implicitly say “this pattern of molecules is objectively worse than another”.

            Most moral theories do indeed seem to emphasize the internal state over the practical consequences, but you just replace “pattern of molecules” with “pattern of molecules in my brain” and you’re back to the original argument.

            I guess your moral theory could say that everything is subjective.

          • Marc Whipple says:

            Belgarath on the idea that killing someone is not the same thing as unmaking them, because I love that scene:

            “When you kill somebody, all you’ve really done is alter him a bit. You’ve changed him from being alive to being dead.”

            See also: Dr. Manhattan’s observation on the fact that it was very difficult for him to tell a live person from a dead one, since physically they were nearly identical.

          • Josh says:

            Re: “I guess your moral theory could say that everything is subjective.”

            Yeah, pretty much… why would we think there’s an objectively correct morality? The claim “such and such morality is objectively correct” sounds unfalsifiable to me.

            Just because morality isn’t objectively correct doesn’t mean we’re in moral-relativism land, though. You can still say “your system of morality will lead to a miserable world that I don’t want to be a part of, and I’m going to make you my enemy unless you change your ways”. But, there’s probably more than one moral system that “works” in the sense of leading to a pleasant society to live in.

            Marc: I love the Belgarath quote! Haven’t read those books in years…

          • MicaiahC says:

            You seem to be confusing “there exists an objectively correct morality” with “a morality makes objective judgments about the world”.

            Utilitarianism holds a lot of appeal for me because it seems to be the most robust, most powerful and most general tool. I acknowledge that not everyone thinks like me and think that them using other theories of morality could be “more correct” for them.

            The post that I initially replied to seems to imply that it’s not a good tool, but your followup implies that it’s more a definitional disagreement rather than shortcomings of the actual tool.

            I would like to see you expand more on the perceived flaws of utilitarianism as a moral reasoning tool.

        • Ghatanathoah says:

          >When people seem to be altering their “stated” utility function at will to match their “intuitions”, then they are probably in the second category.

          Maybe. But it could also be that their intuitions are warning them that they have made a big mistake when formalizing the “objective morality” utility function. They have oversimplified it, Hollywood rationalized it, or something like that.

          In that case it makes perfect sense to alter things to fit your intuitions. It would be like someone managing to prove through bad and tortured logic that 1+1=456, and their intuition about basic math letting them know they screwed up.

      • What on Earth does “actual moral thing” mean? This is an attempt to make explicit my moral intuitions. It also does nice things like let me pick dust specks over torture, eat meat, and not be forced to devote my life to charity.

        • By “actual moral thing to do” I just mean the right thing to do. I’m not using it any special sense. You appear to admit that you’re not trying to discover the right thing to do, because you assume that the right thing is whatever your personal intuition tells you. I guess this means there is little for us to discuss.

    • kaninchen says:

      Why is “total happiness of chickens” an appropriate value to be balanced against other values and to which diminishing marginal returns of value applies, as opposed to (say) “total happiness of all sentient beings” or “happiness of each individual chicken”?

    • I’ve seen earlier attempts to base morality on the uniqueness of the individual. This does successfully give you a reason to mostly ignore chicken suffering.

      However, doesn’t it also mean that mindstate-clones are morally worthless, that really crazy/zany/unique people are the most morally significant, and that we should try really hard to preserve some random tribe in the Amazon? I don’t think this ethical system outputs a morality that the average SSC reader would consider sane.

      • Irrelevant says:

        doesn’t it also mean that mindstate-clones are morally worthless?

        No, it implies they’re morally fungible. As in the teleporter thought experiment, or hypothetical rejuvenation procedures involving destructive brain-modeling in conjunction with cloning, or abortion with a commitment to have a child later. If, rather than simultaneous or subsequent destruction then recreation of the mindstate, you make an additional copy and allow it to live, its mindstate diverges, it’s a unique individual, and we’re just discussing two people who had remarkably similar upbringings.

    • Airgap says:

      Is that necessary or contingent? Is it like total-chicken-value = log(num-chickens)? Or is total-chicken-value bounded above?

  11. efnrer says:

    Almost everyone cares about people/animals in the following order:

    1. Yourself.
    2. Friends/family.
    3. Strangers.
    4. Animals.

    I don’t see the problem with being honest about it. Nor do I see the problem of formalizing it by caring exponentially less about each item in the list. Finally, I don’t see the problem of your utility function being logarithmic in the first place (e.g. 10 subjects in the category ‘Animals’ is not worth ten times as much as 1 subject in the category ‘Animals’).
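
    As a rough sketch of what I mean (the base-10 discount and the log are placeholder choices, not a considered proposal):

    ```python
    import math

    # Hypothetical circles of concern, each weighted exponentially less than
    # the one before it. Base 10 is an arbitrary illustration.
    CATEGORIES = ["self", "friends_family", "strangers", "animals"]

    def weight(category, base=10.0):
        return base ** -CATEGORIES.index(category)

    def value(category, n_subjects):
        """Concern for n subjects in one category: logarithmic in n, so ten
        subjects count for more than one, but nowhere near ten times as much."""
        return weight(category) * math.log(1 + n_subjects)

    print(value("animals", 1))     # one animal
    print(value("animals", 10))    # more than one animal, far less than 10x
    print(value("strangers", 1))   # one stranger outweighs quite a few animals here
    ```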

    • Rowan says:

      Are the 10 subjects and 1 subject supposed to be from the same category there? It does make sense when I think about it, but it looks wrong; perhaps it should read “as much as 1 subject in the same category” instead of naming the category both times.

      • efnrer says:

        >Are the 10 subjects and 1 subject supposed to be from the same category there?

        Yes.

        >perhaps should read “as much as 1 subject in the same category” instead of naming the category both times.

        Both have the same meaning.

    • Ok you don’t see the problem with it. But that doesn’t really establish why any of those things are good or correct?

      • efnrer says:

        Good is subjective. It’s “correct” in the sense that it gives an acceptable first order description of almost everyone’s preferences with regards to other people/animals.

        • Ok, I won’t disagree with your logic, and hey, I don’t think anyone that holds these preferences is bad exactly, but I think there was a sense in the article of trying to establish what is right, and not simply saying “folks like X”.

    • The problem is when you are supposed to be rationally defending your attitudes, rather than reporting them. One solution is to abandon the claim that your attitudes are what is actually right, in favour of the idea that they are a compromise between what is right and what is motivating.

      • efnrer says:

        “Rationally defending your utility function” makes no sense. Rationality is used to maximize your expected utility (according to your utility function), not to defend it.

        >One solution is to abandon the claim that your attitudes are what is actually right

        Your utility function is what it is. There is no “right” and “wrong”. You can only use rationality to correctly identify it and find the most efficient way to maximize it.

        • I was talking about ethics. If your utility function doesn’t include rationality as a terminal value then there is a serious sense in which you are not a rationalist. If you cannot rationally justify your ethics, then you are disbarred from certain kinds of discourse, such as exhorting other people why they should adopt your ethics.

          • efnrer says:

            I suggest you take a look at the sequences. Especially this article:

            http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/

            Also:

            >then you are disbarred from certain kinds of discourse, such as exhorting other people why they should adopt your ethics.

            I’m not exhorting anyone to do anything, my post was purely descriptive. I don’t care what (you convince yourself) your utility function is.

          • I have read that material and disagree with it. I think it is a disaster to casually equate ethics with personal preferences, or “utility functions”.

          • efnrer says:

            Then you need to figure out what you mean by the word “rationality”. At the moment you’re just using it as an applause light.
            http://lesswrong.com/lw/jb/applause_lights/

            You would also benefit from reading this article:
            http://lesswrong.com/lw/nb/something_to_protect/
            Relevant quote:

            >I have touched before on the idea that a rationalist must have something they value more than “rationality”: The Art must have a purpose other than itself, or it collapses into infinite recursion.

            It should explain why your idea of having “rationality” as your terminal value will get you nowhere.

          • Having truth as a terminal value is not the same as having rationality as a terminal value… which is not itself the same as having rationality as the only terminal value.

          • @efnrer I can assure you that TheAncientGeek, with whom I disagree fiercely on a number of fundamental things, takes rationality a lot more seriously than you may think 🙂

          • efnrer says:

            Yesterday:
            > If your utility function doesn’t include rationality as a terminal values then there is a serious sense in which you are not a rationalist.

            Today:
            >Having truth as a terminal value is not the same as having rationality as a terminal value.

            It looks like you’ve learned something from the link I gave you, and that’s good. Hope you figure out what you mean by “rationality” soon so you no longer need to use it as an applause light.

          • @efnrer You’re linking an entertaining but ultimately amateur article to a guy that’s clearly studied formal philosophy.

          • efnrer says:

            @Citizensearth It was about time he learned the difference between having truth as a terminal value and rationality as a terminal value then.

          • @efnrer The problem is that Eliezer redefined rationality to mean what philosophy has always generally considered to be a subset of rationality, probably best mapped to instrumental rationality. Going against established terms is not forbidden if you really must, but expecting people outside Less Wrong to conform to an unconventional definition and go against the established norm is, well, irrational. I’ll leave you to speculate as to what motivates someone to explicitly exclude things like truth-seeking as part of their definition of rationality.

          • efnrer says:

            @Citizensearth If you can’t see the need to differentiate between the truth and the method for finding the truth, then you have a lot of reading to do.

            Also:
            >The problem is that Eliezer redefined rationality to mean what philosophy has always generally considered to be a subset of rationality, probably best mapped to instrumental rationality. Going against established terms is not forbidden if you really must, but expecting people outside Less Wrong to conform to an unconventional definition and go against the established norm is, well, irrational. I’ll leave you to speculate as to what motivates someone to explicitly exclude things like truth-seeking as part of their definition of rationality.

            This is false. Had you read the article I linked above you would have found this part:

            >What Do We Mean By “Rationality”?

            >We mean:

            >Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed “truth” or “accuracy”, and we’re happy to call it that.

            >Instrumental rationality: achieving your values. Not necessarily “your values” in the sense of being selfish values or unshared values: “your values” means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as “winning”.

            Spreading lies about things you haven’t even read is a character flaw I suggest you work on.

          • This is a selective misrepresentation and nonsense. Rationality is frequently used on LessWrong as a shorthand for instrumental rationality, and that’s the way you were using the word. I enjoy reading LessWrong and I have no problem with Eliezer, just this particular way of using the term. In every post on this thread you insinuate that your interlocutor is stupid, rather than politely addressing their arguments. Perhaps it is you that needs to self-examine, my friend.

          • efnrer says:

            >This is a selective misrepresentation and nonsense. Rationality is frequently used on LessWrong as a shorthand for instrumental rationality. And that’s the way you were using the word.

            Wrong. We were discussing the difference between having truth as a terminal value and having epistemic rationality, the method for finding out the truth, as a terminal value. As I said above, if you cannot understand the difference between these two concepts you have a lot of reading to do.

            >In every post on this thread you insinuating that your interlocutor is stupid, rather than politely addressing their arguments. Perhaps it is you that needs to self-examine, my friend.

            I’m not the one who was caught lying and is desperately trying to backtrack and shift blame. Being able to admit when you’re wrong is considered one of the first steps to becoming more rational; hopefully you’ll learn it someday.

          • I don’t see how to reach you inside that meticulous self-deception. I hope things work out.

        • Baby Beluga says:

          I dunno, I think a person committing a Very Evil act from history (slave-owner, whatever you prefer) could use a similar argument that would seem equally valid to defend his preferences. He might be like, “I care about people in the following order:

          1. Non-slaves
          2. Slaves

          Also, I don’t care that I can’t rationally justify my ethics, because I’m not interested in exhorting other people why they should adopt my ethics.”

          The point is, regardless of what our hypothetical slave-owner’s preferences are, his mistreatment of his slaves is still morally problematic in some kind of absolute sense.

    • Marc Whipple says:

      I would argue with this a little bit. I would shoot somebody to protect my dog. I am not willing to go to Syria to shoot somebody to protect strangers.

      Frankly, if I had to choose between shooting somebody to save my dog and shooting somebody to protect a stranger (as in, there’s only time for one) I wouldn’t be surprised if I reflexively acted to protect my dog. I like my dog. Most people I can barely tolerate.

      • efnrer says:

        Then you count your dog in the category ‘Friends’ and not in the category ‘Animals’.

      • houseboatonstyx says:

        Leaving aside friendship, personal loyalty, and other factors, I know my dog has a happy life now, which is very likely to continue. I don’t know the stranger’s circumstances (except that humans sfaik are less happy than well treated animals).

    • Scott Alexander says:

      That’s kind of what I’m trying to say but I think “logarithmic” is the wrong word here. That is, I think most functions – linear, exponential, logarithmic, whatever – are going to fail to return the intuitively correct conclusion at some point – they’ll either get a “don’t care about this circle more than a thimble’s worth” or “care about this circle so much it swamps all other considerations”. That’s why I’m saying even my inconsistency is inconsistent.

      • Airgap says:

        There’s still “asymptotic.”

      • efnrer says:

        Ah, now I understand. You’re trying to care about everything but find it impossible to get the correct balance. Personally I care very little about animals beyond their utility for humanity, so I haven’t had your problem. “Asymptotic” as Airgap suggests is probably your best answer.
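
        The difference matters at the extremes (toy numbers only): a logarithmic weighting keeps growing without bound, so a big enough circle eventually swamps everything else, while an asymptotic weighting stays under a fixed ceiling however large the circle gets.

        ```python
        import math

        def log_concern(n):
            """Logarithmic: diminishing but unbounded, so it eventually swamps
            any fixed amount of concern for another circle."""
            return math.log(1 + n)

        def asymptotic_concern(n, cap=1.0, scale=1000.0):
            """Asymptotic: never exceeds `cap`, however many members the circle
            has. Constants are placeholders."""
            return cap * (1 - math.exp(-n / scale))

        for n in (10, 10**6, 10**12):
            print(n, log_concern(n), asymptotic_concern(n))
        ```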

  12. Jeff Kaufman says:

    The other resolution to this dilemma is “most animals don’t matter”. Shaping definitions a little, let’s divide “pain” from “suffering”. The idea is that pain is a simple consequence of having neurons get negative feedback, something that can happen to everything from c. elegans to humans. Suffering, however, requires having someone there to actually experience the pain, and while humans can suffer I think most animals don’t have the reflective capacity to experience pain in an analogous way.

    Figuring out which animals have this kind of experiencing self (chimp? octopus? pig?) still needs research, and it’s not clear how to determine this from the outside, but this “pain vs suffering” distinction seems to capture my intuitions well here.

    • And Descartes’: animals are mere automata…

      The people who push the pain criterion mean a quale of hurting by “pain”. Since there is no non-hurting pain, there is no non-mattering pain.

      • houseboatonstyx says:

        And Descartes’: animals are mere automata…

        Descartes’ idea is so much more convenient than it is probable, that I can’t help suspecting some element of bias in those who agree with it.

    • Baby Beluga says:

      Naw, c’mon–why would humans be the only ones to have “someone there”? I’ll grant you, I’m not certain that there’s “someone there” inside a chicken; hell, I don’t even know there’s “someone there” inside other humans. But it seems disingenuous to put a very low probability on there being someone there in animals whose brains are mostly like ours, conditioned on we ourselves being sentient.

      As for your other point–we should do more research to find out–I agree, but (and this point’s gonna be a much harder sell, I think) I don’t think we’re ever going to know whether other beings are “sentient,” because I don’t think we have a good way of explaining how it is that we ourselves are sentient. I can imagine an alien coming down, and I could tell zir, “Hey, I’m sentient, and my computer isn’t, so I don’t feel bad working my computer really hard.” And ze would be like “sentient? what do you mean by that word?” And I might say “You know, the thing that means there’s someone upstairs; the thing that makes me able to perceive through my eyes and not anyone else’s.” And ze might say, “I don’t understand what you’re talking about at all,” and I would have no further way of clarifying.

      • Ghatanathoah says:

        And ze might say, “I don’t understand what you’re talking about at all,” and I would have no further way of clarifying.

        Isn’t this just a variation on the whole “You can’t explain color to a blind person” idea?

        I do not know if our inability to explain what it feels like to be conscious and have qualia is part of the intrinsic nature of the experience, or if it is just some kind of software or hardware limitation in our brain. I suspect that it is the latter, and that a sufficiently superhuman AI would be able to understand consciousness and qualia merely by reading tons of books about them.

      • houseboatonstyx says:

        But it seems disingenuous to put a very low probability on there being someone there in animals whose brains are mostly like ours, conditioned on we ourselves being sentient.

        Assuming that conditional for a moment….

        When a traveler reports finding a Shangri-La where there is no cancer, no crime, or whatever, does Occam say “Wow, a superior race!” or “Aw, I bet they have some cancer and crime just like everybody else and you just didn’t find it.”

        Occam 1 – “If I didn’t find it, there is no reason to multiply entities by assuming c/c existed. Thus, there is a race lacking any cancer or crime.”

        Occam 2 – “The whole human race has some cancer and crime. Claiming a second race, a c/c-free race, exists, is multiplying entities (ie multiplying races). Rather than assuming a whole nother race exists, the simpler assumption is that they are just like the rest of us and your observation is imperfect.”

        Now switch in consciousness/sentience/whatever for c/c. Claiming a second group, a nonsentient group, exists, is multiplying entities (ie multiplying groups). The simpler assumption is that all creatures have it just like we do, and your observation standards are imperfect.

        • Irrelevant says:

          The problem with your argument is that you’re applying it from the wrong end. We are certain there are living things which lack consciousness. We are uncertain our own consciousness is not illusory.

          Occam does not suggest animals have souls, he suggests humans are p-zombies.

          • houseboatonstyx says:

            Occam 3 – “Whatever sort humans may be, animals are the same sort. Claiming there are two sorts, one with souls and one without, multiplies sorts.”

            We are certain there are living things which lack consciousness.

            Being certain does not mean being right. The only way to conclude that, is to go with Occam 1.

          • Irrelevant says:

            You appear to have misunderstood me, I’ll be clearer. You said

            Claiming a second group, a nonsentient group, exists, is multiplying entities

            This is incorrect. It is not disputable that a non-sentient group exists, because we have a sufficiently complete understanding of the simplest organisms (bacteria, planarians) to know that they are deterministic machines. The multiplication of categories occurs when you assert the existence of things that have consciousness, not things that lack consciousness.

            So again, I’m fine with applying Occam’s Razor here, but you have to do it starting with the systems we do understand. Occam’s Razor suggests consciousness is some sort of measurement error and claims of “emergent properties” are hocus-pocus covering up a failure to understand the details of the system in question.

          • houseboatonstyx says:

            Thank you for your clarification.

            Occam does not suggest animals have souls, he suggests humans are p-zombies.

            I think we are in agreement on the most important point, that as Occam 3 says, there is most likely only one sort: either we and animals all have ‘consciousness’, or none of us do.

            I disagree whether the question is settled yet (or perhaps even can be). The important point I think, is that disregard for animals cannot be justified on grounds that we have some special ‘consciousness’ (sentience, whatever) which they lack.

          • Irrelevant says:

            In that case, I think you’ve completely misidentified the most important point. The differences in implication between the argument that all minds resemble human egos and the argument that all minds resemble planarian ganglia completely dwarf those of their region of agreement that the difference between human minds and chicken minds is quantitative rather than qualitative.

          • houseboatonstyx says:

            In that case, I think you’ve completely misidentified the most important point.

            I agree that we disagree about which point is the most important.

            I could leave it at that with a smiley. (I do think I understand your position now.) But — meta — “Important for what?”

            For me, when a practical matter (Scott’s “morality living in the world” iirc) must be decided today (such as which way to vote on a law regulating factory farming), I think it’s important to get side issues out of the way, such as whether humans have souls which chickens lack.

            I am curious — for what do you think it matters, that “The differences in implication between the argument that all minds resemble human egos and the argument that all minds resemble planarian ganglia completely dwarf [how? where?] those of their region of agreement that the difference between human minds and chicken minds is quantitative rather than qualitative.”

          • Irrelevant says:

            To make a colorful but lousy analogy, it’s the difference between a person who thinks the Saudi government is wrong to execute Sorcerers because Sorcery should be legal and a person who thinks the Saudi government is wrong to execute Sorcerers because Sorcery does not exist. You’ve found a small, if admittedly important, area of agreement between heavily clashing worldviews.

            In this case, the person who concludes chicken minds are something like human egos has changed their position on animals and is going to vote for the factory farm regulations, while the person who concludes human minds are something like planarian ganglia does not change their position on animals, but is going to consider* creating mentats.

            *(assuming the belief in ego was in fact his true objection to instrumentalizing humans.)

        • Ghatanathoah says:

          This Occam stuff sounds too ambiguous. I think we should just wait for the relevant neuroscience regarding animals to come in, and send a few more people to observe Shangri-La.

    • olivander says:

      This seems like it’s slipping into dualism. What does it mean for someone to be there? If anticipating pain, changing your behavior due to pain, and even destroying yourself to avoid further pain is not evidence of reflective capacity, I’m not sure what would be.

      • Ghatanathoah says:

        This seems like it’s slipping into dualism. What does it mean for someone to be there?

        I think that it just means that you are conscious and aware of your own existence. I don’t think it’s dualist. I know that I am conscious and am aware of my own existence, and I suspect that this state of being is probably caused by my neurons being organized in a certain way, rather than anything supernatural.

    • houseboatonstyx says:

      Shaping definitions a little, let’s divide “pain” from “suffering”. The idea is that pain is a simple consequence of having neurons get negative feedback, something that can happen to everything from c. elegans to humans. Suffering, however, requires having someone there to actually experience the pain, and while humans can suffer I think most animals don’t have the reflective capacity to experience pain in an analogous way.

      A simpler route to a similar conclusion is that humans feel the current pain and also worry about future pain and other troubles, even after the current pain or threat is over. — Either way, from the same pain, animals suffer less than humans do. Which by my standard is a feature in animals. The greater suffering of humans is a bug in humans.

      If I may define a P-60-Utilon as an hour of pure, un-shadowed happiness, with no dread of future or lingering suffering from the past — very few humans get many of those, except some monks and perhaps John Galt. Animals, with less thought of past or future, can have more. Cf P-30-U, or P-5-U which children may get a lot of, but again animals can get more.

      So maximizing P-Utilons would involve a world with more animals in a free, natural, uncrowded state, or as pets in loving homes, and happy children. Donating to animal habitat preservation, or protection of yet-untraumatized children, would fit. Or, buying an acre or two of forest, living lightly within it, and cultivating pets and children there. Which requires a lot of sanity.

  13. Shenpen says:

    Scott,

    >So if you’re actually an effective altruist, the sort of person who wants your do-gooding to do the most good per unit resource, you should be focusing entirely on animal-related charities and totally ignoring humans

    At this point you just really need to ask yourself why exactly you want to be an altruist, let alone an effective one. Suffering is a certain kind of signal in a wet computer, largely meaning “what is happening now is not suitable for the fitness of the organism”. Why exactly does this kind of signal in wet computers other than yours matter to you?

    Now if the reason is compassion, in the literal sense, you feel bad about this signal in other wet computers and you want to take action to remove the bad feeling, then you don’t need any sort of utilitarian reasoning. Nor effective altruism. Just do whatever act of altruism that makes you feel warm and fuzzy and that is that.

    Of course, I understand that probably the reason is that you are too smart to screw over your conscience so easily, and your conscience is telling you “unless you think hard, use utilitarian reasoning, and figure out how to do it really effectively, I will keep giving you these bad feelings”.

    But I think you could argue back to your conscience saying “look, I am an animal, I evolved to find it entirely right to march with my tribe and kill, loot and pillage other tribes, let alone help them. All this universalist approach to morality, where suffering outside my tribe matters, is a result of 2000 years of Christian-secular humanist training. A cultural feature. I have got so far that I don’t even torture frogs, and give money to help people so removed from my tribe that they don’t even live on the same continent. Just how much more do you want exactly? Why exactly should all this culture override all my instincts?”

    • Interesting point. May I offer a brief counterpoint?

      > Suffering is a certain kind of signal in a wet computer, largely meaning “what is happening now is not suitable for the fitness of the organism”. Why exactly does this kind of signal in wet computers other than yours matter to you?

      Well, why does anything matter for you? Why do anything at all? People always do what they like, in a sense. That’s not the same as people doing what is hedonistic. I guess I feel you might be conflating two very different senses of “do what you like” here. I don’t think this sort of argument selects for any particular action over another, because by this logic all motives are equal anyway.

    • roystgnr says:

      Even if you start out totally selfish, and all you care about is that “defect” is the Nash Equilibrium in the Prisoner’s Dilemma:

      Add uncertainty about how many times the game will be repeated, and then “Tit for tat (among others who you can hurt and be hurt by)” seems to be the most effective strategy. Cooperation works better in the long run, so to ensure it you need to only defect to retaliate against earlier defectors, and to profit faster against agents who can’t retaliate.

      Add uncertainty about “hurt” and then “tit for two tats” becomes the better strategy. A little excessive niceness isn’t as bad as starting interminable feuds over misunderstandings.

      Add uncertainty about “others” and then you’ve got an incentive to elide “and be hurt by”. Enjoying the suffering of those weaker than you may signal bad things which hamper your cooperation with those as strong as you, and may expose you to punishment from ethical agents who are stronger than you.

      For the most exaggerated scenario, see Scott’s sci-fi story.
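
      To make the feud-over-misunderstandings point concrete, here is a minimal sketch in Python (my own toy model, nothing from the thread; the payoff matrix, noise rate, and round count are all arbitrary assumptions). With a small chance that a cooperative move is misread as a defection, plain tit-for-tat tends to fall into retaliation spirals while tit-for-two-tats mostly keeps cooperation going:

      import random

      # Standard Prisoner's Dilemma payoffs: (my score, their score).
      PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
                ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

      def tit_for_tat(perceived_opponent_moves):
          # Cooperate first, then copy the opponent's last perceived move.
          return "C" if not perceived_opponent_moves else perceived_opponent_moves[-1]

      def tit_for_two_tats(perceived_opponent_moves):
          # Defect only after two perceived defections in a row.
          return "D" if perceived_opponent_moves[-2:] == ["D", "D"] else "C"

      def play(strategy_a, strategy_b, rounds=200, noise=0.05):
          seen_by_a, seen_by_b = [], []   # what each player *thinks* the other did
          score_a = score_b = 0
          for _ in range(rounds):
              move_a = strategy_a(seen_by_a)
              move_b = strategy_b(seen_by_b)
              pa, pb = PAYOFF[(move_a, move_b)]
              score_a, score_b = score_a + pa, score_b + pb
              # Noise: a cooperative move is occasionally perceived as a defection.
              seen_by_a.append("D" if random.random() < noise else move_b)
              seen_by_b.append("D" if random.random() < noise else move_a)
          return score_a / rounds, score_b / rounds

      random.seed(0)
      # Tit-for-two-tats pairs should stay much closer to full cooperation (3.0 each).
      print("TFT  vs TFT :", play(tit_for_tat, tit_for_tat))
      print("TF2T vs TF2T:", play(tit_for_two_tats, tit_for_two_tats))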

      • Shenpen says:

        This is a decent point, but with a strong Schelling point between humans and animals. Telling the stronger ethical agent “I did not care about that chicken because it could not possibly care about me!” is probably workable.

    • Harald K says:

      Now if the reason is compassion, in the literal sense, you feel bad about this signal in other wet computers and you want to take action to remove the bad feeling, then you don’t need any sort of utilitarian reasoning. Nor effective altruism. Just do whatever act of altruism that makes you feel warm and fuzzy and that is that.

      Another option would be to give yourself drugs/perform brain surgery on yourself/do the right “Dark Buddha”-meditation exercises to rid yourself of your compassion. Sure, you might feel horrible at the thought now, but the whole point of the self-modification is that afterwards, you won’t!

      Anyone who still balks at doing that is in fact arguing that their moral intuitions have meaning, are real in some objective sense that matters. Whether they acknowledge it or not.

      • Irrelevant says:

        I think your conclusion overreaches. I am not unwilling to amputate my earlobes because my earlobes are Good, I am unwilling to amputate my earlobes because my earlobes are Mine.

        That is to say, our affinity towards things that we view as part of ourselves is strong enough that it’s sufficient to balk at your proposal without the need for any additional reason like belief in their intrinsic meaning.

        • Harald K says:

          If your earlobes give you a chronic problem that makes you unhappy, I would hope you give some consideration to amputating them.

          You change all the time. Why rule out change to the “better” (in the sense less painful)? You’d still be able to say you are you.

          As I see it, it’s because you believe that this part of you has intrinsic meaning, as opposed to, say, your earlobes, your tonsils, your current state of mind, your wardrobe or your thoughts and opinions about seaweed. All of which you are presumably willing to permanently alter.

          Asserting that morality is the core of who you are, is itself a moral choice and a huge leap of faith. If you go through the Dark Buddha training/brain surgery and banish compassion from yourself, you would almost certainly in that state feel like and say that you are still you.

          • Irrelevant says:

            I don’t disagree with that argument, on the positive side because I find transhumanism attractive and on the negative because I went through a period where “If Thine Eye Offend Thee…” was pretty appealing as literal advice.

            But you’re ramming against a bulwark of instinct here. Most people are going to automatically reject the notion of making elective amputations from their ego for the same reason they would their body, and their rejection doesn’t prove anything else about how they’re thinking.

          • Harald K says:

            I don’t see what instinct has to do with it. I’m not making a descriptive argument here – I don’t think anyone would actually surgically remove their concern for others, except perhaps a few hardcore “rational” people who had argued themselves into a corner.

            What I do say, however, is that you can’t rationally justify keeping your compassion/conscience/moral compass, (whatever you want to call it) without asserting some sort of objective validity to it. “It’s my instinct” is not a defense, because the whole point of the scenario is that you’re able to change that instinct.

  14. Robert Liguori says:

    Quick observational hack: animals don’t have moral worth, but people aren’t perfect reasoning agents, and there are strong indicators that failure to taboo displays of mammalian suffering has splashback on humans. That is to say, burning cats as entertainment upsets people, and there’s reasonable circumstantial evidence that societies which burn cats for entertainment are more likely to burn people, so in the absence of a strong, shared reason to burn cats, we shouldn’t do it. (And to extend the metaphor, if we did find a reason to, we should do it out of sight as much as possible, to avoid upsetting people as much as we can.)

    On one hand, I’m punting here, in that I’m firmly on the side of animals-are-not-moral-agents. On the other, though…if they were, to what level would we be obliged to sterilize-and-preserve the ecosystem until we could recreate an eden, to avoid countless animals perishing to Nature red in tooth and claw? Is it morally wrong when wolves starve? Is it morally wrong when a faun trips, breaks its leg, and is devoured by wolves who then don’t starve?

    • > though…if they were, to what level would we be obliged to sterilize-and-preserve the ecosystem until we could recreate an eden, to avoid countless animals perishing to Nature red in tooth and claw? 

      A lot under consequentialism, not much under deontology. You don’t have to deny moral relevance to animals to avoid that particular repugnant conclusion.

    • Deiseach says:

      Whatever about fawns, if any fauns were indeed found in the wild, I think we’d be extremely interested indeed 🙂

  15. When it comes to people, we probably need to take into account their own decision making processes when attempting to assist others. So it might not be a good idea to give away half your money to your brother if you advocate a rule utilitarian principle that each person ought to try to pay their own way or pull their weight, wherever reasonably possible. Widespread non-adherence to this would lead to unpredictable and probably quite negative consequences (basically, societies where suffering is accepted are universally bad by almost any measure). Of course you’d still need a system where efforts were fairly rewarded, but let’s not get political just now.

    You can also kind of apply this logic to non-humans. Animals live, suffer and die all the time outside of areas of human control. Minimising suffering definitely implies a total upheaval of most natural systems in order to minimise the pain which is abundant in them. But perhaps suffering is not the central value as many believe. Perhaps aversion to suffering is more of a human virtue that we cultivate for the benefits it brings us for our more fundamental moral purposes. Pain, suffering, and torture are utterly abhorrent, but then so is burning books, despite those books having no intrinsic/terminal value. The fact that something is instrumental doesn’t mean it’s morally permissible just because somebody feels like it is. But we probably should assume there is an intrinsic moral purpose from which it must flow. The task is the discovery and then pursuit of that ideal.

    I personally don’t think intuition is the answer. Intuition often leads people to do awful, awful things. I think we should avoid a reasoning process where we simply try to justify our intuitions, and instead work to refine them by seeking to harmonise them with scientific evidence and with their own internal reasoning. We should be willing to go against or to discard an intuition if it comes to that. What intuition does provide us with is a starting point – most of us can sense morality is real, but know that our perception of it is incomplete and imperfect.

    My own theory (you can read it here in considerable detail) is that biological explanations are the most consistent and fundamental explanation of morality. So our quest for morality is the imperfect manifestation of an ongoing process of biological altruism, where different entities in a biological system sometimes find their purposes coming together around a common good. The components of this process (kin selection, reciprocation, group selection) are studied scientific phenomena, and those who believe both in seeking the truth (as opposed to justification and rationalisation), and who are authentically determined to be moral, are naturally drawn in their actions towards this process once they fully perceive and understand it. The widening circles of altruism are rooted not in ambiguous, vague notions of consciousness, or culturally relative intuitions, but in the nature of our biology.

    In practice I believe this means our relationship with non-humans is primarily one of species preservation, stewardship, protection. Essentially the virtuous leadership role in the biosphere. As individuals concerned with developing consistent principles towards this end, we are certainly required to develop a personal virtue, or if you like a social rule, that abhors human acts that create suffering. We can be determined in this while at the same time we can be aware of the bigger picture and our more fundamental goals.

    • “I personally don’t think intuition is the answer. Intuition often leads people to do awful, awful things. I think we should avoid a reasoning process where we simply try to justify our intuitions, and instead work to refine them by seeking to harmonise them with scientific evidence and with their own internal reasoning.”

      There is no intuition-free epistemology or science. What you said would be better phrased as an objection to rationalisation.

      • I don’t advocate excluding intuition, merely treating it as a fallible (not the answer) starting point. Do you still feel I am presenting my ideas poorly?

        • TheAncientGeek says:

          If everything is ultimately based on intuition, and its fallible, how do you correct it?

          • I think we’ve reduced things pretty far here, and are possibly altering our meaning of intuition a little, but I like the question! Well, to start with, let’s say that our intuitions cannot all be true if any of them involve basic logic, so long as some intuitions contradict others. And the question implies a premise that all intuitions cannot be totally false. But we still must refine and choose.

            One method I’d suggest we use (I am uncertain we can establish it as correct, as that would probably be based on intuition and be circular?) is to maximise acceptance of our intuitions by rejecting the exceptions and odd ones out. So that brief moment in a dream where I think the dream is real, I assume is wrong in order to maximise acceptance of all my other intuitions. I’m not sure we can establish any logical reason to do so, but I can’t imagine any way to mentally function without doing this.

  16. Murphy says:

    You could imagine an animal’s life/suffering as a red token and a human’s as a blue one. You don’t need an exchange rate that puts the two in the same league.

    Not torturing a mouse to death is better than torturing one to death but I’d happily shovel mice into a furnace all day if it meant someone within my close circles of concern got to not suffer a horrible death at a young age.

    1 blue token can be worth massively, massively more than 1 red token. It’s only if you *try* to put the values at a similar level that you hit any problem.

    • Rowan says:

      He does acknowledge this early on, with “either you shouldn’t care about animals at all or they should totally swamp every other concern”; he just doesn’t address the prong that doesn’t cause a problem. Although the problems are hit with chicken lives being worth a thousandth of a human life, so this isn’t exactly a “single scale of moral weight” issue.

      • Murphy says:

        Small weight isn’t the same as no weight.
        I don’t think I’d feel ok about driving whole species extinct to save one human life.

        Not caring about them at all implies you shouldn’t care about torturing your dog to death for fun.

  17. Justin says:

    First, consistency is highly overrated by most intellectuals. That’s one of the key points coming out of Tetlock’s Good Judgment Project. Of course, this is typically framed as “having a large and varied toolkit” rather than “applying whichever model appeals to your gut instincts”. Foxes beat hedgehogs and all that.

    Second, this is a non-problem for rights-based morality, at least on a conservative as opposed to a libertarian view of rights-based morality. We all have a duty to pursue the Good [bracketing a sketch of the Good], and that puts moral constraints on your actions even when those actions don’t conflict with other rights-bearers. As Michael Sandel put it, “It is said the Puritans banned bear-baiting not because of the pain it caused the bears but because of the pleasure it caused the onlookers.” The bear-baiters were not pursuing the Good when they were bear-baiting.

    Of course, this raises new problems because we’ve now opened a new category of moral constraints on our behavior, and most people don’t like that. (Unless you are one of those modern stoics who are all in on no-fap, I suppose.) But it’s there if you want to follow it.

    • > First, consistency is highly overrated by most intellectuals

      If we don’t value consistency, then for what reason would we remain consistent with this sentence? Consistency is one of the few things almost all philosophical schools have in common as a goal/value/premise/whatever.

      The rest of your points I find interesting and well made.

      • Justin says:

        Thanks for your comment. I do value consistency, but I also believe that other factors deserve some weight when we evaluate our ideas: explanatory scope, predictive power, simplicity, etc…

        Suppose Alice has a theory that has been shown to contain an inconsistency, but it also has greater explanatory power than Brandon’s theory. The known inconsistency tells me that Alice’s theory is not “the truth”, but its superior explanatory power suggests that Brandon’s isn’t either.

        In medicine and the social sciences most theories are falsified. They aren’t like classical physics. Do you pick “the dog with the fewest fleas”? The logical positivists wanted an epistemic principle that did not require judgment calls but, alas, we see through a mirror darkly. It was not to be. In the social sciences you have to balance empirical support, simplicity, surprising predictions, coherence, etc…. Philosophy is similar. Consistency is important, but other factors are also important.

        • Justin says:

          Hmmmm, no luck with editing comments on my phone. Damn you autocorrect!

        • MicaiahC says:

          I will point out that there are various “nonlinear” phenomena in physics such that, when you plug in the naive problem, the answer returns “the universe blows up and is terrible”.

          Usually the answer is that some “small” effect that was neglected actually matters to rein it back in, or that an infinity somewhere is obviously not infinity and just “really really big”.

          In this view, consistency is really a claim about local/greedy phenomena, inconsistency is really a claim about global ones.

  18. Whatever Happened to Anonymous says:

    One thing to take into account is that animal suffering isn’t just a pointless utility drain; there’s human utility being gained for each chicken killed.

  19. highly effective people says:

    some of the best advice i ever read from a modern author was related to circles of concern and seems like it is desperately needed here. paraphrasing a bit: you should trim down your circle of concern until it overlaps perfectly with the limits of your power to affect the world. and not in the ‘butterfly wings causing hurricanes’ sense, as in the sort of things you can realistically and reliably do yourself.

    extending your circle of concern to friends and family is very reasonable because they are people tightly interconnected with your life who are affected by nearly any big decision you make.
    extending your circle of concern to neighbors is commendable because your day-to-day choices can make their lives easier or harder.
    extending it to strangers you encounter is laudable as well because even though you will have a small influence on their life overall it’s still a great way to cultivate trust in the wider society. it also has the benefit of being relatively cheap since a string of one-off good deeds is much less taxing than investing in someone’s wellbeing long-term.
    and of course treating animals in your life as well as is reasonably possible is nice too.

    but extending your circle of concern to random foreigners you will never meet (for anyone outside of the state department or dod anyway) seems outright neurotic. and every animal on earth? a god would shudder at that responsibility!

    if you want to live a good life then you need to be realistic about your capabilities. unless you’re wearing a crown of thorns that i don’t know about i doubt you came here to die for the sins of all mankind, and pretending that you did will not make you a more virtuous person. we all have the chance to be good in our own way but you shirk that responsibility when you try and grab for charitable opportunities outside of your reach.

    • Rowan says:

      Well, the issue there is the “more virtuous person” point. I think at least a slim majority of us here are utilitarians of some description, so what do virtue and living the good life have to do with morality?

      And of course we are talking in the context of effective altruism, where the fact that we can affect random foreigners is kind of the point; we do it with donations to AMF. The problem is that while we might affect any given random person at risk of catching malaria in a number of foreign countries, saving all of them is far beyond our means. How the hell does that slot into a policy of adjusting your circles of concern to fit your power to affect the world? Do you care about 100 statistical Africans but not any specific individuals, and once you’ve saved that many strangers you can stop caring?

      • highly effective people says:

        malaria nets are exactly what my ‘butterfly wings’ comment was referring to actually. there are a lot of steps in between ‘donate $X’ and ‘1 fewer malaria death’ none of which you can directly observe. sure your money might be helping or it could be buying some dope a toxic fishing net.

        spending the same money/effort where you can see it working firsthand gives you the best incentives and the most information. if you want your altruism to be ‘effective’ then caritas and direct involvement is probably a safer choice than moral calculus and bureaucracy.

        as for why you should care about being the best possible human being… well that goes into a teleological argument i’m sure you’d find tedious and unconvincing. in my mind though virtue ethics and transhumanism spring from the same root desire to cultivate oneself though obviously most people in both camps would disagree.

        • Murphy says:

          So you hold that statistical deaths aren’t real deaths?

          To flip it around to the negative:

          If your factory produces smog that increases the cancer rate by half a percent you may have caused thousands of ‘butterfly wings’ deaths but since you can’t point to any individual and say “that person definitely died because of your smog” you’re morally in the clear? (after all there’s a 99.5% chance they got cancer for some totally different reason.)

          You can’t directly observe every step between your factory producing the smog and someone 100 miles away dying.

          Hell, perhaps there was even someone who was helped by your smog! (if some cells in their lungs that would have become a tumor got killed instead.)

          • highly effective people says:

            if you’re maximilian smog, founder and ceo of smog’s amalgamated widgets, then the smog produced by your factory is firmly within your circle of concern. you have near-total power over smog output and out of every person on the planet are probably the most knowledgeable on exactly what comes out of your smokestacks.

            if you’re joe ash, part-owner of three voting shares of s.a.w. as part of your company’s 401k plan, then placing the factory’s pollution within your circle of concern is madness. you have only a trivial amount of control over the outcome and are no better informed than any random person.

            that is to say: given that you have no significant power to affect the outcome and you have no meaningful way of determining the consequences of your actions, you should focus your limited resources on projects closer to home. in any other endeavour this would be so obvious that saying it would be patronizing but people are remarkably unrealistic when it comes to charity and politics.

        • Peter says:

          I think it’s a mistake to conflate the butterfly effect with complicated statistically-mediated effects. On weather and climate: the weather is in a sense unpredictable. However, it’s more likely to be warm in summer than in winter, at midday than at midnight, near the equator than near the poles, etc. Also: at high atmospheric CO2 than low atmospheric CO2. If I were to sign into law an act curtailing CO2 emissions, I’d be exerting a systematic effect on temperature, and also a non-systematic butterfly effect with the hand movements associated with the signing. Only the latter should be called the butterfly effect.

          Also also: IMO, the best possible human being probably uses capital letters.

          • Airgap says:

            ITT SUPEREROGATING LIKE A MOTHERFUCKER

          • highly effective people says:

            you’re right that i’ve been imprecise but hopefully the meaning of my statement hasn’t been obscured by the mistake.

            you’re probably right in terms of capitalization as well. it’s a distinctive style and reads more calmly but also can be a bit of an obstacle to communicating clearly. i’ll likely switch back in the next thread.

          • Peter says:

            Airgap: I thought that a pretty key Aristotelian Virtue Ethics thing was the Golden Mean, finding virtue between two extremes, and a better side to err on. So too many capital letters may be worse than too few… perhaps I should have left the caps off Virtue Ethics.

          • Airgap says:

            AS BARRY GOLDWATER POINTED OUT, EXTREMISM IN DEFENSE OF VIRTUE IS NO VICE.

    • I think that’s a good point, but it does seem necessary to differentiate your moral goals from your practical goals. Also, it’s interesting to note that in a globally interconnected system the circle you describe is potentially pretty large, and not necessarily restricted to family and friends. It’s more restricted by “do a task well or not at all”.

      • highly effective people says:

        i broadly agree with you particularly on the sentiment of doing things well or not at all. but the interconnected society thing needs to be elaborated on a bit because i think that idea creates a trap for most people:

        i and most other people could lift the front of a car off the ground with the right motivation (like if there was a child stuck under it). that doesn’t mean that when i go to the gym setting my deadlift at less than 2.5 tons is slacking off. the amount i can realistically expect to lift is much much much less than the maximum amount of force my muscles can exert

        that same sense of realism we naturally apply to our bodies applies to our emotions and our intellect as well. you cannot realistically value every person in the world equally (at least not for more than a few moments during a religious experience). you cannot realistically follow a chain of causality out dozens or hundreds of steps to discover the impact of any given action on people you have never and will never meet on the other side of the planet.

        no matter how interconnected society is your own limits will sharply demarcate the region where attempted action becomes wasted effort and self harm.

        • Marc Whipple says:

          The writers of “Superman” have, from time to time, visited the idea that Superman, if he really tried, could prevent most if not all crime and human suffering. (Somebody, might have been SMBC, also did a comic where it was pointed out that instead of randomly stopping petty crimes, Superman’s highest social-value use would be turning a giant treadmill to provide clean, unlimited power for the world.)

          Essentially, this argument is why it never sticks: not only would that be boring and a violation of human independence, whatever that means, but Superman is still mortal, and he’d go crazy.

          • Irrelevant says:

            In Kingdom Come, the Flash’s slow withdrawal from human-scale events ends in his being at full-speed constantly and creating a small but significant region that’s free of all (non-truly-simultaneous) crime and accidents. It’s not considered an intolerable outcome.

          • Marc Whipple says:

            IIRC Batman does something similar, but with robots.

        • I think you make your point well, but I’d like to offer a counter-point.

          > you cannot realistically follow a chain of causality out dozens or hundreds of steps

          I’d have to say that in a globally connected world the chain of causality has been reshaped. So your ability to predict the benefit of $50 to a child in poverty overseas is probably just as good as for your cousin whom you see once a month.

          Also, where the opportunity for benefit is increased, we can reasonably accept a higher level of uncertainty down the causal chain. In a globally connected world, the number of opportunities for altruism to choose from is vastly increased, and so there are many predictable global opportunities that may have an even higher on-average benefit than those involving your immediate family and friends.

          > you cannot realistically value every person in the world equally

          I think you have a stronger point here around the emotional limits and motivation of normal people. I think Scott’s approach of asking for sacrifice, but not total sacrifice, is a solution I generally agree with. But I do think it’s important to discuss the moral ideal, even if it’s beyond our personal limit of motivation, because it identifies the moral direction, and to a lesser extent because it inspires to move towards it.

          So for these reasons I’m hesitant about advocating small moral circles. Even if there is better mental feedback, I feel that there is enough natural motivation to keep it rolling along, and we ought to focus our attention on those areas that come less naturally due to our cognitive limitations. Discussion of effective altruism and the like seem to do this.

    • Deiseach says:

      I suppose you could make the argument that, say, improving conditions in the lives of people in parts of Africa indirectly benefits the West, as in – people not being sick and/or hungry means less social unrest leading to demagogues seizing power in coups or interminable civil wars; better social conditions means less emigration so reducing not alone human suffering but the financial and cultural burdens on Western nations dealing with asylum seekers and refugees, legal and illegal immigrants, and the integration of foreign people into formerly monocultural societies.

      • highly effective people says:

        sure you could make that argument. you could just as easily make one advocating doing the exact opposite using much of the same evidence.

        i’m the furthest thing in the world from a protestant but that’s the logic behind martin luther’s statement that reason is a whore. you can make a pleasant-sounding rationalization for any arbitrary action because you aren’t engaging with the real world but the image of the world in your head. and there’s a word for the worshipping of images… always forget what it is though

        i realize how ironic it is to explain this to a catholic but charity which doesn’t spring from love or account for one’s own limitations is asking for trouble. i mean you guys literally wrote the book on this, i should be the one asking for tips here

        • Berna says:

          worshipping of images = idolatry

        • Deiseach says:

          I was trying not to drag religion into this 🙂

          I wanted to address the point you made about:

          (B)ut extending your circle of concern to random foreigners you will never meet (for anyone outside of the state department or dod anyway) seems outright neurotic.

          What I was trying to do was give a ‘material’ argument in favour of such involvement as self-interest, because even though you are not likely yourself to meet a Nigerian refugee, your country may have to deal with an influx of refugees and this will affect you indirectly (taxes, public policy, integration, cultural change).

    • Hemid says:

      Even God shows circles of concern. He favors (in a way) the Jews. Christian salvation is limited, maybe only to those who ask for it personally. Allah spares the repentant kuffar, but the rest can go to Hell—on Earth, too. Even universal(ist) reconciliation proceeds over (some idea of) time, relief moving outward from…the most deserving? To the least? Something like that. Somebody gets it last.

      I can’t think of a religion without some analogous feature, a sort of Gaussian beam distribution of grace (or whatever). It’s a Thing We Know, one of the very few.

      So all these adding machine rolls of pseudo-calculation look like so much—so much—”Whether several angels can be at the same time in the same place?”-ing away from the painfully obvious.

      It is painful. There’s somebody right there right now you could and maybe should be “saving.” So of course don’t.

      You gave—or didn’t—at the office. Anyway that’s where the adding machine is.

    • Shenpen says:

      The point is, you could theoretically spend money on helping a human or animal on the other side of the planet; in that sense they are within your power. However, it is difficult to verify the effectiveness thereof, and indeed spending $50 on sandwiches and walking around in your own city and handing them out to those homeless who look the hungriest is probably an efficient way to spend money and time.

  20. Kaura says:

    >So if you’re actually an effective altruist, the sort of person who wants your do-gooding to do the most good per unit resource, you should be focusing entirely on animal-related charities and totally ignoring humans

    But even if the combined moral value of all animals is greater than that of humans, isn’t it still possible that the resources you have just can’t be used very effectively to aid animals? This is probably not currently the case, so this is mostly nitpicking, but it sounds plausible that someone sceptical about current animal rights strategies could believe (or alieve) that the problems humans have are something solvable while the suffering of animals is something they just can’t do enough about for it to be worth their efforts, even if they know it’s a worse issue.

    I think the main reason I feel I should use some of my resources on humans too is that I’m not sure about the quality and intensity of the suffering animals experience. Even social mammals lack many cognitive features humans have, some of which I guess could be those that actually make human suffering so bad. So there’s some probability that I shouldn’t really care about animals at all, and some probability that caring about animals should swamp every other concern, and while I estimate the latter probability to be greater, I put some of my money on the former too just to avoid the dreadful feeling of “what if I’m not doing *anything* useful *at all*”. If I knew more about the quality and intensity of animal suffering, and knew for sure that it was something that matters in the light of my moral intuitions, I probably wouldn’t have a problem with only focusing on animal issues. Actually reading this post made me feel encouraged to focus on animals, because it reminded me of how huge an issue (expected-utility-wise) their welfare really is, despite the uncertainty.

  21. I recognize the motivation problem you describe (should we realistically strive to be moderately good or unrealistically insist on being maximally good and fail?) but I don’t have anything useful to add.

    I’d like to make a small contribution to a topic I don’t think effective altruists are thinking about enough.

    Even assuming we manage to convince all humans to become vegetarian, what would then be our obligations to animals who, in the natural order of things, are some other animal’s prey?

    If it were possible to sustain a population of lions on synthetic food produced by humans, would we have an obligation to do so to spare the lives of the gazelles on which lions normally feed?

    And if these larger populations of gazelles started overgrazing their habitats to the detriment of other animals, would we have an obligation to synthetically feed the gazelles as well?

    and so on, and so on…

    if you know of any animalist/altruist with a clear answer to that, I’d like to hear about their ideas.

    • Deiseach says:

      Kill them all. The only non-suffering animal is a dead animal.

    • Kaura says:

      Reducing wild animal suffering is actually something that many people have thought about, but the reason it’s not currently a significant issue in EA is simply that no one knows a lot about how it could be done in practice effectively or at all, so reducing livestock suffering is a reasonable priority.

      Wikipedia has an article on wild animal suffering that also lists some people who have written about the subject. This paper by Brian Tomasik is a pretty thorough introduction.

      • Thanks Kaura,

        this is very useful/interesting.

      • houseboatonstyx says:

        I’m tagging in late here, but a few quick impressions from the Wikipedia article, Tomasik’s paper, and some of their sources. Is there a convenient link to a cage match between advocates of ‘animals cannot suffer so vivisection of dogs and cats is fine’ and advocates of ‘even insects and (by implication) larvae and fetuses suffer so dreadfully that we should pre-emptively exterminate all wild animals for their own good’?

        Claims are made that wild animals are mostly miserable with disease, parasites, starvation, etc. One source of first-hand information would be taxidermists and others who process hunters’ takes.

        One paper cited cortisol levels (in feces) as indicating stress which would make happiness unlikely. Similar testing should be done with human prisoners, hospitalized humans, humans enjoying an exciting sport – and animals that have just been recorded doing strenuous play.

        I haven’t seen them making distinctions between wild animals in unspoiled habitat that they are adapted to, and those ‘reduced to raiding garbage cans’!

        ( Semi-quote marks indicate stuff quoted from memory and/or mashed-up. )

    • I think it’s hard to argue that being a gazelle-at-risk is worse than being a factory-farmed chicken. But yes, the end game may require intervention in nature at that level. (Note that managing fertility seems comparatively easy – if you can manage to keep all wild animals healthy and un-eaten, controlling their numbers should be a trivial-in-comparison additional expense.)

      • keranih says:

        On the scales of ‘dead/not dead’ and ‘sick/not sick’ being a farmed chicken is better than being a free range ‘wild’ chicken. Same with being a farmed deer vs being a wild-living deer.

        There are alternative scales, of course.

      • Deiseach says:

        Yes, but then they’re no longer wild animals, they’re pets. Because managing them includes operations and medicine when they’re ill, vaccinations, neutering/spaying, all the other veterinary intervention we do with domesticated animals – else you are not preventing suffering if you let your gazelles get parasites (or whatever ailments gazelles suffer).

        Isn’t this the same dilemma wildlife documentarians face? Do you let the cute meerkat pup get eaten by the predator because you’re observing, not intervening, and not interfering in the natural balance of life in the wild – or do you chase off the predator (which may starve, or have cubs of its own which will starve, with no meerkat to feed upon) in order to save the cute meerkat cub you’re filming?

  22. Justin says:

    Scott – As I said elsewhere, this moral calculus explicitly ignores the utility of the emotional payout for altruism. If you donate in ways that maximize your emotional resonance it decreases the costs of giving, therefore increasing the chances that you will continue or increase the proportion of your efforts directed at altruistic causes. In the end that can be a good thing.

    Obviously there needs to be a balance between giving to make yourself feel good and giving to what you rationally determine to be the most effective way to do the most “good,” but where we draw that line is no more arbitrary than how much we weight animal vs. human lives in the first place.

  23. Deiseach says:

    If it takes a thousand chickens to have the moral weight of one human, the importance of chicken suffering alone is at least equal to the importance of all human suffering.

    You had to have known someone would Google this to fact-check? Anyway, according to the Humane Society International, in 2012 over 66 billion chickens were farmed globally. According to U.N. population figures, global population of humans in 2012 was something over 7 billion.

    This is not one thousand chickens to one person, but something more like ten chickens to one person. If 1,000 hens = 1 human, then I have a lot more tasty chicken to eat before I have to start worrying about the suffering equivalent to eating one human.
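
    For reference, a quick check of that arithmetic in Python (a sketch only; the two figures are the ones quoted above):

    chickens_farmed_globally_2012 = 66e9   # Humane Society International figure quoted above
    humans_2012 = 7e9                      # U.N. population figure quoted above
    print(chickens_farmed_globally_2012 / humans_2012)   # ~9.4, i.e. roughly ten chickens per person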

    Okay. You were using rhetorical exaggeration to make a point. But my point is that there are animals I won’t worry about, e.g. mice. They stay out in the fields, my attitude is ‘live and let live’. They get into my house, keep me awake at night with their gnawing in the walls/roofspace, leave droppings and urine around the kitchen, I will put poison down and only feel the most momentary of twinges of compunction as I dispose of the cute little corpses in the days after. I agree that unnecessary suffering or cruelty for the sake of it should be avoided, and personally I see no sport in hunting, but you do what you can. Some people will travel the length and breadth of the country supporting animal rights protests, other people won’t.

    As an aside, I find it curious that GiveWell recommends 10% of income as charitable donations, since it reminds me of nothing so much as fundamentalist Protestant Christianity (at least, the American version I’ve encountered) which requires tithing (that is, contribute 10% to support of church purposes). As a Catholic, where (in Ireland at least) there is no set figure (the days of having the names of people and the amounts of “the dues” contributed read from the altar as public arm-twisting/naming and shaming being long-gone) and certainly people do not “tithe”, the coincidence amuses me 🙂

    • Vaniver says:

      This is not one thousand chickens to one person, but something more like ten chickens to one person. If 1,000 hens = 1 human, then I have a lot more tasty chicken to eat before I have to start worrying about the suffering equivalent to eating one human.

      How long is a hen’s lifespan, and how long is a human’s? Or, to put it another way, have you eaten over a thousand chickens in the course of your lifetime?

      Perhaps we don’t care about count of lives or deaths; perhaps we care about suffering-moments. Then the question becomes how much worse a hen is suffering than you are. Think back over the course of the last 45 days, and add up all the suffering you experienced. Suppose you had the possibility to replace that suffering with living in factory farm conditions for some amount of time. If it were tremendously short–say, a second–you would probably prefer the factory farm conditions. But as we increase the time in factory farm conditions, it gets worse and worse; when does it switch over to preferring your actual experiences?

      I live a pretty good life; for me, it’s probably around a minute, which implies my suffering is considerably less than a hundredth of a chicken’s suffering.
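
      To put rough numbers on that trade-off (my own arithmetic, not part of the comment; the one-minute figure above is the only input):

      my_suffering_equivalent_minutes = 1   # minutes of factory-farm life traded for 45 days of my suffering
      minutes_in_45_days = 45 * 24 * 60     # 64,800 minutes
      # Ratio of my suffering rate to a factory-farmed hen's, on this guess:
      print(my_suffering_equivalent_minutes / minutes_in_45_days)   # ~0.000015, far below a hundredth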

      • Baby Beluga says:

        IIRC, chickens live about 5 weeks on a factory farm, and the average American eats about 30 chickens per year. That means that if you’re eating chicken about as much as the average American, it’s sort of like you’re keeping around about 3 chickens (5 weeks / (50 weeks / year) * 30 chickens / year) alive at all times, as, like, extremely sad pets or something.
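
        Spelling out that back-of-envelope calculation (the inputs are just the figures quoted above):

        farm_lifespan_weeks = 5        # how long a chicken lives on a factory farm
        chickens_eaten_per_year = 30   # rough average-American figure
        weeks_per_year = 50            # the round number used above
        # Average number of "your" chickens alive at any given moment:
        print(farm_lifespan_weeks / weeks_per_year * chickens_eaten_per_year)   # = 3.0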

        I think that’s the way of quantifying the situation that makes the most sense, if your main concern is the suffering experienced by the chickens while they’re alive.

        Note that for other land animals (pigs, cows), these numbers are a lot more favorable, since pigs and cows are way bigger. For what it’s worth, I think cutting chickens and not pigs/cows out of your diet is a totally legit choice for this reason: assuming your concern is as above, it fits the rationalist 80/20 model nicely.

        There was a branch of PETA that ran a campaign called Eat the Whales, which basically argued this premise: eat big animals like whales, because you don’t have to kill/hurt very many to get a LOT of food.

        • Deiseach says:

          3 tormented chickens a year for my entire lifetime is still fewer than 1,000 chickens (even assuming I live to be 100, that’s 100 * 3 = 300 chickens).

          If 1,000 chickens = 1 human, I still come out ahead. We either need to bump up the value of chickens vis-a-vis humans (10 hens = 1 human?) or stop worrying about “the cumulative suffering of chickens outweighs the cumulative suffering of the humans in the world”.

          I do not say it is right to hurt hens. I do say I’ll worry about humans first, then chickens (or cows, or mice, or mosquitoes).

          • Baby Beluga says:

            Yeah… I mean, even if you think chicken’s suffering is less meaningful than your own, in a probabilistic way or otherwise, you should bear in mind that the sacrifice being asked of you is very small, compared to what’s being done to the chickens. The choice is not “put a human in a battery cage perpetually or put three chickens in a battery cage perpetually,” it’s “put three chickens in a better cage perpetually, or have a human perpetually eat food that is very slightly less tasty.” Eating food that is very slightly less tasty, I think, is probably less unpleasant than being kept in a battery cage by much more than a factor of 1000 (I also think that 1000-to-1 figure, thrown out by Scott as an arbitrary possible ratio, is too large, but I think I’m unlikely to convince you there).

            Also, just to clarify (I think you understood, but just to make sure), it’s not three chickens a year, or 300 chickens in 100 years, it’s just three chickens, full stop. Like, three eternally-suffering chickens, just hangin’ out. I dunno if that helps clarify the situation for you; it does for me. Note, also, that although animal-rights groups tend to bemoan factory-farmed animals’ absurdly short lifespans (under normal circumstances, chickens live about seven years), as utilitarians concerned for the chickens’ suffering, we’re actually rooting for their lives to be short, ironically, if we keep the conditions under which the chickens are kept fixed.

      • Deiseach says:

        But even at my worst, I think I probably have more self-awareness and raw intellectual capacity than a hen (there’s a reason there’s a saying “foolish as a hen”).

        Yes, there’s undeniable suffering in the worst of factory farming methods, and yes, these should be alleviated. But for me to suffer the same as a hen, I would need my capacity to understand I was suffering be greatly reduced (not my capacity to suffer, my capacity to know and remember and anticipate suffering).

        To say that a hen suffers as much as a human may, indeed, be comparable in raw measure: how much pain experienced by the neurons totted up over the life span of a hen. If even a hen’s small brain is continually stressed to the maximum of its capacity for pain, then that is cruel and the animal suffers and it should be prevented or alleviated as much as possible.

        But to say that a hen and a human would suffer in the same way – even if a human was put in a human-sized cage and treated the exact same way re: feeding, lack of exercise, lack of air and freedom and overcrowding and all of it as a battery hen is treated – is not true.

        For it to be true, either the hen would need a huge boost in brain capacity, complexity and all the rest, or the human would need to be drastically lobotomised and pared back in intellect.

    • g says:

      I don’t think it’s a coincidence. The long history of tithes makes 10% a handy Schelling point. Further bonuses (not altogether coincidental): it’s a nice convenient figure, and it’s of the right order of magnitude that most people can afford it without disastrous-feeling lifestyle impact but if it were much bigger that might stop being true.

    • JME says:

      Wait a minute. You’re saying that with a 0.001 coefficient on chicken life valuation, consideration of chickens isn’t that big a deal because there are 7 billion people and 66 million “adjusted” chickens, but you’d have to consider whether 66 million humans being treated comparably to chickens wouldn’t be a big deal.

      It’s like, if ISIL took over territory containing 80% of the population of Egypt, then locked them in huge, filthy warehouses without giving the average Egyptian enough room to stretch his or her arms (i.e., much, much worse than actual ISIL governance of the territories it controls), I think that would be kind of a big deal, even if we’re talking “only” 66 million people out of 7 billion.
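
      Roughly, the numbers behind that comparison (the chicken figure and the 0.001 coefficient are from upthread; the Egypt population is my own approximation for 2012):

      chickens_farmed_2012 = 66e9       # figure quoted upthread
      chicken_coefficient = 0.001       # Scott's hypothetical 1,000-to-1 weighting
      egypt_population_2012 = 83e6      # approximate, my assumption
      human_equivalents = chickens_farmed_2012 * chicken_coefficient
      print(human_equivalents)                            # 66 million "adjusted" chickens
      print(human_equivalents / egypt_population_2012)    # ~0.8, i.e. about 80% of Egypt's population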

  24. sass says:

    ‘except insofar as humans actions affect animals’

    This caveat is crucial to the point of being of overwhelming importance considering that the key limiting factor to our being able to improve (and ruin) the lives of animals, whether they are domesticated or wild or artificial, is our ability to affect the rate of our own development.

  25. vV_Vv says:

    If it takes a thousand chickens to have the moral weight of one human

    Interpersonal utility comparison is already a difficult and ill-defined problem when it comes to comparing the utilities of different humans. Comparing the utilities of a human and a chicken seems largely hopeless.

    Peter Singer talks about widening circles of concern. First you move from total selfishness to an understanding that your friends and family are people just like you and need to be treated with respect and understanding. Then you go from just your friends and family to everyone in your community. Then you go from just your community to all humanity. Then you go from just humanity to all animals.

    But this makes utilitarianism an essentially empty theory: by appropriately defining your circles you can accommodate whatever moral position (or lack thereof): sociopathic hedonism, nationalism, racism, religious intolerance, sexism, homophobia, transphobia, slavery, and so on can always be justified in terms of circles of concern.

    • Irrelevant says:

      That doesn’t make it an empty theory. Utilitarianism proposes an analysis method, not a value system.

      • vV_Vv says:

        Some forms of utilitarianism may have a descriptive value, but Scott and the Effective Altruists are trying to use utilitarianism as a normative theory.

    • Nicholas says:

      “The greatest good for the greatest number” includes a circle of concern (“the greatest number”) that’s about as large as it gets. People can choose to exclude things from the circle, but they aren’t supposed to. In the same way, a virtue-theorist could conceive of gluttony, ignorance, and cowardice as virtues, but it would overmuch stretch the term.

      • Irrelevant says:

        No, it’s “The Greatest Good for the Greatest Number of Morally Relevant Things.” You still have to decide what those are. “The Greatest Good for the Greatest Number of My Family Members.” and “The Greatest Good for the Greatest Number of People in My Political Faction.” are common responses.

        Though in modern society, it’s considered most polite to leave the limits implicit even to yourself, making it “The Greatest Good for the Greatest Number of People In Categories That I Typically Think About.”

      • vV_Vv says:

        “The greatest good for the greatest number” includes a circle of concern (“the greatest number”) that’s about as large as it gets. People can choose to exclude things from the circle, but they aren’t supposed to.

        “the greatest number” is ill-defined.
        Does it include all humans? Chickens? Trees? Bacteria? Rocks?

    • houseboatonstyx says:

      :: Peter Singer talks about widening circles of concern. First you move from total selfishness to an understanding that your friends and family are people just like you and need to be treated with respect and understanding. Then you go from just your friends and family to everyone in your community. Then you go from just your community to all humanity. Then you go from just humanity to all animals. ::

      Let’s see if I can get ahead of Deiseach this time.

      Chesterton, ORTHODOXY
      I use the word humanitarian in the ordinary sense, as meaning one who upholds the claims of all creatures against those of humanity. They suggest that through the ages we have been growing more and more humane, that is to say, that one after another, groups or sections of beings, slaves, children, women, cows, or what not, have been gradually admitted to mercy or to justice. They say that we once thought it right to eat men (we didn’t); but I am not here concerned with their history, which is highly unhistorical. As a fact, anthropophagy is certainly a decadent thing, not a primitive one. It is much more likely that modern men will eat human flesh out of affectation than that primitive man ever ate it out of ignorance. I am here only following the outlines of their argument, which consists in maintaining that man has been progressively more lenient, first to citizens, then to slaves, then to animals, and then (presumably) to plants. I think it wrong to sit on a man. Soon, I shall think it wrong to sit on a horse. Eventually (I suppose) I shall think it wrong to sit on a chair.

      Nice prediction that eventually chairs will become flesh, multiply without human involvement, and flock happily on the plains.

      Hm, Lewis said Jesus et al said this too:
      “Then you go from just your friends and family to everyone in your community. Then you go from just your community to all humanity.” Or at least to all Samaritans.

      • Deiseach says:

        houseboatonstyx, chairs are made out of wood. Wood comes from trees. To make a chair, you have to KILL a LIVING tree and then MUTILATE ITS CORPSE.

        But I guess you’re happy with your ghoulish collection of tree-corpse ‘furniture’, aren’t you? 🙂

        • houseboatonstyx says:

          Windfalls and driftwood. Given the right ratio between number of trees and number of humans, no problem.

    • Emile says:

      But this makes utilitarianism an essentially empty theory: by appropriately defining your circles you can accommodate whatever moral position (or lack thereof): sociopathic hedonism, nationalism, racism, religious intolerance, sexism, homophobia, transphobia, slavery, and so on can always be justified in terms of circles of concern.

      If the goal of your theory is to describe/model what humans actually value (descriptive ethics) instead of telling them what to value (normative ethics), this is a feature, not a bug!

      (Dammit, it seems my previous comment was eaten as spam, so I’m reposting it, and he keeps complaining about duplicate comments)

      • vV_Vv says:

        If the goal of your theory is to describe/model what humans actually value (descriptive ethics) instead of telling them what to value (normative ethics), this is a feature, not a bug!

        Whether or not utilitarianism, in some form, can be used to parsimoniously describe human moral intuitions is an empirical question.

        However, Scott and the Effective Altruists are trying to use utilitarianism as a normative moral theory, something they are supposed to trust over their moral intuitions.
        The problem is that all simple forms of utilitarianism deviate so much from standard moral intuitions that they appear “insane”, as Scott says.
        Instead of rejecting utilitarianism as an unworkable normative theory, utilitarians add more and more free parameters to reconcile it with their moral intuitions. At this point, however, utilitarianism has lost all its normative content. It’s essentially a rationalization of whatever moral intuitions they already had.

        As an analogy, Christians often claim to base their morality on the Bible. Throughout history, or even at any given time, Christians have held quite different moral positions, and yet all of them were able to support them by quote-mining the Bible.
        When confronted with the fact that the Bible has passages which are exactly contrary to whatever moral position they support, they would usually come up with some rationalization to argue that these passages don’t matter somehow.
        This enables them to maintain their righteous stance that their morality is based on the Bible while actually allowing them to have whatever moral positions they want.

        It seems to me that “circles of concern” utilitarianism is essentially doing the same thing:
        It enables its adherents to maintain the righteous stance that their morality is based on “rationality” or “humanism” or “altruism” while actually allowing them to have whatever moral positions they want.

        • Emile says:

          I’m not sure Scott is “trying to use utilitarianism as a normative moral theory”; see https://slatestarcodex.com/2013/04/08/whose-utilitarianism/#comment-2821

          Personally I don’t care much for the label “utilitarianism”; but I’m a bit dubious about a lot of talk about normative ethics; in my eyes “allowing people to have whatever moral positions they want” is *good*; if your theory of ethics clashes with people’s intuitions, then it’s probably because your theory doesn’t match people’s actual morality, so isn’t worth much.

  26. Animal suffering is bounded by the capacity of animal imaginations, which are uncertain. For this reason I see no problem in valuing human suffering as being of a different kind from animal suffering. A human holocaust victim can foresee death, imagine the cultural end of his or her group and traditions, and fear for his or her kin. Cattle may be aware of the suffering of others in the abattoir, but they probably do not intuit the death of their species, and indeed their species is preserved, more than many others, by being eaten. Wanting not to harm animals unnecessarily is a good, but not one that need override helping people.

    However, that defense only works, I think, for ‘mass animal pain’. What about unique cases at near-extinction levels? When there is only one tiger [insert sufficiently small number of any specific species] and its end is the end of the entire species, including any aesthetic pleasure we may get from admiring them forward through time, any evolution they could do, any medical value they might have [Dodo guano cures cancer, you know, doh!]: what proportion of our charitable/effective altruism efforts ought to go to the ‘one tiger genome scanning / cloning / species preservation’ fund, and how little to starving children, of which there are lots?

    Siimon BJ

    • > Animal suffering is bounded by the capacity of animal imaginations,

      Is that a fact?

      • anodognosic says:

        At least to the extent that human suffering is compounded by human imagination.

        • Compounding isn’t enough.

          The general idea is that only humans, and maybe a few other species, have Real Suffering, because they have SpecialExtraThing (which definitely isn’t an immortal soul).

          • anodognosic says:

            Ah, okay, that. I’m not entirely sold on the idea that something about humanity makes us particularly prone to actual suffering, but I think it’s reasonable. I don’t think it’s a SpecialExtraThing that is similar to but legally distinct from an immortal soul. I think it’s the possibility of conceiving that a state of affairs might be otherwise. It takes a kind of humanlike intelligence to do this. And it might actually be what we mean by suffering, as distinct from just pain.

  27. Outside of human consumption, farm animals provide no utility. That’s why few have any compunction about eating them, and that would be the humane thing to do, since they would be killed by natural causes if released. Modern farming is a contributing factor for increases in average life expectancy during the 20th century, so animal welfare advocates, I think, have their priorities wrong because ending modern farming would cause much suffering and loss of life. Given a finite quantity of resources (time, money, labor), and related to the categorical imperative, the best policy is the one that maximizes utility to benefit society as a whole in the long run. For example, that means we should spend more money on gifted education programs, since smart people tend to create more indirect value than everyone else, whether through entrepreneurship, public policy, or research.

    • Godzillarissa says:

      It has been mentioned above that many people think factory animals’ lives have a net negative value. Seeing how I’m one of those, I don’t get the argument along the lines of “They wouldn’t even be here if we didn’t eat them”. If one really believed that, mercy killings (and euthanasia) would be a no-go, as life, no matter how miserable, should be preserved. Which sounds a hell of a lot like fundamentalist Christians, so maybe it’s not that otherworldly.

      As for your other point, “modern farming is a contributing factor for increases in average life expectancy”:
      I do believe that that was so, but I doubt it still is. It might have helped us settle into a local optimum, but if everyone went vegan and society and the industries adapted, I’m sure we’d be better off.

      Since that will not happen at once, I believe that making small changes that make us worse off now (re: imperfect use of finite resources) will be a step in the direction of an even better local optimum, increasing net gain over time.

    • Whatever Happened to Anonymous says:

      >Modern farming is a contributing factor for increases in average life expectancy during the 20th century, so animal welfare advocates, I think, have their priorities wrong because ending modern farming would cause much suffering and loss of life.

      Well, we’re in the 21st century now, and if similar life expectancies can be achieved without killing animals, then I don’t know how much weight this argument has… of course, how it would be possible to go about enforcing such a thing brings up images of the worst sides of progressive authoritarianism.

      • Douglas Knight says:

        Thanks for highlighting that sentence. It is an impressive non sequitur. Unfortunately, your response is also stupid.

        • Godzillarissa says:

          Apart from insults, would you maybe want to contribute a bit more to this conversation by pointing out how that response was stupid?

          (I made a similar one, btw, but no one noticed…)

          • Whatever Happened to Anonymous says:

            In all fairness (and assuming that’s what he means), I have no idea why I posted that last sentence.

  28. Peter says:

    There’s something in Mill’s Utilitarianism here. “Duty is a thing which may be exacted from a person, as one exacts a debt.” and lots of other things from that paragraph and surrounding paragraphs. Mill is basically the sort of utilitarian to construct/derive something looking a lot like deontology (in some places, in others something looking a lot like virtue ethics) within utilitarianism – later in that section he has a justification of act/omission distinctions. I’m not sure whether I subscribe to that reasoning, but I seem to lean that way a lot of the time.

    As I see it, the GiveWell 10% principle and other things like that, they’re sort of saying, “it’s realistic to try to get 10% from people, it’s not realistic to try for more, if we go for 10% we’ll get better uptake”, possibly even “we think utility will be maximised if we set 10% as the standard”. 10% seems to be the sort of amount my conscience can exact from me. Well, not so much my conscience, more the “I’m not doing enough” existential feelings.

    • Godzillarissa says:

      Re: your 2nd paragraph, that is the exact same way that Peter Singer argues in The Life You Can Save, although he does ask people to give 1%, if I remember correctly.

      The difference in asking is actually pretty cool, though, because that creates several levels of participation (there’s also the giving pledge for rich people which asks 50%). As long as the lower (1%) is more visible than the higher (10%) that might be a great way to optimize for participation.

      • Peter says:

        Time for (some of) my reservations on this sort of thinking. These aren’t polished enough to constitute an argument yet, just reservations.

        Having this situation where you’re prescribing principles while being bound by different principles seems… Machiavellian – to me. Even if I swallow this qualm, the fact that I have it suggests that other people will have it too, I’m guessing stronger but more implicitly – it’s going to detract from people’s ability to act as a (source/conduit of) moral authority.[1] Maybe there’s some intermediate principle that sits between the deep utilitarianism and the 10% rules. Maybe we can find a self-ratifying principle that avoids that problem altogether. But isn’t the search for self-ratifying principles just Kantianism[2]? And doesn’t the deep utilitarianism then sink so deep that you can do away with it entirely?

        [1] There’s quite a lot of people who really really hate Singer, will happily describe him as a “monster” (that exact word), etc.
        [2] Well, Kantian enough for Parfit, maybe. I don’t think Kant himself would be satisfied.

        • Godzillarissa says:

          I’m not sure how ‘prescribing principles while being bound to different principles’ applies here. Would you explain?

          Re [1]: I haven’t really seen any damning criticism of Singer that isn’t motivated by ‘life at all costs’ fanaticism. And I myself don’t think that is ground enough to dismiss him as a moral authority, although I do see how others will think differently.

          • Peter says:

            As in, if you’re formulating the 10% rule under “Maximise utility”, but the rule says “Don’t bother maximising utility, just give 10%”, then there’s a clash there – maybe even a contradiction if you’re a Kant fan. There’s a larger discussion under the heading of “self-effacing utilitarianism” which at least two people who don’t like utilitarianism find utterly hilarious.

            Singer – I have a lot of FOAFs who are big on the whole disability activism thing and some of them outright say “I don’t want to get involved with a movement that has Singer in a prominent role”.

          • Godzillarissa says:

            Peter – got it, thanks. I didn’t realize how the 10% rule was worded exactly, but I see how that might turn out to be problematic.

            Re Singer (one last time): The concept of ableism is pretty new to me, so I didn’t have much time to think about Singer vs. Anti-ableists. I shall do that right now, if you’ll excuse me.

          • Peter says:

            (For reference I don’t know the exact wording, or even if there is a canonical exact wording. Like I say, not really an argument, just reservations.)

          • Deiseach says:

            Godzillarissa, re: Peter Singer and ableism, see Unspeakable Conversations, articles by Harriet McBryde Johnson who was a disabled (wheelchair-bound due to progressive illness) lawyer and disability rights activist, on her debates with Peter Singer in 2002 about his championing the rights of parents to euthanise their disabled babies.

          • Peter says:

            Deiseach: yep, that’s my source for “monster”.

          • Godzillarissa says:

            Deiseach – that was not at all what I expected. It was shockingly honest, extremely human and gave me a whole new perspective on that topic.

            Thank you so much for linking to that.

  29. onyomi says:

    Or one could stop being a utilitarian and just follow one’s intuition that animal life has a non-zero value, but helping humans is a more pressing concern, even if the sum of all animal suffering right now is greater, given that squaring logic with your intuition (or “sanity”) is what seems to matter in the end.

  30. onyomi says:

    I’m weird in that I have more sympathy for humans than animals and more sympathy for my family and friends than for strangers, but little, if any, more sympathy for people of my nationality and culture than for people of other nationalities or cultures. I don’t consider myself a self-loathing American–there are a lot of things I think Americans do better than most others, I just don’t see why moderately poor people in the US should upset me more than starving people in Bangladesh. This makes me sound very mean in debates about US poverty.

    • Wrong Species says:

      I try to turn it around on those people who try to guilt me but they usually don’t see the inconsistency. It amazes me when people who say that they care about poor people oppose open borders because it would ruin the welfare state.

    • Ghatanathoah says:

      Three things:

      1. I generally agree with all of Scott’s writing on how it is acceptable to not give all your money to charity, and things like that. However, I do not extend this attitude to political debates and policies. If you are going to get involved in politics, you have a responsibility to support the morally correct policies. You should not engage in any sort of favoritism.

      I believe this for two reasons. The first is that supporting preferential policies is closer to an “act” than an “omission,” and while the “act/omission” distinction might not make sense from a consequentialist perspective, it is an extremely useful Schelling Fence. The second reason is that, as Bryan Caplan points out, in politics we can afford to be more idealistic and less self-interested, since the odds of us affecting a policy outcome are fairly small. So it is much less unreasonable to expect people to be perfectly altruistic in their political activities than it is to expect them to be perfectly altruistic in their finances.

      2. It is never acceptable for someone to consider you mean or immoral because you do not show favoritism when you try to improve the world. If someone gets mad at you for giving money to poor Africans instead of poor Americans, they are a jerk. I believe that it is acceptable to not give all our money to the most efficient charities. But if someone else decides to give more than you, or give more efficiently than you, it is not acceptable to condemn them.

      The people who say you are mean are either fools or jerks.

      3. I think that it is much less psychologically demanding to insist that we value all strangers equally in the abstract than it is to insist we value strangers as much as we value our selves, family, and friends. So I do not think that insisting we help strangers as efficiently as possible is nearly as big a demand for utilitarianism to make as the demand that we help strangers more than ourselves or our friends.

      • Airgap says:

        If someone gets mad at you for giving money to poor Africans instead of poor Americans, they are a jerk.

        There’s a great solution to this in the Bible: Do all your charitable giving secretly.

  31. Crowstep says:

    The only issue that I can see is that, at least in the wild, life for animals is mainly horrific suffering. Gwern posted a good paper about this recently I believe.

    Based on that, an effective altruist would presumably want to tear down the rainforest to try and reduce the number of animals that get eaten alive every day. Of course, if you apply the same ‘life is suffering/potential suffering’ logic to humans, then you get the anti-natalist movement, which is something else that most effective altruists don’t subscribe to.

    I’m kind of glad that I haven’t got to the ‘donating 10% of your income’ part yet. Right now my choices are much simpler.

    • Deiseach says:

      It would seem that Jainism is one attempt to live very strictly doing as little harm as possible to living creatures, including

      Jains don’t eat root vegetables such as potatoes, onions, roots and tubers, because tiny life forms are injured when the plant is pulled up and because the bulb is seen as a living being, as it is able to sprout. Also, consumption of most root vegetables involves uprooting & killing the entire plant. Whereas consumption of most terrestrial vegetables doesn’t kill the plant (it lives on after plucking the vegetables or it was seasonally supposed to wither away anyway).

      For the rest of us, this is why (a) moral theology and (b) its subdivision, casuistry, were developed. Because no, no system is ever completely consistent, otherwise it becomes inhuman rigidity (e.g. the absolute prohibition on lying means that when the persecution comes knocking on your door looking for the fugitive bishop, you direct the torturers to his hiding place, because telling them you don’t know where he is hiding is a lie, and lying is a sin).

      • houseboatonstyx says:

        The link on foods makes some fine firm distinctions among plant types. But the broad soft overall thing Jains told me was that you do the things that most people do — just less of each harmful thing. Drive a car, but minimize driving at night, because that’s when the bugs are crossing the road.

    • Ghatanathoah says:

      The only issue that I can see is that, at least in the wild, life for animals is mainly horrific suffering. Gwern posted a good paper about this recently I believe.

      I very much doubt this. Many animals die in fairly horrific, painful ways, but I am not sure that the majority of the animal’s life consists of pain. It seems more likely that it consists of wandering around, looking for food, and sleeping.

      Many of the essays on wild animal suffering I have read have an assumption, sometimes implicit and sometimes explicit, that dying a painful death is so awful that one should be willing to give up years of life to avoid it (and therefore, by extension, short-lived animals who die painful deaths are a net negative). I think this assumption is terrible, and is based primarily on the fact that anticipating pain is often worse than the pain itself. I personally would at most give up one day per minute of pain, probably less. And based on my experiences with animals, I think they are more like me than they are like the algophobes who wrote those papers.

      • keranih says:

        Most wild animals spend their time so hungry that they risk their wellbeing by exposing themselves to predators and the elements in order to find food. Most wild animals spend significant portions of their lives subject to disease and infested with pests. Most wild animals die at the same rate as they are born, every year, from hunger, disease, and other animals.

        The same metrics which say that wild animals are living better lives than farmed animals would have it that street beggars in New Delhi are living better lives than apartment dwellers in NYC.

      • houseboatonstyx says:

        Many animals die in fairly horrific, painful ways [….]

        I am not sure that even this is correct. A large canine will grab a small prey animal and snap its neck in almost a single gesture — too quick for pain to register. Even a cougar pulling down a large deer is over quickly.

        • Gbdub says:

          But the deer spends a great deal of its life in fear of being eaten, or starving, or what have you, and struggling desperately to avoid that. Painful death is not the only negative utility.

          • houseboatonstyx says:

            Which deer? Where? – That’s rhetorical, I’ll read the Wikipedia reference.

            But an observer from Mars would note that we humans spend a large portion of our day on total alert, at risk of being killed by a car, and almost all of our time working and worrying. That doesn’t mean our lives are miserable, or that we’d be better off in (to take a best case) Assisted Living.

  32. Tiago says:

    I think your premise about what is different about effective altruists gets it wrong. It is the “effective” that stands out, not the “altruists”. Public debate already centers around the idea that foreigners have a lot of value in a number of ways – humanitarian interventions, the climate change debate, tsunami relief, etc. – even if not as much value as nationals. What effective altruists mean to do is say: “well, if we all recognize that we value foreigners this much, what is the best thing we can do to help them the most?” To make an analogy with the animal welfare movement, an effective altruist would be someone who was less concerned with dogs being eaten in South Korea and a lot more concerned with chickens.
    And I think this is an important distinction for trying to attract more people to the effective altruist movement: you don’t necessarily have to be any more altruistic than you already are, but you could be a lot better at it. And that makes a lot of difference.

  33. Barnabas says:

    The temptation is to use concern for the outer circle, which you rarely encounter or only encounter on your terms, as justification to lord it over a more inner circle.

  34. Arthur B. says:

    You aren’t a pure utilitarian, and that’s fine. You’re human, and humans aren’t pure utilitarians, nor are they pure deontological ethicists, or virtue ethicists. Evolution has given you some innate moral preferences, and the ability to acquire some from your culture and environment. Moral preferences are useful because they facilitate kin selection, and they allow for credible commitments when cooperating with others.

    However, if moral preferences appeared to you as mere preferences, like a desire for ice cream, you’d be tempted to go around them when convenient, thus defeating the purpose. Therefore, genetic and cultural evolution have done something tricky: they invented moral realism. It’s really hard not to think of moral rules as external, and that way of thinking is pervasive in the language: we say “murder is wrong”, not “I dislike murdering people”, as if murders themselves were surrounded by an invisible field of “wrongness”.

    However, much like you can reason and understand that the arrows in the Müller-Lyer illusion have the same length, you can understand that morality is a creation of our minds, and not something external we must aspire to.

    Since morality is evolved, it is of course messy. Many of the moral propositions we hold to be true (not in the physical sense of “real”, but in the sense of a formal system) turn out to be contradictory. These contradictions are at the core of many moral disputes (chopping up hotel guests for organs) or paradoxes (trolley car problems). The reality is that these paradoxes are inevitable, and that “debugging” one’s morality will imply biting some bullet and throwing some principles out of the window.

    Utilitarianism draws from a few shared moral intuitions, but reaches conclusions that are so out of line with the rest of our moral principles that it’s no longer biting bullets, but heavy artillery shells.

    To come back to the original matter, altruism represents the concern we wish to extend to other moral agents beyond ourselves. Altruism towards my family means for instance that I empathize with their welfare, and thus enjoy helping them when I can. Altruism can extend to animals: I wouldn’t needlessly kick a rabbit, or hurt a pet rabbit that I would have grown fond of. However, I would have no problem eating a rabbit. Why? Because not all rabbits are equal in my mind; I have favorites, we all do. If you throw out a moral principle, throw out the moral principle that altruism should be blind.

  35. Salem says:

    Everyone knows that it’s foolish to start formulating theories before looking at the data. But somehow this becomes controversial when applied to moral theories. Rather than think up general ethical rules and then try to apply them, you need to build out your moral database, case-by-case, and then divine the rules out of the data. Similarly, you shouldn’t start with ethical conundra, and try and divine the general rules out of them, because “hard cases make bad law.” Instead, start with a multiplicity of easy cases, each subtly different, and work your way up to the hard case. This is a proven methodology.

    So you’re looking through the wrong end of the telescope. Because you can’t solve the hard case, you find yourself all at sea on a very easy case. You will be here until Domesday if you try to work out the (almost impossible-to-determine) exact contents of our duties to animals, then use that to work out whether we can eat meat. Instead, take the easy cases (of course it’s OK to eat meat, of course it’s wrong to torture a dog) and use that to help build up a picture of our duties to animals.

    • Yes, this. Ethics from first principles is as useless and frivolous as physics from first principles.

      • John Schilling says:

        +1. I intend to steal this and use it regularly; apologies in advance if I forget to credit you when I use it here.

    • blacktrance says:

      How do you know whether your solution to an easy case is correct? If all you’re relying on is your intuitions, someone else can reach a different conclusion and there may not be any way to reconcile the two. That’s fatal to every theory that’s not moral relativism.

      • Salem says:

        But this is an even greater problem for people who start with an ethical theory. If you don’t have any obvious starting points to test from, how do you know the theory is correct?

        I also wonder why some people consider hypothetical disagreement so fatal in ethics, but not in other fields. Another example of making observations, then divining laws out of them is “Physics.” That too relies on agreed and observable starting points. Some people are (or pretend to be) radical skeptics. Does that undermine your belief in Boyle’s Law? You may not know how to get those people to agree that Quantum Electrodynamics is a valid representation of reality, but did that bother you in any way as you studied the textbook? Then you know how I feel about people who want to deny, say, the wrongness of murder.

        • blacktrance says:

          If you don’t have any obvious starting points to test from, how do you know the theory is correct?

          Intuitions aren’t the only possible starting point. Another is Divine Command. A third is instrumental rationality.

          I also wonder why some people consider hypothetical disagreement so fatal in ethics, but not in other fields. Another example of making observations, then divining laws out of them is “Physics.”

          The analogy to physics would be if I repeatedly observed acceleration in a vacuum to be 9.8 m/s^2 and you observed it as 1 m/s^2. In physics (at least ideally) when different people repeat the same experiment, they get the same results. But if you limit the grounding of ethics to moral intuitions, different people considering the same thing will consistently get different results. In physics, there’s one territory that we’re all trying to describe. In intuitionist ethics, the real territory is inside our heads (so there are as many territories as there are people), but we try to project it outside ourselves as if it were a fact about the external world.

          • Salem says:

            I didn’t say anything about intuitions. I said easy cases. You need to build up a database of known results, and then divine general laws out of this, not vice versa. How you get those known results is up to you. If you are fortunate enough to have received commands directly from God, then lucky you – that should certainly help build up the database.

            There are people who deny that there is one territory we’re all trying to describe in physics, you know. What do you say to them?

          • blacktrance says:

            But how you get those results is the point of contention. Some say you have to start with first principles, and others say that you should use intuitions. I’m arguing against the latter camp.

            There are people who deny that there is one territory we’re all trying to describe in physics, you know. What do you say to them?

            I point them to the Sequences, or, more often, I judge the inferential distance to be too large and disengage from them.

          • Salem says:

            Some say you have to start with first principles

            It’s not clear what you mean by “first principles.” I am open to any method of getting easy, known results, and then divining the abstract, general rules from those results. That could include “first principles.” But I am worried that what you mean by “first principles” is abstract, general rules. It is obviously not sensible to start with abstract, general rules, apply those rules to get general results, and then claim that you are divining the rules from your results. All you’re really doing is asserting some rules, which you have no method of knowing are correct.

            I point them to the Sequences, or, more often, I judge the inferential distance to be too large and disengage from them.

            Right, but do radical skeptics and so on make you doubt physics? Of course not. Physics isn’t a game where you have to persuade other people, it’s the observation of an external world which remains valid whatever we believe about it, and even if we deny there is an external world to observe. Similarly, if someone says they don’t think murder is wrong and all morality is relative, that doesn’t make me doubt ethics. It just makes me think that they are a moral monster or (rather more likely) they are striking a pose.

          • blacktrance says:

            All you’re really doing is asserting some rules, which you have no method of knowing are correct.

            You don’t assert the rules arbitrarily, you derive them from something non-moral (as in the constructivist approach), God’s will (Divine Command), etc.

            Physics isn’t a game where you have to persuade other people, it’s the observation of an external world which remains valid whatever we believe about it, and even if we deny there is an external world to observe.

            The problem isn’t about convincing people, but about determining who’s right. When two people disagree about an observation about the external world, it is in principle testable in a way that resolves the disagreement. But that’s not the case in intuitionist ethics – disagreements are in principle irresolvable because there’s no way to determine who’s right.
            Someone saying murder isn’t wrong doesn’t make me doubt ethics, but that’s because I’m not a moral intuitionist. If I were, I would face the problem of determining whose intuitions are correct, and how we’d know that.

    • Trying to divine reason from moral intuitions is… not easy. And common law doesn’t always make sense either. 😉

    • Baby Beluga says:

      Whoa! “Of course it’s OK to eat meat”? You can disagree with me on the answer to this one, but surely it should be admitted that this is not an “easy case”?

      • Irrelevant says:

        In its literal, “there is some meat here right now, am I morally allowed to eat it?” interpretation, it is indeed an easy case.

      • thirqual says:

        You can actually argue it is not OK to not eat meat (on a societal level, not on an individual one, of course).

      • Salem says:

        There are no arguments even vaguely plausible for not eating meat. At best, there are arguments for why we should treat farm animals better.

        But hey, I’ll meet you half way. If animals are creatures of moral concern, then it’s wrong not to treat them as creatures of moral duty. So I’ll accept it’s wrong to eat meat when you convince lions of the same.

        • Baby Beluga says:

          I’ll accept it’s wrong to eat meat when you convince lions of the same.

          Why only then? Are you so threatened by lions in your everyday life that you’re unable to consider your moral duties?

          • Salem says:

            I didn’t say only then. I’m perfectly willing to consider my own moral duties, but as indicated above, there’s no argument for vegetarianism. In the spirit of comity, I was giving you an additional way to win me over.

  36. Rob says:

    “If it takes a thousand chickens to have the moral weight of one human, the importance of chicken suffering alone is at least equal to the importance of all human suffering.”

    But the number of chickens at any point is about 3x the number of humans. Are you saying their suffering per chicken is clearly 300x worse?

    http://reflectivedisequilibrium.blogspot.no/2013/09/how-is-brain-mass-distributed-among.html

    • gattsuru says:

      The majority of factory chickens are killed within three months, either for meat or to cull males. The issue is not just that there are three times as many chickens as humans at any moment, but that over a single typical human lifespan (75 years) roughly three hundred of those three-to-one chicken generations will pass, so something like 900 times as many chickens as humans will have been raised, held in conditions significantly worse than the average human’s and killed in conditions significantly worse than those in which the average human dies.
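
      (A back-of-envelope sketch of that arithmetic; the inputs are just the rough figures above, so this is illustrative rather than exact:)

      ```python
      # Illustrative sketch only; inputs are the rough figures from this thread.
      chickens_alive_per_human = 3        # standing population ratio (Rob's ~3x figure)
      human_lifespan_years = 75
      broiler_lifespan_years = 0.25       # "killed within three months"

      generations = human_lifespan_years / broiler_lifespan_years   # 300 chicken generations
      chickens_per_human_lifetime = chickens_alive_per_human * generations

      print(chickens_per_human_lifetime)  # -> 900.0, the ~900x figure above
      ```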

      ((I kinda reject the base assumption, though. The level of depth of experience that chickens appear to have is close enough to nil to be a rounding error even given the huge numbers involved — objections to unnecessary animal cruelty aimed at chickens or similarly complex animals feel like they are as much about what the actions do to the human’s mindset as about anything else.))

      • Douglas Knight says:

        Scott said “suffering,” not “deaths,” so I think Rob is right to use days of life.

      • Ghatanathoah says:

        Comment removed because Douglas Knight said the exact same thing first and better.

        3:42pm.

    • Carl Shulman says:

      Note that even if one weights by information processing on that order of magnitude, the total for wild animals would still be larger than that for humans. Total wild animal information processing is a lot larger than human, even though total farm mammal/bird information processing is a lot less (and of course that information processing is subdivided among more individual, shorter-lived animals).

      Also, while at that discount rate the chicken’s life won’t be as bad as the human’s is good, the gains humans get out of eating chicken are only a small portion of their overall welfare. There is a very big wedge between ‘factory farming outweighs all human ills’ and ‘factory farming is worse than meat-eating is good.’

      Zooming out, in the future there are likely to be uses of matter and energy much more efficient than human brains at producing experience, intelligence, and complexity, so this sort of consideration doesn’t alleviate the utility monster aspect of utilitarianism (ultimately devouring all the utilitarians once their instrumental value ends). Any response to that would ultimately be grounded in non-utilitarian concerns (including compromise with non-utilitarian agents) such as contractualism.

      https://slatestarcodex.com/2014/08/24/the-invisible-nation-reconciling-utilitarianism-and-contractualism/

      • Ghatanathoah says:

        Zooming out, in the future there are likely to be uses of matter and energy much more efficient than human brains at producing experience, intelligence, and complexity, so this sort of consideration doesn’t alleviate the utility monster aspect of utilitarianism

        Just upload human brains to computers, then upgrade them into superhumans (after carefully researching how to do so safely). We don’t have to be fed to utility monsters because we’ll become them.

        • Carl Shulman says:

          You’ll then face a similar problem for different software.

          • Ghatanathoah says:

            When I said “Upgrade them into superhumans” I meant to convert them into whichever type of software is most efficient. I don’t think it’s impossible to rewrite our software into something more compact and efficient while keeping the parts of us that make us who we are (our values, personality, and memories).

  37. the physicist says:

    I’m 99% sure someone has brought this up already and it’s obvious, but Utilitarian moral calculations were initially formulated to answer questions of public policy.

    Using the circles of Singer, once we know that we value animals, we can use Utilitarianism to create policy that minimizes the harm to animals, or we can use Utilitarianism to cause the least sacrifice within our family… but we would not even compare the Utility of helping animals vs that of helping our family. Utility clearly works well within one circle, and faces challenges across them.

    But it is in attempting to step back and remove oneself from the calculation that one makes oneself an altruist. So are you less of an altruist because you don’t take the final step to animal liberation? I’ve never really seen it that way.

  38. Evan says:

    I’m not sure I find the argument laid out in your 2nd-4th paragraphs very convincing. Suppose there’s a world with two charities–HumanHelp and AnimalAid–and you’ve decided to give $1,000 to charity. The decision of which charity to donate to shouldn’t be a function of whether total human suffering or total animal suffering is greater, as the reasoning in these paragraphs seems to imply (am I missing something?). The decision should be based on which charity alleviates more suffering with an additional $1,000. That depends on a lot of factors, including the ease with which animal vs. human suffering can be alleviated (which isn’t clearly a function of which type of suffering is greater in aggregate) and the administrative effectiveness of each of the charities.

    Those reasons alone could justify contributions to one charity even if the aggregate suffering targeted by the other charity is greater. But perhaps more important is that even if you assume similar efficacy from each charity, there are likely diminishing marginal returns to dollars spent on Animal suffering and dollars spent on Human suffering. That is, the first $1M spent on human suffering might address easy problems–e.g., delivering known inexpensive cures to diseased individuals. The next $1M might have to be spent addressing harder problems–e.g., discovering a cure for rarer diseases. You can imagine a similar scenario for Animals.

    In this world, the efficient or optimal donation pattern–in aggregate–would be such that the next dollar spent on Animal welfare has the same impact in terms of suffering (converted at whatever exchange rate you use between human suffering and animal suffering) as the next dollar spent on Human welfare.

    A clear argument for donating to one charity when you believe the other addresses a species with larger aggregate suffering might be that you believe the marginal usefulness of dollars is greater for that first charity–perhaps because others, influenced by the fact that the second charity addresses a form of suffering that is greater in aggregate, have donated excessively to that second charity.
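
    A toy numerical sketch of this marginal-returns point (the charity names are the hypothetical ones above; the square-root curves, weights, and budget are made-up assumptions, not data):

    ```python
    import numpy as np

    # With diminishing returns, the optimal split equalizes the *marginal* relief
    # per dollar, regardless of which kind of suffering is larger in aggregate.
    def human_relief(dollars):
        return 10 * np.sqrt(dollars)   # easy problems get solved first, then harder ones

    def animal_relief(dollars):
        return 4 * np.sqrt(dollars)    # already converted at some human/animal exchange rate

    budget = 1000
    to_humans = np.linspace(0, budget, 10001)
    total = human_relief(to_humans) + animal_relief(budget - to_humans)
    best = to_humans[np.argmax(total)]
    print(f"give ${best:.0f} to HumanHelp and ${budget - best:.0f} to AnimalAid")
    # -> roughly $862 / $138 with these made-up curves
    ```

    Changing the assumed curves changes the split; the point is only that the optimum sits where the two marginal returns per dollar are equal.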

  39. Marc Whipple says:

    This was a Bloom County story arc. Here’s how it started:

    http://www.gocomics.com/bloomcounty/1987/03/23

    Here’s what it eventually led to:

    http://www.gocomics.com/bloomcounty/1987/03/27

    And here’s how it ended up:

    http://www.gocomics.com/bloomcounty/1987/03/28

  40. Jiro says:

    I think Scott has just discovered that his principles are an extremely poor formalization of human morality. If not only has Scott not actually followed them, but neither has anyone else, that indicates to me that the principles are unworkable and should be discarded.

    Remember reason as memetic immune disorder? Effective altruism seems to me like what you get when the same thing is applied to altruism instead of religion. So instead of terrorists following the logical consequences of believing in the Koran, and deciding to blow people up, you get Scott following the logical consequences of altruism and deciding that he should be doing things that are utterly ridiculous.

    (Also, there’s a solution for animals: put a low number on how valuable animal suffering is. It’s hard, as Scott says, to use a number that puts them on par with humans, but it’s not hard to use a number that puts them much lower. It just seems unintuitive because people, when asked to pick low numbers, pick “low” numbers within a narrow range, such as 1% or 2%, rather than 0.00000000000000001.)
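
    (A rough illustration of why the size of the “low” number matters, reusing the ~900 chicken-lives-per-human-lifetime figure from gattsuru’s comment above and treating suffering as proportional to lives lived, which is itself an assumption:)

    ```python
    # Illustrative only: how the chosen per-animal weight changes the comparison.
    chicken_lives_per_human_life = 900   # rough figure from the thread above

    for weight in (0.01, 0.001, 1e-17):  # "low" numbers people tend to pick vs. a truly tiny one
        ratio = weight * chicken_lives_per_human_life
        print(f"weight {weight:g}: chicken suffering ~ {ratio:g}x human suffering")
    # -> 9x, 0.9x, and 9e-15x respectively
    ```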

  41. Jos says:

    Isn’t it enough to say that I value animal welfare at almost zero? IMHO:

    – All else being equal, it’s better not to torture a human, dog, or jellyfish than to do so.

    – All else being equal, it’s better to torture a jellyfish than to torture a dog.

    – All else being equal, it’s better to torture a thousand dogs, or a million, or all of them, than to torture a 5-year-old human child. (Since all else is equal, we’re not going to analyze the resource or opportunity cost of all that torturing – I assume a group of aliens has shown up and is going to torture one set or the other.)

    Weirdly, I think all else being equal, it’s better not to torture a videogame character than to torture her. (I’m very uncomfortable playing dark side paths of Star Wars games or Epic Mickey). But I’d rather torture all the video game characters than one jellyfish. I don’t know if this means my moral intuitions about the value of non-me entities are faulty, or if I think videogame characters have moral rights, just as many orders of magnitude below ants as ants are below humans.

    • Peter says:

      There was apparently an experiment where someone replicated the Milgram experiments, except that the “student” was a stuffed teddy bear. Apparently the “teachers” (i.e. the real experimental subjects) reacted pretty similarly to how they reacted in the normal version.

    • FacelessCraven says:

      suggestion: a big part of what makes torture wrong is its effect on the torturer, not merely the tortured. That is why you disapprove even of “torturing” video-game characters.

      Likewise, a large part of why torturing a dog is wrong is because it makes you into a dog-torturer. The pain the dog experiences is also bad, but not nearly as bad as what you are doing to yourself.

    • Princess Stargirl says:

      It actually squicks me out that you could accept the torture of so many dogs. Millions of chickens or fish or rodents, fine. But not dogs! Dogs are precious!

      • Jos says:

        Would you ask the aliens to torture the 5 year old child instead?

        (They have picked out a 5-year-old child from your town, and promise he or she isn’t related to you and that they won’t tell anyone that it was you who made the choice.)

  42. Sam says:

    Adam Smith covered the circles of concern issue quite nicely, I’ve always thought, in the Theory of Moral Sentiments:

    “Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connection with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident had happened. The most frivolous disaster which could befall himself would occasion a more real disturbance. If he was to lose his little finger to-morrow, he would not sleep to-night; but, provided he never saw them, he will snore with the most profound security over the ruin of a hundred millions of his brethren, and the destruction of that immense multitude seems plainly an object less interesting to him, than this paltry misfortune of his own. To prevent, therefore, this paltry misfortune to himself, would a man of humanity be willing to sacrifice the lives of a hundred millions of his brethren, provided he had never seen them? Human nature startles with horror at the thought, and the world, in its greatest depravity and corruption, never produced such a villain as could be capable of entertaining it. But what makes this difference? When our passive feelings are almost always so sordid and so selfish, how comes it that our active principles should often be so generous and so noble? When we are always so much more deeply affected by whatever concerns ourselves, than by whatever concerns other men; what is it which prompts the generous, upon all occasions, and the mean upon many, to sacrifice their own interests to the greater interests of others? It is not the soft power of humanity, it is not that feeble spark of benevolence which Nature has lighted up in the human heart, that is thus capable of counteracting the strongest impulses of self-love. It is a stronger power, a more forcible motive, which exerts itself upon such occasions. It is reason, principle, conscience, the inhabitant of the breast, the man within, the great judge and arbiter of our conduct.”

  43. Shmi Nux says:

    So your solution is that of an economist: division of labor? You do simple moral things by yourself, but contract out complicated and fuzzy decisions and actions to a pro? Seems reasonable.

  44. doe says:

    Sounds like Dust Specks vs Torture. If you don’t think some number of dust specks equals torture, why would you think some number of chickens’ suffering equals that of a human?

  45. blacktrance says:

    Ethical egoism solves a lot of these problems. The moral circle is a flawed model that assumes utilitarianism at the outset. Rather, your own well-being is the ultimate justification of action – your happiness is the terminal value. Beyond that, everyone else’s well-being is an instrumental value that promotes your happiness to different degrees, and the increased welfare of those close to you makes you happier than the welfare of strangers. Thus, you should push one stranger to save five in the trolley problem because you assign positive value to the life of a stranger, but not push a friend in front of the trolley (because you value them more highly), and definitely shouldn’t sacrifice yourself. When it comes to animal welfare, you help them to the extent that makes you feel better, which may be not at all. This approach is consistent, meta-consistent, and sanity-preserving, but it may be counterintuitive. That, however, is no problem, because intuitions are an incredibly poor source of morality. Contrary to many of the commenters, rather than starting out with simple cases and determining how you feel about them, you should think really hard about what moral principles are correct and why they’re correct, and once you’ve achieved that to your satisfaction, apply them to specific cases and ignore what your intuitions say.

    More generally, when it seems like there’s a conflict between morality and sanity, it means you’re misunderstanding morality. Morality is supposed to promote your well-being, of which sanity is an important part.

    • What’s the source of morality better than intuition?

      • blacktrance says:

        One possible answer is that since intuitions don’t work, there’s no morality at all. But that’s not my answer – I say that the best (and only successful) source of morality is instrumental rationality.

        • How does that work?

          The standard approach seems to consist of defining morality as whatever the individual wants to do, and not much else.

          That doesn’t match my intuitions about which acts are moral, OR what morality means.

          Plus it’s only non-intuition-based in the sense that it’s not based on anything.

          • blacktrance says:

            Morality is what one should do – that’s generally agreed upon. And what would be instrumentally rational for one to do may not be what one wants to do, so it’s not just “do whatever you want”.

            Plus it’s only non-intuition-based in the sense that it’s not based on anything.

            It’s based on what one would want after an ideal process of rational deliberation about one’s desires.

          • You still haven’t said where moral shoulds come from. I agree that there are instrumental shoulds, and I agree that they can differ from desires. But differing from desire is a necessary condition of being a moral should, not a sufficient condition. Slamming my hand in the door isn’t a moral obligation just because I don’t want to do it.

            The claim that wants just are shoulds isn’t based on anything.

          • blacktrance says:

            Moral shoulds, to the extent that they exist, come from (ideal) wants. You could say that means that there aren’t any moral shoulds (only instrumental shoulds), or that there are moral shoulds and that they are instrumental shoulds. I prefer the latter, because at least some instrumental shoulds qualitatively feel like moral shoulds, and also because they answer questions delineated as “moral”.

          • I don’t see why reflection on one’s own desires should be what distinguishes moral shoulds. Reflection on other people’s desires seems a much better candidate. Which is based on intuition. Which is better than nothing.

          • blacktrance says:

            I don’t see why reflection on one’s own desires should be what distinguishes moral shoulds.

            Because it’s truth-apt, says what you should do, provides answers to moral questions, and qualitatively “feels like” morality.

            Reflection on other people’s desires seems a much better candidate. Which is based on intuition. Which is better than nothing.

            Why is it better than nothing? Given the variety of moral intuitions that people have, we have little hope that they converge to one thing – when two people disagree about morality, the truth may be impossible to determine without appealing to something outside of intuition.

  46. Psy-Kosh says:

    Actually, it’s not obvious that, even given the above assumptions, the theoretical ideal effective altruist ought to be devoting all effort to animal welfare.

    The real question for _that_ would be the marginal value of a dollar/resource/effort unit applied to whatever.

    i.e., it’s not “when added up, even taking into account the lower but nonzero value of various animals, the net amount of badness caused by that outweighs bad stuff happening to humans” but “given a dollar/unit of effort/etc, can applying it to some form of animal welfare produce a greater amount of net good per unit effort”, right?

    Some units of effort may be less fungible than others, but at least for some stuff, the answer there to me is, conditional on all of what you said, not as obvious.

  47. John Schilling says:

    If donating ten percent of my income to Givewell-approved charities is sufficient for me to be deemed economically altruistic, does eating a 10%-vegan diet satisfy my ethical requirements w/re animal rights?

    I think if you insist on raw consequentialism, the only way to avoid patently nonsensical results and/or extreme self-loathing is to use a zero baseline: your obligation is not to use all of your available resources to create the best of all possible worlds, but simply to not make the world any worse than it is. Nobody does the former, nobody ever will, and as your opening quote suggests that just winds up with you devoting your time and talent to the creation of excuses and rationalizations.

    “Do no (net) harm”, that’s an attainable goal, and if achieved results in a world divided between isolationist hermits and people who are each making the world at least a little bit better. How much better, and in what ways and for whom, that’s up to you. W/re animal rights, don’t torture puppies. Don’t run the evil sorts of factory farms, unless maybe that’s how you’re preventing some great famine, and don’t express a preference for factory-farmed meat in your diet. If you want to do more than that, good for you, and the rest of us will probably think more highly of you for it, but it’s not obligatory.

    For the vast majority of humans who aren’t consequentialists, even if some of us will fake it during armchair philosophy sessions, it’s even easier. The rules mostly express ranked preferences rather than quantifiable values and are mostly prohibitions rather than commandments. Kindness to the outgroup is a virtue almost everywhere the concept of “virtue” still holds meaning, but loyalty to kith and kin is a higher one.

  48. pjz says:

    Note that ‘resources devoted to sanity’ are also ‘resources devoted to charity’… but longer term. Limiting case: if you keep no resources for self-upkeep and end up lowering your earnings, you’re lowering the amount of resources you can allocate to charity. If you really wanted to do all the math, you’d need to get a good actuary to build you a nice model that you could then try and optimize.

  49. Emeriss says:

    Curious – why would you expect your desires and values to be consistent, given that they’re godshatter? You could try to self modify to create consistency, but I’m not sure that it’s really there unless you do that. I’m reluctant to self modify in such a way, as I think this might shift my values to something that my current self does not endorse. Given that enforcing a condition of consistency enhances the impact of any given value, perhaps I’m not confident enough that my values = my CEV to be untroubled by the possibility that a small change of values in a consistent framework may have a very significant impact on an agent’s (my) actions. (Not even sure that my CEV values are consistent, complicating matters.)

  50. Why don’t you simply stop being or trying to be or pretending to be a utilitarian, and just care about some beings more than others, just like everyone else?

    • Godzillarissa says:

      Yeah Scott, have you tried, like, not being sad?
      Sorry, I could not possibly resist that one.

      Maybe it’s because trying to better oneself is part of some people’s self-constructed sense of life? It is for me, although I wouldn’t want to talk for Scott.

    • Princess Stargirl says:

      Everyone cares about some beings more than others. This has almost nothing to do with whether one is a utilitarian or not.

      Morality is about what you should do, not what you will do.

      • So if one cares about some beings more than others, why be a utilitarian?

        • Irrelevant says:

          Because it’s still a useful analysis method even if you start weighting the variables.

          • Sure, that’s fine. But why does utilitarianism have any moral consequence? Since one prefers some people more than others, why not act that way?

          • Irrelevant says:

            Err, it doesn’t. I may be the wrong person to ask, but my understanding has always been that utilitarianism is a method, not a purpose. Declaring your moral goal is “maximize global utils” is making a mathematical translation of “show love for all mankind.”

  51. Desertopa says:

    So, this is a matter I’ve given some consideration to before. I think if I were to assess how much I value the worth of a chicken compared to a human, it would take rather more than a thousand chickens to add up to a human, but I’m not averse to the supposition that animal suffering swamps human suffering as a moral wrong. I think though that this is an issue which is probably best addressed technologically rather than by using social levers to shuffle our resources around. Realistically, I don’t think we’re going to make much headway in getting people to minimize consumption of meat in a short-term time frame, whereas with proper funding the technology to create meat from tissue cultures which were never part of a living animal could create a morally and ecologically viable alternative which wouldn’t require such radical behavior alteration at all. It’s a lot easier to make the decision to act in a way that accords with moral valuation of animals if this doesn’t actually require much effort or change.

    If there’s any charity which collects funding for such research, I think it would probably be a good candidate for assessment and promotion within Effective Altruist circles, but I believe that all such research is currently funded by industries and/or universities.

    • hylleddin says:

      New Harvest is a non-profit that does such research.

      I’m concerned that the “Ick Factor” that GMOs and “processed food” have to deal with may be a potential problem for eventual adoption.

  52. Neuralon says:

    This confuses two concepts, though: the concept of human suffering having equal weight when compared to animal suffering, and the concept of doing something to help with human suffering having equal weight to doing something to help with animal suffering. How effective we are at ‘doing’ may not be equal between the two, especially when we consider long-term effects. I think one solution to animal suffering would be to drastically reduce the number of animals we are causing to suffer – most people would not accept the analogous argument for humans. I also don’t think it’s inconsistent to want to reduce the number of animals suffering (by neutering them, ideally) and not the number of humans suffering – animals are likely weighted much more heavily towards negative utilitarianism. They are capable of a lot of suffering, but probably not nearly as much joy as humans are capable of (for most animals, at least).

  53. Dagon says:

    This seems very related to the problem of valuing potential, rather than (or in addition to) actual human lives. There are a huge number of people who haven’t been born (and many who never will be), and taking them seriously implies that any utility calculus should put far more resources into bringing people into being than into small improvements for the lucky few existent.

    Yes, taking it seriously does imply the repugnant conclusion. As does taking animal welfare seriously. If you’re honest, as does taking distant human welfare seriously.

  54. ascientificchristian says:

    I’m amused that you’ve essentially just derived Augustine’s properly ordered loves from a different starting point (while, of course, neglecting the God bit).

  55. Worrible says:

    You are well on your way to becoming a paper-clip maximizer, one whose values are so foreign to the rest of us that we should fear you and unite to oppose any attempts you make to alter the world.

    • Ghatanathoah says:

      Actually, one of the reasons I go to SSC is that I think Scott has a much better grip on his human values than other people in the rationalist community. I can read and talk about rationalism here without encountering tons of people who give me the creeps.

    • Deiseach says:

      No, I have to say Scott doesn’t give me the wibblies like some others, who make me think “Do you actually know any real people out there in the world?”

  56. Ben Kuhn says:

    Minor comment: in second-to-last paragraph, “GiveWell” => “Giving What We Can.”

  57. Civilization14 says:

    Hey Scott, I was at the meetup, and I know at some point you asked Buck about Pascal’s wager, but how did he reply? I think the traditional answer is something like: probabilities can very easily be infinitesimally small, dominating the chance of a large but necessarily finite reward. That is, conditionals quickly ruin the probability (e.g. flipping 250 heads in a row has probability on the order of 1/all atoms in the observable universe). There’s also a counter-argument that the mugger is just as likely to do the opposite of what they say, and an argument that “if you really could cause such (dis)utility, you’d just prove you have that capability and I would pay”.

    The thing is, none of these responses work with x-risk, which has the low-probability vs. “absurdly high reward” characteristic of Pascal’s mugging. The “low” probabilities are clearly finite here though – even just considering the off-chance that a gamma ray burst or unseen asteroid hits us (easily over 1/trillion over the next 100 years, if earth’s history means anything). Moreover, many of us suspect AI is much higher probability, and coming in the near future.

    So for any utilitarian who claims to “bite the bullet”, this would mean that -safe- AND fast progress dominates animal/human suffering. Even if you’re significantly less likely than the average person on earth to influence these events (and you’re probably more likely to, if you’re reading this blog), that is still very finite (heck, just go breathe on a scientist and you’re in finite territory) compared to missing the chance of a long utilitarian future tiling the absurdly enormous universe! Or even compared to the chance of just starting that future less soon, because even if we start expanding outwards in the universe a few seconds later, the missing extra 1-second expanding “shell” of hedonium easily covers enough area to overwhelm the suffering experienced by humans or animals currently.

    Then, for a utilitarian, the only thing that matters is “positive” progress (progress that doesn’t increase chances of killing us, or ideally decreases it), vastly (!!!) dominating current suffering. Again the probabilities are small, but if you are utilitarian, you take expected value seriously! You can’t say I don’t like it, you must shut up and multiply (or do calculus, e.g. expanding shell integrated over time)! You can’t say “we don’t know exactly how to positively impact the future”, you must ask the obvious next questions! (e.g. if we don’t know, how could we learn? Are there any clearly non-harmful future-enhancing interventions, however minor?) This must dominate other considerations for a utilitarian, including animal suffering!

    With a caveat*, this allows one to claim humans matter (because they affect this future) in a way animals don’t, for the time being. But it also means, if you’re not maximizing for progress right now, you’re not a utilitarian. *That is, you don’t get to selectively ask for rigor and say “animals don’t matter because future dominates” and then proceed with your life, you have to also say “people only matter significantly more than animals insofar as they impact the future” and act accordingly.
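    A minimal sketch (in Python, purely as a sanity check on the coin-flip figure above) of the order of magnitude involved; the ~10^80 atom count is the commonly quoted estimate and an assumption here, not something stated in the comment:

    # Probability of 250 heads in a row vs. ~10^80 atoms in the observable universe.
    p = 0.5 ** 250
    print(f"{p:.2e}")    # ~5.5e-76, i.e. roughly 1 in 10^75
    print(p * 10 ** 80)  # ~5.5e4, so "1/all atoms" is only a few orders of magnitude off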

    • Max says:

      [quote]Then, for a utilitarian, the only thing that matters is “positive” progress (progress that doesn’t increase chances of killing us, or ideally decreases it),
      [/quote]

      That depends on the utility function. It does not necessarily need to be about “us”.

      • Civilization14 says:

        Sure it depends on the utility function (fyi, in my example, “us” = hedonium-creatures), however that wasn’t the main point I was making. The point was that progress dominates all other considerations for most utilitarianisms that people actually believe (thinking specifically of Buck’s classical total utilitarianism), if they take their utilitarianism seriously. Dominates to the extent that it should “swamp every other concern” for the foreseeable future, in the same way Scott was saying here that animal suffering probably swamps concerns for human welfare. He thinks his rejection of this idea requires an unprincipled exception.

        My point is you don’t need an unprincipled exception to care about humans more than animals if you actually take utilitarian logic seriously.

        You can even justify (or, in fact, require) eating meat from factory farms, if it helps progress in the tiniest way (sketch: animal suffering doesn’t affect progress, human health does in some small but finite way -> omnivorous eating, done properly, is necessarily healthier than vegetarianism [omnivorous diet is superset of vegetarian diet], and thus justified [as per original argument about vast amounts of future hedonium dominating all current concerns]. Even/especially by “pure” utilitarianism that doesn’t irrationally exempt or de-weight animals from the calculation)

  58. Watercressed says:

    >So if you’re actually an effective altruist, the sort of person who wants your do-gooding to do the most good per unit resource, you should be focusing entirely on animal-related charities and totally ignoring humans (except insofar as humans actions affect animals; worrying about x-risk is probably still okay).

    Also, with sufficiently high values for animal welfare, human death becomes a net positive, because it will reduce the marginal demand for killing animals.

  59. Wrong Species says:

    Your whole sanity thing is a pretty terrible excuse. Let’s say I’m a slaveowner whose livelihood depends on owning slaves. I decide that I can’t free all of my slaves, so I will free 10% and let my conscience rest easy. Am I justified in that? No, of course not. Your intuitions are trying to tell you that there is at the very least some difference between not helping someone and actively hurting someone, and you have to listen to them.

    • Ghatanathoah says:

      The topic at hand is about choosing whether to help humans or animals. Actively hurting someone isn’t even an option.

      • Jiro says:

        Changing your behavior by reducing the harm you cause, and changing your behavior by increasing the help you provide, are identical with respect to utilitarianism. The action/inaction distinction isn’t present.

  60. Mary says:

    “So why shy away from doing the same with animals?”

    Or plants!

    I have actually seen an argument against vegetarianism that after all, plants suffer and want to live, too. (Admittedly from a woman who was obviously crackers in many other respects.)

    • Jaskologist says:

      Fetuses remain exempt, of course.

    • houseboatonstyx says:

      When I asked a Jaina, “Shouldn’t we be eating one big plant for supper instead of a hundred bean sprouts?”, he said, “The ideal food is avocado. You eat the pulp and plant the seed.”

      So it is possible to figure a system that way. Least violent is food that the plants produce to be eaten (fruit, berries, etc). Then perhaps seeds/grains. Most painful would be cutting off part of a plant: greens (but remember that most such plants allow for loss of parts this way, and thrive on pruning). Eating a whole plant, like a green onion, well, there we’re back to bean sprouts.

      • keranih says:

        Look up fruitarians. This is actually a thing.

      • Deiseach says:

        I cannot resist the low and vulgar pleasure I get from shoe-horning in this quote from Chesterton’s “The Napoleon of Notting Hill”:

        And Mr. Mick not only became a vegetarian, but at length declared vegetarianism doomed (“shedding,” as he called it finely, “the green blood of the silent animals”), and predicted that men in a better age would live on nothing but salt. And then came the pamphlet from Oregon (where the thing was tried), the pamphlet called “Why should Salt suffer?” and there was more trouble.

  61. Irrelevant says:

    I’m probably too late to the party here already, but this discussion has come up incidentally in the comments before, so I’ll reiterate my stance:

    Rather than arguing over whether we’re weighting pain or complexity or intelligence or aesthetics or [pick your abstraction of choice], the moral position of animals is best described as “within the penumbra of human empathy.” That is to say, their moral weight is not intrinsic, but rather contingent on the existence of humans who feel hurt by proxy when they are harmed.

  62. Emile says:

    Goddammit; just earlier today I was taking notes on this concept of “concentric circles of empathy”, and trying to figure out whether that position had a standard name, and whether it had been discussed at length on LW or SSC. I didn’t find much, tho Gwern writes about it and so does Razib Khan. And now Scott writes a post about just that; Scott, do you have a time-turner?

    Anyway, it seems to me that the “concentric circles of empathy” (self, close family, acquaintances, compatriots, foreigners, dogs, insects) is pretty much the way humans naturally value each other, but 1) I can’t find a standard name for that position in ethics (meta ethics talks about universalism which is something different, and normative ethics describes the extreme positions of egoism and altruism, but I can’t see good middle grounds) and 2) I get the impression that the position is somehow considered controversial in some circles (utilitarians? liberals?) but it’s not really clear why.

    • BD Sixsmith says:

      …is pretty much the way humans naturally value each other…

      Yes. (If you ask a dog owner who is more important out of humans and dogs they will probably select the former but you can bet all your savings on the prediction that he or she will expend far more energy, time and cash on the hound they know than the man they have barely met.)

      The controversial nature of the idea becomes obvious when you reach “compatriots” and “foreigners”. Leftists tend to prefer “classes” and liberals often wonder why we can’t all be individuals. Everyone believes in concentric circles of empathy to some extent, though, because without them we would have no communities at all, and with no communities — well, it is not far from saying “with no food”.

    • Airgap says:

      Since SSC appears to not really care about what is natural for humans, and also to like advanced math, I propose we continue the theme of infinite-universe utility with “Empathy regions with non-orientable topologies.”

    • Alex says:

      “Moderate cosmopolitanism” might at least be approximately a standard name. This would be as opposed to effective altruism’s “strict cosmopolitanism.”

      • Emile says:

        Thanks! I was looking for that kind of terminology. Confucius’s “love with distinctions” seems like a more fine-grained version (in that it also covers the higher regard for one’s family).

  63. Azure says:

    The weird thing about full on utilitarianism with regard to animals is that all the consistent ends I can imagine it coming to end up looking very strange. And Edenic.

    Obviously, we would stop raising them in inhumane conditions and being generally awful to them. But.

    Considering human wellbeing, we don’t really consider it enough to just NOT BE HORRIBLE to someone; we think we have a moral obligation to hate and destroy suffering and to nurture happiness.

    If we take animal happiness seriously, then the predator/prey/starvation dynamics we use to teach children about differential equations become serious issues. We wouldn’t blithely write off deer or coyotes starving to bring the population ‘into balance’ for the same reason we respond with such aversion to someone who smugly describes a famine in the developing world as just nature taking its course and having a Cull.

    Not to mention the whole predator/prey thing. Nature, red in tooth and claw, gets called that for a reason. If we switch ourselves over to tissue-culture meat, it’s not clear how we could even distribute it to all the carnivores of the world. And even if we did how we would get them to eat the tissue culture meat instead of their fellow beings. We could make /better/ carnivores that find something in the artificial stuff /delicious/ and regular old Blind Idiot God™ Brand Squirrels insipid and yucky by comparison. Or better carnivores with enough reason and a sufficient ethical faculty to avoid devouring their fellows. Or just wipe them out entirely, but that seems rather sad and surely making a /better/ coyote rather than hunting down all the coyotes in the world wins somehow.

    And, in the same way that there are inborn cognitive and emotional defects in humanity (you know, like sucking at probability, and the false-positive paranoia bias that works great on the savannah and makes you suspicious, miserable, and anti-social in other places), animals have similar stuff. If you look at the Siberian Fox experiments, there seemed to be a pretty strong correlation between how happy an animal was and how tame it was. There were also the mouse experiments which, if you accept ‘willingness to give up and die’ as a proxy for general poor quality of life, showed that the more skittish, wary mice (that tended to do better in predator-rich environments) were generally /miserable/. And why not? One of the big defining characteristics of domestication is that you don’t have fear hormones coursing through your body at all times priming you to run away or bite someone. The life-ruining powers of constant stress are well-known. (But the Blind Idiot God just cares how many children people like you have, not how happy you are.)

    I am not strawmanning by any means; I came to a realization on utilitarian grounds that I do have much more of an obligation toward my fellow non-human creatures than I had realized. I haven’t let it swamp all human concerns completely; the plight of dairy cows does not make me ignore the plight of prisoners.

    But, if we are people who are willing to consider building a better human to advance the cause of human happiness and capability, considering animal welfare as equal to human welfare seems that it would require us not only to stop building better mousetraps, but start building better mice.

    • Irrelevant says:

      The Young-Earth Creationist stance solves this problem by a rather amusing method: they assert that the suffering of nature results from nature itself being sin-infected, that this is man’s fault, and that the solution is therefore more evangelism.

    • houseboatonstyx says:

      If we switch ourselves over to tissue-culture meat, it’s not clear how we could even distribute it to all the carnivores of the world. And even if we did how we would get them to eat the tissue culture meat instead of their fellow beings.

      If tissue culture meat becomes cheap, this might actually be practical in some controlled animal preserve (for starters). As for getting them to eat it, well-fed pet cats may ignore canned cat food and catch mice instead, but try offering it to some ferals. There will be some tradeoffs between quality, effort of hunting, and whether there are actually enough mice for the whole cat colony.

      As for wider distribution, breed beefsteak tomatoes that actually match beefsteak, and seed the planet with them. (At the same time, seeding it with contraceptives for the deer.)

      [Something] would require us not only to stop building better mousetraps, but start building better mice.

      That could be quite a good idea. Housebroken scavengers, who breed at a non-excessive rate.

    • Princess Stargirl says:

      “If we take animal happiness seriously, then the predator/prey/starvation dynamics we use to teach children about differential equations become serious issues.”

      You work with some smart children! Though those differential equations are not very complicated and you could teach them to some children.

    • Deiseach says:

      considering animal welfare as equal to human welfare seems that it would require us not only to stop building better mousetraps, but start building better mice

      The interesting question there is do we have the right (setting aside the duty)? If non-human animals have rights equal to human animals, and these rights arise out of their own animal natures, then if we cannot interfere with them to their detriment (domesticate them for pleasure and use, eat them, hunt them, kill them to take over their territory etc.), neither can we interfere with them for what we consider their good.

      It would be the same question as the White Saviour Complex, the White Man’s Burden, and colonialism in general: even with good intentions, Culture A should not steam in and interfere with Culture B by taking over and changing it.

      Without canvassing the opinion of the mice or coyotes, we are not entitled to change them (e.g. make them prefer to eat Tissue Vat Squirrel rather than Nature Brand Squirrel; neuter them surgically or chemically; move them onto reservations) on our own recognisance. No alteration without representation!

      • Azure says:

        That isn’t a bad point. It depends on your view of rights. My own inclination is that ‘rights’ are just a form of rule utilitarianism. Things you don’t violate because they usually lead to really, really awful results. In which case you would have to make sure that whatever kind of changes to an animal you might make would ACTUALLY benefit them on average.

        On the other hand, I have always held that there’s something to be said for all the different /kinds/ of delight. Having everyone think the same way would not only be boring, it would lose something, since you’d be slamming everyone into the same narrow piece of the eudaemonic pie instead of filling a greater area. So you’d want to avoid homogenizing all creatures great and small, if you could.

        Though if you take ‘rights’ as something other than a utilitarian heuristic, how do you let something consent to something they can’t understand? We already let parents determine things for their children, this might be similar. (We could imagine some situation where we analyze a coyote’s brain and decide whether some possible future Solar Sailing Space Coyote would rather go back to being less intelligent and earthbound, and if so leave it less intelligent and earthbound.)

  64. I feel like the whole thing becomes much less problematic if you simply embrace lexicographic orders and stop pretending everything comes from adding up individual utility functions.

    There’s basically no reasonable value of N for which I would choose “torture this person” over “torture these N chickens”, but I still prefer a world with less chicken torture in it (and indeed routinely pay a bit more money to back that preference up). Sure, this probably breaks down for N ~ 3^^^3, but I don’t really care about numbers like that because they don’t correspond to meaningful scenarios.

    (If you told me that free range chicken farming was a highly risky endeavour that on average resulted in an extra human death per million chickens per year then… honestly I have no idea how I’d feel. Fortunately I don’t think that’s true. If anything I suspect the opposite)
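    A minimal Python sketch, purely illustrative, of what a lexicographic preference over world-states might look like; the tuple layout and the numbers are invented for the example:

    # Each world is (human_suffering, chicken_suffering); lower is better, and the
    # first coordinate strictly dominates the second. Python compares tuples
    # lexicographically, which is exactly the ordering described above.
    def preferable(world_a, world_b):
        return world_a < world_b

    print(preferable((0, 10**9), (1, 0)))  # True: a billion tortured chickens before one tortured human
    print(preferable((5, 10), (5, 11)))    # True: with human suffering tied, less chicken suffering wins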

    • Anonymous says:

      (and indeed routinely pay a bit more money to back that preference up)

      Is there really no lexicographically preferred value you could spend that money on? I think if you take lexicographical ordering literally you end up spending all your effort only on the first-class values.

      Edit: I don’t mean to attack your values or accuse you of hypocrisy, I’m just questioning whether lexicographical ordering properly captures your values.

      • Irrelevant says:

        Most people seem to prefer a relatively shallow commitment to a lot of values rather than an extreme commitment to a handful. Matter of diminishing returns, I assume.

      • There is, but there are also other things I could and should be transferring money away from before I start buying factory farmed eggs so as to spend the money on malaria nets.

        I’m definitely not purporting to be some efficient optimiser of my preferred lexicographical ordering over the world, especially in how I spend my money. The money thing is just to point out that I clearly do have a strict preference for there being less chicken suffering, even if I don’t consider it commensurable with human suffering.

    • anodognosic says:

      Regarding the risky free-range chicken scenario: crab fishing, i.e., the deadliest catch, is a thing. If free-range chicken is even moderately more profitable than factory farming, you can bet that people will do it.

      If anything, this tells me that the whole question is kind of a red herring in the present state of affairs.

    • Airgap says:

      I feel like the whole thing becomes much less problematic if you simply embrace lexicographic orders

      GiveWell does this, which is why “Against Malaria” is rated higher than “GiveDirectly,” which is rated higher than “Schistosomiasis Control Initiative.”

    • Deiseach says:

      Fishing. Fishermen die.

      Farming itself is dangerous – people die in all kinds of accidents: according to this, from accidents involving machinery, slurry pits, livestock, and falls from heights (and yes, people get killed by animals – ‘harmless’ cows are bigger than people and can easily crush or trample you).

      Construction industry. Chemical industry. Any range of human endeavour can and has resulted in injury, maiming and death.

      • Airgap says:

        Even math. Just last week, I heard two logicians were killed by the principle of explosion.

  65. Quixote says:

    So does that cash out as 10% of income to charity and 10% of charity income to animals?

  66. Ghatanathoah says:

    Imagine there is a war in Country A that causes a million QALYs to be lost every year.

    There is also a plague in Country B that causes a hundred thousand QALYs to be lost every year.

    You have a choice between donating to a peacemaking charity in Country A that is estimated to save 0.01 QALYs for every dollar you give them, or a medical charity in Country B that is estimated to save 0.1 QALYs for every dollar you give them.

    Which is a bigger problem? Obviously the war. Which should you give money to prevent? Obviously the plague.

    This is how I feel about the comparison Scott has made. Maybe it is true that there is more animal suffering than human suffering, so it is technically a “bigger problem.” But I see no evidence that charities directed at preventing animal suffering reduce suffering more than charities directed at preventing human suffering, even if you weigh humans and animals as having exactly the same moral value.

    • houseboatonstyx says:

      But I see no evidence that charities directed at preventing animal suffering reduce suffering more than charities directed at preventing human suffering, even if you weigh humans and animals as having exactly the same moral value.

      What the available charities focus on, is accidental not essential. But there are efficiency choices. Charities that focus on saving rainforest acreage benefit the animals who live there and benefit humans for many reasons. Some charities fund research that will help both causes (as well as making the researchers happy).

      • Ghatanathoah says:

        What the available charities focus on, is accidental not essential.

        I got the impression from the OP that Scott thought his discussion created an immediate and pressing need for him to donate money to animal causes, not just that it might be more efficient in principle to donate to animal causes.

        I personally would probably donate a bunch of money to a “dump tons of morphine in the pig slop” charity or a “leave food out for wild animals during a famine” charity if it was efficient enough.

  67. suntzuanime says:

    lol if your widening circle of concern only includes animals and not rocks, fictional characters, fictional rocks, and logically incoherent propositions.

    • BD Sixsmith says:

      A religious friend of mine once posted on Facebook, “I have to stop praying for fictional characters.”

    • Mary says:

      I dream of a better world, in which chickens can cross the road without having their motives questioned.

    • Airgap says:

      Discrimination against minority incoherent propositions is crime against humanity. Gregory Chaitin is the Martin Luther King of formal logic.

    • Ghatanathoah says:

      I don’t see anything inconsistent about such a position. My circle of concern consists of entities that are capable of having certain mental states. Animals are capable of having a fraction of the same types of mental states as humans, so they are a fraction as valuable as humans. Rocks, fictions, etc. have no mental states, so they have no value.

  68. Airgap says:

    Suppose we accept panpsychism as our preferred answer to “Why is there consciousness?” (There has to be some answer.) Why do we not then extend our sphere of concern to all matter, such that our concern for the welfare of living cells is swamped by our concern for the welfare of other objects?

    To Alicorn: Professor Annihilation is not a bigot, but a defender of the most oppressed minority group of all time: inanimate objects. So typical of privileged “activists” to ignore the concerns of groups with far greater problems than their own, simply because they’re out of sight and out of mind.

  69. Alyssa Vance says:

    “If it takes a thousand chickens to have the moral weight of one human, the importance of chicken suffering alone is probably within an order of magnitude of all human suffering.”

    Wait, what? There are fewer than three chickens per human, not a thousand, and chickens are birds, not mammals. The population of all farm mammals combined is probably less than the human population.

    • Irrelevant says:

      The issue is that the 15 billion chickens alive now are multiple generations different than the 15 billion chickens that will be alive at any given point next year, so over the course of your life there will have been hundreds of times more total chicken lives than total human lives. And who said mammals?

    • Airgap says:

      There are fewer than three chickens per human

      Maybe at any given time, but the relevant comparison would be chickens born per human born.

      • John Schilling says:

        Why?

        Is tormenting seventy creatures for one year somehow worse than tormenting one creature for seventy years?

        • Airgap says:

          Because if you undercount the chickens, you get the wrong answer.

          Suppose it takes 40 days to raise a chicken, each person eats one chicken per day, and poultry farms are 100% JIT: they add one chick per day per person. So there are 40 chickens/human at any given time. But a person will go through about 25000 chickens in a lifetime.

          Obviously, this doesn’t really matter due to the well-known deliciousness penalty to intrinsic moral worth. Still, getting math right is its own reward.
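          A quick back-of-the-envelope check of those figures, written out in Python; all the assumptions are the ones stated above, plus a ~70-year human lifespan for the lifetime total:

          days_to_raise = 40     # days from chick to slaughter
          chickens_per_day = 1   # eaten per person per day
          lifespan_years = 70    # assumed human lifespan

          snapshot = days_to_raise * chickens_per_day         # chickens alive per person at any instant
          lifetime = chickens_per_day * 365 * lifespan_years  # chickens eaten per person over a lifetime
          print(snapshot, lifetime)  # 40 25550 -- roughly the 40 and 25,000 quoted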

          • Deiseach says:

            I really don’t think I’ve eaten 25,000 chickens!

            And what about chicken-on-chicken cruelty and violence? The phrase “pecking order” comes from exactly that; a recent example as recounted by a colleague whose daughter keeps some chickens was an anecdote about two hens who were bullying another hen – they pecked her till she bled and was injured.

            She was smaller than them and actually another breed, which raises the question: can chickens be racist? If humans are just animals and all animals are equal and non-human animals are getting equal rights, then I say they should be held to equal standards.

            Stamp out racism amongst hens now! 🙂

          • Airgap says:

            Stamp out racism amongst hens now!

            — SPLCPCA Mission Statement

        • Irrelevant says:

          Why?

          Because I can’t kill one chicken seventy times. The one chicken/seventy chickens situation is only equivalent if the portion of the total suffering associated with killing the chicken is trivial compared with the portion associated with it sitting around in a cage.

          • John Schilling says:

            But the portion of the total suffering associated with killing the chicken is trivial compared with the portion associated with it sitting around in a cage. And I suspect that most people here are overestimating the suffering of a chicken sitting around in a cage.

            There may be wrongness associated with killing, but suffering and killing are two different things. And I’m guessing that if you are arguing the mathematics of suffering, you don’t have a good argument as to the wrongness of killing chickens.

  70. Dillon says:

    Question: Are animal welfare charities really that much more effective than human welfare charities? I agree that animal suffering seems likely to be more important than human suffering simply because there are so many more animals than humans, but I always thought it was more cost-effective to help humans. Am I wrong? Does anyone know how much it costs the most effective charities to save 1 animal DALY?

    • Airgap says:

      Insofar as lots of utilitarians consider a universe with two humans preferable to a universe with one human (more utility!), it’s entirely possible that animal welfare charities are net negatives because they’re not very effective at alleviating animal suffering, and they result in fewer total animals.

    • Anonymous says:

      You can look at Animal Charity Evaluators’ recommendations; they estimate that $1 donated to Mercy For Animals saves 8.8 animals from lives in factory farms. Even if this estimate is very optimistic it’s still much cheaper than saving humans.

      The reason for this is that animal charities can produce online ads and leaflets very cheaply, and even if few people are convinced to go vegetarian/vegan creating one new vegetarian saves hundreds or thousands of animals.

      @Airgap yep this is only good if you think fewer animals living in factory farms is better, but most utilitarians I know who’ve thought about the issue think that animals’ lives on factory farms are so terrible that they’re not worth living.

      • Airgap says:

        How do they differentiate between “not worth living” and “really shitty?” I suspect they don’t, really.

        Suppose I artificially reduce a chicken’s serotonin levels significantly, or otherwise do something reasonably susceptible to the interpretation “inducing depression.” Will it stop eating and die despite the presence of food? If so, then factory-farmed animals’ lives are worth living: they keep eating, so they’ve said so. If not, how could we tell? The whole “How would you like to spend your whole life in a tiny cage…[speaker goes on for some time]” thing doesn’t really prove what it’s supposed to prove.

      • keranih says:

        I would be very interested in how MfA came up with that number, given the cost of housing animals that have been ‘saved’ from farms.

        Of course, the animals might just be dead, which would be a different sort of ‘saved.’

      • Ghatanathoah says:

        We can do a little more precise math, although it’s still pretty off the cuff.

        Since the majority of factory farmed animals are chickens, which live about six weeks on average, I think seven weeks would be a good estimate for the average lifespan of a factory-farmed animal. Pigs, sheep, and turkeys do not live that much longer than chickens; cows do, but there are a lot fewer of them than there are chickens.

        Let’s assume that increasing a human lifespan by one average factory-farmed animal lifespan is exactly equivalent to reducing the amount of factory farmed animals by one. This is a pretty big assumption, since many think living on a factory farm is similar to torture, but we are just trying to do basic calculations.

        GiveWell rates their top charities as costing about $3,000 to save a child’s life. Assuming the child lives 60 additional years, that means saving one life is equivalent to saving 445 animals from being factory farmed.

        If we use the $1=8.8 animals estimation, the same $3,000 could save 26,400 animals. So that means that you would need to consider saving an animal from factory farming to be 59 times less valuable than extending a human’s life by an equivalent amount of time, based on my incredibly off the cuff calculations. The human brain’s mass is 337 times that of a chicken, I have no idea if that means their feelings are 337 times as strong, or if they are exactly the same strength.

        There are also other confounding factors, Mercy for Animals does its work primarily by persuasion, which has diminishing returns. Also, I don’t know if the majority of animals they save are representative of the majority of factory-farmed animals (i.e. mostly chickens) or if they are more diverse.
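        A rough Python replication of that off-the-cuff arithmetic, using only the assumptions stated above (7-week average farmed-animal lifespan, $3,000 per child’s life saved, 60 extra years, $1 = 8.8 animals spared):

        cost_per_life = 3000          # dollars, the GiveWell-style estimate quoted above
        extra_years = 60
        animal_lifespan_weeks = 7
        animals_per_dollar = 8.8      # the Animal Charity Evaluators figure quoted above

        # One saved human life, measured in average farmed-animal lifespans:
        life_in_animal_lifespans = extra_years * 52 / animal_lifespan_weeks  # ~446 (the comment rounds to 445)
        # Animals spared by the same $3,000 given to the animal charity:
        animals_spared = cost_per_life * animals_per_dollar                  # 26,400
        print(round(animals_spared / life_in_animal_lifespans))              # ~59, the break-even ratio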

        • Airgap says:

          MfA does its work primarily by bullshit. I still remember when they caught some junior ranch hand hitting a cow for fun, and spun it as official ranch policy going all the way to the top. The ranch’s response was basically, “Thanks for the tip, those responsible have been sacked.” I asked Nate (or whatever his name is) whether he was saying that the owners were complicit or not, and he replied “The images speak for themselves.” I.e., I won’t confirm or deny because I’d have to choose between being sued to death and admitting that what we uncovered was basically trivial. So I choose bullshit.

          tl;dr: MfA is run by duplicitous scum. Direct your money elsewhere.

  71. Av Shrikumar says:

    I generally describe myself as being a “heavy-tailed weighted utilitarian”, where the weights are predominantly governed by evolutionary urges. Evolutionarily speaking, we value our own species *much* more highly than we value other species, and we value our family and loved ones *much* more highly than we value strangers. If you thought of this value as a weight, and plotted the weight as a function of closeness, for most people the graph would have a long tail (that is, as people get further away from you, they get less weight). The heavier you make the tail, the closer you move to true utilitarianism. However, I think having a perfectly flat tail, where everything has equal weight regardless of closeness, runs too contrary to evolutionary instinct to be feasible (Scott seems to refer to this idea as “for the sake of my sanity”). But I think there exists a tradeoff which is flatter than most people’s current curve and which would lead to a better society.
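    A toy Python sketch of that weight-as-a-function-of-closeness curve; the power-law form, the exponent, and the distances are invented purely for illustration:

    # exponent -> 0 approaches flat (classical) utilitarianism;
    # larger exponents concentrate nearly all weight close to the self.
    def moral_weight(distance, exponent=1.5):
        return 1.0 / (1.0 + distance) ** exponent

    for who, d in [("self", 0), ("family", 1), ("stranger", 10), ("chicken", 100)]:
        print(who, round(moral_weight(d), 4))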

  72. Mr. Eldritch says:

    Because morality is a thing humans invent, and is mainly built in an attempt to formalize vague human intuitions about how they should do stuff and treat other humans, any moral system which is incompatible with humanity is broken – no matter how consistent, well-intentioned, and logically reasonable.

    That summing up utility in this fashion gives us this abhorrent result tells us that we thus cannot sum up utility like this. (Or the utility we attribute to animals must be miniscule, or utility is the wrong system to use.)

    • Airgap says:

      Law is also something we’ve invented, mainly built in an attempt to formalize vague notions about what we should do, which sometimes gives abhorrent results. Still, I’d rather have it than not have it.

      That’s not so much a reductio on your view as a suggestion that you’re leaving part of the analysis out.

      • Mr. Eldritch says:

        When laws give abhorrent results, usually we choose different laws.

        • Marc Whipple says:

          This is not my experience. In my experience either we just shake our heads and say, “Well, sometimes the system doesn’t work,” or we ignore the laws we don’t like. Actually changing them is the exception, not the rule, and it seems to be getting rarer as time goes on.

  73. hugues says:

    How about taking an “ecological” perspective, wherein instead of fixing the moral weight of an individual and deriving the moral weight of a group by adding up the weights of its members, one fixes the moral weight of various groups or species based, e.g., on their contribution to the larger ecosystem or their level of sentience, and derives the moral weight of individual members by averaging the moral weight over the group/species?

    This approach probably has a bunch of failure modes I haven’t considered but it seems to me that it would allow one to care about animal welfare without being “swamped” by it or having to make an unprincipled exception to utilitarian reasoning.

  74. Max says:

    “So if you’re actually an effective altruist, the sort of person who wants your do-gooding to do the most good per unit resource”
    What is “good”? Multiplying the number of species on earth? The number of humans? What are the resource units we are dealing with? Such fundamentals should be answered clearly, in unambiguous terms. And “good” is as ambiguous as it gets.

    This is an example of why philosophies “for all good, against all bad” are a dead end. For a goal to be reachable one should have a clear target. THE FOCUS. THE PRIORITY. Everything else should be evaluated against it. All the time. If something comes up which does have higher priority, it’s time to review the philosophy.

    It does come down to the answer to what is the ultimate “evil” and what is the ultimate “good”. Every argument should be resolved within the goal system according to that answer, without logical contradictions. Otherwise it is just subjective biases/emotions serving as shaky beliefs with mutable values. And therefore such a philosophy cannot serve as a stable foundation.

    Animal welfare is not as important as human welfare. And the welfare of all people is not as essential as that of a select few. And humans themselves are only valuable as far as they serve the goal (which they do not always do). If one ventures into the sophism of “everyone is important” then eventually he arrives at the point where nobody and nothing is.

  75. Dale says:

    All humans -> all animals; that’s too big a step!

    All humans -> all primates -> all mammals -> all chordates…

    Gives you a bit more time to give up on chicken and fish.

  76. Kai Teorn says:

    > If it takes a thousand chickens to have the moral weight of one human…

    …_and_ if “moral weights” are numbers that you can add up and compare in the usual fashion. Which is, come to think of it, an absolutely arbitrary assumption that, to my knowledge, no one has ever attempted to test.

    There _may_ be some sort of “moral algebra” with a non-zero predictive power, but we need to actually discover it before we can use it. If you simply postulate it’s a matter of summing numeric “weights” (apparently because numbers are so convenient to calculate, “every problem is a nail if you have a hammer”), you’re bound to run into all sorts of nasty paradoxes such as this one.

  77. RomeoStevens says:

    I’ve seen very little discussion of giving animals painkillers in my research on animal moral worth. This could be for a few reasons:
    1. It is a higher variance strategy and thus less appealing. By that I mean you are pursuing a small chance of having a very large suffering impact, as opposed to vegetarianism which is low variance, large chance of a tiny reduction.
    2. It simply hasn’t been spread to enough competent agenty people.
    3. Absurdity heuristic (concept itself or execution).
    4. The sort of person drawn to animal-worth arguments is also the type not to find very appealing the idea of still killing and eating them while making it less horrible.
    5. Worry that if successful this will reduce interest in veganism/vegetarianism, which they have a lot invested in as an identity.

    What else?

    • Airgap says:

      I think (3) and (4) account for most of it, and (5) will be a common rationalization but not the actual reason. Regardless, I love this idea, and think it’s a beautiful litmus test for distinguishing people who genuinely care about animal welfare vs. people who just have pugilistic personalities.

      One thought: you don’t have to limit it to painkillers. I don’t know psychiatry as well as some people here, but I understand we give or at least gave people lots of drugs in psych wards basically to keep nuts happy and complacent throughout their incarceration. This certainly seems like something factory farming opponents should support, at least as a stopgap measure, if they actually care.

      The more machiavellian might see it as a wedge issue: You have to give the chickens prozac so they’re happy in battery cages, and if it makes them unsafe or disgusting to eat, so much the worse for factory farms. They can develop chicken-safe prozac or find another business.

  78. Leif K-Brooks says:

    I’m a lifelong vegan, but I’m all for prioritizing the prevention of human suffering over the prevention of animal suffering. I believe human and animal suffering have equal immediate utility, because pain is pain; but that over the long run, humans create more utility. Suffering humans wage wars, and efficiently cause negative utility. Non-suffering humans do art and science, and efficiently cause positive utility. I guess in a sense, humans are utility amplifiers.

  79. maxikov says:

    You can easily argue against torturing dogs without giving them any moral value, and only giving it to humans. For example, you may acknowledge that humans aren’t utility-maximizing automatons, and tend to feel empathy toward the organisms in their direct proximity, whose actions they can pattern-match to expressions of emotion. Thus, if you want to torture dogs, you’d rather do it when nobody sees, or not do it at all. In addition, given the tendency of humans to anthropomorphize everything, we may conclude that if someone tortures dogs for their own amusement, they’re likely to be sadists towards humans too, and not the kind of sadist who cares about being safe, sane, and consensual, which is an antisocial trait we may be justifiably wary of. If, on the other hand, dogs or other animals are tortured for some reason beyond enjoying their suffering, like conducting scientific research or making some useful products, we may be fine with that, which is reasonably close to the moral intuition of most non-vegan, non-animal-rights, non-LW people. Well, with the particular exception of cats and dogs, who are uplifted to having an ethical value by most people, which I regard as a bizarre cultural artifact.

    Also, I don’t think it’s possible to make an argument for widening the circle of concern on non-morally-realist grounds. If someone’s core values discriminate entities based on spatial, social, temporal, genetic, etc. proximity to them, what can you say to them aside from “your values are bad, and you should feel bad”?

  80. DrBeat says:

    You appear to have a problem with thinking “I can assign numerical values to things. The numerical value of B is greater than A. Therefore, there MUST be some quantity of A I can amass to make it exceed B, and that is worrying.” But not everything works like that.

    3^^^3 people throwing “rock” are still all beaten by one person throwing “paper”. A hundred thousand Yamchas still can’t beat one Goku. And when the voracious Hypothetical Trolley ascends from his pit of foulness and demands his blood sacrifice, there’s no quantity of chickens he could threaten to make you sacrifice one human. Even if the choice was one person being trolleyed vs. one billion chickens being trolleyed, you’d still pick to run over the chickens.

    Because a trolley running through a billion chickens would be an amazing spectacle.

    • Deiseach says:

      Yes! A solution to the trolley problem at last! With 66 billion chickens in the world, surely a couple of thousand of them in exchange for the one person in exchange for the five will not be missed!

  81. Rafal says:

    We, the cold-hearted ones, have it so much easier. To build an efficient society there is no need to posit any duty beyond the duty of non-initiation of violence against members of the in-group (to express a large complex of ideas in a grossly simplified sound-bite). No one has a duty of “concern”, although we warmly welcome those who have concern for us, and we may choose to reciprocate. Does my dog love me? I may love it back, after a fashion. I have no duty to care. I ate a dolphin once. Tasted like chicken.

  82. At a slight tangent to your concern …

    Animals, like humans, raise the question of whether you are maximizing average utility or total utility. I think one can show that maximizing average utility is indefensible, but total utility raises problems.

    In the case of animals, it implies that having lots of chickens each of which gets some utility is probably better than having a few living very happy chicken lives and surely better than having none. Following out the logic of that, it’s possible that the best thing you can do as a utilitarian is find a very persuasive vegetarian, one who persuades lots of other people to imitate him, and assassinate him.

    I have an old article discussing the analogous problem for humans—how to compare alternative futures with different numbers of people in them.

    “What Does Optimum Population Mean?” Research in Population Economics, Vol. III (1981), Eds. Simon and Lindert.

    So far as I know it isn’t webbed, but the abstract is at:

    http://www.popline.org/node/394240.

    • Irrelevant says:

      Sounds like the generalized question here is how to make utilitarian judgments without requiring the usual (often implicit) constraint of metastability.

    • Peter says:

      One thing about average utilitarianism that’s not clear from the explanations I see – how does it work over time?

      One formula (I’ll call it A) I can think of is to calculate the average utility at time t, the average utility at time t+1, etc., then average that. Another formula (I’ll call it B) is to calculate the total utility over a large timespan and then divide by the number of lives lived. Consider a stable population of (wlog) 1000 with a life expectancy of 50 years, and another stable population – again 1000 – with a life expectancy of 100 years. Suppose moment-to-moment experiences are pretty similar, such that under total utilitarianism both situations are equally desirable. Under formula A, both situations are still equally desirable, under B you prefer the second because the total number of lives lived is half for the same total utility.

      I think B has some advantages – I think it’s what you get if you do a Rawls/Harsanyi thought experiment[1] without the “you’re not allowed to do probabilities” stipulation – if your contractors know they’re going to have the privilege of actually existing but otherwise know nothing (and presumably are going to be “diluted” by various amounts of non-contractors who still get bound by the contract – there are some odd assumptions about personal identity here). It has the weird property that having a big unhappy or happy population in the past can skew your optimal population now.

      B also has the nice property that it makes no sense to kill someone (who has a life worth living) for not being happy enough, whereas A gets close to mandating it. In fact I think B gives stronger reasons for not killing than total utilitarianism.

      [1] I think, if you’re leaning towards average utilitarianism, you should consider whether contract(arian/ual/???)ism might be doing a similar thing but better. Total utilitarianism feels less contracty to me, although you can sort-of do it if you allow potential people into the discussion.
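      A small numeric illustration of formulas A and B, using the hypothetical above (stable population of 1000, life expectancy 50 vs. 100 years, identical moment-to-moment utility); the timespan and per-person-year utility are arbitrary choices for the example:

      population = 1000
      utility_per_person_year = 1.0
      timespan = 10_000  # years; long enough that edge effects are negligible

      for life_expectancy in (50, 100):
          total_utility = population * utility_per_person_year * timespan
          formula_a = total_utility / (population * timespan)    # average at each moment, averaged over time
          lives_lived = population / life_expectancy * timespan  # births needed to keep the population stable
          formula_b = total_utility / lives_lived                # total utility per life lived
          print(life_expectancy, formula_a, formula_b)
      # A is indifferent (1.0 both times); B prefers the longer-lived population (100 vs. 50 per life).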

      • The problem with average utilitarianism was pointed out long ago by Mead, one of several distinguished scholars who stole ideas of mine before I thought of them. His version is to imagine a world with two cities, A and B. Both are very attractive places filled with happy people, but the people in A are a little happier than the people in B and the two cities have no significant dealings with each other.

        Would it be a good thing if a plague painlessly killed everyone in B? Average utility goes up.

        My version was to imagine someone who lives his entire life on an island, never interacting with another human being. Does it make sense to say that his life is a good thing if the rest of the world’s population averages a little less happy than he does, and a bad thing if they average a little happier – his life being exactly the same in both cases?

  83. But there is no evidence that pain asymbolia is normal in most non-human species.

  84. It’s adaptive to care more about ourselves, and then more about our relatives, and so on. In Robert Sapolsky’s Human Behavioural Biology lectures (which I highly recommend), he quotes someone as having said, “I will lay down my life for two full siblings, four half siblings, eight first cousins…”

    Also someone saying, “Me against my brothers, my brothers against my village, my village against their village…”

    There is moral value in the outlying areas, but it’s offset by its distance. To me, it feels like gravity – proportional to mass but inversely proportional to the square of the distance.

  85. FullMeta_Rationalist says:

    (I’ve only read a quarter of the comments, so I imagine someone has made this argument already. But I have to leave soon, so I can’t know for sure ATM.)

    I feel like we can still save utilitarianism if we, you know, actually follow the consequences. Let’s say we assign chickens a 1:1 ratio of worth compared to humans. What happens? Society directs the entirety of our resources towards the multitude of chickens in the world. Practically nothing is directed towards the rest of humanity. Civilization collapses. And in the end, we haven’t even started to address the sea of problems chickens have. This seems unsustainable and suspicious to me and is probably the reason Scott calls this particular formulation of utilitarianism insane.

    I too used to think pain/suffering was the central utilon in the ideal moral calculus – but since reading SSC’s posts on society building, I think pain might be a red herring (intelligence too). Like sure, pain is ultimately still what we want to reduce, but pragmatically it’s the wrong way to frame things. Rather, I think the reason people don’t already de facto provide for the welfare of chickens is because they’re not productive or useful to us economically (except maybe for being eaten obviously).

    Some months ago, a friend asked me what would happen if we could talk to animals. As I’m the resident nerd, he expected a sufficiently nerdy answer. After smoothing out a few axioms, one of the things I said was that we’d find some way to integrate them into the economy.

    The ability to communicate with animals means that we would be able to coordinate our activities towards mutually beneficial goals. I.e., we’d be able to form contracts. Cue PvP, game-theory, contractualism, etc. The ability to coordinate with the dominant species is a ticket to first class citizenship on the planet. But unless something like that happens, I think chickens will mostly be viewed as just another resource. No different than petroleum or iron.

    This is at least my model of how the calculus balances out descriptively. Not sure what this implies prescriptively yet.

    • Wrong Species says:

      If society collapses, then we wouldn’t be able to factory farm chickens anymore. So you could say that it would still be better from a negative utilitarian standpoint.

  86. michael vassar says:

    I agree that it’s better to only extend our concern in a limited manner (I would say that Pareto Optimality makes a pretty good basis for doing this), but I also think that simply counting synapses gives a fairly plausible world-view where humans and animals are within an order of magnitude of parity. Counting cortical columns puts humans far ahead of animals. There are plausible similar options that do similar things.

    Synapse count times remaining synapse lifetime is fairly natural.

    Also, note that the next step after, or even before, third world humans might sensibly be future generations, not animals.

  87. Here’s a monkey wrench to throw into this: what about sentient aliens (whose intelligence is somewhere in the same ballpark as people) who live in another galaxy? Assuming you could do something to help them at a reasonable cost, would you do it?

  88. math_viking says:

    I’m going to express skepticism of the claim that “If it takes a thousand chickens to have the moral weight of one human, the importance of chicken suffering alone is probably within an order of magnitude of all human suffering.”

    Looking at this xkcd: http://xkcd.com/1338/ we see that there is several times more weight of animals than of humans, but mostly those are much heavier animals. It looks like the number of pets and livestock would be fairly close to the number of humans (wild animals are similarly small, and there are utilitarian arguments for not disturbing the environment too much on top of that). Birds, chickens in particular, are not included there, but this xkcd What If https://what-if.xkcd.com/11/ references a paper claiming there are around 300 billion birds, and this What If https://what-if.xkcd.com/74/ (why *am* I citing xkcd thrice in this one post anyway?) suggests only a fraction of those are chickens.

    You really think the average chicken suffers many thousands of times more than the average human?
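
    (Spelling out the back-of-envelope version, since the answer swings a lot depending on which population figure you plug in; the chicken count below is an explicit assumption, not a sourced number:)

    ```python
    humans = 7e9       # rough world population
    chickens = 20e9    # ASSUMPTION: order-of-magnitude guess at chickens alive at once
    weight = 1 / 1000  # the post's hypothetical: one chicken = 1/1000 of a human

    # Multiplier by which an average chicken would need to out-suffer an average human
    # for total chicken suffering to equal total human suffering:
    parity = humans / (chickens * weight)
    print(parity)        # ~350 under these assumptions
    print(parity / 10)   # ~35 is enough to be "within an order of magnitude"
    ```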

  89. CJB says:

    I cope with this through thresholds.

    Animal suffering matters. Animal suffering is important.

    Human suffering matters. Human suffering is important.

    Stopping shoplifting matters. Shoplifting is important.

    Stopping murder matters. Murder is important.

    All these statements are true and useful. However, all the shoplifts that ever were or are will never equal one murder. Nor will all the dogfights that ever were or are. If the choice is strangling a hundred puppies or beating one old lady very badly… well, ultimately, suffering isn’t… mathematical. I can measure, quite effectively, the difference between the heat of a match and a bonfire. But if I put a match under your foot right now and a match under your foot while you’re being burned at the stake, that’s two very different alterations in pain levels.

    Which is also my answer to the Less Wrong problem of “which is worse- one person being lit on fire for a year or 10^6^6^6^6^6^6 people getting a speck of dust in their eyes?”

    The problem here is the same as “why is killing 40 trillion amoebas not the same as killing a person?”

    Because a pile of trillions of amoebas cannot be a person. It may mass the same as a person, but so does a log. Human consciousness, then, is not only an emergent phenomenon. It’s an emergent property that places us light years ahead of our nearest competitor.

    Arguably, the smartest non-human creature we’ve encountered is Koko the Gorilla, and the difference, not only in vocabulary but conceptually, is… functionally limitless.

    Quantum leap, poor abused phrase that it is, actually applies here. We’re talking about different orbitals. Cows and chickens and fish occupy various strata within the “animal” orbital, but there’s no gradual slope between “animal” and “human.” The difference is evident and extreme.

    So should we feel worse about killing a chimp than a chicken? Yes. Should we kill a billion chimps or one person? A billion chimps. If the choice is between a billion chimps and 6 billion chickens, we have a moral discussion.

    Otherwise the question is functionally meaningless.

    As for severely mentally disabled humans- we roll them into humanity at large because, frankly, they’re rare enough that it’s not an issue worth getting into vs. the benefit of getting into it.

    • Princess Stargirl says:

      “However, all the shoplifts that ever were or are will never equal one murder. Nor will all the dogfights that ever were or are.”

      I will note this attitude is about as far from Scott’s moral theory as you can get. And mine. I am not sure if these sorts of debates can really be resolved.

      • Marc Whipple says:

        Okay, I’m going to have to stop you right there, if you are going to argue with Mr. Pump.

        “Do you understand what I’m saying?” shouted Moist. “You can’t just go around killing people!”
        “Why Not? You Do.” The golem lowered his arm.
        “What?” snapped Moist. “I do not! Who told you that?”
        “I Worked It Out. You Have Killed Two Point Three Three Eight People,” said the golem calmly.
        “I have never laid a finger on anyone in my life, Mr. Pump. I may be–– all the things you know I am, but I am not a killer! I have never so much as drawn a sword!”
        “No, You Have Not. But You Have Stolen, Embezzled, Defrauded And Swindled Without Discrimination, Mr. Lipvig. You Have Ruined Businesses And Destroyed Jobs. When Banks Fail, It Is Seldom Bankers Who Starve. Your Actions Have Taken Money From Those Who Had Little Enough To Begin With. In A Myriad Small Ways You Have Hastened The Deaths Of Many. You Do Not Know Them. You Did Not See Them Bleed. But You Snatched Bread From Their Mouths And Tore Clothes From Their Backs. For Sport, Mr Lipvig. For Sport. For The Joy Of The Game.”

        • CJB says:

          But that’s precisely it- Mr. Pump IS wrong.

          Extend Mr. Pump’s theory. Suppose I steal 0.1 cents from every single person on earth (Office Space gone berserk). That’s seven million dollars: hardly the greatest heist of all time, but well into Mr. Pump territory.

          Show me the harm.

          OK. Well, say I steal $1 from everyone on earth.

          But again, that’s not doing equal amounts of harm. The dollar I stole from myself? I didn’t even notice it. Bill Gates wouldn’t even notice the blip against his increasing wealth.

          But to a guy in the CAR? That’s maybe a week’s wages, and it may send him into a downward spiral that destroys his entire life, or what’s left of it.

          So what we see here is that what matters isn’t the totality of the harm, but the impact of the harm. In other words, Mr. Pump isn’t even making the argument you think he’s making.

          Your argument (and Scott’s) is strictly linear: 1 dead chicken + 1 dead chicken + … = 1 human, eventually.

          Mr. Pump doesn’t really argue that. He points out that when Moist stole, not 0.1 cents from everyone on the Discworld but 100K from THIS bank, this one right here, actual lives were destroyed. In other words, stealing a large amount of money isn’t just “stealing” in the same way that ganking a candy bar is: it impacts people’s actual lives and not just their bank balance.

          However, I’d like to see a moral argument that 0.1 cents from everyone on earth is equivalent to $7 million from one dude, or at least falls under similar moral consideration.

          I’d say harm is… asymptotic. Torturing chimps for fun and profit gets worse and worse and worse… but it never quiiiiiiite reaches “human murder.”

          In other words, even though it adds up to seven million, you never actually HARMED anyone at all. A 0.1 cent loss, like a dust mote in the eye, isn’t a form of HARM. Even in an additive system, you still need to add up “harm.”

          Which is another problem with super reductive arguments like this. No one is going to classify the loss of 0.1 cents as “harm.” Not even people in the CAR.

          But people will define stealing a dollar that way.

          So somewhere between 0.1 cents and 100 cents, there’s a threshold where harm starts to exist (I suspect for most Westerners, this is around a dime, i.e. an amount of money that is actually worth considering).

          It’s like worrying about the change caused by one carbon atom in your body. In a hypothetical world, it’s an interesting thought. In the real world, it’s below consideration.
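
          (One way to make that threshold explicit; a sketch only, with the dime threshold and the loss amounts taken from the argument above:)

          ```python
          THRESHOLD = 0.10  # dollars; losses below this register as zero harm, per the comment

          def harm(loss_dollars: float) -> float:
              """Per-person harm under a simple threshold model: losses below the
              threshold count for nothing, losses at or above it count at face value."""
              return loss_dollars if loss_dollars >= THRESHOLD else 0.0

          population = 7_000_000_000

          print(0.001 * population)         # $7,000,000 taken in total...
          print(harm(0.001) * population)   # ...but 0.0 modeled harm: nobody crosses the threshold
          print(harm(1.00) * population)    # stealing $1 each: 7e9 in harm, every person registers it
          ```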

          • I’m not sure how one goes about proving moral arguments, but let me suggest that your intuition, while very natural, reflects the fact that humans are bad at intuiting very large numbers or very small probabilities.

            But let me try:

            We observe that humans are willing to trade a very small probability of dying against quite minor pleasures, such as the pleasure of eating an ice cream cone when you know you are a little overweight or of going to a movie when you know that if you went to no movies this year you could spend the money on an extra medical checkup which would have some (very small) chance of saving your life.

            So suppose you make it a gamble, offered to a billion people—a certainty of a dollar to spend against one chance in a billion of being murdered. Any reason to think they would not accept, given that people routinely accept larger risks than that for smaller payoffs? If they would all accept, what’s your basis for saying that murder is more important than the billion dollars?
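
            (The arithmetic of the gamble, spelled out; the value-of-statistical-life figure at the end is an assumption added for comparison, roughly the order of magnitude regulators commonly use:)

            ```python
            population = 1_000_000_000  # people offered the gamble
            p_death = 1e-9              # each person's chance of being murdered
            payout = 1.00               # dollars per person, with certainty

            expected_deaths = population * p_death  # 1.0 expected murder
            total_payout = population * payout      # $1,000,000,000

            # Implied price per statistical life if everyone accepts:
            print(total_payout / expected_deaths)   # $1e9 per expected death

            # ASSUMPTION: commonly cited value-of-statistical-life estimates are on the
            # order of $10 million, so by that yardstick the trade looks like a bargain.
            VSL = 10_000_000
            print(total_payout / expected_deaths > VSL)  # True
            ```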

  90. Glenstorm says:

    Clare Palmer’s book “Animal Ethics in Context” is pretty good at providing some principles about where to draw lines between care of humans and care of animals.

  91. Garrett says:

    Reading this post and the comments, I’ve been led to a few questions on contemporary utilitarianism:

    1) Is Utilitarianism supposed to be or expected to be prescriptive?

    2) If I’m reasonably well-off, and somebody stuck in a Ferguson-like court system begs me for $20 to pay off a fine for something I don’t consider objectionable, or otherwise they will face greater fines, jail time, and a loss of their job, housing and car, should I give them the money? Assume that the charges and the situation of the person involved can be reasonably verified.

    3) If a man in his early 20s claims to be suffering greatly because he is a virgin and informs you of this, can demonstrate he is free of disease, etc., and you are not married, are you ethically bound to offer to have sex with him?

    4) How do you handle the case in #3 but where the man in question is not suffering, but lying in order to get laid?

    • Airgap says:

      In re (4), the obvious answer is that you should always agree, and not inquire into or consider whether he is telling the truth in order to maximize total utility, unless it turns out that there are no single women in the Effective Altruism community that I find attractive, in which case I have no position on (4).

  92. 1. You seem to be making the assumption that suffering can be added, multiplied, and divided as a numerical quantity. It seems to me the easiest way out of this philosophical dilemma is to drop this assumption (especially since it seems to quickly yield absurd conclusions).

    2. I am reminded of the story (perhaps untrue, but that’s beside the point) of a group of animal-rights activists who liberated a bunch of turkeys from a turkey farm so that they could “live out their normal turkey lives.” Of course, from the turkeys’ standpoint, this means merely that they would be eaten by a fox or coyote rather than a human being.

  93. Pingback: Outside in - Involvements with reality » Blog Archive » The Rights Stuff

  94. Pingback: The Rights Stuff | Neoreactive

  95. Pingback: Outside in - Involvements with reality » Blog Archive » Chaos Patch (#52)

  96. Shanthi says:

    Very interesting thoughts, but I think that you may be mislabeling one thing. Rather than “sanity,” I think it’s really “comfort.” I don’t think you risk hits to your sanity choosing to give most of your money to starving strangers or to animal welfare. You risk (in fact, give away) comforts like a nicer house or apartment, nicer car (or even owning a car), etc. In fact, I’d argue that the greater risk to one’s sanity is NOT giving that money away and accepting that you put your own creature comforts above other people and animals not living their lives in pain.

    The danger, of course, in such a mislabeling is another level of justification. “The reason I don’t give more than 10% of my income to charity or choose to value strangers with deeper need over friends is because my ‘sanity’ will suffer and I should be valuing my sanity!” That’s not to say you shouldn’t value your comfort, but it creates a different vibe, and it’s one you should be cognizant of and accept for what it is, rather than recasting it as something more urgent and vital like “sanity.”

    • Highly Effective People says:

      I think it’s probably more a pithy way of expressing “I’m not going to give away 99% of my income anyway, but I can at least not be neurotic about it” than anything else.

      You’ve got to admit this is an odd bunch of people here. A story about a man trying to get himself chemically castrated because he was worried expressing his sexual interest constituted misogynistic oppression comparable to rape was met not with disbelieving confusion but with a chorus of “yes! that was my experience too!” from the commentariat. A nontrivial number of people here are paying companies monthly fees to freeze their heads in anticipation of a literal Deus Ex Machina capable of restoring the dead and placing them into virtual reality paradises. And every time a psychiatric drug or nootropic comes up, everyone compares notes like school kids seeing if anyone has tried the new flavor of Gushers.

      (None of that was intended as pejorative btw. Life would suck without interesting people.)

      I’d absolutely believe someone posting on SSC could have a breakdown from the cognitive dissonance involved in not wanting to give an anonymous Ethiopian their cloak as well as their tunic.

  97. MadRocketSci says:

    By the time most people figure out what they’re doing they already accept at least friends, family, and community. But going from “just my community” to “also foreigners” is a difficult step that’s kind of at the heart of the effective altruism movement. In the same way that allowing animals into the circle of concern totally pushes out the value of all humans, allowing starving Third World people into the circle of concern totally pushes out most First World charities like art museums and school music programs and holiday food drives. This is a scary discovery and most people shy away from it. Effective altruists are the people who are selected for not having shied away from it. So why shy away from doing the same with animals?

    I’m probably not a utilitarian, at least not in the sense used by most philosophers – the belief that there is something out there (a global utility function of some sort) that it makes sense to maximize doesn’t make much sense to me.

    I’m probably most influenced by my game theory class: Utility in economics is a rank ordering of preferred states over less preferred states for a single agent. An agent has a utility function, not a group of agents, or the nation, or the world. The utility function encodes information about what the agent would prefer, presumably what the agent would try to solve for if given influence over the state of a world or dynamic state machine. There are situations where collective good vanishes as a coherent concept (prisoner’s dilemmas, zero-sum games, competitions, etc.)

    Because of this, I don’t find it unnatural or wrong that we care about our families and selves more than more distantly related groups.
    1. Our families are more likely to reciprocate any goodwill and aid; so too, to a much lesser extent, are larger and more distant groups with other concerns.
    2. We do care about ourselves, our families, our autonomy and success more than about some people we’ve never met. They care about themselves more than about us. As long as our valuation of them is situationally nonzero, we have a basis for cooperation if we meet, and as long as we aren’t jammed into zero-sum situations, we (understanding and respecting the fact that they are solving for their own utility) can avoid conflict by pursuing strategies where neither party gets screwed. Positive-sum games, etc.
    3. We each only have a finite amount of resources. After we’ve burned through our margin for charity to random strangers/countrymen/etc., and our margin for looking after our extended family (which in some cases is far more than what we reserve for ourselves), we have to look after ourselves and our immediate family if we want to survive or pursue any future objective we may have.

    Anyone who actually tries to strictly follow utilitarianism is vulnerable to things like utility monsters, or infinite sinks of collective need.

  98. MadRocketSci says:

    Speaking of utilitarianism, chicken suffering, and utility functions:

    If you are a fox, your utility function probably includes not starving, which means eating well. Raiding henhouses makes a lot of sense to you.

    If you are a chicken, your utility function probably includes not dying, which means avoiding predators. Staying the heck away from foxes makes a lot of sense to the chicken.

    Which one is right? Where is the collective good in this situation? You could try to come up with some global utility function measuring total wellbeing or total suffering, by doing some arbitrarily weighted average of the different agents’ utility functions. Then you could make some (any) arbitrary pronouncement about what the different parties in this situation “should” be doing. The thing is, the fox and the chickens don’t care about this pronouncement (and why should they?). The fox wants to eat. Caring about chicken survival isn’t going to help it achieve its goal.

    “Should” seems like it only really applies to strategies to achieve what fundamentally motivates you on a basic level. It doesn’t seem to me like it really applies to the fundamental motive.

    If you are a human in a relatively non-pathological society, you “should” try to suppress immediate gratification to get along with your neighbors and respect their rights. You “should” do this because it is a better strategy in the end to satisfy your foundational motivations (your survival, the survival of your family, wealth, security, self-actualization, friendship, etc.). If you instead claim (as many predatory and pathological societies have throughout history) that you “should” neglect your foundational motivations – it doesn’t make any sense. You shouldn’t want wealth or security, you selfish !@#!ard! You shouldn’t want food, what about the starving people in $ARBITRARILY_DISTANT_LAND. Etc. Doesn’t matter how much someone shakes their finger. You do want those things, and cooperating with a society/cult/etc. that tries to break you of it isn’t going to help you achieve your goals.

    If you are a fox raiding a henhouse, you probably aren’t in the habit of listening to utilitarian philosophers. What are you supposed to make of the claim you should value the chicken’s wellbeing more than your own? You don’t. Simple as that. The things you “should” do amount to strategies to best catch those chickens and satisfy your foundational motives.
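
    (A toy version of the “arbitrarily weighted average” point; the payoff numbers are invented purely for illustration:)

    ```python
    # Each agent's utility over the two outcomes, with made-up payoffs.
    outcomes = {
        "fox raids the henhouse": {"fox": 1.0, "chicken": -1.0},
        "fox stays away":         {"fox": -0.5, "chicken": 1.0},
    }

    def global_utility(outcome: str, w: float) -> float:
        """Weighted average of the two agents' utilities; w is the fox's weight."""
        u = outcomes[outcome]
        return w * u["fox"] + (1 - w) * u["chicken"]

    for w in (0.2, 0.5, 0.8):
        best = max(outcomes, key=lambda o: global_utility(o, w))
        print(w, "->", best)
    # The "right" outcome flips as the arbitrary weight w changes, and neither
    # animal cares what the aggregate says in any case.
    ```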

    • houseboatonstyx says:

      You could try to come up with some global utility function measuring total wellbeing or total suffering, by doing some arbitrarily weighted average of the different agents’ utility functions. Then you could make some (any) arbitrary pronouncement about what the different parties in this situation “should” be doing.

      Maybe they are already doing what produces the most and best-quality utilons for themselves, thus adding to the total number of positive utilons in the world. A fox enjoys eating till his stomach is full, then goes to his den and enjoys curling up with a full stomach — untroubled by thought or worry about tomorrow. A hen enjoys grazing in a flock of lookouts and alarm-sounders, doing her part with less stress than a human crossing a street. If the fox catches her, she has a couple of minutes of panic fleeing, then an almost instant death.

      Am I the only person who stubs their toe, feels the thump and cut and knows how much pain that will give … but the actual pain doesn’t happen for a second or two? In more serious injuries, there’s also shock to block the pain.

      Because both animals exist, different kinds of happiness are created. A rich variety of utilons.

  99. Pingback: mental models as a fandom | englebright

  100. CriticallyDisappointed says:

    I have been on the hunt for something that really deals with the issue of animal rights with complexity since reading this post. I found this: http://michaelpollan.com/articles-archive/an-animals-place/

    [Trigger warning, I want to talk about an “edge case” that includes elements of violence and dehumanization]
    As an addendum to the above, I think I’ve also come to agree with Daniel Dennett’s view that there are fundamental differences in cognition between humans and most animals. There are certainly interesting edge cases around this. I would propose one: how should you morally treat a brain-injured human reduced to chicken-level intelligence, requiring constant care to keep alive, and constant supervision or restraint to prevent the human from assaulting other people?
    Suppose I were to, in advance, declare that I regard such a state as nightmarish and that I would request physician-assisted suicide were I in such a state, to be added to my standing request to have my life terminated if I enter a persistent vegetative state.
    Would we consider the brain-injured human “human,” or morally imperative to treat as human, only for surface, aesthetic, or emotional reasons?

    I also think the earlier-mentioned argument (that some ideas about animal rights could be taken to give humans a moral imperative to destroy all of the natural world to prevent suffering) is, well, both funny and a good point against the way that whole mess of arguments forms a flawed paradigm. I think the whole series of arguments really does hinge on the premise that animal pain can be considered to have moral equivalence to human suffering. Without that assertion, it doesn’t fly, and I think there are reasonable arguments against that assertion.

    The difficulty with this topic, I think, is that it is immensely easy to accuse anyone who defends meat-eating of selfishness, whether they are doing it for pleasure or for genuine health reasons (a whole other barrel of eels). This overlays all the arguments in a very detrimental way, I think.

  101. HarveyNof says:

    Part of my misunderstanding was that sometimes, when the Unprincipled Exception is cited, the former case appears to be in view: i.e., someone holds to a position but doesn’t take it to its logical conclusion in situations where it would clearly be untenable, yet still holds to the position overall.

  102. Daneel says:

    In terms of helping people closer to you more than those further away, I toyed with the idea for a while that one should rank all people in terms of how close they are to you and allocate an amount of your income proportional to 1/n to helping the closest n people in the best possible way.

    So 1/log(7 billion) ~ 1/22 of your money gets allocated to just yourself.
    Half that much goes to helping you and your closest friend (though if they do not particularly need money, it may be more efficient to spend it on yourself, since you know your own needs).
    If you live in the US, about log(300 million)/log(7 billion) ~ 85% of your money should go to people in the US (assuming they are your 300 million closest), and so on.

    It seemed like a way that made this idea concrete without making it feel totally insane in either direction (though I guess these numbers probably end up slightly on the overly burdensome end, but not nearly as much as the “you must donate all your income to Africa/animal rights/MIRI” theory).
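
    A minimal sketch of the rule as I mean it (assuming the pool covering your n closest people gets weight proportional to 1/n, normalized by the harmonic sum, which is where the log ratios above come from):

    ```python
    import math

    WORLD = 7_000_000_000
    EULER_GAMMA = 0.5772156649

    def harmonic(n: int) -> float:
        """Approximate harmonic number H(n) ~ ln(n) + Euler-Mascheroni constant."""
        return math.log(n) + EULER_GAMMA

    def share_within_closest(k: int, total: int = WORLD) -> float:
        """Fraction of income allocated to the pools covering only your k closest
        people: pool n gets weight 1/n, so pools 1..k get H(k) out of H(total)."""
        return harmonic(k) / harmonic(total)

    print(1 / harmonic(WORLD))                # ~0.043: roughly 1/23 spent on just yourself
    print(share_within_closest(300_000_000))  # ~0.86: about 85-86% stays within the US
    ```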