The more consistently one attempts to adhere to an ideology, the more one's sanity becomes a series of unprincipled exceptions.
— graaaaaagh (@graaaaaagh) February 5, 2015
Meeting with a large group of effective altruists can be a philosophically disconcerting experience, and my recent meetup with the Stanford Effective Altruist Club was no exception.
Buck forced me to pay attention to an argument I’ve been carefully avoiding. Most people intuitively believe that animals have non-zero moral value; it’s worse to torture a dog than not to. Most people also believe an animal’s moral value is some function of its complexity and intelligence, which leaves animals less morally important than humans but not infinitely less so. Most people then conclude that the welfare of animals is probably moderately important in the same way the welfare of various other demographic groups like elderly people or Norwegians is moderately important – one more thing to plug into the moral calculus.
In reality it’s pretty hard to come up with a way of valuing animals that makes this work. If it takes a thousand chickens to have the moral weight of one human, the importance of chicken suffering alone is probably within an order of magnitude of all human suffering. You would need to set your weights remarkably precisely for the values of global animal suffering and global human suffering to even be in the same ballpark. Barring that amazing coincidence, either you shouldn’t care about animals at all or they should totally swamp every other concern. Most sets of otherwise reasonable premises point to the “totally swamp every other concern” branch.
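To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch of the argument. Every figure in it is an illustrative assumption on my part (rough world population, rough chicken count, guessed suffering rates), not data from the original; only the thousand-to-one weight comes from the text.

```python
# Back-of-the-envelope sketch of the "swamping" argument.
# All figures are rough, illustrative assumptions.

humans = 8e9                  # rough world population (assumption)
chickens = 25e9               # rough count of chickens alive at any time (assumption)
weight = 1 / 1000             # the text's hypothetical: 1000 chickens = 1 human

human_suffering_rate = 0.01   # assume ~1% of humans are in serious suffering
chicken_suffering_rate = 1.0  # assume nearly all farmed chickens suffer

human_suffering = humans * human_suffering_rate               # "suffering units"
chicken_suffering = chickens * chicken_suffering_rate * weight

print(chicken_suffering / human_suffering)  # ~0.3: same order of magnitude
```

Note how fragile the result is: nudge the per-chicken weight, or either suffering rate, by a factor of ten in either direction and the ratio either becomes negligible or swamps human suffering entirely, which is exactly the "remarkably precise weights" point above.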
So if you’re actually an effective altruist, the sort of person who wants your do-gooding to do the most good per unit resource, you should be focusing entirely on animal-related charities and totally ignoring humans (except insofar as human actions affect animals; worrying about x-risk is probably still okay).
I acknowledged the argument was very convincing, but told Buck that I was basically going to safe-word out of that level of utilitarian reasoning, for the sake of my sanity.
Buck pointed out that this shouldn’t be too scary, given that many utilitarians have already had to go through a similar process. Peter Singer talks about widening circles of concern. First you move from total selfishness to an understanding that your friends and family are people just like you and need to be treated with respect and understanding. Then you go from just your friends and family to everyone in your community. Then you go from just your community to all humanity. Then you go from just humanity to all animals.
By the time most people figure out what they’re doing they already accept at least friends, family, and community. But going from “just my community” to “also foreigners” is a difficult step that’s kind of at the heart of the effective altruism movement. In the same way that allowing animals into the circle of concern totally pushes out the value of all humans, allowing starving Third World people into the circle of concern totally pushes out most First World charities like art museums and school music programs and holiday food drives. This is a scary discovery and most people shy away from it. Effective altruists are the people who are selected for not having shied away from it. So why shy away from doing the same with animals?
It’s a good question. After thinking about it for a while, I think my answer is this: I never actually completed the process of widening my circles of concern, and neither has anybody else. Because I’m thinking about this one in an abstract intellectual way, I’m imagining actually completing it, which would be much scarier than the incomplete versions I’ve managed before.
Like, although I acknowledge my friends and family as important people whom I should try to help, in reality I don’t treat them as quite as important as myself. If my brother asked me for money, I’d lend it to him, but I wouldn’t give him exactly half my money no-strings-attached on the grounds that he is exactly as important to me as I am.
Likewise, although I acknowledge strangers as important people whom I should try to help, in reality I don’t treat them as quite as important as my friends. We all raised a lot of money to help Multi when she was in a bad situation, but there are thousands of other people in the exact same bad situation and we’re not putting nearly as much effort into them.
You can try to justify this in terms of “well, I know myself better than I know my brother, and I know Multi better than I know strangers, so I’m more effective at helping me and Multi, so I’m just rationally doing the things that would have the most impact”. But I think if I bothered to dream up some thought experiment where that wasn’t true, I would still prefer to help myself and Multi over my brother and random strangers even after that factor had been controlled away.
This doesn’t come as a surprise to me and I’m not sorry. But…well…I guess my worry about the animal charity thing wasn’t that I was inconsistent, so much as that I was being meta-inconsistent; that is, I didn’t even have a consistent set of rules for deciding whether I was going to want to be consistent or not.
And now I think I might have a consistent policy of allowing some of my resources into each new circle of concern while holding the rest back for the sake of my sanity. Thus my endorsement of Giving What We Can’s principle that you should donate at least 10% of your income to charity, but then feel okay about not donating more if you don’t want to. I am allowed to balance resources devoted to sanity versus morality and decide how much of what I have I want to send into each new circle of concern – without denying that the circle exists.
I think that armed with this idea I am willing to accept Buck’s argument about animal welfare being more important than human welfare, insofar as this means I should donate some resources to animal welfare without necessarily having to give up caring about human welfare completely. I don’t think I can make a principled defense of doing this. But I think I can claim I’m being unprincipled in a meta-consistent and effectively sanity-protecting way.