Ozy vs. Scott on Charity Baskets

I have invited Ozy to post to Slate Star Codex. I ended up disagreeing with their first post, so I’m going to include it along with my rebuttal.

Ozy:

A man goes up to a stockbroker and says, “You guys are so stupid. You invest in more than one stock. But there’s only one stock that is going to pay off the most. Why don’t you just put all your money in the stock that is going to earn the most money, instead of putting it in a bunch of stocks?”

With my usual quick and timely response, I would like to point out the fallacy within this article on effective altruism. The authors offer up several things that would, in an effective altruist world, not exist:

If we all followed such a ridiculous approach, what would happen to:

1. Domestic efforts to serve those in need?
2. Advanced research funding for many diseases?
3. Research on and efforts in creative and innovative new approaches to helping others that no one has ever tried before?
4. More local and smaller charitable endeavors?
5. Funding for the arts, and important cultural endeavors such as the preservation of historically important structures and archives?
6. Volunteerism for the general public, since most “worthy” efforts are overseas and require a professional degree to have what Friedman calls “deep expertise in niche areas”?
7. Careers in the nonprofit sector?

The answer to several of those is pretty obvious: people should work in the nonprofit sector if that’s their comparative advantage, who gives a @#$! about volunteerism or local charitable endeavors, arts funding comes out of people’s entertainment budgets the way it should, and resources are scarce and each donation to someone relatively well-off in the developed world trades off against resources for someone less well off. So far, so well-trammeled.

However, I think their points two and three are actually really interesting. A lot of people seem to think of effective altruism as being like the man who wants to invest in the best possible stock. However, in reality, just as a person who wants to maximize their returns invests in more than one stock, a society where everyone is an effective altruist would probably have a variety of different charities (although perhaps a narrower segment of charities), just as we do now.

To be clear, there are certain charities that are not effective at all and would probably not exist in a hypothetical effective altruist society. The Make-A-Wish Foundation would probably not survive the conversion (except, presumably, funded out of people’s entertainment budgets). Nevertheless, the nonexistence of obviously ineffective charities doesn’t mean that we as a society would decide to have fewer charities, any more than not buying lottery tickets means that you are only allowed to invest in one stock.

(Note that I am using stocks as an analogy. Stocks and charity donations are unlike each other in a lot of ways. It isn’t a perfect metaphor. Also, I literally know nothing about stock investing.)

One of the reasons that people invest in more than one stock is uncertainty. Probably some stocks will go up and some stocks will go down. However, I, as an investor, don’t know which stocks will pay off more than other stocks. Therefore, I want to hedge my bets. Knowing that the market will go up in general, I choose to invest in a variety of different stocks, so that no matter what happens I keep some of my money.

A similar uncertainty applies to charities. For instance, it’s possible that GiveDirectly is run by crooks who steal all the donations. (As far as I know, GiveDirectly is an excellent organization and never steals anyone’s money.) If everyone has given to GiveDirectly, we’re screwed. If we have several different charities giving cash to people in the developing world, then it matters less that one of them is run by crooks. Similarly, we may be uncertain about whether malaria relief or schistosomiasis relief is the best bang for one’s charity buck. Given that it is impossible to eliminate all uncertainty, it’s best to direct some money towards both, so that in case malaria relief turns out to be a bust we haven’t wasted all our charitable budget.

In stocks, return is a function of risk. If there’s a chance of a large payoff, there’s an even larger chance of going bust and losing everything. If there’s not very much risk, you get payoffs that are barely larger than inflation. Therefore, you want a balanced investment strategy: have some high-risk investments that might make you rich, and some low-risk investments that have a less awesome payoff.

This also applies to charity donation, which is where we get to Berger and Penna’s concerns. Something like malaria relief is low-risk and relatively low-return. If you distribute malaria nets, it is pretty certain that people are going to have lower rates of malaria. However, there’s not much chance of getting a payoff higher than “people have lower rates of malaria, maybe no malaria at all,” which will probably save millions of lives. Compare this to, say, agronomy or disease research. With agronomy, there is a high chance that you will pour in millions of dollars and get nothing. Most agronomic research gets us, say, wheat that’s a little better at resisting weeds, or a better understanding of the ideal growing conditions of the chickpea. However, there’s the slim chance that you’ll have another Green Revolution and save literally billions of lives. As effective altruists, we want to invest in both high-risk high-return and low-risk low-return charities.
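
To make the trade-off concrete, here is a toy expected-value calculation; every number in it is invented for illustration, not an estimate for any real charity:

```python
# Toy comparison of the two charity types (all numbers invented).
# Low-risk, low-return: bed nets almost certainly work, bounded payoff.
# High-risk, high-return: agronomy research usually fails, huge upside.
p_nets, lives_nets = 0.95, 1_000_000
p_agro, lives_agro = 0.001, 2_000_000_000  # a Green-Revolution-scale win

ev_nets = p_nets * lives_nets  # 950,000 expected lives
ev_agro = p_agro * lives_agro  # 2,000,000 expected lives
print(ev_nets, ev_agro)
# The long shot has the higher expected value but pays out almost never;
# the claim here is that a charity portfolio should hold both kinds.
```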

Another important example of a high-risk low-return charity is a new charity, which I think is important enough that I’m going to talk about it separately. What happens if someone has a brilliant new idea about how to help people in the developing world? There’s potentially a high payoff if they can beat the current most effective charity; but new ideas for effective charities are probably not going to pay off, if for no other reason than ‘most new ideas are terrible.’ It is really important that we invest in new ideas.

What happens to a low-risk high-return charitable investment? Well, it is clearly the most effective place to donate; it becomes our new baseline, and the same trilemma survives: other charities are either comparable (and thus either higher return but higher risk, or lower return but lower risk) or incontrovertibly better, in which case they become the new baseline.

Please note that I’m not saying the individual should donate to multiple places. Probably any individual only has time to investigate one family of charities and – for that matter – gets the most warm fuzzies from only one charity. I think that most people should probably only donate to one charity, because that way they can be certain they’re donating to the most effective charity they can find. But which charity that is differs from person to person. And, no, a hypothetical effective altruist society won’t totally lack scientific research.

Scott:

I think I disagree with this. That is, I’m sure I disagree with what I think it says, and I think it says what I think it says. I think it confuses two important issues involving marginal utility – call them disaster aversion and low-hanging fruit – and that once we separate them out we can see that diversifying isn’t necessary in quite the way Ozy thinks.

Disaster aversion is why we try to diversify our investments in the stock market. Although there’s a bit of money maximization going on – more money would always be nice – there’s also an incentive to pass the bar of “able to retire comfortably” and a big incentive to avoid going totally broke. This incentive works differently in charity.

Suppose you offer me a 66% chance of dectupling my current salary, and a 33% chance of reducing my current salary to zero (further suppose I have no savings and there is no safety net). Although from a money-maximizing point of view this is a good deal, in reality I’m unlikely to take it. It would be cool to be ten times richer, but the 33% chance of going totally broke and starving to death isn’t worth it.

Now suppose there is a fatal tropical disease that infects 100,000 people each year. Right now the medical system is able to save 10,000 of those 100,000; 90,000 get no care and die. You offer me a 66% chance of dectupling the effectiveness of the medical system, with a 33% chance of reducing the effectiveness of the system to zero. In this case, it seems clear that the best choice is to take the offer – the expected value is saving 66,000 lives, 56,000 more than at present.

The stock market example and the tropical disease example are different because while your first dollar matters much more to you than your 100,000th dollar, the first life saved doesn’t matter any more than the 100,000th. We can come up with strained exceptions – for example, if the disease kills so many people that civilization collapses, it might be important to save enough people to carry on society – but this is not often a concern in real-life charitable giving.
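
Here is a minimal sketch of why the two cases come apart, assuming log utility for money (diminishing marginal value) and linear utility for lives. The salary figure is hypothetical; the probabilities are the 66%/33% from the examples above:

```python
import math

# Salary gamble: 66% chance of 10x salary, 33% chance of zero.
salary = 50_000  # hypothetical current salary
u = math.log     # concave: each extra dollar matters less
keep_job = u(salary)
gamble = 0.66 * u(10 * salary) + 0.33 * u(1)  # log(0) undefined; $1 stands in for "broke"
print(f"keep: {keep_job:.2f}  gamble: {gamble:.2f}")  # ~10.82 vs ~8.66, so refuse

# Disease gamble: utility is linear in lives saved, so just compare EVs.
ev_status_quo = 10_000
ev_offer = 0.66 * 100_000 + 0.33 * 0  # 66,000 expected lives, so accept
print(ev_status_quo, ev_offer)
```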

By low-hanging fruit, I mean that some charities are important up to a certain point, after which they become superseded by other charities. For example, suppose there is a charity researching a cure for Disease A, and another one researching a cure for Disease B. It may be that one of the two diseases is very simple, and even a few thousand dollars worth of research would be enough to discover an excellent cure. If we invest all our money in Disease A simply because it seems to be the better candidate, the one billionth dollar invested in Disease A will be less valuable than the first dollar invested in Disease B, since that first dollar might go to hire a mediocre biologist who immediately spots that the disease is so simple even a mediocre biologist could cure it.

This is also true with more active charities. For example, the first bed net goes to the person who needs bed nets more than anyone else in the entire world. The hundred millionth bed net goes to somebody who maaaaaybe can find some use for a bed net somewhere. It’s very plausible that buying the first bed net is the most effective thing you can do with your dollar, but buying the hundred millionth bed net is less effective than lots of other things.

In this case, at any one time there is only one best charity to donate to, but this charity changes very quickly. In a completely charity-naive world, Disease A might be the best charity, but after Disease A has received one million dollars it might switch to Disease B until it gets one million dollars, and then back to Disease A for a while, and then over to bed nets, and so on.
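
A greedy allocation sketch makes the switching concrete. The diminishing-returns curves below are invented; the point is only that every individual dollar goes to the single best charity at the moment it is given, yet the money still ends up spread across causes:

```python
# Marginal lives saved per dollar falls as a charity's funding grows.
def marginal_value(base, scale, funded):
    return base / (1 + funded / scale)

# {name: (base effectiveness, saturation scale in dollars)} -- invented
charities = {"Disease A": (2.0, 1e6), "Disease B": (1.5, 5e5), "Bed nets": (1.8, 2e6)}
funded = {name: 0.0 for name in charities}

for _ in range(3000):  # give away $3M in $1,000 increments
    best = max(charities, key=lambda n: marginal_value(*charities[n], funded[n]))
    funded[best] += 1_000

print(funded)  # all three end up funded, one greedy dollar at a time
```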

We can turn this into a complicated game theory problem where everyone donates simultaneously without knowledge of the other people’s donations, and in this case I think the solution might be to seek universalizability and donate to charities in exactly the proportion you hope everyone else donates – which would indeed be a certain amount to Disease A, a certain amount to Disease B, and a certain amount to bed nets, in the hope of picking all the low-hanging fruit before you subsidize the less efficient high-hanging-fruit-picking.

But in reality it’s not a complicated game theory problem. You can go on the Internet and find more or less what the budget of every charity is. That means that for you, at this point in time, there is only one most efficient charity. Unless you are Bill Gates, it is unlikely that the money you donate will be so much that it pushes your charity out of the low-hanging fruit category and makes another one more effective, so at the time you are donating there is one best charity and you should give your entire donation to it.

Granted, people are not able to directly perceive utility and will probably err about exactly which charity this is. But I think the pattern of errors will be closer to the ideal if everyone is trying to donate to the charity they consider highest marginal value at this particular time rather than if everyone is trying to diversify.

The reasons for diversifying in the stock market are based on individual investors’ desire not to go broke and don’t really apply here.


87 Responses to Ozy vs. Scott on Charity Baskets

  1. Anonymous says:

    Speaking of low-hanging fruit, nice job not touching on “Also, I literally know nothing about stock investing” in the rebuttal.

    • Chris says:

      The description of risk aversion in stock investing was fine, and you haven’t presented any counter-argument to it; you’re just punishing humility. That’s negatively useful.

      • RCF says:

        No, the description was a bit misleading. The purpose of diversification is satisfying a utility function that is a non-linear function of dollar amounts. For instance, suppose Coin A pays off 4:1 and Coin B pays off 3:1 (each a fair 50/50 flip). If you’re just maximizing expected value, you should put all your money in Coin A. But suppose you start out with 5 dollars and your utility function is log(x+10) where x is the amount of money you end up with. Then putting all your money in Coin A has an expected utility of .5log(10)+.5log(30) ≈ 1.239. But if you put 4.5 dollars in Coin A and .5 dollars in Coin B, your EU will be .25log(10)+.25log(28)+.25log(11.5)+.25log(29.5) ≈ 1.244. I’m too lazy to figure out the optimal mix, but this is clearly an improvement over a pure strategy.

        So if your utility function is linear in the number of lives saved, then you should invest in a single charity.

        Also, the reason to invest in a mix of high and low risk investment is if the two types have lower cross-correlation than internal correlation. If the correlation between high risk investments and other high risk investments is no larger than the correlation between high risk and low risk investments, there’s no need to have a mixture; if you’re disaster averse, you should just not invest part of your nest egg.
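
        The arithmetic can be verified directly; the sketch below assumes, as the .5/.25 weights imply, fair 50/50 coin flips:

        ```python
        import math
        from itertools import product

        def expected_utility(stake_a, stake_b):
            """EU of log10(x + 10), where Coin A pays 4:1, Coin B pays 3:1,
            each a fair flip, and x is the money you end up with."""
            eu = 0.0
            for a_wins, b_wins in product((True, False), repeat=2):
                x = (4 * stake_a if a_wins else 0) + (3 * stake_b if b_wins else 0)
                eu += 0.25 * math.log10(x + 10)
            return eu

        print(expected_utility(5.0, 0.0))  # pure Coin A: ~1.239
        print(expected_utility(4.5, 0.5))  # the 4.5/0.5 mix: ~1.244, slightly better
        ```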

    • JTHM says:

      Well obviously, Ozy didn’t mean to imply that they didn’t know that stocks represent fractions of ownership in companies, that stocks can be traded on markets, that stocks can be used to control who gets to be CEO, etc. We all know what Ozy meant. Be nice.

      • Kiboh says:

        I’d like to put forward the hypothesis that Anonymous was sincerely complimenting Scott for passing up the opportunity to take a cheap shot at Ozy (Presumably they were unaware of Scott and Ozy’s relationship). In less civilised corners of the net, “Also, I literally know nothing about stock investing” would reliably be met with “Evidently!” or a similar piece of snark from someone in the process of rebutting their beliefs.

        Also, regardless of whether Anonymous was being sincere . . . three people independently correcting the same mistake seems a bit excessive, doesn’t it?

        • Chris says:

          I’d like to put forward the hypothesis that Anonymous was sincerely complimenting Scott for passing up the opportunity to take a cheap shot at Ozy

          No, I think you just didn’t get the problem. When someone admits that they aren’t an expert in an area (why to diversify stock portfolios), before displaying an *entirely correct* understanding of the relevant knowledge, they aren’t “setting up the opportunity to take a cheap shot” at them! They’re just being honest, and perhaps even overly humble.

          So even if we accept your description of Anonymous’ motivation, it doesn’t make the behavior any more appropriate here.

          three people independently correcting the same mistake seems a bit excessive, doesn’t it?

          It sends a stronger statement of unwelcome behavior. People being overconfident is merely annoying; I think people punishing someone else for refusing to show overconfidence has gone into “you are being an epistemically terrible person right now and you should feel bad” territory. The world needs less overconfidence, not people who equate a lack of overconfidence with a flawed argument.

        • Anonymous says:

          JTHM had it closest; Kiboh was also not completely off.

          I was trying to:
          a. tease Ozy for their misuse of “literally”
          b. demi-semi-sincerely compliment Scott for *not* teasing Ozy this way (or even bringing it up at all), since it was near-compulsive for me (although I recognized it was suboptimal behavior).

          I was trying for a light, jocular tone, but I noticed I really missed the mark (and noticed this more strongly after the edit window passed). Chris’ inferences seem kind of surreal, which is my fault in that interesting kind of unintended way.

          So… Joachim: it aimed at “true” and “kind”, winged “true”, and didn’t really hit “kind” at all.

  2. zac says:

    The right charity to give to is the one you’ll actually give to. If effective altruists push too hard on ‘give to only the best charity’, they could end up losing a lot of people due to risk-aversion biases.

    I know personally that I would have become very discouraged if I had only given to MIRI. Splitting my donation between them and GiveWell ensures that, when people criticize x-risk or I have disagreements with MIRI’s research focus or public relations, I am not discouraged from charitable giving and effective altruism in general.

  3. drethelin says:

    In terms of world-optimizing: paying for charities that you like is a lot like paying for products you actually want in capitalism. It gives society a LOT of information. If we decided that PCs were the optimal computer on a dollar per processing power metric, a lot of Apple products simply wouldn’t exist. And yet the fact that they do gives us extremely important information about what people want out of their laptops. And the world is a better place for it.

    Kickstarter is probably the clearest and best quantified example of this, but you can talk about things like local theater groups or animal shelters or whatever. Someone can pay different amounts of money to different kickstarters, and this both encodes their values better than giving all their money to the ONE optimal kickstarter would and, I think, provides society with surplus value from having a variety of things created.

    Obviously effective altruism isn’t anywhere close to becoming a dictatorship that mistakenly destroys value via Goodhart’s law by optimizing for a single metric. Here’s hoping that doesn’t happen.

    • Matthew says:

      Most people aren’t currently purchasing fuzzies and utilons separately, though. This kind of market information tells you a lot about what gives people fuzzies. That doesn’t mean you shouldn’t try to convince them to purchase them separately from utilons.

      • drethelin says:

        contributing to your local theater groups gets you utils as WELL as fuzzies, and generates external utils for society.

        Utils vs fuzzies isn’t about serious donation vs frivolous donation, it’s about purchasing function vs feelings. When you donate to a theater group, you are functionally purchasing access to be able to see shows there, whereas if you’re donating to a charity to raise puppies in Nairobi you only get an internal feeling of warm fuzzies.

    • Eli says:

      I’m pretty sure that argument applies to voting, not charitable donations.

  4. Army1987 says:

    But in reality it’s not a complicated game theory problem. You can go on the Internet and find more or less what the budget of every charity is. That means that for you, at this point in time, there is only one most efficient charity. Unless you are Bill Gates, it is unlikely that the money you donate will be so much that it pushes your charity out of the low-hanging fruit category and makes another one more effective, so at the time you are donating there is one best charity and you should give your entire donation to it.

    Well…

  5. Zakharov says:

    Stock market analogy: Let’s say you’re part of a collective of a thousand people. Everyone pools their money to buy stocks, everyone chooses a thousand dollars’ worth of stock to buy, and everyone splits the rewards. You’re deciding which stocks to choose. You’ve got good information that Google’s going to go up, and you also know the collective hasn’t already invested disproportionately in Google. The best strategy is to put your entire $1000 allocation into Google, because with the risk spread over a thousand investors, maximizing expected value is the best strategy.

    • Army1987 says:

      Unless the other people in the collective are similar enough to you that it’s likely that lots of them will come up with the exact same idea as you (which is especially likely if you describe your statement out loud in public), and your portfolio will end up containing almost exclusively Google shares.

  6. taelor says:

    Stock markets are fairly informationally efficient. This means that if one stock was expected to yield a higher risk-adjusted reward than the others, its price would quickly rise until this ceased to be the case. Yes, investor biases and irrational exuberance prevent us from saying that they are perfectly efficient in the Strong Efficient Market Hypothesis sense, but they certainly are much more informationally efficient than charity selections.

  7. Kiboh says:

    >The reasons for diversifying in the stock market are based on individual investors’ desire not to go broke and don’t really apply here.

    I dunno. If I spent years giving most of my disposable income to one charity, without spreading it around . . . and then I found out that that one charity was run by incompetent frauds . . . the emotional effects would not be negligible. I’d say it could be worse than going broke, at least psychologically.

    I mean, if I went broke, I could always just move back in with my parents until I found a new job. If I found out that a major component of my self-worth, and something that I spent tens of thousands of dollars on in good faith, was useless, I could . . . ?

    Don’t get me wrong, I think Scott has the correct utilitarian approach. But he finished by saying something I see counterexamples to, so I’m pointing them out. Someone who donates with the primary objective of feeling better about themselves – or just a utilitarian who wants to make sure they don’t flip out and give up on the EA thing altogether if they find out their #1 charity is secretly awful – would be totally justified in regularly donating to more than one charity.

    (As it happens, GiveWell’s recommendations change every few years, so someone who consistently follows their advice will end up with a diversified portfolio anyway if they keep going long enough. But I’ve never been one for tempering my hypothetical nitpicks with actual facts.)

  8. Jack says:

    Are you saying “if I were the Tzar in charge of distributing ALL aid of ALL sorts to a country, I would pour ALL of it into the best intervention?” Because that still seems wrong, and it’s hard to say exactly why, but I think it’s the sort of thing that turns out badly for several reasons, e.g. it draws corrupt people into intervention #1 distribution, it means you can’t start experimenting with intervention #2 and be in a good position to ramp it up once intervention #1 starts getting diminishing returns, and if you’re wrong about which intervention is most effective, not everyone dies before you start doing something useful.

    Are you arguing against that? Or does that seem reasonable, but you think every individual charity donor should pick what seems best to them, revisit it every 5 years, and usual differences of opinion will mean those donations will be naturally spread across the top 5-10 charities?

    • Benedict says:

      Presumably, an Aid-Distributing Tzar would be the sort of unlikely Bill Gates-type person mentioned, who would in fact be capable of pushing something out of the low-hanging fruit category, and that’s why that would turn out badly? That inertia sort of thing sounds like it could be a problem, but only if the Tzar is beholden to corrupt distribution investors for some reason, avoiding which is the point of having a Tzar in the first place. I think what’s being said is “invest in one thing until that thing stops being as effective as another thing, then invest in that new thing”.

      There might be more inertia effects than just corrupt distribution- you’re saying like, if we don’t maintain some level of investment in all the other intervention options, those intervention options will become less efficient or impossible in the future, trapping us in the now inefficient first investment? I can see that being worth addressing, but you’d probably have to show that these neglect effects actually have a significant impact.

    • Matthew says:

      Orthographic nitpick:

      Spelling borrowed from Latin-alphabet Polish = czar.

      Spelling transliterated from Russian Cyrillic (царь) = tsar.

      Tzar is an odd-looking hybrid.

  9. Jack says:

    Also, I’ve struggled a lot with “why should I ever care about doing anything in my own country, when everything I do would be better off turned into clean-water-charity-donations”?

    But I remembered you saying something similar about how much to give to charity: you said it made sense, even if you couldn’t say WHY, to accept that you were going to choose some arbitrary figure between 0 and 100%, jump to the chase, and arbitrarily give 10% of gross income or similar, increasing or decreasing later if needed – like many cultures do with tithing. And I feel things like “make a wish” may fall into that category. It’s useless to persuade people not to care AT ALL about children dying of cancer, so pick a reasonable amount and care that much (distributed between other charity and health spending in your country, not worldwide aid).

  10. Chris says:

    Here’s an old article by an economist that makes exactly the same comparison (risk aversion differences between the stock market and charity), and comes to Scott’s conclusion:

    http://www.slate.com/articles/arts/everyday_economics/1997/01/giving_your_all.html

    Also, it has a “calculus sidebar” as if that’s a thing. But it’s not working. That’s sad.

    Also also, I think someone needs to get this dude an “I was an effective altruist in 1997” t-shirt.

  11. Robin Z says:

    Disaster aversion is why we try to diversify our investments in the stock market. Although there’s a bit of money maximization going on – more money would always be nice – there’s also an incentive to pass the bar of “able to retire comfortably” and a big incentive to avoid going totally broke. This incentive works differently in charity.

    Does that mean your analysis does not apply if your primary charity investments are aimed at reducing existential risk?

  12. Anonymous says:

    First essay is referring to effective altruist society. Second essay is referring to effective altruist individual. Not in disagreement.

  13. Phil R. says:

    The comment that is final as I write this (which I did not write) is offering me a “click to edit” link that apparently works. Bug or really odd feature?

    If feature, is it because it has given me the opportunity to discover that I’m not as much of a dick as I think I am, because my level of temptation to actually edit the comment is epsilon?

  14. Alex R says:

    Another important example of a high-risk low-return charity is a new charity

    Shouldn’t that be “high-risk high-return”?

  15. Alexander Stanislaw says:

    There are three big issues here:

    1: Is there, in principle, one correct charity from a utilitarian standpoint?
    2: Should an effective altruist give to one charity?
    3: What the heck is the utilitarian standpoint and why should one follow it?

    Others have addressed 2 so I will address 1 and 3.

    Regarding 1, you (Scott) argued that there is indeed a best charity, since there is only one charity with the highest expected utility per dollar at any one point in time (although which charity this is can change). However, I think that to make this argument you have to invoke Aumann’s agreement theorem; otherwise the probability that a charity will save X lives will vary from person to person. And Aumann agreement can fail very easily if people have different priors. If this is the case (and it most certainly is), then different people will have different probabilities and expected utilities for the same charities, even if all of the evidence is publicly known. There is no best charity, because “the probability that charity X will save Y lives” is not well defined.

    Regarding 3, my biggest beef with effective altruism (or rather utilitarian effective altruism) is that it disregards the fact that people have different values – the only valid value system in utilitarianism is “optimize global utility”. If someone chooses to buy a piano for their local community centre, instead of buying 10000 malaria nets, they are accused of optimizing for “warm fuzzies” and are acting immorally. Heck, _anything_ that anyone does other than helping the poorest is immoral under utilitarianism. Which is very strange. I value my local community more than a random village in Namibia – why wouldn’t I? And why is this a problem?

    • Elissa says:

      Heck, _anything_ that anyone does other than helping the poorest is immoral under utilitarianism. Which is very strange, I value my local community more than a random village in Namibia – why wouldn’t I? And why is this a problem?

      It’s a problem because, if everyone behaves this way, rich people will give within their own relatively rich communities, and the poorest will be neglected.

      Also, and by the way, I think the word “immoral” is probably too haunted with deontological and virtue ethicist connotations (as if giving in the wrong place makes you a “bad person” who “deserves” to have bad things happen to them), so we should probably not use it for all consequentially suboptimal actions.

      • jaimeastorga2000 says:

        It’s a problem because, if everyone behaves this way, rich people will give within their own relatively rich communities, and the poorest will be neglected.

        Unless rich people are following some kind of superrational decision theory I am not aware of, I don’t see how this kind of reasoning factors into Alexander Stanislaw’s decision making process.

        • Elissa says:

          Yeah dude just because my argument for superrationality here is implicit don’t make it wrong

      • Alexander Stanislaw says:

        What would you consider immoral under utilitarianism? Covertly murdering someone who has no connections to anyone else (say a homeless person) results in fewer lives lost than failing to give to charity. And I’m not asking what society should encourage under utilitarianism, or whether there should be a general moral guideline: “don’t kill people” – the specific act of failing to give to charity is worse under utilitarianism than killing a homeless person.

        It’s a problem because, if everyone behaves this way, rich people will give within their own relatively rich communities, and the poorest will be neglected.

        The alternative to “care about everyone equally” is not “care only about yourselves and your local community”. There are many alternatives. Case in point – I have donated nothing to people in rural villages but I have donated money to LGBT organizations. Why, even though the lives saved counter is lower? Because helping other LGBT people is consistent with my value system – I care more about them than random people whom I have no connection to.

        • anonymous says:

          I care more about them than random people whom I have no connection to.

          But are you absolutely sure that you value those people more than random people? I don’t think we have perfect awareness of our values like that and I wouldn’t set my values in stone, at least not without thorough thinking.

          I personally haven’t come up with any good reasons why some people should be more important just because they share some characteristics that I have or because they reside in the same local space-time region as I do. I recognize that those factors make me more emotionally invested in those people, make me care about them more, but that doesn’t mean I should value those people more. There’s nothing that I can come up with except circular reasoning “I value local people more because I value them more” or is-ought type of reasoning.

          I think there are people that truly have completely different values, for example some psychopaths, but I think many, many people would come to different conclusions if they really worked out their beliefs and core values.

        • Alexander Stanislaw says:

          @anonymous

          Earnest question: do you think that you should value your own life more than other people, or do you think you should value your own life to exactly the same extent as everyone else?

          Follow up question – what proportion of your efforts do you focus towards yourself vs. others? And do you think it is the most moral to push this ratio towards 1/(7 billion)?

        • anonymous says:

          @Alexander Stanislaw

          Earnest question: do you think that you should value your own life more than other people, or do you think you should value your own life to exactly the same extent as everyone else?

          To be honest, I’m still very confused about my own values and I’m still working out these things. But yes, I probably value my life more than other people’s lives. If everyone was just a cold utility maximizer who didn’t value themselves, or local people or fun things and devoted all their time to helping distant poor people, I don’t think there would be much value in people’s lives.

          Also, since people are not Hollywood rationalists, it’s better in the long term to value yourself more than other people. Psychologically normal people are simply not able to devote all their time to helping distant poor people in a cold, calculating manner, at least not without burning out very quickly. You need that local community, and doing fun things and helping fleshy people like buying pianos for local community centres, and it’s important to value those things too because feeling guilty about it helps no one. Pretending that you’re a Hollywood rationalist who doesn’t care about these things at all is the opposite of being effective because it probably leads you to become disillusioned with charity and very burned out. And those things are valuable, but in a different way than helping distant others. I’m still not sure what my optimal ratio between these other things and doing the most good for distant others is, but the rule of thumb is that if I’m happy, I’m probably more productive in the long term and a better effective altruist. I hope this answers your latter question.

          And do you think it is the most moral to push this ratio towards 1/(7 billion)?

          I will never be able to do that without being a superhuman. If you could somehow artificially do that, I think it would change the human condition so much, that you would lose all the things that makes human lives valuable, and the whole process would end up being pointless. But optimally it would probably be closer to that. By how much, I don’t know.

          http://lesswrong.com/lw/y3/value_is_fragile/

        • Alexander Stanislaw says:

          To be honest, I’m still very confused about my own values and I’m still working out these things.

          This is very reasonable. I think that anyone who says they are not confused about ethics is either a divine command theorist, an error theorist, or lying.

          You need that local community, and doing fun things and helping fleshy people like buying pianos for local community centres, and it’s important to value those things too because feeling guilty about it helps no one. Pretending that you’re a Hollywood rationalist who doesn’t care about these things at all is the opposite of being effective because it probably leads you to become disillusioned with charity and very burned out. And those things are valuable, but in a different way than helping distant others.

          This is a beautiful exposition of value pluralism – what I have been endorsing – and imo a good argument against utilitarianism for all practical purposes, but more on that in a bit.

          I’d like to define “time/effort devoted to myself and the things that I value” as the “me-budget”. Note that things like spending money on a loved one are still part of the me-budget.

          I’m still not sure what my optimal ratio between these other things and doing the most good for distant others is, but the rule of thumb is that if I’m happy, I’m probably more productive in the long term and a better effective altruist

          My 1/7 billion number was ridiculous – actually the correct ratio is quite straightforward under utilitarianism. The correct me-budget is the one that maximizes the remainder of your earnings that will be devoted to charity.

          However this is still an extremely small quantity – people can live and work on very little. As you point out, the me-budget is important for reasons separate from allowing you to work and devote your time to others. The me-budget consists of the things that make life worth living in the first place.

          As for why this is an argument against utilitarianism – if humans are allowed to bend the rules and get a nice big me-budget for practical purposes, then it’s straightforward for me to divert some of my me-budget to people that I care about because I share something in common with them (or for any other reason). This is in total violation of the “value all people equally” principle behind utilitarianism.

        • AspiringRationalist says:

          I have donated nothing to people in rural villages but I have donated money to LGBT organizations. Why, even though the lives saved counter is lower? Because helping other LGBT people is consistent with my value system

          So are you making an effort to donate to the most effective ones? I think that’s the meta-point of effective altruism; even if your values are different from GiveWell’s, you should still be trying to optimize for your values, rather than just buying warm fuzzies.

    • Mary says:

      Because there is only one good under utilitarianism, the greatest good for the greatest number of people. You being a single person can’t possibly be the greatest number of people.

      This can be pushed to some pretty immoral conclusions, which on utilitarian grounds can be attacked only on the grounds that they are technically wrong, not on the grounds they are immoral.

    • jaimeastorga2000 says:

      I value my local community more than a random village in Namibia – why wouldn’t I? And why is this a problem?

      So utilitarianism is not an accurate description of your naive preferences. And since you don’t see a problem with that (i.e. you don’t want to value a random village in Namibia more than your local community), utilitarianism is not a description of your reflective preferences, either. Big deal. I’m sure that’s true for most humans.

      Notice that you had to bite a bullet, though. “I value my local community more than a random village in Namibia, and this is not a problem.” Utilitarians are people who bite the opposite bullet, either claiming to value people in Namibia and people in their community equally, or (more likely) claiming that they should value them equally, even if they are imperfect at actually doing so.

      • Alexander Stanislaw says:

        So utilitarianism is not an accurate description of your naive preferences. And since you don’t see a problem with that (i.e. you don’t want to value a random village in Namibia more than your local community), utilitarianism is not a description of your reflective preferences, either. Big deal. I’m sure that’s true for most humans.

        Indeed which is why most people are not utilitarians.

        Notice that you had to bite a bullet, though. “I value my local community more than a random village in Namibia, and this is not a problem.”

        I’m not sure why you call that biting a bullet.

  16. Elissa says:

    This is delicate, but I am not sure it’s quite polite for Scott to invite Ozy to post and then include a rebuttal in their very first post, virtually guaranteeing that loyal and/or lazy readers and commenters will side with him rather than considering Ozy’s post on its own merits. Especially since he mentions that he’s not sure he’s reading Ozy correctly, and as I understand it they live together, so it ought to be easy to check. Also, though I more or less agree with Scott (surprise!), I feel like this could have been more charitable: for instance, Ozy in their final paragraph specifically denies deducing that individual givers should diversify, but Scott spends three paragraphs arguing against that very claim.

    • Noumenon72 says:

      Yeah, why even invite someone to post on your blog if you’re going to undermine them? Have them post on their own blog and don’t link if it’s not good, or let them post on your blog and be supportive even if it’s not good.

      (I’d much prefer the second blog approach to this.)

    • Scott Alexander says:

      I talked to Ozy about this and we agreed on this plan.

      I’m not sure how this is different from Ozy publishing somewhere else and me arguing against it.

      The alternative to me being allowed to express my opinion about things posted on this blog is not allowing anyone else to post on this blog.

      • Elissa says:

        IMO a more polite way to handle it would have been to let the post stand alone and make your points in the comments, or even in a followup post. If Ozy’s cool with it then that’s all that really matters.

        • houseboatonstyx says:

          That might have looked like one of the two was going behind the other’s back. Like Scott had refrained from mentioning some weak points privately, then attacked those points in the comments. Or like Ozy had written and posted something without giving Scott a blog-owner’s privilege to look it over before it went up on his blog.

          As it is, the message of the double entry was that both had seen both halves, the procedure had been transparent to both, and now it was being made transparent to us.

        • Elissa says:

          @houseboatonstyx: That’s a fair worry. I will amend my suggestion accordingly: The rebuttal could have been a followup blog post that began by saying that these were some thoughts provoked by Ozy’s post on charity, and that he’d given Ozy a chance to read over them first to make sure he was reading their post correctly.

          (It’s a fine point, and I hate to harp on it, but making sure that everyone feels as if everyone in a dialogue is being treated fairly and with respect is the kind of thing I expect Scott to agree is important.)

        • houseboatonstyx says:

          @elissa
          It’s better as it is. Otherwise some people might worry about Scott choosing the clunkiness of a conversation gratuitously split between two windows. As it is, probably most people didn’t worry at all.

  17. Nathan says:

    Diversification in investing is not solely due to decreasing marginal utility of dollars. Diversification decreases risk while maintaining reward. If you have a set of stocks with uncorrelated risk that all individually have expected returns of 10%, a diversified portfolio of those stocks will still return 10%, but with much lower risk. This increases your return-on-risk directly, and is every bit as applicable to charities as it is to stocks.
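
    The variance-reduction half of this claim is easy to simulate; the return figures below are invented for illustration:

    ```python
    import random, statistics

    # n uncorrelated assets, each with ~10% mean return and 30% stdev (invented).
    def portfolio(n, trials=20_000):
        rets = [sum(random.gauss(0.10, 0.30) for _ in range(n)) / n
                for _ in range(trials)]
        return statistics.mean(rets), statistics.stdev(rets)

    for n in (1, 10, 100):
        mean, sd = portfolio(n)
        print(f"{n:>3} assets: mean {mean:.3f}, stdev {sd:.3f}")
    # Mean stays ~0.10 while stdev falls roughly as 1/sqrt(n): ~0.30 -> ~0.09 -> ~0.03
    ```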

    • Douglas Knight says:

      Nope.

      • Alexander Stanislaw says:

        Why not? Is it not the case that a diversified portfolio returns a higher expected return at the same risk?

        • Ken Arromdee says:

          Diversification returns a higher expected value with the same risk, but that doesn’t rule out the possibility of increasing both the expected return and the risk at the same time. For a portfolio this is a bad idea because you need to trade off return and risk. For a charity, you don’t.

          And at the absolute maximum return you can’t diversify because there’s only one charity at that point.

        • Alexander Stanislaw says:

          I see – diversification only works if there are other options that have the same or similar expected utility (or greater expected utility and greater risk). If you choose the option with the greatest expected utility then there is nothing to diversify.

          However, in practice effective altruists don’t choose the option with the highest expected utility (probably X-risk reduction or some other research program with a low chance of success but a massive payoff). They choose the options with the highest expected utility that have a pretty good chance of working. In this case diversification is still applicable.

        • Douglas Knight says:

          Diversification only helps if you assume declining marginal utility (possibly instrumental). Nathan’s falsehood is because he has no idea what he is talking about, such as “utility” and “risk,” but is just parroting slogans based on reasoning he has never seen. In addition, Ken hasn’t bothered to read the thread.

        • BJ Terry says:

          Risk aversion is a result of decreasing marginal utility of wealth (this can be shown mathematically). It doesn’t make sense to talk about risk in the utility to be gained because utility just means “how you value these options considering all the available facts, including their risk and rewards.” Expected utility relates to choices, not to the value of outcomes. For example, you can’t say “Well, I would choose A because it has high expected utility, but the variance of utility across those scenarios is so large, I’m going to choose B.” The whole point of expected utilities is that you aren’t risk-averse in utility-space.

        • Alexander Stanislaw says:

          @BJ Terry

          I was careful to say expected return in my initial comment but I didn’t in my second. I meant expected return.

          @Douglas

          And I’m confused again.

          Suppose option A has the highest expected return (number of lives saved, dollar amount) for some variance. Then we can choose two options B and C which have a higher expected return* and a higher variance, but when we invest in both the variance is smaller than option A’s. We increased our expected return without having to increase our variance. No decreasing marginal utility of lives saved required? Where is the mistake?

          Edit: Nevermind I see it, your expected return only increases with variance assuming decreasing marginal utility.

          *As Ken stated, not possible if A has the highest expected return possible.

      • Nathan says:

        Douglas, your replies in this comment section have been both uncharitable and unhelpful. I don’t say this to start a fight, just to share my perspective.

        You’re right that I didn’t speak entirely accurately. I was trying to make the same point that both BJ and Ken did a better job of making farther down: taking advantage of buckets isn’t the only reason to diversify. Ken and BJ’s posts make clear that this benefit of diversification also requires diminishing marginal utility, which I’ll grant. However, I will then argue that we do in fact have diminishing marginal utility on charity contributions (as in almost everything).

        This point is easiest to make if we restrict ourselves to present lives (as many EAs choose to do). Given that restriction, we do not value saving 10 billion lives twice as much as saving 5 billion lives, because we don’t have 10 billion lives to save. I’ll grant that this is a somewhat pathological example, but given the extreme uncertainty we look at in many potential interventions, even this small amount of diminution may very well be relevant and make it worth considering diversification.

  18. Qiaochu Yuan says:

    I agree with your disagreement. The two most important parts are “In this case, at any one time there is only one best charity to donate to, but this charity changes very quickly.” and “The reasons for diversifying in the stock market are based on individual investors’ desire not to go broke and don’t really apply here.”

    I also disagree with “who gives a @#$! about volunteerism or local charitable endeavors.” The point of these activities isn’t to directly do good; at best, it’s to cultivate a habit / virtue of doing good. That’s important too.

  19. BJ Terry says:

    With a stock investment, it is difficult to beat the market because investors are competing over claims on future income. It’s that competition which makes everything have similar risk/reward, relatively speaking. At the time that you make the decision to buy shares in a company, the actual investment of capital into the business has already taken place, and you are just saying that you’re willing to take part in its profits at a certain price. In charities, on the other hand, the amount you donate will be “invested” in whatever is the output of the charity. If you could invest in businesses the same way you can invest in charities (each dollar of investment becoming an asset of the business, used to make profit), it would make sense to invest in those businesses that were highly capital efficient, because you could get more return for your investment. This is what we see in charities, and the variation in efficiency is dramatically wider than the variation in returns of stocks.

    A typical company can’t just make more money by spending more money because in the market THEY are competing in, supply and demand have been equalized at a given price, and more investment wouldn’t result in enough sales to justify the additional investment. Charities are spending money on altruistic effects; they don’t require any return. Because people don’t value altruistic effects as much as they value money return, altruistic effects are (arguably?) underinvested, so each dollar of donation yields a similar amount of altruistic effect for the given charity. It’s only if a charity gets large enough that it has solved all its cheap problems that it begins to face the same issue, but because altruistic effect and dollars aren’t directly comparable like dollar profits and dollar investments, charities never have to face a time when they say “We can’t accept any more giving this year, because we won’t be able to efficiently deploy your capital.” (If every charity were at such a limit and people valued altruistic effects very highly, perhaps we would begin to see the same dynamics as we see in stocks, where people would compete over the limited giving opportunities available. But the world would look very different in such a case.)

    GiveWell talks about room for funding. The best donation today is in a charity that most efficiently deploys the next dollar of capital, and the way to determine if they can efficiently deploy the next dollar is to look at their overall efficiency (altruistic effect per dollar) combined with room for funding (whether that altruistic effect is near its short term limit). As charities lose room for funding, the natural choice is to donate to the next most efficient charity that has more room for funding. Currently, because of the underinvestment, they believe there are still low-risk, high-reward opportunities. Once those have been exploited we can begin to look at the tradeoff between low-risk/low-reward and high-risk/high-reward charities.
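
    A sketch of that allocation rule, with invented efficiency and room-for-funding figures:

    ```python
    # Fill the most efficient charity's room for funding first, then move on.
    charities = [  # (name, lives saved per $1,000, room for funding in $)
        ("Charity A", 0.50, 2_000_000),
        ("Charity B", 0.40, 1_000_000),
        ("Charity C", 0.20, 10_000_000),
    ]
    budget = 2_500_000
    plan = []
    for name, efficiency, room in sorted(charities, key=lambda c: c[1], reverse=True):
        grant = min(budget, room)
        if grant > 0:
            plan.append((name, grant))
            budget -= grant

    print(plan)  # [('Charity A', 2000000), ('Charity B', 500000)]
    ```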

    As a side note, the purpose of diversifying a stock portfolio isn’t to have exposure to multiple categories of risk; it’s to achieve the most reward per unit of risk because of uncorrelated returns across investments. This is Modern Portfolio Theory. If you want to make sure you don’t go broke, you just buy less risk.

    • anon says:

      I’d been vaguely confused about the workings of the stock market for a long while, but unable to pin down the difficulties I was having. This comment was extremely helpful to me, thank you a bunch.

      The reason the stock market works the way it does is because people are purchasing claims of future profits. I’d previously been thinking about it as though people are investing into the companies, but that’s not really true.

      Or at least, that’s what I took from your comment. But since I’ve just escaped one form of confusion, this may only be another one. If I’ve misinterpreted something, or my conclusions are incorrect, I’d be glad for correction.

      • Douglas Knight says:

        You are correct: the stock market is mainly about people trading shares of companies back and forth. Companies can sell new shares, as a means of raising money, but this is rare, except in Initial Public Offerings. Usually they raise money by selling bonds, which are not equity, but have fixed upside.

  20. Histocrat says:

    It’s also worth looking at from the other side: charities tend to prefer less variance in the donations they receive; they’d rather have $500 a year that they can count on than $1100 they might not get each year. So if your top-ranking charity gets your entire budget, no matter how close the second-place one is, you’re likely to switch charities frequently and the overall allocation gets more swingy than if you, say, divide your donations evenly among your top three.

  21. Kiboh says:

    There’s something Ozy seems to be saying I don’t think anyone’s addressed yet. It was kind of obscured by all the risky-returny stuff, but I think it’s worth talking about, so I’ll try to rephrase it without the stock market analogy.

    It goes like this:

    “If all the EAs fund the currently most-effective intervention, doesn’t that restrict growth of new, potentially better interventions?”

    Or in example form:

    “Currently-dominant Charity X has a marginal exchange rate of $2000/life. Currently-non-dominant Charity Y has a marginal exchange rate of $5000/life. However, this inequality is due to the fact that Charity Y hasn’t existed for as long as Charity X, and doesn’t have access to the same economies of scale as Charity X; if it did, it would have a marginal exchange rate of $500/life. If all the EAs are giving their money to Charity X, how is Charity Y supposed to take its rightful place at the top, and hence save the greatest possible number of lives?”

    This is a fun question because there are actually a number of potential answers, but none of them *quite* work. In other words, I (hopefully) get to look smart for producing them, but they don’t make Ozy look dumb for asking the question (assuming I’m not just misreading and putting words in their mouth).

    1. Charity Angel logic saves Charity Y. Robin Hanson makes the case (http://www.overcomingbias.com/2011/01/be-a-charity-angel.html) that you ought to be able to optimally contribute to the development of new charitable endeavours by supporting the current best charities. The idea is that if enough people do this, it becomes common knowledge that successful interventions reliably result in donations; once this is the case, Charity Y can borrow enough money to achieve the appropriate economies of scale, then pay back its creditors with some of the donations EAs make to it after it achieves supremacy. I think it’s a cool idea and one which kindasorta works, but since it relies on [markets doing complicated things well] AND [charities handling money cleverly] AND [charities doing weird things with money without looking shifty and alienating their backers] AND [reality actually making any sense], I’d say there’s still a gap there. Especially since if this reasoning worked perfectly, monopolies and associated market failures wouldn’t exist in the for-profit economy (note that the fact it does not work perfectly does not in itself preclude the possibility that it is still the best way to do things).

    2. Charity Y’s more effective methods can be tested and adopted by Charity X, making the problem moot. Unfortunately I don’t think this actually happens in the real world: I’ve never heard of an anti-TB charity saying “hey, those anti-intestinal-parasite people are getting more bang for their buck, let’s give up on this Tuberculosis thing and start doing what they’re doing”. Aside from anything else, its donors would probably resent it making this decision: if they’d wanted to treat intestinal parasites, they wouldn’t have given to the anti-TB charity.

    3. Non-EAs fund Charity Y. This answer asserts that non-EAs are already spreading sufficient quantities of money around in sufficiently random ways to ensure that Charity Y could probably get enough funding to supplant Charity X, so it’s unlikely for a real-world charity to get stuck in Charity Y’s position, and real-world EAs are therefore justified in donating only to frontrunners. I don’t know if this is actually true, though: I can totally see Charity Y being insufficiently fuzzy to attract non-EAs, and insufficiently utilony to attract EAs until after it starts outclassing Charity X.

    4. EAs fund Charity Y. Effective Altruists take into account the possibility that seemingly-suboptimal charities *could* be in Charity Y’s situation, attempt to determine the relevant quantities and probabilities, and adjust their estimates of de facto dollars-to-expected-lives-saved exchange rates appropriately. However, AFAIK EAs tend not to do this, which would leave Charity Y underfunded . . .

    . . . and this, if I understand Ozy correctly, is exactly the tendency their original post was highlighting and trying to correct.

  22. whales says:

    Right, you should only give to the one charity to which you assign the highest expected value, to the extent that you’re sure you don’t have reasons to be risk averse, you are a utilitarian or your values are at least commensurable, you’re not buying anything else with your donation (personal virtue, additional sources of consistency pressure for donating at all, decreased consistency pressure to donate to a particular cause should the evidence or your values change, social status, influence over others’ donations, personal good feelings, safeguards against regret, feedback to the charities you support, continuity for charities you no longer support quite as much, information for the broad charity market, motivation for the case where one charity flops, etc), you follow a giving strategy that allows you to adjust your allocation as frequently as the best EV charity might change, your expected value is calculated at the margin considering room for more funding & diminishing returns & the amount you expect the charity to raise absent coordination from its donor base & how you expect donations to be displaced given coordination of the donor base, you have low uncertainty about how others will donate of the kind that the evidence of your own donation split would reduce and diminishing returns are unlikely to set in within [your donation amount] of how much you expect the charity to actually raise, or if you do have high uncertainty about others’ donations then the extent that diminishing returns are unlikely to set in within [your donation amount plus the change in your expectation of donations from others given your donation amount] of the actual amount the charity raises, etc…

    If that last part isn’t obvious, here’s a contrived model that still captures the important aspects of the situation (which is not to say that this is necessarily anything like how people actually split their donations). Say I want to donate $1,000 to either LOVE (5 expected utilons per dollar) or JOY (9.6 expected utilons per dollar). We’ll say basically everyone agrees on these numbers, because they were published by a well-respected third party. To make things even simpler, neither is anywhere near diminishing returns, but LOVE is running a $100,000 matching drive in which unused matching funds go unspent (so the match is fully counterfactual), meaning I can buy 10 utilons per dollar from them. So I should just donate everything to them.

    But wait! I naively estimate that they’ll just barely reach the matching goal, and my prior for the amount is uniformly distributed between $95,000 and $100,000. They don’t process and announce donations immediately (but they do eventually, which is why I’d cap my estimate at roughly $100,000), and the matching challenge has a deadline, so I can’t just wait and see. The $99–100k range comes out to a 20% chance that, on average, half of my donation is only buying 5 utilons per dollar. This brings me down to 9.5 expected utilons per dollar. So I should donate everything to JOY, right?

    Well, no. Expected utilons per dollar from LOVE in that case is a decreasing linear function of my donation: with match room uniform on [$0, $5,000], a donation of d dollars to LOVE buys an expected 10 − d/2,000 utilons per dollar. Trading that off against JOY’s flat 9.6, I can get a maximum of 9.68 utilons per dollar with a 400:600 LOVE:JOY split.
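
    If you want to check my arithmetic, here’s a minimal Monte Carlo sketch of that model (all the names and numbers are the made-up ones from this example; nothing here is real charity data):

    ```python
    # Monte Carlo check of the LOVE/JOY example above. All numbers are the
    # made-up ones from this comment; nothing here is real charity data.
    import numpy as np

    BUDGET = 1_000            # my total donation
    JOY_RATE = 9.6            # utilons per dollar from JOY
    LOVE_RATE = 5.0           # utilons per unmatched LOVE dollar
    MATCH_CAP = 100_000       # LOVE's matching-drive cap
    LO, HI = 95_000, 100_000  # uniform prior on everyone else's LOVE total

    rng = np.random.default_rng(0)
    others = rng.uniform(LO, HI, 1_000_000)      # shared samples for all splits
    room = np.clip(MATCH_CAP - others, 0, None)  # matching dollars still available

    def expected_utilons(d):
        """Expected utilons from giving d to LOVE and (BUDGET - d) to JOY."""
        matched = np.minimum(d, room)            # my dollars that get doubled
        return (LOVE_RATE * d + LOVE_RATE * matched + JOY_RATE * (BUDGET - d)).mean()

    best = max(range(0, BUDGET + 1, 50), key=expected_utilons)
    print(best, expected_utilons(best) / BUDGET)  # 400, ~9.68 utilons per dollar
    ```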

    But you’re perhaps rightly skeptical that my donation could ever be so large relative to the uncertainty, let alone large enough to give a difference in utility worth quibbling about. Well, what if I’m not sure about how much other people will donate? It’s perfectly reasonable to take my planned donation as evidence about the donations of others. If I came to this conclusion, maybe 10% of the other potential donors did, leaving LOVE with $6k less, so I’d in fact be better off donating everything to LOVE in the first place. But that would have occurred to at least some of them (or at least they’re making an equivalent intuitive guess)… So there’s probably some other split that’s more acceptable.

    There’s nothing magic, by the way, about making decisions like this, even if it helps to reason through it as though I’m acausally determining other people’s donations as a substitute for achieving reflective equilibrium about the evidence of my decision process, which itself isn’t even a thing that happens in real life. But, anyway, now there’s potentially more money at stake, so not only are the potential differences in outcome more stark than my first pass suggested, but they could also be relevant in less finely-tuned situations. I think there are in fact real-world situations where the drop in marginal utility is steep enough, and the size of the donation pool about which your split is the best evidence you have is large enough compared to the uncertainty in the total amount raised and location of the marginal utility cliff, that it can be worthwhile to split donations. This has at least a little bit to do with how people intuitively split donations in real life.

    (And explicitly coordinating donors with public claims brings in additional considerations, but note that GiveWell has encouraged “allocating your dollars in the same way that you would ideally like to see the broader GiveWell community allocate its dollars.” [I vaguely recall that they intentionally didn’t recommend a specific allocation in 2013 but I can’t find anything about that right now.])

    Anyway, most of the other things can be bought relatively cheaply, so if the conditions apply then it seems best to donate a large amount to one charity and small amounts to any others. In other words, I basically agree given these caveats, but I think a lot of people are turned off by what they see as the EA movement’s encouragement of acting on what look like casual, clever-looking or punchy but psychologically-morally-and-economically-unsophisticated arguments. So the caveats matter.

    • anon says:

      While I think you might have good ideas, I’m literally unable to process those sentences. Extraordinarily long.

      • whales says:

        Thanks, that’s what I was going for. Still, here’s a more linear explanation of that comment, because maybe it actually matters:

        Yes, people typically get diminishing marginal utility from wealth. That is, they are risk-averse and should diversify investments. Yes, under a certain moral framework, you can say that people do not have decreasing marginal utility of [amount of good done] by definition. In addition, charities do not typically have diminishing [amount of good done per dollar] on the scale of one individual’s donation. That is, there is apparently no risk aversion involved. Hence there is no reason to diversify. This is an interesting point. It’s plausible that things would be better if more people took this into account.
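
        To spell out the step from “no risk aversion” to “no diversification” (the notation here is mine): if a fraction $a$ of a fixed budget goes to charity 1 and the rest to charity 2, with per-dollar payoffs $X_1$ and $X_2$, then

        $$\mathbb{E}[a X_1 + (1-a) X_2] = a\,\mathbb{E}[X_1] + (1-a)\,\mathbb{E}[X_2],$$

        which is linear in $a$ and so is maximized at $a = 1$ or $a = 0$, never strictly in between. Risk aversion is exactly the thing that would bend this line and create an interior optimum.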

        But there are many moral, psychological, and practical complications. The conclusion holds only given certain simplifying assumptions about these complications. I list some of them. It is not entirely clear that these assumptions hold or should hold in practice.

        There are also many complications related to economics and decision theory. Again the conclusion holds only given certain simplifying assumptions about these complications. I also list some of these. It is not entirely clear that these assumptions hold or should hold in practice.

        Scott has dismissed one of these complications as unnecessarily complicated game theory. I believe that this conclusion is premature. First we need to fully understand the circumstances in which, and the mechanism by which, diversification makes a difference. Thus I attempt to illustrate a case where diversification does matter. To do this I make explicit assumptions about diminishing marginal returns and uncertainty about total fundraising. I find that it makes sense to diversify.

        I then treat the output of my own decision process as evidence about total fundraising. This is a valid move without invoking CDT vs. EDT vs. TDT vs. superrationality vs. “universalization.” I find that diversifying matters even more.

        I believe that the assumptions I made are realistic enough to suggest that there may be good reasons to diversify in real life, even given all but a few of the anti-diversifying simplifications. (I’ll add now that uncertainty or confusion about this line of thinking alone is a good reason to diversify.)

        It is helpful to take these complications seriously, especially if you want to be taken seriously by smart people encountering these ideas for the first time.

  23. Caspian says:

    > But in reality it’s not a complicated game theory problem. You can go on the Internet and find more or less what the budget of every charity is. That means that for you, at this point in time, there is only one most efficient charity.

    Reality? I’m going to argue from theory. Game theory suggests other people’s future donations should be taken into account.

    > Unless you are Bill Gates, it is unlikely that the money you donate will be so much that it pushes your charity out of the low-hanging fruit category and makes another one more effective, so at the time you are donating there is one best charity and you should give your entire donation to it.

    If you are the metaphorical Bill Gates, you don’t donate so much to one charity that another becomes significantly more effective; you donate enough that the other one becomes about an equally effective use of further donations. And then for your next donation you donate to both these charities simultaneously. And then everyone else with the same charity preferences as you finds that there are two equally good choices to donate to, and they want to donate to both. This doesn’t even need to start with one big donor; enough people with the same preferences and enough donations between them would do the same thing.
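
    Here’s a toy sketch of that equalization dynamic, with invented marginal-value curves (the dollar amounts and decay rates are arbitrary, not estimates of anything):

    ```python
    # Toy sketch of the equalization dynamic: the donor collective gives each
    # marginal $1k to whichever charity currently has the higher marginal
    # value. Both marginal-value curves are invented for illustration.
    def marginal_value(name, funded):
        base, decay = {"X": (1.0, 1e-6), "Y": (0.8, 1e-6)}[name]
        return base / (1 + decay * funded)  # hypothetical diminishing returns

    funds = {"X": 0.0, "Y": 0.0}
    for _ in range(500):  # $500k donated in $1k increments
        best = max(funds, key=lambda c: marginal_value(c, funds[c]))
        funds[best] += 1_000

    print(funds)
    # X gets everything until its marginal value falls to Y's; after that,
    # donations split so the two margins stay (approximately) equal.
    ```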

    That leaves another case, where you as a small donor disagree with this donor collective, and think one of the two is still a better use of donation than the other, while they are indifferent. So you donate to the better one, and they adjust their donations to be indifferent so the relative funding of the two charities still matches their preferences, not yours.

  24. Alrenous says:

    The recent Ghash.IO thing has taught me something about this issue.

    That Ghash.IO can attack bitcoin is not really a problem per se. Their self-interest is enlightened; they claim it’s a bad idea for them to make an attack, because it is. The problem is that now an attacker doesn’t have to defeat bitcoin, they only have to defeat Ghash.IO. Insofar as defeating Ghash.IO is cheaper than rebuilding Ghash.IO, they make bitcoin that much less secure.

    A single mega-charity has the same problem. The bigger your charity, the stronger the attraction for honest-to-Gnon psychopaths to breach its security and capture it. Many charities have already been so captured, for example MADD.

  25. Max says:

    To voice an agreement with Scott (at least with respect to his conclusions) and to add a rewording, I think his argument hinges largely on the non-linear utility of money – as in “Although from a money-maximizing point of view this is a good deal, in reality I’m unlikely to take it. It would be cool to be ten times richer, but the 33% chance of going totally broke and starving to death isn’t worth it” really says that extra money means few extra utils for Scott, but less money is a big negative utility hit. If utility were linear (technically, affine, but whatever, hopefully you know what I mean) in money, then Scott would be more likely to take the bargain (and in fact, would be directed to take it by some normative “rational agent” models).

    In the tropical disease example the utility is linear in money, and this sort of reasoning does not apply. In general, the fact that on the margin (i.e., for “non-Bill-Gates” donations) utility is mostly linear in money is just a reflection of the fundamental truth of calculus that on a small enough scale, everything is approximately linear – and for a large enough problem, individual donations are “small enough”.
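
    As a quick numeric check of the “small enough scale” point, with a made-up concave utility and invented dollar amounts:

    ```python
    # Numeric check: over the scale of one donation, even a strongly concave
    # utility function is nearly linear. Both numbers are invented for scale.
    import math

    w = 10_000_000.0  # total funding of a large intervention
    d = 1_000.0       # one individual's donation

    exact = math.log(w + d) - math.log(w)  # utility gain under u(w) = ln(w)
    linear = d / w                         # first-order approximation, u'(w) * d
    print(exact, linear)  # ~9.9995e-05 vs 1.0e-04: linear to within ~0.005%
    ```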

    In this light, the “investment vs. charitable giving” comparison has problems largely because the things being discussed are measured in different “currencies” – money vs. utils. Once you translate everything into utils, the situations become sufficiently different (as explained by Scott, essentially), and hence the conclusions/prescriptions can also be different.

  26. Phil Goetz says:

    It seems to me that the best charity at the moment, measured in utilons-per-dollar, in terms of both their track record and near-future prospects, is Google.

    Well, actually, in utilons per dollar it’s negative, because the “dollars spent” term is negative. What should the proper measurement be?

    I propose something like utilons per personal-utility-cost. Personal-utility-cost is the integral of (your maximum possible personal utility minus the utility you experienced) over time.
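
    Restating that definition in symbols (the notation is mine):

    $$C = \int_0^T \bigl(U_{\max}(t) - U(t)\bigr)\,dt,$$

    where $U_{\max}(t)$ is the greatest personal utility available to you at time $t$, $U(t)$ is the utility you actually experience, and $T$ is your lifespan.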

    Unfortunately this is wrong, as it leads to the conclusion that you should always do whatever provides you with the most personal utility, because max possible utility – max possible utility = zero.

    (Or is it?)

    Also, the denominator could go negative for actions that increase your lifespan.

    • Ialdabaoth says:

      The proper measurement should always be life-hours.

      We’re just used to using dollars because life-hours translate to dollars in a very easy way for most people, but ultimately, until we solve mortality, the only resource that you can ever truly and unequivocally ‘spend’ is time.

      • Phil Goetz says:

        Life-hours is too simple; it doesn’t count quality of life. That’s why I tried to say life-quality-hours.

        I think there is no proper measurement, because using an instantaneous measurement is not the optimal approach. It’s an optimization-over-the-future problem; you ultimately want to use dynamic programming to maximize utility over time. Which is also probably undefined, unless you can prove that exponential time-discounting cancels out exponential expansion of life and utility. It also ignores the claim that the utility we value is the first derivative of what we usually think of as utility. That supposition makes optimization quite difficult.

        • Ialdabaoth says:

          No, you’re double-counting.

          Utilons are what you’re trying to get out; life-hours are what you put in. “Life-quality-hours” is just a measure of the exchange rate.

        • Phil Goetz says:

          I’m not double-counting. I specifically put personal utilons in the denominator. I want public utilons out of my actions; the cost of my actions is personal utilons. We have to measure both public utilons and personal utilons, otherwise the term “charity” makes no sense.

          (I am not convinced that making that public/private utility distinction makes sense, but if it doesn’t, neither does charity.)

  27. Quixote says:

    I think Scott’s post assumes a lot more confidence about the best charity than is justified. Some things are easy to measure, and for those we have a decent sense of which charity delivers them best. We don’t have a good sense of the best charities at delivering things that are hard to measure, or of how to compare hard-to-measure things with easy-to-measure things.

    For instance, we might well think that a reduction in government corruption would have a much larger impact on total welfare than saving N lives, and may in the long run save more than N lives. But it’s hard to measure corruption, and it’s hard to say how best to purchase reductions in corruption. Difficulty of measurement doesn’t make something lower-EV, though; it just makes it riskier and more uncertain. And in a case like that, where you don’t know which option has the best EV, a diversified basket can make sense.

    • LRS says:

      I agree. I think Ozy makes the more convincing case for diversifying as a hedge against uncertainty.

      Scott’s response to this, if I’m understanding him correctly, is in his second-to-last paragraph. There, he acknowledges the difficulty of decision-making under uncertainty. But he seems to put faith in there being a sufficiently large field of decision-makers, so that all of them going all-in on their favorite charity leads to a distribution of funds no worse than what would result if all of them diversified.

      It seems to me that there are many ways this could go wrong. First, there does not seem to be a sufficiently large field of effective altruists to allow population effects to even out the aggregate distribution of funds under an everyone-goes-all-in rule. Second, the information available may be skewed in consistent ways that systematically push funds in a particular direction.

      • anon says:

        I think diversifying makes sense if we’ve got several charities, all of which are estimated to be almost as efficient as each other, and we’re uncertain about our ability to evaluate charities. The diversification then has a low cost, one worth paying to increase the outside-view probability that our donations are successful. And I think charities actually are a lot like this, so I agree with Ozy.
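
        As a toy illustration (the effectiveness numbers and the noise level are invented, and the model is deliberately crude):

        ```python
        # Toy model: five charities with nearly tied true effectiveness, which we
        # can only estimate with noise. Compare going all-in on the apparent best
        # vs. splitting evenly among the apparent top three. Numbers are invented.
        import numpy as np

        rng = np.random.default_rng(1)
        true_eff = np.array([10.0, 9.8, 9.6, 7.0, 5.0])  # utilons per dollar
        NOISE = 1.5       # std dev of our evaluation error
        TRIALS = 100_000

        all_in = split = 0.0
        for _ in range(TRIALS):
            est = true_eff + rng.normal(0, NOISE, true_eff.size)
            all_in += true_eff[np.argmax(est)]              # all to apparent best
            split += true_eff[np.argsort(est)[-3:]].mean()  # even three-way split

        print(all_in / TRIALS, split / TRIALS)
        # The two averages come out close: with near-ties and noisy evaluations,
        # the expected cost of diversifying is small.
        ```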

        But in general, even when we’re in the presence of uncertainty, we should make the best decisions we can even if it means making ourselves vulnerable to risk. I think loss aversion is creeping into your thinking processes here, because you don’t actually give a reason that we shouldn’t put all our eggs into one basket if that’s the basket our calculations point to. And no such reason is possible to give.

        We shouldn’t expect a group of people to literally follow the same decision processes as us. But that’s a useful metaphor that makes it clear why we should tolerate risk, even though it seems uncomfortable.

        • LRS says:

          I agree that, generally speaking, the possibility of a bad outcome shouldn’t deter us from making a decision that, on expectation, is optimal.

          Is “low confidence in the accuracy of our calculations” not a good reason to avoid putting all our eggs into one basket? I didn’t explicitly proffer this as a reason, but the parent comment did and I agreed with it.

          In a world where rigorous calculations are extremely difficult, diversification seems like a good way to hedge against the high likelihood of calculation errors. I acknowledge that it is possible that this is just a fancy specious way of dressing up irrational loss aversion to make it seem legitimate.

  28. Ben Kuhn says:

    I’m late to the party on this one, but I recently collected a bunch of arguments for and against donating to only a single charity. Scott, you’re right that we shouldn’t be risk-averse in charitable donations the same way we are about stock investing, but there are some other nuances in favor of having a basket of charities. Empirically, I think most EA types donate to more than one (although I don’t have stats on this and could be mistaken).

  29. Keith says:

    This seems to be about the probability distribution of a payoff, its expected value (that is, its mean), and its associated uncertainty (variance of the pdf).

    Diversification plays three different roles in three different contexts:

    1. For charities one is arguably only interested in expected value. The highest expected value is preferred. The magnitude of uncertainty can be ignored. Diversification is avoided as this would not increase the expected value of the payoff.

    2. For egg-carrying, one diversifies. One literally does not carry all one’s eggs in one basket because of a nonlinear utility function. Having two eggs isn’t twice as good as having one, so diversification of egg payoffs increases expected utility.

    3. In stocks, one can diversify and increase the size of stock holding versus cash. Instead of x% cash and y% in one stock, one has (x-a)% in cash and (y+a)% in more than one stock. One’s entire portfolio can therefore have a higher expected payoff for the same distribution variance/risk/uncertainty.

    Number 3 doesn’t seem to have been mentioned and is a different argument for diversification than number 2. (Harry Markowitz won a Nobel prize in part for his work on point number 3.)
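
    Here’s a minimal numeric illustration of point 3, with invented numbers (two uncorrelated stocks with identical statistics, plus cash at a risk-free rate):

    ```python
    # Minimal numeric illustration of point 3, with invented numbers: two
    # uncorrelated stocks, each with 8% expected return and 20% volatility;
    # cash borrows/lends at 2%. Diversifying and then levering back up to
    # the original volatility raises the expected return at the same risk.
    import math

    MU, SIGMA, RF = 0.08, 0.20, 0.02

    div_mu, div_sigma = MU, SIGMA / math.sqrt(2)  # 50/50 split: same mean, less risk

    leverage = SIGMA / div_sigma                  # scale risk back up to 20%
    levered_mu = RF + leverage * (div_mu - RF)    # excess return scales with leverage

    print(MU, SIGMA)          # 0.08 at 0.20 volatility: one stock
    print(levered_mu, SIGMA)  # ~0.1049 at 0.20 volatility: diversified + levered
    ```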

    • Alexander Stanislaw says:

      Regarding 3, see this thread. 3 requires decreasing marginal utility.

      • Keith says:

        Thanks Alexander. I missed that thread up there.

        Regarding my Point 3, I disagree that it requires decreasing marginal utility or any consideration of marginal utility whatsoever.

        Using leverage (borrowing cash) or sizing the cash component, one can build a diversified stock portfolio with the same risk/uncertainty/variance as a non-diversified portfolio and a higher expected return.

        That’s correct… a higher expected return with no increased risk(!)

        The reason for this diversification is therefore fundamentally different to the reason that one might diversify because of nonlinear utility functions (e.g. Point 2).

        (Of course, in practice one might struggle to estimate the expected payoffs, variances and correlations between payoff distributions.)

        In the academic finance lingo, one exposes oneself only to priced risk and eliminates unpriced risk by diversifying.

        See Markowitz’s work on Modern Portfolio Theory (MPT) for more.

        As far as I can see the ideas behind diversification in MPT are of no use in forming a charity portfolio.

        • Douglas Knight says:

          Nope. Caring about “risk” is the same as having diminishing marginal returns. Stop using words you don’t understand.