Map Of Effective Altruism

In the spirit of my old map of the rationalist diaspora, here’s a map of effective altruism:


Thumbnail – click to expand

Continents are cause areas; cities are charities or organizations; mountains are individuals. Some things are clickable links with title-text explanations. Thanks to AG for helping me set up the imagemap.


102 Responses to Map Of Effective Altruism

  1. toastengineer says:

    Stupid question: When it comes to X-risk reduction, why do we never talk about the B612 Foundation? They seem pretty legit and the situation they want to prevent has already happened once…

    • Scott Alexander says:

      It’s possible to calculate the risk of a globally catastrophic asteroid strike in the next few centuries, and given how many centuries we’ve gone without one it’s very small. After more than a few centuries, either something else will have killed us already, or we’ll be at a point where we have lots more options.
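
      A minimal sketch of that calculation, assuming (purely for illustration) one extinction-class impact in the ~66 million years since Chicxulub:

      ```python
      # Naive base-rate estimate; the single input below is an assumption,
      # not an official figure.
      centuries_observed = 66_000_000 / 100      # centuries since the last dinosaur-killer
      p_per_century = 1 / centuries_observed     # crude frequency estimate
      print(f"P(extinction-class impact in a century) ~ {p_per_century:.1e}")  # ~1.5e-06
      ```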

      • Yep. See Andrew Snyder-Beattie’s paper laying out this argument more clearly, and including other natural risks: https://www.nature.com/articles/s41598-019-47540-7 (I still slightly disagree with him about pandemics, but that’s a different point.)

      • John Schilling says:

        According to my cheat sheet on this, we’re already at six-nines for there being no 10+ km asteroid impacting the Earth in the next century, and 99.98% for no 1+ km asteroids. B612 could add another nine to both of those numbers, but their real value is going from 96% to 99.5% on no 100-meter asteroids hitting the Earth without warning in the next century.

        Having years rather than days (or maybe only hours) to evacuate the impact zone of a 100-meter asteroid is valuable, potentially very valuable if the impact area includes a city, but a 100-meter asteroid is a non-nuclear Tsar Bomba, not a dinosaur-killer. I’ll let someone else do the math on the Effectiveness of Altruistically reducing that probability by an absolute 3.5%, given that most of the Earth is ocean or otherwise very sparsely populated.
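
        For a rough sense of what that math looks like, here is a sketch in which every input is an illustrative guess, not a B612 figure:

        ```python
        # Crude expected-value sketch of the 3.5%-per-century warning improvement.
        # All inputs are illustrative guesses, not anyone's real numbers.
        p_averted = 0.035         # absolute reduction in P(unwarned 100 m impact) per century
        frac_dangerous = 0.03     # guess: fraction of Earth where a hit kills many people
        deaths_if_hit = 100_000   # guess: toll of a Tunguska-class hit on a populated area
        program_cost = 500e6      # guess: survey program cost, in dollars

        deaths_averted = p_averted * frac_dangerous * deaths_if_hit   # ~105 per century
        print(f"~{deaths_averted:.0f} expected deaths averted per century")
        print(f"~${program_cost / deaths_averted:,.0f} per expected death averted")
        ```

        On those made-up numbers it comes out to millions of dollars per expected death averted, nowhere near GiveWell territory.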

        There are also long-period comets, which are an order of magnitude or so rarer than comparable undiscovered asteroids. But looking for those would be more than an order of magnitude more expensive than looking for sneaky asteroids, and it’s not what B612 is trying to do.

        • Nancy Lebovitz says:

          If you’ve got years, isn’t that enough time to nudge the asteroid into a safe orbit?

          • John Schilling says:

            Depends on how many years, and also on the political will to act. The median, and really the two-sigma-high case, for B612 doing any good at all is that they do the good thing of finding an asteroid that would otherwise hit somewhere in e.g. the Brazilian rain forest.

            Given say five years and twenty gigabucks, I’m 90% confident that we could divert that asteroid into a safe orbit. The other 10%, though, involves half-diverting the asteroid so that it hits somewhere else – and now whoever did that is responsible for all the death and destruction at “somewhere else”, while only a few rationalist weirdos will give them credit for damage reduction at the original impact site.

            The most likely outcome is that we spend two years and a gigabuck studying the problem while telling a few thousand Brazilians to make evacuation plans just in case, and the study never quite comes up with anything that anyone is willing to commit to given their own somewhat perverse incentives.

            If B612 finds an asteroid that’s going to hit San Diego in fifteen years, we’ll probably go ahead and divert it.

          • FeepingCreature says:

            Note that depending on how early you can get to it, you only need to shift its orbit a very small amount. I haven’t done the math, but I suspect that given years of warning, you could probably divert it with a few ordinary launches. Elon Musk could probably do it on his own.

            The farther it is from the Earth when you get a rocket to it, the larger the effect of your maneuver by the time it gets close to Earth.
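
            A minimal sketch of that lever arm, ignoring the real orbital mechanics (which typically amplifies an along-track nudge by a further factor of a few):

            ```python
            import math

            # To first order, a velocity change dv applied t seconds before
            # encounter shifts the arrival point by roughly dv * t.
            EARTH_RADIUS_M = 6.371e6
            YEAR_S = 3.156e7

            for years in (1, 5, 10):
                dv = EARTH_RADIUS_M / (years * YEAR_S)   # m/s to miss by one Earth radius
                print(f"{years:>2} yr lead time: dv ~ {dv * 1000:6.1f} mm/s")

            # Impulse for a 100 m rocky asteroid (assumed density 3000 kg/m^3):
            mass_kg = 3000 * (4 / 3) * math.pi * 50**3   # ~1.6e9 kg
            dv_10yr = EARTH_RADIUS_M / (10 * YEAR_S)
            print(f"impulse at 10 yr lead: ~{mass_kg * dv_10yr:.1e} N*s")  # ~3e7 N*s
            ```

            At ten years of lead time that impulse is in the range a handful of heavy launches could deliver, consistent with the guess above.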

      • Bugmaster says:

        Are you saying that the Singularity is more probable than an asteroid impact — because we have seen asteroids before, but we’ve never seen a Singularity?

        • John Schilling says:

          He’s saying the Singularity is more probable than an extinction-level asteroid impact. And he’s saying this because we’ve seen all the X-risk class asteroids before, plotted their orbits well enough to be highly confident that they aren’t going to X us any time in the next hundred years, and established their power-law size distribution well enough to be confident that we haven’t missed any.

          It is quite possible that you might get bitten by a flea or a mosquito in the next five minutes. It is remotely possible that you might get bitten by a rat in the next five minutes. What are the odds of your being bitten by a Bengal tiger in the next five minutes, and how long did it take you to establish that those odds are negligible? Bengal tigers are unambiguously real, so why aren’t you worried about one right now?

          • Bugmaster says:

            And he’s saying this because we’ve seen all the X-risk class asteroids before

            What, really all of them? Are the chances of us having missed just one such asteroid literally zero?

            Bengal tigers are unambiguously real, so why aren’t you worried about one right now?

            I am, a little! But negligibly so, given what I know of tigers, their habitat, and their rarity. However, I know absolutely nothing about genetically engineered flying cyborg tigers. I know that genetic engineering is possible (I eat genetically engineered corn all the time); I know that tigers exist; and I know that cyborgs exist as well (I might become one of them, sadly, given my family’s history of heart disease). Flying cyborg tigers would have virtually no restrictions on habitat, and a voracious appetite for meat (as well as for electricity). How worried should I be about them?

          • John Schilling says:

            What, really all of them? Are the chances of us having missed just one such asteroid literally zero?

            If you want to play the pedantic literalism game, scroll up five posts.

          • Bugmaster says:

            @John Schilling:
            The pedantry here is actually important, because we’re looking at very low probabilities across the board. If the probability of a killer asteroid impact is 1e-26, and the probability of a killer Singularity is 1e-50, then we should still allocate more money toward asteroids — even if “more money” translates into “not much at all”.

            The reason I brought it up is because you said “we’ve seen all the X-risk class asteroids before”, which is kind of an absurd statement, and it leads me to believe that you are wildly off in your probability calculations. Can you really guarantee with total certainty that there isn’t some low-albedo asteroid hurtling towards us from the Oort cloud (or maybe even interstellar space) right now? If not, then we’re talking about low-probability events, and we need to show our math; and saying “we can’t calculate P(X) and therefore we should worry about X” is not math.

          • John Schilling says:

            The reason I brought it up is because you said “we’ve seen all the X-risk class asteroids before”, which is kind of an absurd statement,

            It is probably a literally true statement, so please go recalibrate your absurd-ometer.

        • The relevant difference is not that we’ve seen asteroid collisions before. We’ve never seen vacuum decay, and here too the fact that it has never caused a global catastrophe in the billions of years the universe has existed is strong evidence that it’s unlikely to do so in the coming centuries. The relevant difference is that catastrophic asteroid collisions and vacuum decay are natural phenomena independent of humanity, and so are roughly as likely to occur today as on any other day in the past hundreds of millennia. On the other hand, the Singularity is theorized as a consequence of the development of technological civilization, which we have only observed happen once, and only in the past few centuries. Of course the fact that we don’t have any past experiences of the long-term consequences of technological civilization makes speculation on this topic far more uncertain than asteroid collision or even vacuum decay (which as a basic physics phenomenon lends itself to very precise models).

          • Bugmaster says:

            Of course the fact that we don’t have any past experiences of the long-term consequences of technological civilization makes speculation on this topic far more uncertain than asteroid collision

            I agree that P(Singularity) is << P(asteroid), though Scott seems to disagree. That said, P(long-term consequences of technological civilization) > P(asteroid), given that global warming does already exist.

            (Amusingly enough, the Singularity reached backwards in time and made my original comment say the opposite of what I intended… Well, either that, or the HTML parser misinterpreted my angle brackets.)

  2. alyssavance says:

    I like the art, but you missed a lot of OpenPhil’s grant areas, in particular the science and policy spaces:

    https://www.openphilanthropy.org/giving/grants

    There’s been about $10 million given to in vitro gametogenesis, for example. Also things like YIMBY, central bank policy reform, criminal justice reform, etc.

  3. wearsshoes says:

    It might be worth noting that not all GiveWell-recommended charities consider themselves EA per se, especially in the Global Poverty space. A lot of them operate mostly independently of the EA-sphere, except where funding, effectiveness evaluations, and accountability are concerned. One charity representative I spoke to found GiveWell intrusive and frustrating compared to their other major donors.

    • Taymon A. Beal says:

      Unsurprising; GiveWell asks for a lot of information that most charities aren’t prepared to provide because gathering it requires effort and most donors don’t care about it. They have to do this in order for their thing to work at all. The earlier days of their blog contain a lot of posts about navigating this culture clash.

      I don’t feel too bad for the charities that have to put up with this because they are reasonably well-compensated if they make it to the “participation grant” stage ($100,000) and very well-compensated if they make it to top-charity status ($2.5 million per year from Open Phil, minimum, and that’s before counting any of the money that they influence from anyone else). Overall they try to be transparent about what it takes to engage with them and I think it seems worth it if you plausibly have a good program. But yeah, if you’re lucky enough to have a source of funding that gives you $2.5 million per year without asking so many questions then you’re probably going to find GiveWell’s process annoying and burdensome by comparison.

      (Also GiveWell itself was ambivalent about being identified with EA for a long time, although this has faded somewhat in recent years.)

      • felix says:

        That sounds really interesting, do you have a source handy on GiveWell being ambivalent about being identified as EA? I can’t find any signs of ambivalence in their blog’s “Effective Altruism” category, but these are all from 2013, and I’m not sure if you’re thinking of a time earlier or later than that…

        • Taymon A. Beal says:

          It’s not something that they ever wrote about on the GiveWell blog, as far as I know. Holden has written about some of this stuff here and here.

  4. 10240 says:

    Bug report: Giving What We Can and Rethink Charity lack links and tooltips.

  5. Taymon A. Beal says:

    This is great and I love it.

    I’ll put nitpicks and bug reports as replies to this comment, to make them easier to hide. Others are encouraged to do likewise, if they want. (I might not be able to consolidate them all in one comment because I’m likely to find more over time.) Scott, feel free to delete comments as they cease to be relevant, if you want.

    • Taymon A. Beal says:

      Jeff Kaufman is not an animal welfare person; he prioritizes global poverty.

      The following organizations are absent and seem like maybe they should be on here (source is my spreadsheet of all the recognized EA charities):
      – Center for Global Development (suggested by Open Phil staff)
      – Charity Science (receives donations via effectivealtruism.org)
      – Federation of Indian Animal Protection Organizations (ACE standout charity)
      – Fistula Foundation (receives donations via RC Forward)
      – Forethought Foundation (receives donations via effectivealtruism.org)
      – Global Priorities Institute (suggested by Open Phil staff)
      – Humane Slaughter Association (receives donations via EA Foundation)
      – Innovations for Poverty Action (receives donations via RC Forward)

      My spreadsheet also includes some charities that are recognized as in scope for EA by established EA meta-orgs but are working outside the four traditional focus areas, including on issues like criminal justice reform, climate change, and land use reform. Not sure if you want to consider those.

      Organizations not on my spreadsheet (usually because they’re not taking retail donations) that you might want to consider include:
      – DeepMind (has a team working on AI safety; Victoria Krakovna, who is listed by name on the map, works there)
      – Founders Pledge

      (In addition to my spreadsheet, I would also encourage anyone interested in surveying the EA landscape to scroll through the 80,000 Hours job board and the Open Phil grants database.)

      Can you tell me what your source is for there being a specific EA org named the “Biosecurity Initiative” that’s apparently too obscure to be found on Google? I suspect that this is some kind of confusion and they’re actually named something else.

      • Taymon A. Beal says:

        Whoops, Founders Pledge is on there after all and I just missed it.

      • Taymon A. Beal says:

        Okay, I think this is the “biosecurity initiative” that the map is referring to, and yeah, they seem pretty embryonic-stage and don’t have a website yet. They’re part of the Center for International Security and Cooperation at Stanford.

      • egastfriend says:

        +1 on Jeff’s location. Also, why is he so far away from Julia?? They’re married!! Dustin and Cari got to have their mountains next to each other!

    • Taymon A. Beal says:

      The EA Hotel just renamed itself to the “Centre for Enabling EA Learning & Research”.

      Future Perfect is a vertical, not a column. (Yes, these are different things. It’s confusing.)

      I’m confused about why Dustin Moskovitz and Cari Tuna are on the global poverty continent given that Open Phil thinks it’s going to give most of its money to global catastrophic risk prevention.

      There are various important people not listed but I’m not going to try to enumerate them. (Would sure be interesting to try to get some kind of sense of who’s missing, though, given that only a few people can be included.)

      Bill Gates has no association with EA as far as I know although we obviously think he’s cool.

      The END Fund’s tooltip is screwed up due to an HTML quoting bug.

      The “Confused? Click here” tooltip has some mojibake in it.

      GAIN helps developing-country governments iodize their salt. (I think they also do other things, but they’re a GiveWell standout charity specifically for the salt thing, not for anything else.)

      Along similar lines, Malaria Consortium, Helen Keller International, Sightsavers, and the END Fund each have only one GiveWell-recommended program (seasonal malaria chemoprevention, vitamin A supplementation, deworming, and deworming, respectively), even though they also do other things. And I’m not aware of them engaging with the EA community outside of the context of their GiveWell recommendations (or The Life You Can Save recommendations which are informed by GiveWell).

      I think by “Leverhulme Institute” you really mean the Leverhulme Centre for the Future of Intelligence.

      The Albert Schweitzer Foundation is actually German, not British, though they have a British subdivision (the Schweitzer Institute). Their main website is https://albertschweitzerfoundation.org/. Might also be worth mentioning that their work includes a focus on fish welfare (though it’s not limited to that), as this is a big part of what sets them apart from other animal charities.

      In general, if you’re confused about what an ACE recommended charity does, the ACE website is a good place to look; they have short blurbs summarizing this. (E.g., the Nonhuman Rights Project is specifically about achieving legal rights for animals through judicial precedent; they focus on great apes because they think those have the strongest case for deserving something like human rights.)

      • Taymon A. Beal says:

        To be more specific: I will volunteer to help write up blurbs for the orgs that have non-descriptive blurbs if that is a thing that you would find valuable.

    • aarongertler says:

      Nitpick: “Centre for Effective Altruism” and “Centre for the Study of Existential Risks” should both use the British spelling of “Centre”.

    • Grantford says:

      A couple more bugs: The tooltip for Global Catastrophic Risk Institute is not present or is covered up by the one for Center (Centre?) for the Study of Existential Risk. L214 is missing a hyperlink and a tooltip.

      • Taymon A. Beal says:

        L214 does have a hyperlink and tooltip but it’s a bit misaligned.

        • Grantford says:

          Ah, my mistake. It looks like Global Catastrophic Risk Institute also has an imperfectly aligned tooltip and hyperlink that I didn’t find at first.

    • Taymon A. Beal says:

      This is poking the hornets’ nest a little, but do you want to include Leverage? Or the Median Group?

      Paul Christiano and Carl Shulman should almost certainly be on this map.

      There’s also a question of whether the principals of other major organizations (Nate Soares, Dario Amodei, Tim Telleen-Lawton, Rob Mather, Leah Edgerton) belong here. Is there a particular principle regarding whether someone goes on here as an individual? Is it whether they have a blog?

  6. A few corrections:

    1) Bostrom’s mountain should be closer to FHI, or at least have a zipline.
    2) Dank EA memes isn’t a location, it’s the bedrock on which the entire map rests.
    3) The map of locations for existential risk is wrong; there’s a mountain of respectability over on the left side, with a descent into madness. Organization locations would need to be shuffled to reflect this. And it’s not an archipelago, but rather a Dead Sea-like depression that descends far past the sanity waterline. The DEAM bedrock is clearly exposed in some places, like PETRL, which is unfortunately missing.

  7. noyann says:

    @Scott, how did you set point locations? Calculated distances, rules of thumb, aesthetics, the feeling of a well-informed gut, …?

  8. Jacobs says:

    No Toby Ord mountain?

  9. Nice work! 😀 Thanks for sharing it.

  10. Frederic Mari says:

    Hi, Scott. What kind of map drawing tool do you use? Either for this map or the previous one about the rationalist diaspora?

    [EDIT: Can someone fill me in on this diaspora and/or what happened at Less Wrong for a mass exodus to occur? I’m not interested in salacious details of past flame wars, just a general overview of what happened/its impact on the rationalist community (assuming there’s one)]

    • mwigdahl says:

      I would love to know this also!

    • Taymon A. Beal says:

      There wasn’t a big salacious flamewar or anything; LW 1.0 died gradually. The closest thing to a definitive explanation of why this happened that I’m aware of is a Reddit comment written by our esteemed host. Since that comment was published, LW 2.0 launched and activity levels are back up, but it’s under new management and the focus has shifted a bit compared to Scott’s description of LW 1.0.

      • Viliam says:

        The comment describes it well, and I’d like to add that the problem with karma exploitation was only solved by rewriting the entire website.

        The previous website was a fork of Reddit. The infamous Eugine Nier found a way to be extremely annoying, and trying to stop him was like playing whack-a-mole. He made literally hundreds of user accounts, not even trying to hide them (they usually had names like “eugine001”, “eugine002”… “eugine100”, occasionally switching to a different prefix). He somehow gained unlimited karma… I am still not sure how, and I don’t know if anyone knows, because he didn’t leave visible marks of doing it; most of his accounts didn’t even post comments. (My guess is it was something like: post a comment, have it upvoted by the remaining 100 accounts, then delete it. Either automated, or he spent insane amounts of time doing this.) At that point, moderation would have had to become a full-time job. And he used the unlimited karma to downvote-bomb anyone he didn’t like. Not just their comments on politics, but their entire comment history.

        Then the site went dead for about a year. Then a group of heroic volunteers rewrote the entire functionality from scratch, and imported the old content. That allowed them to add new features, both for users and for moderation. Eugine stopped being a problem. Site became active again.

        In case you are curious: yes, rewriting the entire website was easier than maintaining the old one. The Reddit code and database are so complex that we had a few volunteers who said “okay, I am a professional software developer, what could be so difficult about this?”, and most of them left crying (that included me).

        I suspect that the karma-bombing by Eugine contributed a lot to the taboo on political debates on LW. Because before him, there was an occasional “political thread” with different rules (similar to the fractional Open Threads on SSC). But people got tired of participating in a debate where half of the participants predictably got karma-bombed afterwards.

        • Viliam says:

          By the way, this shows the power of an individual. One person made Less Wrong, and thereby started the worldwide rationalist community. Later, one person almost destroyed Less Wrong.

          Of course, it is more complicated than this. Eliezer started his blog with Robin Hanson, there were other people in the Singularity Institute, etc. And a group of people finally stopped Eugine. So the ability to gain collaborators and followers (and the ability not to piss off people so much they will unite against you and spend great effort to stop you) is also critical. And maybe the time was right, and some variant of the rationalist movement would have started anyway, only a few years later; this we will never know. But still, some people have a great impact, despite starting inconspicuously.

          (Similarly, SSC is a one person project. Also, most negative information about rationalist community originates with one person… the guy with most edits at the Wikipedia “Less Wrong” page. Etc.)

          • Bugmaster says:

            I think the power of the individual is quite strong in small communities. AFAIK both Less Wrong (at its peak) and SSC consist of fewer than 100 active posters, with maybe an order of magnitude more lurkers. That’s just not a lot of people, and thus a single individual can have a large impact. Contrast this with e.g. some large public corporation, where even the CEO has to struggle to make incremental changes.

          • The Nybbler says:

            @Bugmaster:

            Since the start of 2019, there have been over 3400 distinct commenters. Excluding any commenters with fewer than 10 comments gets us down to about 930. So the commentariat is quite a bit bigger than 100 people. About 100 people have posted more than 300 comments in that time. You are the 75th most prolific commenter since 2019, with 469 comments.

            Just this year there have been over 950 commenters, over 600 excluding one-offs.

          • Bugmaster says:

            @The Nybbler:
            When I referenced “active posters”, I meant something like “people who consistently engage in conversations on most threads”. Such people, IMO, form the backbone of any forum; naturally, there will be many more people who post comments occasionally, but it’s the regular posters who set the tone of the site. I think that your criterion for “prolific posters” matches mine reasonably well:

            About 100 people have posted more than 300 comments in that time.

            300 comments/year is roughly 1 comment/day (though obviously the real distribution would be more “clumped”).

            That said, I did vastly underestimate my own loquaciousness… Not sure if it’s a good thing or a bad one, but it is what it is.

      • 10240 says:

        What way did the focus shift?

        How does it intend to avoid the problems described in that comment?

        (Asking as someone who has been reading SSC for quite a while, but only knew LW from the occasional links to old posts.)

      • Frederic Mari says:

        Thanks, @taymon a beal for the reddit comment link. Clear and concise, as usual, our Scott. I’ll check LW 2.0 as well.

        And thanks to @viliam for the extra color…

        FWIW, I think it shows the power of the internet, to tie together people of similar interests who might be isolated in RL b/c only 0.1% of the pop care about what they/we care about. But once you connect on internet, you realise that 0.1% of the pop. is millions upon millions of people… 🙂

        That has happened in other communities, whether good or bad ones.

        What was the internet joke? “Hmm, Google, I’m embarrassed to admit it but I like to have sex… with goats… while they’re on fire…” Google replies – “Query too common, please specify goat breed to narrow down results”…

    • Douglas Knight says:

      Scott talked about drawing tools here:

      I use NBOS Fractal Mapper. It can generate random-looking land masses, or you can trace polygons that it then converts (not too well) to random-looking land masses. After I’ve got that I import it into Photoshop and do the rest of the work. The Cartographers Guild has all the tutorials you will ever need and then some.

  11. acymetric says:

    Is the map meant to be “to scale”?

  12. Alex M says:

    Can any altruism truly be said to be effective without the power of enforcement? For example, we can (and do) ship millions of boxes of food aid to Africa, but all that aid is ineffective if some local warlord confiscates the aid packages before they reach the people they are intended to help.

    We can (and do) spend billions of dollars assisting the third world to encourage them to invest in green energy, but all that aid is wasted if corrupt local governments embezzle it and spend it on superyachts.

    If we wanted our altruism to be TRULY effective, we would also invest some of our resources in such a way to ensure that we have the power to enforce that our altruism is used in the ways that it is intended. For example, to resolve the Africa problem, maybe we should spend half as much money on food aid, and use the excess to hire mercenaries to defend the food shipments from warlords. (Or, alternatively, just put landmines that can be detonated remotely inside the aid boxes – if some warlord tries to intercept the aid, simply blow it up to spite them. They’ll very quickly learn from these deterrents not to meddle in our efforts.) If a third world country doesn’t use our financial aid appropriately, send CIA assassins after the corrupt politicians who are embezzling. Would anyone shed a tear for people like that? We have the technology to be very effective at this methodology: all we lack are politicians with the will to make it happen.

    I’m sorry if this sounds rude, but it seems to me like most so-called “effective altruism” is actually extremely INEFFECTIVE because altruists seem to have some sort of inexplicable allergy to power. It’s disheartening to see such naivete.

    • Lambert says:

      Warlord, sitting on a throne made from packets of deworming tablets upholstered with insecticide-treated bednets: ‘Wow, I’ve really got it made with this stealing-from-EAs thing.’

      Does EA even recommend any food shipments? Africa has a lot more farms than it has pharmaceutical factories.

      Also people don’t like it when you send a load of soldiers and bombs to their country (See: several decades of US adventurism).

      > inexplicable allergy to power.

    • AnteriorMotive says:

      You’re ranting at the wrong people. EA is dedicated to finding the most effective forms of charity. This means they specifically search out projects which are not likely to be co-opted by local powers-that-be. They mainly fund private organizations, able to leave a country if they’re no longer able to do their job effectively.

      You seem to be mainly complaining about foreign aid. The people allocating foreign aid are more interested in making a grand public gesture than getting results. EA was created precisely to counteract those kinds of incentives.

      Besides, my understanding is that foreign aid getting embezzled is so common because it’s seen as a feature rather than a bug. Paying off tin-pot dictatorships has much better optics when you do it with an “aid budget” instead of a “bribery fund.”

      • Alex M says:

        You’re ranting at the wrong people. EA is dedicated to finding the most effective forms of charity. This means they specifically search out projects which are not likely to be co-opted by local powers-that-be. They mainly fund private organizations, able to leave a country if they’re no longer able to do their job effectively.

        That’s a fair point. My apologies if my “rant” (as you put it) seems uncharitable. I disagree with a lot of your other points, but this one is sound.

        You seem to be mainly complaining about foreign aid. The people allocating foreign aid are more interested in making a grand public gesture than getting results. EA was created precisely to counteract those kinds of incentives.

        If the people allocating foreign aid are more interested in virtue signalling than achieving results, wouldn’t it be extremely effective and altruistic to use dirty tactics such as the Dark Arts to replace them or at least gain enough influence with them that you can force them or convince them to stop doing the ineffective virtue signalling and start doing effective altruism instead?

        Besides, my understanding is that foreign aid getting embezzled is so common because it’s seen as a feature rather than a bug. Paying off tin-pot dictatorships has much better optics when you do it with an “aid budget” instead of a “bribery fund.”

        Why are we paying off tin-pot dictatorships when we have the sophistication and technology to just squash them and replace them with better governments? I mean, what are they going to do, complain to the U.N. about being oppressed? The U.N. can barely wipe its own posterior without being vetoed by somebody on the Security Council. Maybe we should admit that our current global governance is a shambles and rather than propping a failing system up with life support, simply replace it with something more efficient that is based around pragmatic operating principles rather than a naive idealism that sounds great but completely fails to achieve results in the real world. Is that unreasonable?

        • Lambert says:

          The people allocating foreign aid are doing politics. They use far dirtier tactics on a day-to-day basis.

          >the sophistication and technology to just squash them and replace them with better governments?

          Like the better governments we installed in Iraq, Libya, Somalia…
          And no, you can’t cause regime change from the air. Arthur Harris taught us that much.

          • Alex M says:

            We don’t just have air power. We have AI and industrial automation (which can be easily converted to MILITARY-industrial automation). Let’s use it.

            The global order only exists in its current form because it was the equilibrium point of a world order set up half a century ago. We designed a system that we felt we could maintain based on the power and operational capabilities that we had at that point in time. The development of new technology means that our power and operational capabilities have changed substantially, so I feel like it’s time to renegotiate the existing equilibrium point, make new strategic alliances, and set higher expectations. A lot of the assumptions underpinning the rationale for our current global order are now obsolete, so at the very least, they need to be reexamined in light of our new capabilities.

            The people allocating foreign aid are doing politics. They use far dirtier tactics on a day-to-day basis.

            Dirty tactics are not the same as smart dirty tactics. With a few rare exceptions, most politicians seem to fall into the category of “charismatic but dumb.” The last Secretary of State we had allowed 4chan trolls to bait her into declaring a cartoon frog an enemy of democracy. These are not exactly top minds we’re dealing with here, so let’s not give them too much credit please.

          • Placid Platypus says:

            @Alex M as a general rule if you think you’re smarter and more skilled than the people who are currently winning in a highly competitive field they’ve been focusing on their whole lives, you’re probably drastically overconfident.

          • AlexOfUrals says:

            The last Secretary of State we had allowed 4chan trolls to bait her into declaring a cartoon frog an enemy of democracy.

            And she came a close second in the presidential election, which is better than ~100% of Americans who are legally qualified for it. You clearly have much to learn from her about using the dark arts in general, and about which purposes this particular move served in particular, if you think it was pure stupidity.

        • Viliam says:

          Day D: “Why don’t people use the Dark Arts to achieve their goals more efficiently? Let me show you how it is done properly!”

          Day D+100: “Why is everyone calling me a liar?”

          More info.

        • The Nybbler says:

          Why are we paying off tin-pot dictatorships when we have the sophistication and technology to just squash them and replace them with better governments?

          Leaving aside the complaining to the U.N.: because it rarely works, for various reasons. The people running the existing institutions (even once decapitated) will tend to hate us and undermine us. If we try to remove them, the task becomes running a country without deep local knowledge, which we couldn’t even pull off in post-WWII Germany. The bulk of the people of the country are likely to hate us as a foreign invader, and are likely to engage in sabotage, insurgency, assassination, and the other usual ways of creating chaos.

          And then there are the people we install, who are likely to be in it for interests other than those of the people of the country whose government we squashed. Most likely their own, but possibly also the US’s. If you did find naive altruists, they’d probably end up dead in a week, so more cynical operators are necessary.

          • Nornagest says:

            Also because it’s fucking expensive. Effective nation-building is hard even when you haven’t just bombed the whole region into the stone age (which is how we prefer to win wars); when you have, it’s ruinous.

        • John Schilling says:

          Why are we paying off tin-pot dictatorships when we have the sophistication and technology to just squash them and replace them with better governments?

          Well, first, some of them have the sophistication and technology to squash us, as do some of our allies, who would soon cease to be allies if we embarked on a squashing-governments-because-we-say-so policy.

          And second, we haven’t had the technology to replace tin-pot dictatorships with better governments in your lifetime.

          Pissing off all the world’s great powers to smoosh tin-pot dictatorships and replace them with tin-pot peoples’ republics or whatever, is not a winning move no matter how many dimensions your chessboard has.

    • Taymon A. Beal says:

      Other people have already replied to this but I also want to link GiveWell’s own blog post on it, “The lack of controversy over well-targeted aid”.

    • thisheavenlyconjugation says:

      “Actually, the true effective altruism is imperialism” is a bit more novel than “the true effective altruism is wokeness”, but not really much more interesting. I’m still waiting for a good faith “actually, the true effective altruism is evangelism”. Now that would be entertaining.

      • Viliam says:

        To play the Devil’s Advocate, imperialism once helped stop burning widows. How high would that score on the altruism scale?

        • Ozy Frantz says:

          Not enough to compensate for the Bengali famine.

          • Watchman says:

            I think you need some figures before you say that. It’s not as if famines didn’t kill people in India before the British (and the Mughals before them, etc.), so how many extra died because of British rule in this case? Now how many more lives were extended or came into being over the period of the Raj due to various innovations of the British: improved medical care, stopping wars between states, eradicating the Thuggees, etc.? I have no idea what the answer is, but a rational response is not to fall back on knee-jerk political point-scoring. There may be a case in terms of lives saved and created that imperialism is a good thing, and one incident does not discredit that.

      • AlexOfUrals says:

        Wait, isn’t it a common point of view among very religious people that salvation of the soul is paramount and salvation of the body is secondary, so missionary work and other ways to promote religion are the most important [and effective] form of charity? I haven’t seen someone calculating this explicitly, but it seems that if you take the core assumptions of Christianity and just do the math, you’ll trivially end up with something along these lines.

      • profgerm says:

        I mean, isn’t that the party line for effective altruism?

        (I had been avoiding the 80,000 Hours people out of embarrassment after their career analyses discovered that being a doctor was low-impact, but by bad luck I ended up sharing a ride home with one of them. I sheepishly introduced myself as a doctor, and he said “Oh, so am I!” I felt relieved until he added that he had stopped practicing medicine after he learned how low-impact it was, and gone to work for 80,000 Hours instead.)

        “The most effective thing you can do is preach about effective altruism” is pretty much my takeaway from that whole post. Though I’d disagree: there’s nothing ordinary about the people of EA, and this has many pros and many cons.

    • TJ2001 says:

      What’s to say that the so-called “Aid” is not accomplishing exactly what the organizations controlling such things intended?

      On the first order – yes: it’s confiscated and sold for a profit to enrich the leader’s cronies… but it’s ALWAYS confiscated and sold to enrich the leader’s cronies 100.000000000% of the time…

      So you have to assume the “Super powers” understand this. I mean, it happens every single time without fail… And further, that they have already planned that into their political calculus…

      So then you have to ask… What’s their real game? What are they actually accomplishing? Because feeding the world’s destitute obviously is not their plan…

  13. NLeseul says:

    It’s awfully depressing to see de Grey and SENS sitting there on those teensy islands.

    • MTSowbug says:

      We’ll raise a continent there, sooner or later. Just you wait!

    • Taymon A. Beal says:

      My hypothesis is that the biggest reason for this is that, in order to prioritize anti-aging as an EA cause, you have to both not shy away from weird speculative cause areas that the mainstream disdains, and not buy the arguments that X-risk prevention is more important than anything else. The correlation between those things is strong enough that the overlap is fairly low.

      (I’m also not totally convinced that SENS is the best bet for someone who cares about anti-aging, but that’s way outside my expertise.)

      • Pablo says:

        My hypothesis is that the biggest reason for this is that, in order to prioritize anti-aging as an EA cause, you have to both not shy away from weird speculative cause areas that the mainstream disdains, and not buy the arguments that X-risk prevention is more important than anything else.

        Yeah, I offered a very similar explanation here:

        Longevity research occupies an unstable position in the space of possible EA cause areas: it is very “hardcore” and “weird” on some dimensions, but not at all on others. The EAs in principle most receptive to the case for longevity research tend also to be those most willing to question the “common-sense” views that only humans, and present humans, matter morally. But, as you note, one needs to exclude animals and take a person-affecting view to derive the “obvious corollary that curing aging is our number one priority”. As a consequence, such potential supporters of longevity research end up deprioritizing this cause area relative to less human-centric or more long-termist alternatives.

  14. Matt M says:

    A little odd to see “Beyond Meat” and “Impossible Foods” included here. Are these not privately controlled, for-profit corporations?

    Which isn’t to say that giving them money won’t improve society in general or animal welfare specifically. I’m hardly anti-corporate. But I’d also argue that in that case, we should probably add Shell and P&G and Nabisco and every other major corporation that provides the necessities of civilization at a low cost to the “global poverty” continent, right?

    • Taymon A. Beal says:

      We think that the work that Beyond Meat and Impossible Foods are doing has positive externalities (i.e., things that the capital markets won’t take into account when deciding how many resources to allocate to them) that are important to EA. This probably isn’t true of Shell (whose externalities are very likely net negative due to climate change), P&G, or Nabisco (whose externalities are probably more-or-less neutral-ish); the market already rewards them for fulfilling the basic needs of the global poor, in a way that it doesn’t reward Beyond Meat or Impossible Foods for saving farm animals from lives of torture. This doesn’t mean that you should literally donate money to them (I don’t think you even can). But EAs interested in animal welfare should consider working there if they have relevant skills (turns out scaling up manufacturing of a novel biotech product is hard and they really need people who are good at it), and funders interested in animal welfare should consider making for-profit investments in startups working on this stuff that are early-stage enough to still need their money (Beyond Meat and Impossible Foods aren’t anymore, but Open Phil invested in Impossible Foods back when it was, and there are other companies that still are).

      • Matt M says:

        in a way that it doesn’t reward Beyond Meat or Impossible Foods for saving farm animals from lives of torture

        How does it not? Their products command a price premium, specifically because of their ability to save farm animals from lives of torture.

        You really think the stock performance of these companies has nothing to do with their aspirational social goals?

        If they started purchasing “animal torture offsets” in order to put them on moral parity with factory farms, do you think their business would be unaffected?

        • Taymon A. Beal says:

          Okay, fair, there’s some of that, but it still seems pretty clear that the market vastly undervalues animal welfare by EA standards. Only relatively rich people are willing to pay these premiums at all, for the most part, and the moral value of the amount of consumption that those premiums could otherwise buy for humans (even poor humans) is almost certainly much less than the moral disvalue of factory farming, at least given standard EA assumptions about the moral relevance of animals.

          Also, I think their business might do better in that weird hypothetical than you might think. A lot of the mainstream excitement about plant-based and cultivated meat focuses on climate change and health benefits (the latter is honestly pretty dubious with respect to the current offerings but let’s not get into that right now); I suspect this is because most consumers of these products aren’t interested in going vegetarian and therefore still don’t want to confront the moral questions around factory farming.

          • Matt M says:

            OK, that’s fine, I mostly agree with your second paragraph. I myself don’t care much about animal welfare, but I accept that this puts me outside the mainstream of EA.

            But I do think you’re not giving enough credit to “regular” corporations. To the extent that Johnson & Johnson finds a profitable way to make basic first aid and health care items available to the global poor at low prices, that’s providing a positive externality to “poverty reduction.”

          • Taymon A. Beal says:

            Yeah, sorry; of course regular corporations provide positive externalities, unless they’re monopolies or otherwise able to capture 100% of the gains from trade, and I should have been a lot more explicit about that.

            However, I’m not aware of any arguments that good things would happen for the global poor if we allocated more resources to Johnson & Johnson, compared to what the capital markets are already allocating to them. At least in the spherical-cow model, they get paid directly in proportion to the value they produce for the global poor (and everyone else), and since they’re a big established company, the capital markets have enough faith in them to provide them with whatever they need in order to produce that value. Nor are they hurting for labor, because the skills they need are largely transferable and they can afford to pay competitive compensation.

            Plant-based and cultivated meat companies are different because the moral value of them winning in the marketplace and obsoleting factory farming dwarfs the promising-but-uncertain return that their investors can expect, assuming that farm animals are morally valuable. If there were a for-profit startup that had the potential to produce outsize value for the global poor, but needed a bit more of a push beyond what the capital and labor markets would provide naturally in order to get there, it would likely attract EA interest. In fact, one could argue that this was the case with Wave, and it did attract EA interest.

  15. helloo says:

    So where is this site?

    Though it might be considered conceited to include yourself as an important person, you might still want to mark where you stand on the map. Or possibly on the rose as a bit of meta humor.

  16. Nicholas says:

    Do XY placement or landmass given a topic area denote anything in particular? Is meta EA just the archipelago, or does it extend to the existential risk ranges? Am I overthinking this? Probably. It is weird how much EA is meta, though. Spending money navel-gazing about how to spend money most efficiently seems like it has to rank pretty low on the scale of efficient ways to spend money, or at least have a sharply diminishing returns curve.

    • Taymon A. Beal says:

      EA has a very broad scope and we live in Extremistan; the possible array of things that we could spend money on is vast and the best ones are likely to be orders of magnitude better than anything else. Plus, it’s still early days for the movement; the amount of resources that’s been spent on it thus far is but a small fraction of what’s likely to be in the future (even just going by Open Phil alone; Dustin and Cari have pledged basically their entire fortune and have only spent a relatively small amount of it), so it seems likely that we’re still on the left side of the diminishing-returns curve. At the current margin, spending 25% of resources* trying to figure out what we want to be doing sounds pretty reasonable. This number should probably be lower a decade or two from now, but even then 5-10% still sounds like a good idea, unless the world has changed radically or we otherwise gain strong reason to believe that we’ve found the most important thing to put all our resources into.

      * This number is made up; it is vaguely inspired by “meta” being one of the four traditional focus areas.

  17. kai.teorn says:

    What are the reasons that the three major continents on this map are these and not others? Is this kind of a historical artefact, or has there been some thinking/reasoning to show that it’s these three areas that maximize some property we want to maximize? I’m not implying these are unworthy causes, of course, but I wonder if there may be others just as worthy (and urgent) but not present on this map. Such as solving mental health, for one.

    • Taymon A. Beal says:

      Here is the canonical post outlining the four focus areas of EA. Subsequently, Michelle Hutchinson wrote two posts arguing that these focus areas aren’t arbitrary. TL;DR: They correspond to sets of beneficiaries, and between them cover everyone who is likely to be morally relevant except present-day humans in rich countries, who are likely to already be overprioritized by non-EA charities on the current margin, so we generally should not focus on them. (There are a few smaller EA cause areas, like criminal justice reform, where people think they’ve identified exceptional opportunities to do good with respect to present-day humans in rich countries.)

      There are a number of EAs interested in mental health. StrongMinds, which is on Scott’s map, works on it in poor countries. (I suspect the primary reason they’re not a GiveWell top charity is that they can’t demonstrate, to GiveWell’s evidentiary standards, that the stuff they’re doing actually works.) Rich-country mental health is unlikely to be tractable (at least not if we’re talking about getting dramatically better results than current medical interventions can provide) or neglected (there are a lot of rich-country people highly motivated to spend money on it, and it’s already a very prominent meme in the broader culture). The most likely EA case for rich-country mental health would be if you think you’ve identified a generally neglected approach that has very high potential upside (like literally “solving mental health”). Such claims generally have a poor track record, so there would need to be a convincing reason why any particular claimed intervention was likely to be different in this regard. This is what the Qualia Research Institute, which is also on Scott’s map, is ostensibly doing.

    • Lambert says:

      They maximise a combination of scope, tractability and neglectedness.

      Solving mental health will probably take thousands of psychologists many decades. Academia is already employing these people to figure out how minds and brains work. Mental health charities are usually trying to raise awareness and increase engagement with the mental health system, not to solve mental health itself.

      Solving climate requires resources and co-ordination that can only be mustered by nation-states and international bodies.

      Solving politics is not tractable.
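
      The usual way to make that combination concrete is the importance/tractability/neglectedness product popularized by 80,000 Hours. A toy sketch, with all the example numbers invented:

      ```python
      # Toy importance * tractability * neglectedness scoring.
      # Factor definitions follow the standard framework; the numbers are invented.

      def marginal_value(importance, tractability, neglectedness):
          """importance:    good done if the problem were fully solved (arbitrary units)
          tractability:  fraction of the problem solved per doubling of resources
          neglectedness: 1 / (resources currently devoted to the problem)"""
          return importance * tractability * neglectedness

      # A huge but crowded, intractable problem vs. a classic EA pick:
      print(marginal_value(1000, 0.001, 1 / 500))   # e.g. "solve politics" -> 0.002
      print(marginal_value(100, 0.05, 1 / 5))       # tractable and neglected -> 1.0
      ```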

  18. Sandpaper26 says:

    I’m interested in the width of the sea between global poverty and animal welfare. In particular, it’s not outside the realm of the possible for global adoption of meat alternatives to greatly increase the potential food supply to low-income areas (low-cost meat alternatives may be both ethically beneficial to animals in general and more cost efficient to donate as a protein source).

    • Taymon A. Beal says:

      The reason why people are still starving in poor countries isn’t that growing food is too hard (we can easily grow enough food to feed everyone on Earth); it’s that the logistics of distributing it are too hard, and plant-based and cultivated meat won’t change that. Meanwhile, an uncomfortable-but-seemingly-likely hypothesis is that lifting countries from poor to middle-income will be very bad for animals on net, because this will cause demand for meat to rise as people start to be able to afford it, and there’ll be enough of it that large-scale factory farming will take off in those countries in order to meet that demand (the “poor meat-eater problem”). Plant-based and cultivated meat might be able to mitigate this if it can be made economical enough, but that’s a big “if” (we’re nowhere near there on the current margin), and in any case it doesn’t point to synergies between global poverty mitigation and animal welfare.

  19. Mev says:

    On behalf of Dank EA Memes: thank you for including us!

    On behalf of Dank EA Memes: how dare you slight us by denying us our rightful place as the capital of EA Meta.

    Also we’re probably closer to QRI than that because Andres has a long and generous record of shitposting in DEAM.

  20. thesilv3r says:

    I’m surprised Peter Singer isn’t a bigger mountain.

  21. Bugmaster says:

    TIL that there apparently exists a “Qualia Research Institute”, which does Exactly What It Says On The Tin ™. I’m not sure if that counts as “effective” — given that they apparently spend a sizeable portion of their revenue on philosophy — but it certainly is amusing…

  22. argentus says:

    One thought on native people and suicide rates. I think isolation of place also has something to do with suicide rates of “alienated” rural people. If you live in Podunk, East Texas, say, it’s comparatively easy to get out of Podunk for the day and go to some town or city of decent size. There are also dozens of little towns around you within reasonable distance that collectively have things in them: a restaurant here, a theater there, a nice park there. That also means there’s more of a chance of some place with decent employment prospects, though you might have to be willing to drive 30-40 miles to get it. If you live in Podunk, West Texas, it’s entirely possible there’s nothing more exciting than a Wal-Mart for 300 miles in any direction and no job within 75 miles better than working at 7-11.

    I use this example because I come from Podunk, East Texas and understand what living in a small, dying rural town with nothing in it is like. I also know the difference in the sort of abiding melancholy that lingers in a place like my hometown and the raging demons of despair that howl in places like Ft. Stockton in West Texas where there is nothing but boulders and tumbleweeds for miles in all directions.

    Now think of how isolation is even more pronounced in some place like Greenland or the Far North. This is compounded if you are poor and have no transportation to speak of. I would be interested to see how suicide rates vary among native people when you consider both the size of the place they live and that place’s relative isolation from other places.