[Sorry for the interruption; we will return to our regularly scheduled Adversarial Collaboration Contest tomorrow.]
[Epistemic status: I’m linking evaluations made by people I mostly trust, but there are many people who don’t trust these, I haven’t 100% evaluated them perfectly, and if your assumptions differ even a little from those of the people involved these might not be very helpful. If you don’t know what effective altruism is, you might want to find out before supporting it. Like I said, this is for maximally lazy people and everyone else might want to investigate further.]
If you’re like me, you resolved to donate money to charity this year, and are just now realizing that the year is going to end soon and you should probably get around to doing it. Also, you support effective altruism. Also, you are very lazy. This guide is for you.
The maximally lazy way to donate to effective charity is probably to donate to EA Funds. This is a group of funds run by the Center for Effective Altruism, where experts figure out the best charities to give your money to each year. The four funds are Global Health, Animal Welfare, Long-Term Future, and Effective Altruism Meta/Community. If you are truly maximally lazy, you can just donate an equal amount to all four of them; if you have enough energy to shift a set of little sliders, you can decide which ones get more or less.
If you have a little more time and energy, you might want to look at the charities suggested by some charity-evaluating organizations and see which ones you like best.
GiveWell tries to rigorously evaluate charities that can be rigorously evaluated, which usually means global health. They admit that they have to exclude whole categories of charity that try to change society in vague ways, because those charities can’t be evaluated as rigorously. But they do a good job of what they do. Most of their top charities fight malaria and parasitic worms; this latter cause is interesting because these worms semipermanently lower school performance, concentration, and general health, suggesting that treating them could permanently improve economic growth. You can donate directly to GiveWell (to be divided up among their top charities at their discretion) here, or you can look at their list of top recommended charities for 2019 here.
Animal Charity Evaluators is the same thing, but for charities that try to help animals, usually by fighting factory farming. You can donate to ACE’s Recommended Charity Fund, again to be divided up among their top charities at their discretion, here, or see their list of top recommended charities for 2019 here.
AI Alignment Literature Review And Charity Comparison is a report posted by LW user Larks going over all the major players in AI safety, what they’ve been doing the past year, and which ones need more funding. If you just want to know which ones they like best, CTRL+F “conclusions” and run it through rot13. Or if you’re too lazy to do that and you just want me to link you their top recommended charity’s donation page, it’s here.
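If you're not sure how to "run it through rot13", here is a minimal sketch in Python using the standard library's `codecs` module; the encoded string below is a hypothetical sample, not text taken from the actual report.

```python
import codecs

# rot13 shifts each letter 13 places, so decoding is the same operation
# as encoding. The sample below is a hypothetical example string.
encoded = "Znpuvar Vagryyvtrapr Erfrnepu Vafgvghgr"
decoded = codecs.decode(encoded, "rot13")
print(decoded)  # -> Machine Intelligence Research Institute
```

Most online rot13 tools do the same thing if you'd rather not open an interpreter.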
Vox’s report on the best charities for climate change lists ones that claim to be able to prevent one ton of carbon emissions for between $0.12 and $1, compared to the roughly $10 per ton you would pay on normal offset sites. Their top choice is Coalition For Rainforest Nations (but see criticism here), and their second choice is Clean Air Task Force.
You might also want to check out ImpactMatters (a version of GiveWell focused on literal First World problems), Let’s Fund (a site that highlights charities, mostly in science and technology, and runs campaigns for them), this post on the Effective Altruism forum about which charities people are donating to this year, and this list of what charities the charity selection experts at the Open Philanthropy Project are donating to.
And if you’re not actually lazy at all, you might want to check out some interesting individual charities that have been making appeals around here recently (others can add their appeals in the comments if they want).
The Center For Election Science tries to convince US cities (and presumably plans to eventually work up to larger areas) to use approval voting, a form of voting where third party candidates don’t “split the vote” and you can vote for whoever you want with a clear conscience. They argue this will make compromise easier and moderate candidates more likely to win. They’ve already succeeded in changing the ballot in Fargo, North Dakota, and as the old saying goes, “as Fargo, North Dakota goes, so goes the world.”
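The mechanics of approval voting are simple enough to sketch in a few lines: each voter approves any number of candidates, and whoever collects the most approvals wins. Since approving a third-party candidate never costs you your vote for a mainstream one, there is no vote-splitting. The ballots and candidate names below are made up for illustration.

```python
from collections import Counter

# Each ballot is the set of candidates that voter approves of.
# These names and ballots are purely illustrative.
ballots = [
    {"Alice", "Bob"},   # approving Bob doesn't "waste" the Alice vote
    {"Alice"},
    {"Carol", "Bob"},
    {"Bob"},
]

# Tally one approval per candidate per ballot.
tally = Counter(candidate for ballot in ballots for candidate in ballot)
winner, approvals = tally.most_common(1)[0]
print(winner, approvals)  # -> Bob 3
```

Contrast this with plurality voting, where the first voter would have had to pick just one of Alice or Bob.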
Happier Lives Institute wants to work directly on making people happier, but they realize nobody really knows what that means, so they’re doing a lot of meta-research on what happiness is and what the best way to measure it is. Aside from that, they seem to be working on cheap mental health interventions in Third World countries.
Machine Intelligence Research Institute works on a different aspect of AI alignment than most other groups; this comic explains the technicalities better than most sources. They are secretive and don’t talk much about their work or give people a lot to evaluate them on, so whether or not you donate will probably come down to whether they’ve earned your trust (they have mine).
Charter Cities Institute is trying to work with investors and Third World governments to create charter cities, autonomous cities with better institutions that can supercharge growth in the Third World. For example, a corrupt Third World country where doing business is near-impossible might designate one of their cities to be administered by foreign judges under an open-source law code, so that enterprise can take off. Think of it as a seastead, except on land, and with the host country’s consent (they’re hoping to profit off the tax revenue). David Friedman’s son Patri is leading another effort in this direction.
Finally, if you’re really skeptical and don’t believe any charity can accomplish much, you might want to consider GiveDirectly, which just gives your money directly to very poor people in Africa to do whatever they want with.