
Book Review: Inadequate Equilibria


Eliezer Yudkowsky’s catchily-titled Inadequate Equilibria is many things. It’s a look into whether there is any role for individual reason in a world where you can always just trust expert consensus. It’s an analysis of the efficient market hypothesis and how it relates to the idea of low-hanging fruit. It’s a self-conscious defense of the author’s own arrogance.

But most of all, it’s a book of theodicy. If the world was created by the Invisible Hand, who is good, how did it come to contain so much that is evil?

The market economy is very good at what it does, which is something like “exploit money-making opportunities” or “pick low-hanging fruit in the domain of money-making”. If you see a $20 bill lying on the sidewalk, today is your lucky day. If you see a $20 bill lying on the sidewalk in Grand Central Station, and you remember having seen the same bill a week ago, something is wrong. Thousands of people cross Grand Central every week – there’s no way a thousand people would all pass up a free $20. Maybe it’s some kind of weird trick. Maybe you’re dreaming. But there’s no way that such a low-hanging piece of money-making fruit would go unpicked for that long.

In the same way, suppose your uncle buys a lot of Google stock, because he’s heard Google has cool self-driving cars that will be the next big thing. Can he expect to get rich? No – if Google stock was underpriced (ie you could easily get rich by buying Google stock), then everyone smart enough to notice would buy it. As everyone tried to buy it, the price would go up until it was no longer underpriced. Big Wall Street banks have people who are at least as smart as your uncle, and who will notice before he does whether stocks are underpriced. They also have enough money that if they see a money-making opportunity, they can keep buying until they’ve driven the price up to the right level. So for Google to remain underpriced when your uncle sees it, you have to assume everyone at every Wall Street hedge fund has just failed to notice this tremendous money-making opportunity – the same sort of implausible failure as a $20 staying on the floor of Grand Central for a week.

In the same way, suppose there’s a city full of rich people who all love Thai food and are willing to pay top dollar for it. The city has lots of skilled Thai chefs and good access to low-priced Thai ingredients. With the certainty of physical law, we can know that city will have a Thai restaurant. If it didn’t, some entrepreneur would wander through, see that they could get really rich by opening a Thai restaurant, and do that. If there’s no restaurant, we should feel the same confusion we feel when a $20 bill has sat on the floor of Grand Central Station for a week. Maybe the city government banned Thai restaurants for some reason? Maybe we’re dreaming again?

We can take this beyond money-making into any competitive or potentially-competitive field. Consider a freshman biology student reading her textbook who suddenly feels like she’s had a deep insight into the structure of DNA, easily worthy of a Nobel. Is she right? Almost certainly not. There are thousands of research biologists who would like a Nobel Prize. For all of them to miss a brilliant insight sitting in freshman biology would be the same failure as everybody missing a $20 on the floor of Grand Central, or all of Wall Street missing an easy opportunity to make money off of Google, or every entrepreneur missing a great market opportunity for a Thai restaurant. So without her finding any particular flaw in her theory, she can be pretty sure that it’s wrong – or else already discovered. This isn’t to say nobody can ever win a Nobel Prize. But winners will probably be people with access to new ground that hasn’t already been covered by other $20-seekers. Either they’ll be amazing geniuses, understand a vast scope of cutting-edge material, have access to the latest lab equipment, or most likely all three.

But go too far with this kind of logic, and you start accidentally proving that nothing can be bad anywhere.

Suppose you thought that modern science was broken, with scientists and grantmakers doing a bad job of focusing their discoveries on truly interesting and important things. But if this were true, then you (or anyone else with a little money) could set up a non-broken science, make many more discoveries than everyone else, get more Nobel Prizes, earn more money from all your patents and inventions, and eventually become so prestigious and rich that everyone else admits you were right and switches to doing science your way. There are dozens of government bodies, private institutions, and universities that could do this kind of thing if they wanted. But none of them have. So “science is broken” seems like the same kind of statement as “a $20 bill has been on the floor of Grand Central Station for a week and nobody has picked it up”. Therefore, modern science isn’t broken.

Or: suppose you thought that health care is inefficient and costs way too much. But if this were true, some entrepreneur could start a new hospital / clinic / whatever that delivered health care at lower prices and with higher profit margins. All the sick people would go to them, they would make lots of money, investors would trip over each other to fund their expansion into new markets, and eventually they would take over health care and be super rich. So “health care is inefficient and overpriced” seems like the same kind of statement as “a $20 bill has been on the floor of Grand Central Station for a week and nobody has picked it up.” Therefore, health care isn’t inefficient or overpriced.

Or: suppose you think that US cities don’t have good mass transit. But if lots of people want better mass transit and are willing to pay for it, this is a great money-making opportunity. Entrepreneurs are pretty smart, so they would notice this money-making opportunity, raise some funds from equally-observant venture capitalists, make a better mass transit system, and get really rich off of all the tickets. But nobody has done this. So “US cities don’t have good mass transit” seems like the same kind of statement as “a $20 bill has been on the floor of Grand Central Station for a week and nobody has picked it up.” Therefore, US cities have good mass transit, or at least the best mass transit that’s economically viable right now.

This proof of God’s omnibenevolence is followed by Eliezer’s observations that the world seems full of evil. For example:

Eliezer’s wife Brienne had Seasonal Affective Disorder. The consensus treatment for SAD is “light boxes”, very bright lamps that mimic sunshine and make winter feel more like summer. Brienne tried some of these and they didn’t work; her seasonal depression got so bad that she had to move to the Southern Hemisphere three months of every year just to stay functional. No doctor had any good ideas about what to do at this point. Eliezer did some digging, found that existing light boxes were still way less bright than the sun, and jury-rigged a much brighter version. This brighter light box cured Brienne’s depression when the conventional treatment had failed. Since Eliezer, a random layperson, was able to come up with a better SAD cure after a few minutes of thinking than the establishment was recommending to him, this seems kind of like the relevant research community leaving a $20 bill on the ground in Grand Central.

Eliezer spent a few years criticizing the Bank of Japan’s macroeconomic policies, which he (and many others) thought were stupid and costing Japan trillions of dollars in lost economic growth. A friend told Eliezer that the professionals at the Bank surely knew more than he did. But after a few years, the Bank of Japan switched policies, the Japanese economy instantly improved, and now the consensus position is that the original policies were deeply flawed in exactly the way Eliezer and others thought they were. Doesn’t that mean Japan left a trillion-dollar bill on the ground by refusing to implement policies that even an amateur could see were correct?

And finally:

For our central example, we’ll be using the United States medical system, which is, so far as I know, the most broken system that still works ever recorded in human history. If you were reading about something in 19th-century France which was as broken as US healthcare, you wouldn’t expect to find that it went on working when overloaded with a sufficiently vast amount of money. You would expect it to just not work at all.

In previous years, I would use the case of central-line infections as my go-to example of medical inadequacy. Central-line infections, in the US alone, killed 60,000 patients per year, and infected an additional 200,000 patients at an average treatment cost of $50,000/patient.

Central-line infections were also known to decrease by 50% or more if you enforced a five-item checklist that included items like “wash your hands before touching the line.”

Robin Hanson has old Overcoming Bias blog posts on that untaken, low-hanging fruit. But I discovered while re-Googling in 2015 that wider adoption of hand-washing and similar precautions are now finally beginning to occur, after many years—with an associated 43% nationwide decrease in central-line infections. After partial adoption.

Since he doesn’t want to focus on a partly-solved problem, he continues to the case of infant parenteral nutrition. Some babies have malformed digestive systems and need to have nutrient fluid pumped directly into their veins. The nutrient fluid formula used in the US has the wrong kinds of lipids in it, and about a third of babies who get it die of brain or liver damage. We’ve known for decades that the nutrient fluid formula has the wrong kind of lipids. We know the right kind of lipids and they’re incredibly cheap and there is no reason at all that we couldn’t put them in the nutrient fluid formula. We’ve done a bunch of studies showing that when babies get the right nutrient fluid formula, the 33% death rate disappears. But the only FDA-approved nutrient fluid formula is the one with the wrong lipids, so we just keep giving it to babies, and they just keep dying. Grant that the FDA is terrible and ruins everything, but over several decades of knowing about this problem and watching the dead babies pile up, shouldn’t somebody have done something to make this system work better?

We’ve got a proof that everything should be perfect all the time, and a reality in which a bunch of babies keep dying even though we know exactly how to save them for no extra cost. So sure. Let’s talk theodicy.


Eliezer draws on the economics literature to propose three main categories of solution:

There’s a toolbox of reusable concepts for analyzing systems I would call “inadequate”—the causes of civilizational failure, some of which correspond to local opportunities to do better yourself. I shall, somewhat arbitrarily, sort these concepts into three larger categories:

1. Cases where the decision lies in the hands of people who would gain little personally, or lose out personally, if they did what was necessary to help someone else;

2. Cases where decision-makers can’t reliably learn the information they need to make decisions, even though someone else has that information; and

3. Systems that are broken in multiple places so that no one actor can make them better, even though, in principle, some magically coordinated action could move to a new stable state.

The first way evil enters the world is when there is no way for people who notice a mistake to benefit from correcting it.

For example, Eliezer and his friends sometimes joke about how overvalued really stupid Uber-for-puppies-style startups are. The people investing in these startups are making a mistake big enough for ordinary people like Eliezer to notice. But it’s not exploitable – there’s no way to short startups, so neither Eliezer nor anyone else can make money by correcting that error. So it’s not surprising that the error persists. All you need is one stupid investor who thinks Uber-for-puppies is going to be the next big thing, and the startup will get overfunded. All the smart investors in the world can’t fix that one person’s mistake.

The same is true, more tragically, for housing prices. There’s no way to short houses. So if 10% of investors think the housing market will go way up, and 90% think the housing market will crash, those 10% of investors will just keep bidding up housing prices against each other. This is why there are so many housing bubbles, and why ordinary people without PhDs in finance can notice housing bubbles and yet those bubbles remain uncorrected.

A more complicated version: why was Eliezer able to out-predict the Bank of Japan? Because the Bank’s policies were set by a couple of Japanese central bankers who had no particular incentive to get things right, and no particular incentive to listen to smarter people correcting them. Eliezer wasn’t alone in his prediction – he says that Japanese stocks were priced in ways that suggested most investors realized the Bank’s policies were bad. Most of the smart people with skin in the game had come to the same realization Eliezer had. But central bankers are mostly interested in prestige, and for various reasons low money supply (the wrong policy in this case) is generally considered a virtuous and reasonable thing for a central banker to do, while high money supply (the right policy in this case) is generally considered a sort of irresponsible thing to do that makes all the other central bankers laugh at you. Their payoff matrix (with totally made-up utility points) looked sort of like this:

LOW MONEY, ECONOMY BOOMS: You were virtuous and it paid off, you will be celebrated in song forever (+10)

LOW MONEY, ECONOMY COLLAPSES: Well, you did the virtuous thing and it didn’t work, at least you tried (+0)

HIGH MONEY, ECONOMY BOOMS: You made a bold gamble and it paid off, nice job. (+10)

HIGH MONEY, ECONOMY COLLAPSES: You did a stupid thing everyone always says not to do, you predictably failed and destroyed our economy, fuck you (-10)

So even as evidence accumulated that high money supply was the right strategy, the Japanese central bankers looked at their payoff matrix and decided to keep a low money supply.

It should be horrifying that this system weights a small change in the reputation of a few people (who will realistically do well for themselves even with a reputational hit) higher than adding trillions of dollars to the economy, but that’s how the system is structured.

In a system like this, everybody (including the Japanese central bankers) can know that increasing money supply is the right policy, but there’s no way for anyone to increase their own utility by causing the money supply to be higher. So Japan will suffer a generation’s worth of recession. This is dumb but inevitable.
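The bankers’ incentive problem can be reduced to a toy expected-utility calculation. Here is a minimal sketch in Python, using the made-up payoffs from the matrix above; the boom probabilities are my own illustrative assumptions, not numbers from the book:

```python
# Reputational payoffs from the made-up matrix: (policy, outcome) -> utility.
payoffs = {
    ("low", "boom"): 10,
    ("low", "collapse"): 0,
    ("high", "boom"): 10,
    ("high", "collapse"): -10,
}

def expected_utility(policy, p_boom):
    """Expected reputational payoff of a policy, given the probability
    that the economy booms under that policy."""
    return (p_boom * payoffs[(policy, "boom")]
            + (1 - p_boom) * payoffs[(policy, "collapse")])

# Suppose high money supply is genuinely better for the economy:
# a 60% chance of a boom, versus 30% under low money supply.
eu_low = expected_utility("low", p_boom=0.3)    # 3.0
eu_high = expected_utility("high", p_boom=0.6)  # 2.0
```

Even when the better policy doubles the odds of a boom, the “virtuous” policy wins on the banker’s personal payoff, because its reputational downside is capped at zero while the bold gamble risks a -10.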

The second way evil enters the world is when expert knowledge can’t trickle down to the ordinary people who would be the beneficiaries of correct decision-making.

The stock market stays efficient because expertise brings power. When Warren Buffett proves really good at stock-picking, everyone rushes to give him their money. If an ordinary person demonstrated Buffett-like levels of acumen, every hedge fund in the country would be competing to hire him and throw billions of dollars at whatever he predicted would work. Then when he predicts that Google’s price will double next week, he’ll use his own fortune, or the fortune of the hedge fund that employs him, to throw as much money into Google as the opportunity warrants. If Goldman Sachs doesn’t have enough to do it on their own, JP Morgan will make up the difference. Good hedge funds will always have enough money to exploit the opportunities they find, because if they didn’t, there would be so many unexploited great opportunities that the rate of return on the stock market would be spectacular, and everyone would rush to give their money to good hedge funds.

But imagine that Congress makes a new law that nobody can invest more than a thousand dollars. So Goldman Sachs invests their $1000 in Google, JP Morgan invests their $1000, and now what?

One possibility is that investment gurus could spring up, people just as smart as the Goldman Sachs traders, who (for a nominal fee) will tell you which stocks are underpriced. But this is hard, and fraudulent experts can claim to be investment gurus just as easily as real ones. There will be so many fraudulent investment gurus around that nobody will be able to trust the real ones, and after the few experts invest their own $1000 in Google, the stock could remain underpriced forever.

Something like this seems to be going on in medicine. Sure, the five doctors who really understand infant nutrition can raise a big fuss about how our terrible nutritional fluid is killing thousands of babies. But let’s face it. Everyone is raising a big fuss about something or other. From Eliezer’s author-insert character Cecie:

We have an economic phenomenon sometimes called the lemons problem. Suppose you want to sell a used car, and I’m looking for a car to buy. From my perspective, I have to worry that your car might be a “lemon”—that it has a serious mechanical problem that doesn’t appear every time you start the car, and is difficult or impossible to fix. Now, you know that your car isn’t a lemon. But if I ask you, “Hey, is this car a lemon?” and you answer “No,” I can’t trust your answer, because you’re incentivized to answer “No” either way. Hearing you say “No” isn’t much Bayesian evidence. Asymmetric information conditions can persist even in cases where, like an honest seller meeting an honest buyer, both parties have strong incentives for accurate information to be conveyed.

A further problem is that if the fair value of a non-lemon car is $10,000, and the possibility that your car is a lemon causes me to only be willing to pay you $8,000, you might refuse to sell your car. So the honest sellers with reliable cars start to leave the market, which further shifts upward the probability that any given car for sale is a lemon, which makes me less willing to pay for a used car, which incentivizes more honest sellers to leave the market, and so on.

In our world, there are a lot of people screaming, “Pay attention to this thing I’m indignant about over here!” In fact, there are enough people screaming that there’s an inexploitable market in indignation. The dead-babies problem can’t compete in that market; there’s no free energy left for it to eat, and it doesn’t have an optimal indignation profile. There’s no single individual villain. The business about competing omega-3 and omega-6 metabolic pathways is something that only a fraction of people would understand on a visceral level; and even if those people posted it to their Facebook walls, most of their readers wouldn’t understand and repost, so the dead-babies problem has relatively little virality. Being indignant about this particular thing doesn’t signal your moral superiority to anyone else in particular, so it’s not viscerally enjoyable to engage in the indignation. As for adding a further scream, “But wait, this matter really is important!”, that’s the part subject to the lemons problem. Even people who honestly know about a fixable case of dead babies can’t emit a trustworthy request for attention […]

By this point in our civilization’s development, many honest buyers and sellers have left the indignation market entirely; and what’s left behind is not, on average, good.

The beneficiaries of getting the infant-nutritional-fluid problem right are parents whose kids have a rare digestive condition. Maybe there are ten thousand of them. Maybe 10% of them are self-motivated and look online for facts about their kid’s condition, and maybe 10% of those are smart enough to separate the true concern about fats from all the false concerns about how doctors are poisoning their kids with vaccines. That leaves a hundred people. Even if those hundred people raise a huge stink and petition the FDA really strongly, a hundred people aren’t enough to move the wheels of bureaucracy. As for everyone else, why would they worry about nutritional fluid rather than terrorism or mass shootings or whatever all the other much-more-fun-to-worry-about things are?
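The adverse-selection spiral Cecie describes for used cars can be sketched numerically. This is a toy model with made-up prices, starting lemon fraction, and seller-exit step:

```python
GOOD_VALUE, LEMON_VALUE = 10_000, 2_000  # made-up car values
RESERVE_PRICE = 9_000  # honest sellers won't accept less than this

def market_round(frac_lemons):
    """Buyers offer the expected value of a random car; if that offer is
    below honest sellers' reserve price, some of them exit, raising the
    lemon fraction for the next round."""
    offer = (1 - frac_lemons) * GOOD_VALUE + frac_lemons * LEMON_VALUE
    if offer < RESERVE_PRICE:
        frac_lemons = min(1.0, frac_lemons + 0.25)
    return offer, frac_lemons

frac = 0.25
for _ in range(5):
    offer, frac = market_round(frac)

# The offer ratchets down each round until only lemons remain:
# 8000 -> 6000 -> 4000 -> 2000, with the lemon fraction ending at 1.0.
```

Each round of exits makes the average car worse, which lowers the offer, which drives out more honest sellers – the same ratchet that empties the indignation market.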


To see how an inadequate equilibrium might arise, let’s start by focusing on one tiny subfactor of the human system, namely academic research.

We’ll even further oversimplify our model of academia and pretend that research is a two-factor system containing academics and grantmakers, and that a project can only happen if there’s both a participating academic and a participating grantmaker.

We next suppose that in some academic field, there exists a population of researchers who are individually eager and collectively opportunistic for publications—papers accepted to journals, especially high-impact journal publications that constitute strong progress toward tenure. For any clearly visible opportunity to get a sufficiently large number of citations with a small enough amount of work, there are collectively enough academics in this field that somebody will snap up the opportunity. We could say, to make the example more precise, that the field is collectively opportunistic in 2 citations per workday—if there’s any clearly visible opportunity to do 40 days of work and get 80 citations, somebody in the field will go for it.

This level of opportunism might be much more than the average paper gets in citations per day of work. Maybe the average is more like 10 citations per year of work, and lots of researchers work for a year on a paper that ends up garnering only 3 citations. We’re not trying to ask about the average price of a citation; we’re trying to ask how cheap a citation has to be before somebody somewhere is virtually guaranteed to try for it.

But academic paper-writers are only half the equation; the other half is a population of grantmakers.

In this model, can we suppose for argument’s sake that grantmakers are motivated by the pure love of all sentient life, and yet we still end up with an academic system that is inadequate?

I might naively reply: “Sure. Let’s say that those selfish academics are collectively opportunistic at two citations per workday, and the blameless and benevolent grantmakers are collectively opportunistic at one quality-adjusted life-year (QALY) per $100. Then everything which produces one QALY per $100 and two citations per workday gets funded. Which means there could be an obvious, clearly visible project that would produce a thousand QALYs per dollar, and so long as it doesn’t produce enough citations, nobody will work on it. That’s what the model says, right?”

Ah, but this model has a fragile equilibrium of inadequacy. It only takes one researcher who is opportunistic in QALYs and willing to take a hit in citations to snatch up the biggest, lowest-hanging altruistic fruit if there’s a population of grantmakers eager to fund projects like that.

Assume the most altruistically neglected project produces 1,000 QALYs per dollar. If we add a single rational and altruistic researcher to this model, then they will work on that project, whereupon the equilibrium will be adequate at 1,000 QALYs per dollar. If there are two rational and altruistic researchers, the second one to arrive will start work on the next-most-neglected project—say, a project that has 500 QALYs/$ but wouldn’t garner enough citations for whatever reason—and then the field will be adequate at 500 QALYs/$. As this free energy gets eaten up (it’s tasty energy from the perspective of an altruist eager for QALYs), the whole field becomes less inadequate in the relevant respect.

But this assumes the grantmakers are eager to fund highly efficient QALY-increasing projects.

Suppose instead that the grantmakers are not cause-neutral scope-sensitive effective altruists assessing QALYs/$. Suppose that most grantmakers pursue, say, prestige per dollar. (Robin Hanson offers an elementary argument that most grantmaking to academia is about prestige. In any case, we can provisionally assume the prestige model for purposes of this toy example.)

From the perspective of most grantmakers, the ideal grant is one that gets their individual name, or their boss’s name, or their organization’s name, in newspapers around the world in close vicinity to phrases like “Stephen Hawking” or “Harvard professor.” Let’s say for the purpose of this thought experiment that the population of grantmakers is collectively opportunistic in 20 microHawkings per dollar, such that at least one of them will definitely jump on any clearly visible opportunity to affiliate themselves with Stephen Hawking for $50,000. Then at equilibrium, everything that provides at least 2 citations per workday and 20 microHawkings per dollar will get done.

This doesn’t quite follow logically, because the stock market is far more efficient at matching bids between buyers and sellers than academia is at matching researchers to grantmakers. (It’s not like anyone in our civilization has put as much effort into rationalizing the academic matching process as, say, OkCupid has put into their software for hooking up dates. It’s not like anyone who did produce this public good would get paid more than they could have made as a Google programmer.)

But even if the argument is still missing some pieces, you can see the general shape of this style of analysis. If a piece of research will clearly visibly yield lots of citations with a reasonable amount of labor, and make the grantmakers on the committee look good for not too much money committed, then a researcher eager to do it can probably find a grantmaker eager to fund it.

But what if there’s some intervention which could save 100 QALYs/$, yet produces neither great citations nor great prestige? Then if we add a few altruistic researchers to the model, they probably won’t be able to find a grantmaker to fund it; and if we add a few altruistic grantmakers to the model, they probably won’t be able to find a qualified researcher to work on it.

One systemic problem can often be overcome by one altruist in the right place. Two systemic problems are another matter entirely.
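The two-threshold structure of this toy model can be made concrete in a few lines. The thresholds below are the book’s toy values (2 citations per workday, 20 microHawkings per dollar); the example projects and their numbers are hypothetical:

```python
CITATIONS_PER_WORKDAY_MIN = 2      # researchers' collective threshold
MICROHAWKINGS_PER_DOLLAR_MIN = 20  # grantmakers' collective threshold

# (name, citations per workday, microHawkings per dollar, QALYs per dollar)
projects = [
    ("flashy_particle_result", 5.0, 40.0, 0.01),
    ("neglected_lipid_fix", 0.1, 1.0, 1000.0),  # huge QALYs, no prestige
]

# A project happens only if it clears BOTH thresholds; note that QALYs
# never enter the filter at all.
funded = [
    name
    for name, cites, prestige, qalys in projects
    if cites >= CITATIONS_PER_WORKDAY_MIN
    and prestige >= MICROHAWKINGS_PER_DOLLAR_MIN
]
# funded contains only the flashy project; the 1000-QALY/$ fix goes undone.
```

Adding one altruistic researcher to this model changes the `projects` list but not the filter – the fix has to clear both thresholds, which is why two systemic problems are so much worse than one.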

The third way evil enters the world is through bad Nash equilibria.

Everyone hates Facebook. It records all your private data, it screws with the order of your timeline, it works to be as addictive and time-wasting as possible. So why don’t we just stop using Facebook? More to the point, why doesn’t some entrepreneur create a much better social network which doesn’t do any of those things, and then we all switch to her site, and she becomes really rich, and we’re all happy?

The obvious answer: all our friends are on Facebook. We want to be where our friends are. None of us expect our friends to leave, so we all stay. Even if every single one of our friends hated Facebook, none of us would have common knowledge that we would all leave at once; it’s hard to organize a mass exodus. Something like an assurance contract might help, but those are pretty hard to organize. And even a few people who genuinely like Facebook and are really loud about it could ruin that for everybody. In the end, we all know we all hate Facebook and we all know we’re all going to keep using it.

Or: instead of one undifferentiated mass of people, you have two masses of people, each working off the other’s decision. Suppose there was no such thing as Lyft – it was Uber or take the bus. And suppose we got tired of this and wanted to invent Lyft. Could we do it at this late stage? Maybe not. The best part of Uber for passengers is that there’s almost always a driver within a few minutes of you. And the best part of Uber for drivers is that there’s almost always a passenger within a few minutes of you. So you, the entrepreneur trying to start Lyft in AD 2017, hire twenty drivers. That means maybe passengers will get a driver…within an hour…if they’re lucky? So no passenger will ever switch to Lyft, and that means your twenty drivers will get bored and give up.

Few passengers will use your app when Uber has far more drivers, and few drivers will use your app when Uber has far more passengers. Both drivers and passengers might hate Uber, and be happy to switch en masse if the other group did, but from within the system nobody can coordinate this kind of mass switch.
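You can watch the two-sided lock-in happen in a toy iteration where each side’s willingness to switch depends on the other side’s current share. The S-shaped attraction function and the starting shares here are illustrative assumptions, not anything from the book:

```python
def attraction(other_side_share):
    """Fraction of one side that joins the entrant next round, given the
    fraction of the *other* side already there. S-shaped: a small network
    is much less than proportionally attractive."""
    s = other_side_share
    return s**2 / (s**2 + (1 - s)**2)

def simulate(driver_share, passenger_share, steps=50):
    for _ in range(steps):
        # Both sides update simultaneously off the other's current share.
        driver_share, passenger_share = (
            attraction(passenger_share),
            attraction(driver_share),
        )
    return driver_share, passenger_share

# An entrant that signs up 10% of each side collapses toward zero...
small = simulate(0.10, 0.10)
# ...but if both sides somehow switched at once, it takes over instead.
big = simulate(0.60, 0.60)
```

Both outcomes are stable equilibria; which one you land in depends entirely on where you start, which is exactly why nobody inside the system can get from one to the other.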

Or, to take a ridiculous example from the text that will obviously never happen:

Suppose that there’s a magical tower that only people with IQs of at least 100 and some amount of conscientiousness can enter, and this magical tower slices four years off your lifespan. The natural next thing that happens is that employers start to prefer prospective employees who have proved they can enter the tower, and employers offer these employees higher salaries, or even make entering the tower a condition of being employed at all. The natural next thing that happens is that employers start to demand that prospective employees show a certificate saying that they’ve been inside the tower. This makes everyone want to go to the tower, which enables somebody to set up a fence around the tower and charge hundreds of thousands of dollars to let people in.

Now, fortunately, after Tower One is established and has been running for a while, somebody tries to set up a competing magical tower, Tower Two, that also drains four years of life but charges less money to enter. Unfortunately, there’s a subtle way in which this competing Tower Two is hampered by the same kind of lock-in that prevents a jump from [Facebook to a competing social network]. Initially, all of the smartest people headed to Tower One. Since Tower One had limited room, it started discriminating further among its entrants, only taking the ones that have IQs above the minimum, or who are good at athletics or have rich parents or something. So when Tower Two comes along, the employers still prefer employees from Tower One, which has a more famous reputation. So the smartest people still prefer to apply to Tower One, even though it costs more money. This stabilizes Tower One’s reputation as being the place where the smartest people go.

In other words, the signaling equilibrium is a two-factor market in which the stable point, Tower One, is cemented in place by the individually best choices of two different parts of the system. Employers prefer Tower One because it’s where the smartest people go. Smart employees prefer Tower One because employers will pay them more for going there. If you try dissenting from the system unilaterally, without everyone switching at the same time, then as an employer you end up hiring the less-qualified people from Tower Two, or as an employee, you end up with lower salary offers after you go to Tower Two. So the system is stable as a matter of individual incentives, and stays in place. If you try to set up a cheaper alternative to the whole Tower system, the default thing that happens to you is that people who couldn’t handle the Towers try to go through your new system, and it acquires a reputation for non-prestigious weirdness and incompetence.


Robin Hanson’s review calls Inadequate Equilibria “really two separate books, tied perhaps by a mood affiliation”. Everything above was the first book. The second argues against overuse of the Outside View.

The Inside View is when you weigh the evidence around something, and go with whatever side’s evidence seems most compelling. The Outside View is when you notice that you feel like you’re right, but most people in the same situation as you are wrong. So you reject your intuitive feelings of rightness and assume you are probably wrong too. Five Outside View examples to demonstrate:

1. I feel like I’m an above-average driver. But I know there are surveys saying everyone believes they’re above-average drivers. Since most people who believe they’re an above-average driver are wrong, I reject my intuitive feelings and assume I’m probably just an average driver.

2. The Three Christs Of Ypsilanti is a story about three schizophrenics who thought they were Jesus all ending up on the same psych ward. Each schizophrenic agreed that the other two were obviously delusional. But none of them could take the next step and agree they were delusional too. This is a failure of Outside-View-ing. They should have said “At least 66% of people in this psych hospital who believe they’re Jesus are delusional. This suggests there’s a strong bias, like a psychotic illness, that pushes people to think they’re Jesus. I have no more or less evidence for my Jesus-ness than those people, so I should discount my apparent evidence – my strong feeling that I am Him – and go back to my prior that almost nobody is Jesus.”

3. My father used to get roped into going to time-share presentations. Every time, he would come out really convinced that a time share was the most amazing purchase in the world and he needed to get one right away. Every time, we reminded him that every single person who bought a time share ended up regretting it. Every time, he answered that no, the salespeople explained that their time-share didn’t have any hidden problems. Every time, we reminded him that time-share salespeople are really convincing liars. Eventually, even though he still thought the presentation was really convincing, he accepted that he was probably a typical member of the group “people impressed with time-share presentations”, and almost every member of that group is wrong. So even though the offer sounded too good to pass up, he decided to reject it.

4. A Christian might think to themselves: “Only about 30% of people are Christian; the other 70% have some other religion which they believe as fervently as I believe mine. And no religion has more than 30% of people in the world. So of everyone who believes their religion as fervently as I do, at least 70% are wrong. Even though the truth of the Bible seems compelling to me, the truth of the Koran seems equally compelling to Muslims, the truth of dianetics equally compelling to Scientologists, et cetera. So probably I am overconfident in my belief in Christianity and really I have no idea whether it’s true or not.”

5. When I was very young, I would read pseudohistory books about Atlantis, ancient astronauts, and so on. All of these books seemed very convincing to me – I certainly couldn’t explain how ancient people built whatever gigantic technological marvels they made without the benefit of decent tools. And in most cases, nobody had written a good debunking (I am still angry about this). But there were a few cases in which people did write good debunkings that explained otherwise inexplicable things, and the books that were easily debunked were just as convincing as the ones that weren’t. For that and many other reasons, I assumed that even the ones that seemed compelling and had no good debunking were probably bunk.

But Eliezer warns that overuse of the Outside View can prevent you from having any kind of meaningful opinion at all. He worries about the situation where:

…we all treat ourselves as having a black box receiver (our brain) which produces a signal (opinions), and treat other people as having other black boxes producing other signals. And we all received our black boxes at random—from an anthropic perspective of some kind, where we think we have an equal chance of being any observer. So we can’t start out by believing that our signal is likely to be more accurate than average.

There are definitely pathological cases of the Outside View. For example:

6. I believe in evolution. But about half of Americans believe in creation. So either way, half of people are wrong about the evolution-creation debate. Since I know I’m in a category, half of whom are wrong, I should assume there’s a 50-50 chance I’m wrong about evolution.

But surely the situation isn’t symmetrical? After all, the evolution side includes all the best biologists, all the most educated people, all the people with the highest IQ. The problem is, the true Outside Viewer can say “Ah, yes, but a creationist would say that their side is better, because it includes all the best fundamentalist preachers, all the world’s most pious people, and all the people with the most exhaustive knowledge of Genesis. So you’re in a group of people, the Group Who Believe That Their Side Is Better Qualified To Judge The Evolution-Creation Debate, and 50% of the people in that group are wrong. So this doesn’t break the fundamental symmetry of the situation.”

One might be tempted to respond with “fuck you”, except that sometimes this is exactly the correct strategy. For example:

7. Go back to Example 2, and imagine that when Schizophrenic A was confronted with the other Christs, he protested that he had special evidence it was truly him. In particular, the Archangel Gabriel had spoken to him and told him he was Jesus. Meanwhile, Schizophrenic B had seen a vision where the Holy Spirit descended into him in the form of a dove. Schizophrenic A laughs. “Anyone can hallucinate a dove. But archangels are perfectly trustworthy.” Schizophrenic B scoffs. “Hearing voices is a common schizophrenic symptom, but I actually saw the Spirit”. Clearly they still are not doing Outside View right.

8. Every so often, I talk to people about politics and the necessity to see things from both sides. I remind people that our understanding of the world is shaped by tribalism, the media is often biased, and most people have an incredibly skewed view of the world. They nod their heads and agree with all of this and say it’s a big problem. Then I get to the punch line – that means they should be less certain about their own politics, and try to read sources from the other side. They shake their head, and say “I know that’s true of most people, but I get my facts from Vox, which backs everything up with real statistics and studies.” Then I facepalm so hard I give myself a concussion. This is exactly the situation where a tiny dose of Meta-Outside-View could have saved them.

So how do we navigate this morass? Eliezer recommends a four-pronged strategy:

1. Try to spend most of your time thinking about the object level. If you’re spending more of your time thinking about your own reasoning ability and competence than you spend thinking about Japan’s interest rates and NGDP, or competing omega-6 vs. omega-3 metabolic pathways, you’re taking your eye off the ball.

2. Less than a majority of the time: Think about how reliable authorities seem to be and should be expected to be, and how reliable you are — using your own brain to think about the reliability and failure modes of brains, since that’s what you’ve got. Try to be evenhanded in how you evaluate your own brain’s specific failures versus the specific failures of other brains. While doing this, take your own meta-reasoning at face value.

3. And then next, theoretically, should come the meta-meta level, considered yet more rarely. But I don’t think it’s necessary to develop special skills for meta-meta reasoning. You just apply the skills you already learned on the meta level to correct your own brain, and go on applying them while you happen to be meta-reasoning about who should be trusted, about degrees of reliability, and so on. Anything you’ve already learned about reasoning should automatically be applied to how you reason about meta-reasoning.

4. Consider whether someone else might be a better meta-reasoner than you, and hence that it might not be wise to take your own meta-reasoning at face value when disagreeing with them, if you have been given strong local evidence to this effect.

But then he mostly spends the rest of the chapter (and book) treating it as obvious that most people overuse the Outside View, and mocking it as “modest epistemology” for intellectual cowards. Eventually he decides that the Outside View is commonly invoked to cover up status anxiety.

From what I can tell, status regulation is a second factor accounting for modesty’s appeal, distinct from anxious underconfidence. The impulse is to construct “cheater-resistant” slapdowns that can (for example) prevent dilettantes who are low on the relevant status hierarchy from proposing new Seasonal Affective Disorder treatments. Because if dilettantes can exploit an inefficiency in a respected scientific field, then this makes it easier to “steal” status and upset the current order.

So if we say something like “John has never taken a math class, so there’s not much chance that his proof of P = NP is right,” are we really implying “John isn’t high-status enough, so we shouldn’t let him get away with proving P = NP; only people who serve their time in grad school and postdoc programs should be allowed to do something cool like that”? I know Eliezer doesn’t believe that. Maybe he believes it’s only status regulation when it’s wrong? But then wouldn’t a better explanation be that people are trying a heuristic that is right a lot of the time, but misapplying it? I don’t know.

I found this part to be the biggest disappointment of this book. I don’t think it grappled with the claim that the Outside View (and even Meta-Outside View) are often useful. It offered vague tips for how to decide when to use them, but I never felt any kind of enlightenment, or like there had been any work done to resolve the real issue here. It was basically a hit job on Outside Viewing.

I understand the impetus. Eliezer was concerned that smart people, well-trained in rationality, would come to the right conclusion on some subject, then dismiss it based on the Outside View. One of his examples was that most of the rationalists he knows don’t believe in God. But if they took the Outside View on that question, they would have to either believe (since most people do) or at least be very uncertain (since lots of religions have at least as many adherents as atheism). He tosses this one off, but it’s clear that he’s less interested in religion than in worldly things – people who give up on cool startup ideas because the Outside View says they’ll probably fail, or who don’t come up with interesting contrarian ideas because the Outside View says most contrarians are wrong. He writes:

Whereupon I want to shrug my hands helplessly and say, “But given that this isn’t normative probability theory and I haven’t seen modesty advocates appear to get any particular outperformance out of their modesty, why go there?”

I think that’s my true rejection, in the following sense: If I saw a sensible formal epistemology underlying modesty and I saw people who advocated modesty going on to outperform myself and others, accomplishing great deeds through the strength of their diffidence, then, indeed, I would start paying very serious attention to modesty.

But these are some very artificial goalposts. The point of modesty isn’t that it lets you do great things. It’s that it lets you avoid shooting yourself in the foot. Every time my father doesn’t buy a time-share, modesty has triumphed.

To be very uncharitable, Eliezer seems to be making the same mistake as an investing book which says that you should always buy stock. After all, Warren Buffett bought stock, and look how well he’s doing! Peter Thiel bought stock, and now he’s a super-rich aspiring oceanic vampire! And (the very rich person writing the book concludes) I myself bought lots of stock, and now I am a rich self-help book author. Can you name a single person who became a billionaire by not buying stock? I didn’t think so.

To be more charitable, Eliezer might be writing to his audience. He predicts that the people who read his book will mostly be smarter than average, and generally at the level where using the Outside View hurts them rather than helps them. He writes:

There are people who think we all ought to [use the Outside View to converge] toward each other as a matter of course. They reason:

a) on average, we can’t all be more meta-rational than average; and

b) you can’t trust the reasoning you use to think you’re more meta-rational than average. After all, due to Dunning-Kruger, a young-Earth creationist will also think they have plausible reasoning for why they’re more meta-rational than average.

… Whereas it seems to me that if I lived in a world where the average person on the street corner were Anna Salamon or Nick Bostrom [people Eliezer knows who are very good at rationality], the world would look extremely different from how it actually does.

… And from the fact that you’re reading this at all, I expect that if the average person on the street corner were you, the world would again look extremely different from how it actually does.

(In the event that this book is ever read by more than 30% of Earth’s population, I withdraw the above claim.)

The argument goes: You’re more rational than average, so you shouldn’t adjust to the average. Instead, you should identify other people who are even more rational than you (on the matter at hand) and maybe Outside View with them, but no one else. Since you are already pretty rational, you can definitely trust your judgment about who the other rational people are.

Eliezer makes the assumption that only unusually rational people will read this book (and the preliminary hidden assumption that he’s rational enough to be able to make these determinations). I think this is a pretty safe claim; I don’t object to it in real life. But I worry about it in the same way I worry about the philosophical Problem Of Skepticism. I don’t think I’m a brain in a vat. But I’m vaguely annoyed by knowing that an actual brain in a vat would think exactly the same thing for the same reason.

This section’s argument runs on the same principle as a financial advice book that says “ALWAYS BUY LOTS OF STOCKS, YOU ARE GREAT AT INVESTING AND IT CANNOT POSSIBLY GO WRONG” that comes in a package marked “Deliver only to Warren Buffett”. It may be appreciated, but it’s not any kind of deep breakthrough in financial strategy.


Inadequate Equilibria is a great book, but it raises more questions than it answers. Like: does our civilization have book-titling institutions? Did they warn Eliezer that maybe Inadequate Equilibria doesn’t scream “best-seller”? Did he come up with a theory of how they were flawed before he decided to reject their advice?

But also, it asks: how do things stay bad in the face of so much pressure to make them better? It highlights (creates?) a field of study, clumping together a lot of economic orthodoxies and original concepts into a specific kind of rational theodicy. Once you start thinking about this, it’s hard to stop, and Eliezer deserves credit for creating a toolbox of concepts useful for analyzing these problems.

Its related question – “when should you trust social consensus vs. your own reasoning?” – is derivative of the theodicy section. If there’s some giant institution full of people much smarter and better-educated than you who have spent much more time and money investigating the question, then whether you should throw away your own opinion in favor of theirs depends a lot on whether that giant institution might fail in some unexpected way.

Its final section on the Outside View and modest epistemology tries to tie up a loose end, with less success than it would like. Should you trust your own opinion over the giant institution’s on the object level question? Surely you could only do so if certain conditions held – but could you trust your own opinion about whether those conditions hold? And so on to infinity. The latter part of the book acts as if it has a definitive answer – you can trust yourself, or at least trust yourself to correctly assess how trustworthy you are relative to others – but depends on Eliezer’s judgment that the book will probably only find its way to people for whom that is true.

I think you should read Inadequate Equilibria. Given that I am a well-known reviewer of books, clearly my opinion on this subject is better than yours. Further, Scott Aaronson and Bryan Caplan also think you should read it. Are you smarter than Scott Aaronson and Bryan Caplan? I didn’t think so. Whether or not your puny personal intuition feels like you would enjoy it, you should accept the judgment of our society’s book-reviewing institutions and download it right now.