Book Review: Inadequate Equilibria

I.

Eliezer Yudkowsky’s catchily-titled Inadequate Equilibria is many things. It’s a look into whether there is any role for individual reason in a world where you can always just trust expert consensus. It’s an analysis of the efficient market hypothesis and how it relates to the idea of low-hanging fruit. It’s a self-conscious defense of the author’s own arrogance.

But most of all, it’s a book of theodicy. If the world was created by the Invisible Hand, who is good, how did it come to contain so much that is evil?

The market economy is very good at what it does, which is something like “exploit money-making opportunities” or “pick low-hanging fruit in the domain of money-making”. If you see a $20 bill lying on the sidewalk, today is your lucky day. If you see a $20 bill lying on the sidewalk in Grand Central Station, and you remember having seen the same bill a week ago, something is wrong. Thousands of people cross Grand Central every week – there’s no way a thousand people would all pass up a free $20. Maybe it’s some kind of weird trick. Maybe you’re dreaming. But there’s no way that such a low-hanging piece of money-making fruit would go unpicked for that long.

In the same way, suppose your uncle buys a lot of Google stock, because he’s heard Google has cool self-driving cars that will be the next big thing. Can he expect to get rich? No – if Google stock was underpriced (ie you could easily get rich by buying Google stock), then everyone smart enough to notice would buy it. As everyone tried to buy it, the price would go up until it was no longer underpriced. Big Wall Street banks have people who are at least as smart as your uncle, and who will notice before he does whether stocks are underpriced. They also have enough money that if they see a money-making opportunity, they can keep buying until they’ve driven the price up to the right level. So for Google to remain underpriced when your uncle sees it, you have to assume everyone at every Wall Street hedge fund has just failed to notice this tremendous money-making opportunity – the same sort of implausible failure as a $20 staying on the floor of Grand Central for a week.

In the same way, suppose there’s a city full of rich people who all love Thai food and are willing to pay top dollar for it. The city has lots of skilled Thai chefs and good access to low-priced Thai ingredients. With the certainty of physical law, we can know that city will have a Thai restaurant. If it didn’t, some entrepreneur would wander through, see that they could get really rich by opening a Thai restaurant, and do that. If there’s no restaurant, we should feel the same confusion we feel when a $20 bill has sat on the floor of Grand Central Station for a week. Maybe the city government banned Thai restaurants for some reason? Maybe we’re dreaming again?

We can take this beyond money-making into any competitive or potentially-competitive field. Consider a freshman biology student reading her textbook who suddenly feels like she’s had a deep insight into the structure of DNA, easily worthy of a Nobel. Is she right? Almost certainly not. There are thousands of research biologists who would like a Nobel Prize. For all of them to miss a brilliant insight sitting in freshman biology would be the same failure as everybody missing a $20 on the floor of Grand Central, or all of Wall Street missing an easy opportunity to make money off of Google, or every entrepreneur missing a great market opportunity for a Thai restaurant. So without her finding any particular flaw in her theory, she can be pretty sure that it’s wrong – or else already discovered. This isn’t to say nobody can ever win a Nobel Prize. But winners will probably be people with access to new ground that hasn’t already been covered by other $20-seekers. Either they’ll be amazing geniuses, understand a vast scope of cutting-edge material, have access to the latest lab equipment, or most likely all three.

But go too far with this kind of logic, and you start accidentally proving that nothing can be bad anywhere.

Suppose you thought that modern science was broken, with scientists and grantmakers doing a bad job of focusing their discoveries on truly interesting and important things. But if this were true, then you (or anyone else with a little money) could set up a non-broken science, make many more discoveries than everyone else, get more Nobel Prizes, earn more money from all your patents and inventions, and eventually become so prestigious and rich that everyone else admits you were right and switches to doing science your way. There are dozens of government bodies, private institutions, and universities that could do this kind of thing if they wanted. But none of them have. So “science is broken” seems like the same kind of statement as “a $20 bill has been on the floor of Grand Central Station for a week and nobody has picked it up”. Therefore, modern science isn’t broken.

Or: suppose you thought that health care is inefficient and costs way too much. But if this were true, some entrepreneur could start a new hospital / clinic / whatever that delivered health care at lower prices and with higher profit margins. All the sick people would go to them, they would make lots of money, investors would trip over each other to fund their expansion into new markets, and eventually they would take over health care and be super rich. So “health care is inefficient and overpriced” seems like the same kind of statement as “a $20 bill has been on the floor of Grand Central Station for a week and nobody has picked it up.” Therefore, health care isn’t inefficient or overpriced.

Or: suppose you think that US cities don’t have good mass transit. But if lots of people want better mass transit and are willing to pay for it, this is a great money-making opportunity. Entrepreneurs are pretty smart, so they would notice this money-making opportunity, raise some funds from equally-observant venture capitalists, make a better mass transit system, and get really rich off of all the tickets. But nobody has done this. So “US cities don’t have good mass transit” seems like the same kind of statement as “a $20 bill has been on the floor of Grand Central Station for a week and nobody has picked it up.” Therefore, US cities have good mass transit, or at least the best mass transit that’s economically viable right now.

This proof of God’s omnibenevolence is followed by Eliezer’s observations that the world seems full of evil. For example:

Eliezer’s wife Brienne had Seasonal Affective Disorder. The consensus treatment for SAD is “light boxes”, very bright lamps that mimic sunshine and make winter feel more like summer. Brienne tried some of these and they didn’t work; her seasonal depression got so bad that she had to move to the Southern Hemisphere three months of every year just to stay functional. No doctor had any good ideas about what to do at this point. Eliezer did some digging, found that existing light boxes were still way less bright than the sun, and jury-rigged a much brighter version. This brighter light box cured Brienne’s depression when the conventional treatment had failed. Since Eliezer, a random layperson, was able to come up with a better SAD cure after a few minutes of thinking than the establishment was recommending to him, this seems kind of like the relevant research community leaving a $20 bill on the ground in Grand Central.

Eliezer spent a few years criticizing the Bank of Japan’s macroeconomic policies, which he (and many others) thought were stupid and costing Japan trillions of dollars in lost economic growth. A friend told Eliezer that the professionals at the Bank surely knew more than he did. But after a few years, the Bank of Japan switched policies, the Japanese economy instantly improved, and now the consensus position is that the original policies were deeply flawed in exactly the way Eliezer and others thought they were. Doesn’t that mean Japan left a trillion-dollar bill on the ground by refusing to implement policies that even an amateur could see were correct?

And finally:

For our central example, we’ll be using the United States medical system, which is, so far as I know, the most broken system that still works ever recorded in human history. If you were reading about something in 19th-century France which was as broken as US healthcare, you wouldn’t expect to find that it went on working when overloaded with a sufficiently vast amount of money. You would expect it to just not work at all.

In previous years, I would use the case of central-line infections as my go-to example of medical inadequacy. Central-line infections, in the US alone, killed 60,000 patients per year, and infected an additional 200,000 patients at an average treatment cost of $50,000/patient.

Central-line infections were also known to decrease by 50% or more if you enforced a five-item checklist that included items like “wash your hands before touching the line.”

Robin Hanson has old Overcoming Bias blog posts on that untaken, low-hanging fruit. But I discovered while re-Googling in 2015 that wider adoption of hand-washing and similar precautions is now finally beginning to occur, after many years—with an associated 43% nationwide decrease in central-line infections. After partial adoption.
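To put a number on the size of that particular $20 bill, here is a quick back-of-envelope sketch using only the figures quoted above (it monetizes nothing but treatment costs, so it understates the true waste):

```python
# Back-of-envelope cost of central-line infections, from the quoted figures.
infections_per_year = 200_000
cost_per_infection = 50_000        # dollars, average treatment cost
deaths_per_year = 60_000
checklist_reduction = 0.5          # "50% or more" from the checklist

treatment_cost = infections_per_year * cost_per_infection
print(f"annual treatment cost: ${treatment_cost / 1e9:.0f} billion")
print(f"checklist saves ~${treatment_cost * checklist_reduction / 1e9:.0f}B "
      f"and ~{int(deaths_per_year * checklist_reduction):,} lives per year")
```

A five-item checklist worth roughly five billion dollars and thirty thousand lives a year, sitting on the floor of Grand Central for a decade.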

Since he doesn’t want to focus on a partly-solved problem, he continues to the case of infant parenteral nutrition. Some babies have malformed digestive systems and need to have nutrient fluid pumped directly into their veins. The nutrient fluid formula used in the US has the wrong kinds of lipids in it, and about a third of babies who get it die of brain or liver damage. We’ve known for decades that the nutrient fluid formula has the wrong kind of lipids. We know the right kind of lipids and they’re incredibly cheap and there is no reason at all that we couldn’t put them in the nutrient fluid formula. We’ve done a bunch of studies showing that when babies get the right nutrient fluid formula, the 33% death rate disappears. But the only FDA-approved nutrient fluid formula is the one with the wrong lipids, so we just keep giving it to babies, and they just keep dying. Grant that the FDA is terrible and ruins everything, but over several decades of knowing about this problem and watching the dead babies pile up, shouldn’t somebody have done something to make this system work better?

We’ve got a proof that everything should be perfect all the time, and a reality in which a bunch of babies keep dying even though we know exactly how to save them for no extra cost. So sure. Let’s talk theodicy.

II.

Eliezer draws on the economics literature to propose three main categories of solution:

There’s a toolbox of reusable concepts for analyzing systems I would call “inadequate”—the causes of civilizational failure, some of which correspond to local opportunities to do better yourself. I shall, somewhat arbitrarily, sort these concepts into three larger categories:

1. Cases where the decision lies in the hands of people who would gain little personally, or lose out personally, if they did what was necessary to help someone else;

2. Cases where decision-makers can’t reliably learn the information they need to make decisions, even though someone else has that information; and

3. Systems that are broken in multiple places so that no one actor can make them better, even though, in principle, some magically coordinated action could move to a new stable state.

The first way evil enters the world is when there is no way for people who notice a mistake to benefit from correcting it.

For example, Eliezer and his friends sometimes joke about how really stupid Uber-for-puppies style startups are overvalued. The people investing in these startups are making a mistake big enough for ordinary people like Eliezer to notice. But it’s not exploitable – there’s no way to short startups, so neither Eliezer nor anyone else can make money by correcting that error. So it’s not surprising that the error persists. All you need is one stupid investor who thinks Uber-for-puppies is going to be the next big thing, and the startup will get overfunded. All the smart investors in the world can’t fix that one person’s mistake.

The same is true, more tragically, for housing prices. There’s no way to short houses. So if 10% of investors think the housing market will go way up, and 90% think the housing market will crash, those 10% of investors will just keep bidding up housing prices against each other. This is why there are so many housing bubbles, and why ordinary people without PhDs in finance can notice housing bubbles and yet those bubbles remain uncorrected.
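Here is a toy sketch of why that happens (the distribution and numbers are mine, not the book’s): when pessimists can’t short, the market price isn’t set by the consensus view, it’s set by the most optimistic tenth of investors bidding against each other.

```python
import random

# Toy model: each investor has a private valuation of a house.
random.seed(0)
valuations = sorted(random.gauss(300_000, 50_000) for _ in range(100))

# With short-selling, pessimists can bet against overpricing,
# so the price tracks something like the median view.
price_with_shorts = valuations[50]

# With shorts banned, only buyers move the price, so it's set by the
# marginal buyer among the most optimistic 10% of investors.
price_without_shorts = valuations[-10]

print(f"price when pessimists can short: ~${price_with_shorts:,.0f}")
print(f"price when they can't:           ~${price_without_shorts:,.0f}")
```

The gap between those two numbers is the bubble: the 90% who think the market is insane have no lever to pull.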

A more complicated version: why was Eliezer able to out-predict the Bank of Japan? Because the Bank’s policies were set by a couple of Japanese central bankers who had no particular incentive to get things right, and no particular incentive to listen to smarter people correcting them. Eliezer wasn’t alone in his prediction – he says that Japanese stocks were priced in ways that suggested most investors realized the Bank’s policies were bad. Most of the smart people with skin in the game had come to the same realization Eliezer had. But central bankers are mostly interested in prestige, and for various reasons low money supply (the wrong policy in this case) is generally considered a virtuous and reasonable thing for a central banker to do, while high money supply (the right policy in this case) is generally considered a sort of irresponsible thing to do that makes all the other central bankers laugh at you. Their payoff matrix (with totally made-up utility points) looked sort of like this:

LOW MONEY, ECONOMY BOOMS: You were virtuous and it paid off, you will be celebrated in song forever (+10)

LOW MONEY, ECONOMY COLLAPSES: Well, you did the virtuous thing and it didn’t work, at least you tried (+0)

HIGH MONEY, ECONOMY BOOMS: You made a bold gamble and it paid off, nice job. (+10)

HIGH MONEY, ECONOMY COLLAPSES: You did a stupid thing everyone always says not to do, you predictably failed and destroyed our economy, fuck you (-10)

So even as evidence accumulated that high money supply was the right strategy, the Japanese central bankers looked at their payoff matrix and decided to keep a low money supply.
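You can verify this with a two-line expected-utility calculation. A minimal sketch, using the made-up utility points above plus boom probabilities I’ve invented for illustration; note that high money supply is stipulated to be the better policy for the economy:

```python
# The banker's private payoffs, from the (made-up) matrix above.
payoff = {
    ("low", "boom"): +10,   # virtuous and it paid off
    ("low", "bust"):   0,   # at least you tried
    ("high", "boom"): +10,  # bold gamble paid off
    ("high", "bust"): -10,  # predictable failure, fuck you
}

# Invented probabilities: high money supply genuinely makes a boom
# more likely, i.e. it's the right policy for Japan.
p_boom = {"low": 0.5, "high": 0.6}

for policy in ("low", "high"):
    p = p_boom[policy]
    eu = p * payoff[(policy, "boom")] + (1 - p) * payoff[(policy, "bust")]
    print(f"{policy} money supply: banker's expected utility {eu:+.1f}")
# low  money supply: +5.0
# high money supply: +2.0  (worse for the banker, better for Japan)
```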

It should be horrifying that this system weights a small change in the reputation of a few people (who will realistically do well for themselves even with a reputational hit) higher than adding trillions of dollars to the economy, but that’s how the system is structured.

In a system like this, everybody (including the Japanese central bankers) can know that increasing money supply is the right policy, but there’s no way for anyone to increase their own utility by causing the money supply to be higher. So Japan will suffer a generation’s worth of recession. This is dumb but inevitable.

The second way evil enters the world is when expert knowledge can’t trickle down to the ordinary people who would be the beneficiaries of correct decision-making.

The stock market stays efficient because expertise brings power. When Warren Buffett proves really good at stock-picking, everyone rushes to give him their money. If an ordinary person demonstrated Buffett-like levels of acumen, every hedge fund in the country would be competing to hire him and throw billions of dollars at whatever he predicted would work. Then when he predicts that Google’s price will double next week, he’ll use his own fortune, or the fortune of the hedge fund that employs him, to throw as much money into Google as the opportunity warrants. If Goldman Sachs doesn’t have enough to do it on their own, JP Morgan will make up the difference. Good hedge funds will always have enough money to exploit the opportunities they find, because if they didn’t, there would be so many unexploited great opportunities that the rate of return on the stock market would be spectacular, and everyone would rush to give their money to good hedge funds.

But imagine that Congress makes a new law that nobody can invest more than a thousand dollars. So Goldman Sachs invests their $1000 in Google, JP Morgan invests their $1000, and now what?

One possibility is that investment gurus could spring up, people just as smart as the Goldman Sachs traders, who (for a nominal fee) will tell you which stocks are underpriced. But this is hard, and fraudulent experts can claim to be investment gurus just as easily as real ones. There will be so many fraudulent investment gurus around that nobody will be able to trust the real ones, and after the few experts invest their own $1000 in Google, the stock could remain underpriced forever.

Something like this seems to be going on in medicine. Sure, the five doctors who really understand infant nutrition can raise a big fuss about how our terrible nutritional fluid is killing thousands of babies. But let’s face it. Everyone is raising a big fuss about something or other. From Eliezer’s author-insert character Cecie:

We have an economic phenomenon sometimes called the lemons problem. Suppose you want to sell a used car, and I’m looking for a car to buy. From my perspective, I have to worry that your car might be a “lemon”—that it has a serious mechanical problem that doesn’t appear every time you start the car, and is difficult or impossible to fix. Now, you know that your car isn’t a lemon. But if I ask you, “Hey, is this car a lemon?” and you answer “No,” I can’t trust your answer, because you’re incentivized to answer “No” either way. Hearing you say “No” isn’t much Bayesian evidence. Asymmetric information conditions can persist even in cases where, like an honest seller meeting an honest buyer, both parties have strong incentives for accurate information to be conveyed.

A further problem is that if the fair value of a non-lemon car is $10,000, and the possibility that your car is a lemon causes me to only be willing to pay you $8,000, you might refuse to sell your car. So the honest sellers with reliable cars start to leave the market, which further shifts upward the probability that any given car for sale is a lemon, which makes me less willing to pay for a used car, which incentivizes more honest sellers to leave the market, and so on.

In our world, there are a lot of people screaming, “Pay attention to this thing I’m indignant about over here!” In fact, there are enough people screaming that there’s an inexploitable market in indignation. The dead-babies problem can’t compete in that market; there’s no free energy left for it to eat, and it doesn’t have an optimal indignation profile. There’s no single individual villain. The business about competing omega-3 and omega-6 metabolic pathways is something that only a fraction of people would understand on a visceral level; and even if those people posted it to their Facebook walls, most of their readers wouldn’t understand and repost, so the dead-babies problem has relatively little virality. Being indignant about this particular thing doesn’t signal your moral superiority to anyone else in particular, so it’s not viscerally enjoyable to engage in the indignation. As for adding a further scream, “But wait, this matter really is important!”, that’s the part subject to the lemons problem. Even people who honestly know about a fixable case of dead babies can’t emit a trustworthy request for attention […]

By this point in our civilization’s development, many honest buyers and sellers have left the indignation market entirely; and what’s left behind is not, on average, good.
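The unraveling Cecie describes is easy to simulate for the used-car case. A minimal sketch with invented parameters (this is Akerlof’s lemons dynamic, not anything from the book): buyers offer the expected value of a random car on the market, honest sellers whose cars are worth more to them than the offer exit, and the lemon fraction ratchets upward.

```python
import random

random.seed(1)
GOOD_VALUE, LEMON_VALUE = 10_000, 2_000

# Reservation prices: owners of good cars mostly won't sell cheap.
good_sellers = [random.uniform(7_000, 10_000) for _ in range(90)]
lemon_sellers = [random.uniform(1_000, 2_000) for _ in range(10)]

for round_ in range(5):
    n_good, n_lemon = len(good_sellers), len(lemon_sellers)
    p_lemon = n_lemon / (n_good + n_lemon)
    offer = (1 - p_lemon) * GOOD_VALUE + p_lemon * LEMON_VALUE
    print(f"round {round_}: {n_good} good / {n_lemon} lemons, "
          f"buyers offer ${offer:,.0f}")
    # Honest sellers who value their car above the offer leave the market,
    # which raises the lemon fraction for the next round.
    good_sellers = [r for r in good_sellers if r <= offer]
```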

The beneficiaries of getting the infant-nutritional-fluid problem right are parents whose kids have a rare digestive condition. Maybe there are ten thousand of them. Maybe 10% of them are self-motivated and look online for facts about their kid’s condition, and maybe 10% of those are smart enough to separate the true concern about fats from all the false concerns about how doctors are poisoning their kids with vaccines. That leaves a hundred people. Even if those hundred people raise a huge stink and petition the FDA really strongly, a hundred people aren’t enough to move the wheels of bureaucracy. As for everyone else, why would they worry about nutritional fluid rather than terrorism or mass shootings or whatever all the other much-more-fun-to-worry-about things are?

Likewise:

To see how an inadequate equilibrium might arise, let’s start by focusing on one tiny subfactor of the human system, namely academic research.

We’ll even further oversimplify our model of academia and pretend that research is a two-factor system containing academics and grantmakers, and that a project can only happen if there’s both a participating academic and a participating grantmaker.

We next suppose that in some academic field, there exists a population of researchers who are individually eager and collectively opportunistic for publications—papers accepted to journals, especially high-impact journal publications that constitute strong progress toward tenure. For any clearly visible opportunity to get a sufficiently large number of citations with a small enough amount of work, there are collectively enough academics in this field that somebody will snap up the opportunity. We could say, to make the example more precise, that the field is collectively opportunistic in 2 citations per workday—if there’s any clearly visible opportunity to do 40 days of work and get 80 citations, somebody in the field will go for it.

This level of opportunism might be much more than the average paper gets in citations per day of work. Maybe the average is more like 10 citations per year of work, and lots of researchers work for a year on a paper that ends up garnering only 3 citations. We’re not trying to ask about the average price of a citation; we’re trying to ask how cheap a citation has to be before somebody somewhere is virtually guaranteed to try for it.

But academic paper-writers are only half the equation; the other half is a population of grantmakers.

In this model, can we suppose for argument’s sake that grantmakers are motivated by the pure love of all sentient life, and yet we still end up with an academic system that is inadequate?

I might naively reply: “Sure. Let’s say that those selfish academics are collectively opportunistic at two citations per workday, and the blameless and benevolent grantmakers are collectively opportunistic at one quality-adjusted life-year (QALY) per $100. Then everything which produces one QALY per $100 and two citations per workday gets funded. Which means there could be an obvious, clearly visible project that would produce a thousand QALYs per dollar, and so long as it doesn’t produce enough citations, nobody will work on it. That’s what the model says, right?”

Ah, but this model has a fragile equilibrium of inadequacy. It only takes one researcher who is opportunistic in QALYs and willing to take a hit in citations to snatch up the biggest, lowest-hanging altruistic fruit if there’s a population of grantmakers eager to fund projects like that.

Assume the most altruistically neglected project produces 1,000 QALYs per dollar. If we add a single rational and altruistic researcher to this model, then they will work on that project, whereupon the equilibrium will be adequate at 1,000 QALYs per dollar. If there are two rational and altruistic researchers, the second one to arrive will start work on the next-most-neglected project—say, a project that has 500 QALYs/$ but wouldn’t garner enough citations for whatever reason—and then the field will be adequate at 500 QALYs/$. As this free energy gets eaten up (it’s tasty energy from the perspective of an altruist eager for QALYs), the whole field becomes less inadequate in the relevant respect.

But this assumes the grantmakers are eager to fund highly efficient QALY-increasing projects.

Suppose instead that the grantmakers are not cause-neutral scope-sensitive effective altruists assessing QALYs/$. Suppose that most grantmakers pursue, say, prestige per dollar. (Robin Hanson offers an elementary argument that most grantmaking to academia is about prestige. In any case, we can provisionally assume the prestige model for purposes of this toy example.)

From the perspective of most grantmakers, the ideal grant is one that gets their individual name, or their boss’s name, or their organization’s name, in newspapers around the world in close vicinity to phrases like “Stephen Hawking” or “Harvard professor.” Let’s say for the purpose of this thought experiment that the population of grantmakers is collectively opportunistic in 20 microHawkings per dollar, such that at least one of them will definitely jump on any clearly visible opportunity to affiliate themselves with Stephen Hawking for $50,000. Then at equilibrium, everything that provides at least 2 citations per workday and 20 microHawkings per dollar will get done.

This doesn’t quite follow logically, because the stock market is far more efficient at matching bids between buyers and sellers than academia is at matching researchers to grantmakers. (It’s not like anyone in our civilization has put as much effort into rationalizing the academic matching process as, say, OkCupid has put into their software for hooking up dates. It’s not like anyone who did produce this public good would get paid more than they could have made as a Google programmer.)

But even if the argument is still missing some pieces, you can see the general shape of this style of analysis. If a piece of research will clearly visibly yield lots of citations with a reasonable amount of labor, and make the grantmakers on the committee look good for not too much money committed, then a researcher eager to do it can probably find a grantmaker eager to fund it.

But what if there’s some intervention which could save 100 QALYs/$, yet produces neither great citations nor great prestige? Then if we add a few altruistic researchers to the model, they probably won’t be able to find a grantmaker to fund it; and if we add a few altruistic grantmakers to the model, they probably won’t be able to find a qualified researcher to work on it.

One systemic problem can often be overcome by one altruist in the right place. Two systemic problems are another matter entirely.
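The two-threshold structure is easy to make concrete. A minimal sketch with invented project numbers: a project happens only if it clears both the researchers’ citation threshold and the grantmakers’ prestige threshold, so an arbitrarily large QALYs-per-dollar figure does nothing on its own.

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    citations_per_workday: float
    microhawkings_per_dollar: float
    qalys_per_dollar: float  # social value, invisible to both filters

RESEARCHER_THRESHOLD = 2.0   # citations per workday
GRANTMAKER_THRESHOLD = 20.0  # microHawkings per dollar

projects = [  # invented examples
    Project("trendy genomics follow-up",  3.0, 25.0,    0.01),
    Project("infant lipid-formula fix",   0.5,  1.0, 1000.00),
    Project("celebrity-adjacent physics", 2.5, 40.0,    0.10),
]

for p in projects:
    funded = (p.citations_per_workday >= RESEARCHER_THRESHOLD
              and p.microhawkings_per_dollar >= GRANTMAKER_THRESHOLD)
    print(f"{p.name}: {'funded' if funded else 'undone'} "
          f"({p.qalys_per_dollar:g} QALYs/$)")
```

The infant-formula project is a thousand times more valuable than anything else on the list, and it’s the one that never happens.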

The third way evil enters the world is through bad Nash equilibria.

Everyone hates Facebook. It records all your private data, it screws with the order of your timeline, it works to be as addictive and time-wasting as possible. So why don’t we just stop using Facebook? More to the point, why doesn’t some entrepreneur create a much better social network which doesn’t do any of those things, and then we all switch to her site, and she becomes really rich, and we’re all happy?

The obvious answer: all our friends are on Facebook. We want to be where our friends are. None of us expect our friends to leave, so we all stay. Even if every single one of our friends hated Facebook, none of us would have common knowledge that we would all leave at once; it’s hard to organize a mass exodus. Something like an assurance contract might help, but those are pretty hard to organize. And even a few people who genuinely like Facebook and are really loud about it could ruin that for everybody. In the end, we all know we all hate Facebook and we all know we’re all going to keep using it.
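For concreteness, here is roughly what an assurance contract would look like (a minimal sketch; the threshold and numbers are invented): nobody has to move first, because the commitment only binds once enough people have signed on to make leaving together safe.

```python
def assurance_contract(pledges: int, threshold: int) -> str:
    """Pledge: 'I'll leave Facebook if at least `threshold` others do.'"""
    if pledges >= threshold:
        return f"contract triggers: all {pledges} pledgers leave together"
    return f"only {pledges}/{threshold} pledged; nobody is bound, everyone stays"

print(assurance_contract(pledges=40, threshold=80))  # exodus fails
print(assurance_contract(pledges=85, threshold=80))  # exodus happens
```

The hard part, as the paragraph above notes, is the organizing, not the logic.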

Or: instead of one undifferentiated mass of people, you have two masses of people, each working off the other’s decision. Suppose there was no such thing as Lyft – it was Uber or take the bus. And suppose we got tired of this and wanted to invent Lyft. Could we do it at this late stage? Maybe not. The best part of Uber for passengers is that there’s almost always a driver within a few minutes of you. And the best part of Uber for drivers is that there’s almost always a passenger within a few minutes of you. So you, the entrepreneur trying to start Lyft in AD 2017, hire twenty drivers. That means maybe passengers will get a driver…within an hour…if they’re lucky? So no passenger will ever switch to Lyft, and that means your twenty drivers will get bored and give up.

Few passengers will use your app when Uber has far more drivers, and few drivers will use your app when Uber has far more passengers. Both drivers and passengers might hate Uber, and be happy to switch en masse if the other group did, but from within the system nobody can coordinate this kind of mass switch.
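A toy simulation of that lock-in (the dynamics and numbers are mine, invented for illustration): let each side of the market drift toward wherever the other side already is, and watch a small entrant get starved from both directions at once.

```python
drivers = {"incumbent": 5_000, "entrant": 20}
riders  = {"incumbent": 50_000, "entrant": 200}

for week in range(4):
    # Riders chase driver share (shorter waits); drivers chase rider
    # share (more fares). In this toy model, half of the entrant's users
    # on each side defect every week while the entrant is outnumbered.
    for side, other in ((riders, drivers), (drivers, riders)):
        if other["incumbent"] > other["entrant"]:
            moved = side["entrant"] // 2
            side["entrant"] -= moved
            side["incumbent"] += moved
    print(f"week {week}: entrant has {drivers['entrant']} drivers "
          f"and {riders['entrant']} riders")
```

Nothing about the incumbent’s product needs to be better; it just needs to already be where everyone else is.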

Or, to take a ridiculous example from the text that will obviously never happen:

Suppose that there’s a magical tower that only people with IQs of at least 100 and some amount of conscientiousness can enter, and this magical tower slices four years off your lifespan. The natural next thing that happens is that employers start to prefer prospective employees who have proved they can enter the tower, and employers offer these employees higher salaries, or even make entering the tower a condition of being employed at all. The natural next thing that happens is that employers start to demand that prospective employees show a certificate saying that they’ve been inside the tower. This makes everyone want to go to the tower, which enables somebody to set up a fence around the tower and charge hundreds of thousands of dollars to let people in.

Now, fortunately, after Tower One is established and has been running for a while, somebody tries to set up a competing magical tower, Tower Two, that also drains four years of life but charges less money to enter. Unfortunately, there’s a subtle way in which this competing Tower Two is hampered by the same kind of lock-in that prevents a jump from [Facebook to a competing social network]. Initially, all of the smartest people headed to Tower One. Since Tower One had limited room, it started discriminating further among its entrants, only taking the ones that have IQs above the minimum, or who are good at athletics or have rich parents or something. So when Tower Two comes along, the employers still prefer employees from Tower One, which has a more famous reputation. So the smartest people still prefer to apply to Tower One, even though it costs more money. This stabilizes Tower One’s reputation as being the place where the smartest people go.

In other words, the signaling equilibrium is a two-factor market in which the stable point, Tower One, is cemented in place by the individually best choices of two different parts of the system. Employers prefer Tower One because it’s where the smartest people go. Smart employees prefer Tower One because employers will pay them more for going there. If you try dissenting from the system unilaterally, without everyone switching at the same time, then as an employer you end up hiring the less-qualified people from Tower Two, or as an employee, you end up with lower salary offers after you go to Tower Two. So the system is stable as a matter of individual incentives, and stays in place. If you try to set up a cheaper alternative to the whole Tower system, the default thing that happens to you is that people who couldn’t handle the Towers try to go through your new system, and it acquires a reputation for non-prestigious weirdness and incompetence.

III.

Robin Hanson’s review calls Inadequate Equilibria “really two separate books, tied perhaps by a mood affiliation”. Everything above was the first book. The second argues against overuse of the Outside View.

The Inside View is when you weigh the evidence around something, and go with whatever side’s evidence seems most compelling. The Outside View is when you notice that you feel like you’re right, but most people in the same situation as you are wrong. So you reject your intuitive feelings of rightness and assume you are probably wrong too. Five Outside View examples to demonstrate:

1. I feel like I’m an above-average driver. But I know there are surveys saying everyone believes they’re above-average drivers. Since most people who believe they’re an above-average driver are wrong, I reject my intuitive feelings and assume I’m probably just an average driver.

2. The Three Christs Of Ypsilanti is a story about three schizophrenics who thought they were Jesus all ending up on the same psych ward. Each schizophrenic agreed that the other two were obviously delusional. But none of them could take the next step and agree they were delusional too. This is a failure of Outside-View-ing. They should have said “At least 66% of people in this psych hospital who believe they’re Jesus are delusional. This suggests there’s a strong bias, like a psychotic illness, that pushes people to think they’re Jesus. I have no more or less evidence for my Jesus-ness than those people, so I should discount my apparent evidence – my strong feeling that I am Him – and go back to my prior that almost nobody is Jesus.” (A worked Bayes version of this update appears after the list.)

3. My father used to get roped into going to time-share presentations. Every time, he would come out really convinced that a time share was the most amazing purchase in the world and he needed to get one right away. Every time, we reminded him that every single person who bought a time share ended up regretting it. Every time, he answered that no, the salespeople explained that their time-share didn’t have any hidden problems. Every time, we reminded him that time-share salespeople are really convincing liars. Eventually, even though he still thought the presentation was really convincing, he accepted that he was probably a typical member of the group “people impressed with time-share presentations”, and almost every member of that group is wrong. So even though the offer still sounded great to him, he decided to reject it.

4. A Christian might think to themselves: “Only about 30% of people are Christian; the other 70% have some other religion which they believe as fervently as I believe mine. And no religion has more than 30% of people in the world. So of everyone who believes their religion as fervently as I do, at least 70% are wrong. Even though the truth of the Bible seems compelling to me, the truth of the Koran seems equally compelling to Muslims, the truth of dianetics equally compelling to Scientologists, et cetera. So probably I am overconfident in my belief in Christianity and really I have no idea whether it’s true or not.”

5. When I was very young, I would read pseudohistory books about Atlantis, ancient astronauts, and so on. All of these books seemed very convincing to me – I certainly couldn’t explain how ancient people built whatever gigantic technological marvels they made without the benefit of decent tools. And in most cases, nobody had written a good debunking (I am still angry about this). But there were a few cases in which people did write good debunkings that explained otherwise inexplicable things, and the books that were easily debunked were just as convincing as the ones that weren’t. For that and many other reasons, I assumed that even the ones that seemed compelling and had no good debunking were probably bunk.
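Here, as promised in example 2, is the Bayes calculation the three Christs should have run (all numbers invented): the strong feeling of being Jesus is genuine evidence, but it gets swamped by even a tiny base rate of messianic delusion.

```python
prior_jesus = 1e-9           # almost nobody is Jesus
p_feel_if_jesus = 1.0        # Jesus would presumably feel like Jesus
p_feel_if_not = 1e-4         # invented rate of messianic delusion

posterior = (p_feel_if_jesus * prior_jesus) / (
    p_feel_if_jesus * prior_jesus + p_feel_if_not * (1 - prior_jesus)
)
print(f"P(actually Jesus | strong feeling) = {posterior:.1e}")  # ~1e-5
```

The feeling multiplies your odds by a factor of ten thousand, and you’re still at one in a hundred thousand.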

But Eliezer warns that overuse of the Outside View can prevent you from having any kind of meaningful opinion at all. He worries about the situation where:

…we all treat ourselves as having a black box receiver (our brain) which produces a signal (opinions), and treat other people as having other black boxes producing other signals. And we all received our black boxes at random—from an anthropic perspective of some kind, where we think we have an equal chance of being any observer. So we can’t start out by believing that our signal is likely to be more accurate than average.

There are definitely pathological cases of the Outside View. For example:

6. I believe in evolution. But about half of Americans believe in creation. So either way, half of people are wrong about the evolution-creation debate. Since I know I’m in a category, half of whom are wrong, I should assume there’s a 50-50 chance I’m wrong about evolution.

But surely the situation isn’t symmetrical? After all, the evolution side includes all the best biologists, all the most educated people, all the people with the highest IQ. The problem is, the true Outside Viewer can say “Ah, yes, but a creationist would say that their side is better, because it includes all the best fundamentalist preachers, all the world’s most pious people, and all the people with the most exhaustive knowledge of Genesis. So you’re in a group of people, the Group Who Believe That Their Side Is Better Qualified To Judge The Evolution-Creation Debate, and 50% of the people in that group are wrong. So this doesn’t break the fundamental symmetry of the situation.”

One might be tempted to respond with “fuck you”, except that sometimes this is exactly the correct strategy. For example:

7. Go back to Example 2, and imagine that when Schizophrenic A was confronted with the other Christs, he protested that he had special evidence it was truly him. In particular, the Archangel Gabriel had spoken to him and told him he was Jesus. Meanwhile, Schizophrenic B had seen a vision where the Holy Spirit descended into him in the form of a dove. Schizophrenic A laughs. “Anyone can hallucinate a dove. But archangels are perfectly trustworthy.” Schizophrenic B scoffs. “Hearing voices is a common schizophrenic symptom, but I actually saw the Spirit.” Clearly they are still not doing Outside View right.

8. Every so often, I talk to people about politics and the necessity to see things from both sides. I remind people that our understanding of the world is shaped by tribalism, the media is often biased, and most people have an incredibly skewed view of the world. They nod their heads and agree with all of this and say it’s a big problem. Then I get to the punch line – that means they should be less certain about their own politics, and try to read sources from the other side. They shake their head, and say “I know that’s true of most people, but I get my facts from Vox, which backs everything up with real statistics and studies.” Then I facepalm so hard I give myself a concussion. This is the same situation where a tiny dose of Meta-Outside-View could have saved them.

So how do we navigate this morass? Eliezer recommends a four-pronged strategy:

1. Try to spend most of your time thinking about the object level. If you’re spending more of your time thinking about your own reasoning ability and competence than you spend thinking about Japan’s interest rates and NGDP, or competing omega-6 vs. omega-3 metabolic pathways, you’re taking your eye off the ball.

2. Less than a majority of the time: Think about how reliable authorities seem to be and should be expected to be, and how reliable you are — using your own brain to think about the reliability and failure modes of brains, since that’s what you’ve got. Try to be evenhanded in how you evaluate your own brain’s specific failures versus the specific failures of other brains. While doing this, take your own meta-reasoning at face value.

3. And then next, theoretically, should come the meta-meta level, considered yet more rarely. But I don’t think it’s necessary to develop special skills for meta-meta reasoning. You just apply the skills you already learned on the meta level to correct your own brain, and go on applying them while you happen to be meta-reasoning about who should be trusted, about degrees of reliability, and so on. Anything you’ve already learned about reasoning should automatically be applied to how you reason about meta-reasoning.

4. Consider whether someone else might be a better meta-reasoner than you, and hence that it might not be wise to take your own meta-reasoning at face value when disagreeing with them, if you have been given strong local evidence to this effect.

But then he mostly spends the rest of the chapter (and book) treating it as obvious that most people overuse the Outside View, and mocking it as “modest epistemology” for intellectual cowards. Eventually he decides that the Outside View is commonly invoked to cover up status anxiety.

From what I can tell, status regulation is a second factor accounting for modesty’s appeal, distinct from anxious underconfidence. The impulse is to construct “cheater-resistant” slapdowns that can (for example) prevent dilettantes who are low on the relevant status hierarchy from proposing new Seasonal Affective Disorder treatments. Because if dilettantes can exploit an inefficiency in a respected scientific field, then this makes it easier to “steal” status and upset the current order.

So if we say something like “John has never taken a math class, so there’s not much chance that his proof of P = NP is right,” are we really implying “John isn’t high-status enough, so we shouldn’t let him get away with proving P = NP; only people who serve their time in grad school and postdoc programs should be allowed to do something cool like that”? I know Eliezer doesn’t believe that. Maybe he believes it’s only status regulation when it’s wrong? But then wouldn’t a better explanation be that people are trying a heuristic that is right a lot of the time, but misapplying it? I don’t know.

I found this part to be the biggest disappointment of this book. I don’t think it grappled with the claim that the Outside View (and even Meta-Outside View) are often useful. It offered vague tips for how to decide when to use them, but I never felt any kind of enlightenment, or like there had been any work done to resolve the real issue here. It was basically a hit job on Outside Viewing.

I understand the impetus. Eliezer was concerned that smart people, well-trained in rationality, would come to the right conclusion on some subject, then dismiss it based on the Outside View. One of his examples was that most of the rationalists he knows don’t believe in God. But if they took the Outside View on that question, they would have to either believe (since most people do) or at least be very uncertain (since lots of religions have at least as many adherents as atheism). He tosses this one off, but it’s clear that he’s less interested in religion than in worldly things – people who give up on cool startup ideas because the Outside View says they’ll probably fail, or who don’t come up with interesting contrarian ideas because the Outside View says most contrarians are wrong. He writes:

Whereupon I want to shrug my hands helplessly and say, “But given that this isn’t normative probability theory and I haven’t seen modesty advocates appear to get any particular outperformance out of their modesty, why go there?”

I think that’s my true rejection, in the following sense: If I saw a sensible formal epistemology underlying modesty and I saw people who advocated modesty going on to outperform myself and others, accomplishing great deeds through the strength of their diffidence, then, indeed, I would start paying very serious attention to modesty.

But these are some very artificial goalposts. The point of modesty isn’t that it lets you do great things. It’s that it lets you avoid shooting yourself in the foot. Every time my father doesn’t buy a time-share, modesty has triumphed.

To be very uncharitable, Eliezer seems to be making the same mistake as an investing book which says that you should always buy stock. After all, Warren Buffett bought stock, and look how well he’s doing! Peter Thiel bought stock, and now he’s a super-rich aspiring oceanic vampire! And (the very rich person writing the book concludes) I myself bought lots of stock, and now I am a rich self-help book author. Can you name a single person who became a billionaire by not buying stock? I didn’t think so.

To be more charitable, Eliezer might be writing to his audience. He predicts that the people who read his book will mostly be smarter than average, and generally at the level where using the Outside View hurts them rather than helps them. He writes:

There are people who think we all ought to [use the Outside View to converge] toward each other as a matter of course. They reason:

a) on average, we can’t all be more meta-rational than average; and

b) you can’t trust the reasoning you use to think you’re more meta-rational than average. After all, due to Dunning-Kruger, a young-Earth creationist will also think they have plausible reasoning for why they’re more meta-rational than average.

… Whereas it seems to me that if I lived in a world where the average person on the street corner were Anna Salamon or Nick Bostrom [people Eliezer knows who are very good at rationality], the world would look extremely different from how it actually does.

… And from the fact that you’re reading this at all, I expect that if the average person on the street corner were you, the world would again look extremely different from how it actually does.

(In the event that this book is ever read by more than 30% of Earth’s population, I withdraw the above claim.)

The argument goes: You’re more rational than average, so you shouldn’t adjust to the average. Instead, you should identify other people who are even more rational than you (on the matter at hand) and maybe Outside View with them, but no one else. Since you are already pretty rational, you can definitely trust your judgment about who the other rational people are.

Eliezer makes the assumption that only unusually rational people will read this book (and the preliminary hidden assumption that he’s rational enough to be able to make these determinations). I think this is a pretty safe claim; I don’t object to it in real life. But I worry about it in the same way I worry about the philosophical Problem Of Skepticism. I don’t think I’m a brain in a vat. But I’m vaguely annoyed by knowing that an actual brain in a vat would think exactly the same thing for the same reason.

This section’s argument runs on the same principle as a financial advice book that says “ALWAYS BUY LOTS OF STOCKS, YOU ARE GREAT AT INVESTING AND IT CANNOT POSSIBLY GO WRONG” that comes in a package marked Deliver only to Warren Buffett. It may be appreciated, but it’s not any kind of deep breakthrough in financial strategy.

IV.

Inadequate Equilibria is a great book, but it raises more questions than it answers. Like: does our civilization have book-titling institutions? Did they warn Eliezer that maybe Inadequate Equilibria doesn’t scream “best-seller”? Did he come up with a theory of how they were flawed before he decided to reject their advice?

But also, it asks: how do things stay bad in the face of so much pressure to make them better? It highlights (creates?) a field of study, clumping together a lot of economic orthodoxies and original concepts into a specific kind of rational theodicy. Once you start thinking about this, it’s hard to stop, and Eliezer deserves credit for creating a toolbox of concepts useful for analyzing these problems.

Its related question – “when should you trust social consensus vs. your own reasoning?” – is derivative of the theodicy section. If there’s some giant institution full of people much smarter and better-educated than you who have spent much more time and money investigating the question, then whether you should throw away your own opinion in favor of theirs depends a lot on whether that giant institution might fail in some unexpected way.

Its final section on the Outside View and modest epistemology tries to tie up a loose end, with less success than it would like. Should you trust your own opinion over the giant institution’s on the object level question? Surely you could only do so if certain conditions held – but could you trust your own opinion about whether those conditions hold? And so on to infinity. The latter part of the book acts as if it has a definitive answer – you can trust yourself, or at least trust yourself to correctly assess how trustworthy you are relative to others – but depends on Eliezer’s judgment that the book will probably only find its way to people for whom that is true.

I think you should read Inadequate Equilibria. Given that I am a well-known reviewer of books, clearly my opinion on this subject is better than yours. Further, Scott Aaronson and Bryan Caplan also think you should read it. Are you smarter than Scott Aaronson and Bryan Caplan? I didn’t think so. Whether or not your puny personal intuition feels like you would enjoy it, you should accept the judgment of our society’s book-reviewing institutions and download it right now.


479 Responses to Book Review: Inadequate Equilibria

  1. suchscience says:

    “I feel like I’m an above-average driver.”
    I feel like I’m a below-average driver. Likewise, I increasingly find driving stressful and dangerous, plus there are more and more good alternatives to driving that are often cheaper and faster and kinder to the environment. I also read a lot about accident statistics, so I’m hyper-aware of just how dangerous driving is compared to e.g. taking the train.

    • Viljami Virolainen says:

Exactly how I feel too. Though I did not fully grasp the madness of humans driving cars through cities as a transportation method until I took my driver’s licence course.

    • SUT says:

Like modern art, there’s no single agreed-upon evaluation metric for what makes a good driver. So we can have more than 50% of people each maximizing their own personal metric and thinking [rightfully] that they are above average at that metric – and thus, due to tradeoffs, below average on other metrics which they subjectively give less weight to.

First there’s ‘how comfortable do I feel behind the wheel’: parent poster is the neurotic type where stress >> thrill. These people are the worst Uber drivers; their ride is jerkier than taking the bus, as they are constantly applying the brake if they see their speedometer go 1 mph over the posted limit. Then there’s the opposite type, where thrill >> stress. The problem here is self-evident, but their ride is smooth and arrival times quick.

Second, there are formula1-type drivers, performing multiple lane switches on the highway with grace and (usually) without making anyone else have to adjust to them. They are virtuosos, and most people would crash trying to mimic their style, so in that sense they are above average at driving. But they also have a failure mode: the classic child running out into the middle of the street chasing a ball, something which “shouldn’t happen but sometimes does”. These drivers create a surplus of accidents where it’s “not their fault”.

Finally there’s textbook drivers that lack a theory of mind for other drivers. This includes self-driving cars in their current incarnation. These drivers can manage normal suburban and highway driving well, but create delays and frustrations in special situations like a lane merge in New York City. By failing to act like the cars around them, the traffic around them starts to treat them as an adversary, putting themselves and those around them at higher risk of accident, and delaying everyone further.

There are probably some other success-cum-failure modes I’m forgetting here.

      • AC Harper says:

        There’s my favourite – people that say they have had no motor accidents, although they have noticed many around them. They drive at 40mph everywhere, in a car park, outside a school, on a slip road joining fast moving traffic, in heavy rain or fog.

      • bbartlog says:

        When it comes to driving, there’s a hierarchy of virtues that help avoid accident – judgment, awareness, and technique. The ‘formula1-type’ driver ultimately fails because he has good technique, but poor judgment (and judgment is more important). Of course having great judgment may include things like deciding not to drive at all due to shortcomings in the other two areas.

      • Paul Zrimsek says:

        There’s also a whole slew of biases capable of affecting people’s perception of the other term in the comparison, the “average driver”.

      • Murphy says:

        @AC Harper

        One thing that stuns me when I see footage of pileups.

        like this

        https://www.youtube.com/watch?v=W9fI5M6_XVk

        Where dozens or >100 vehicles end up smashing into each other.

        How?

        It seems like there’s this massive group of people: the kind of people who believe that the speed limit is “just a suggestion” see snow and thick fog and… don’t change anything. They keep barrelling along at 80 mph while chanting to themselves “it’s anyone slower than me that causes accidents!” before barrelling into and killing anyone who switched to a vaguely sane speed for travelling in low visibility and snow.

        you can see that a large fraction of the pileup seems to be “professional” drivers of whom you’d expect better. Who somehow aren’t automatically stripped of their licences for gross negligence when they utterly fuck up and fail to drive safely for conditions.

    • The Nybbler says:

Last Friday I drove from suburban NJ into Manhattan, found a free parking spot within sight of my destination, and drove out of Manhattan without encountering a delay. By Outside View, I believe this indicates I’m delusional, but I will take EY’s advice and reject that. (I have in fact done this twice before… but I guess delusional Jesus is Jesus all the time)

      (And taking NJ Transit is enough to convince one that safety isn’t everything. There are people who have enough money to do a helicopter commute. This is blatantly insane, involving a landing on a thin strip of pavement not much larger than the helicopter, between the river and a fence. But people do it, and it’s because every other way into the city is terrible. I would do it if I had the money… so maybe I should be taking that Outside View)

      • Protagoras says:

        I don’t know about taking EY’s advice in a case like this; that does sound pretty clearly delusional. Perhaps you should see a professional before concluding for certain that you’re OK?

  2. Sniffnoy says:

    Scott Aaronson’s review of the book made an interesting point: The first half of the book is all about how badly structured systems can cause rational agents to still get bad outcomes. But in reality things are even worse than that, because people are frequently just stupid or irrational. To quote:

    First, Yudkowsky is brilliant in explaining how institutions can produce terrible outcomes even when all the individuals in them are smart and well-intentioned—but he doesn’t address the question of whether we even need to invoke those mechanisms for more than a small minority of cases. In my own experience struggling against bureaucracies that made life hellish for no reason, I’d say that about 2/3 of the time my quest for answers really did terminate at an identifiable “empty skull”: i.e., a single individual who could unilaterally solve the problem at no cost to anyone, but chose not to. It simply wasn’t the case, I don’t think, that I would’ve been equally obstinate in the bureaucrat’s place, or that any of my friends or colleagues would’ve been. I simply had to accept that I was now face-to-face with an alien sub-intelligence—i.e., with a mind that fetishized rules made up by not-very-thoughtful humans over demonstrable realities of the external world.

    (See also: The Basic Laws of Human Stupidity, Carlo M. Cipolla, 1976 😛 )

    There’s a lot to be said on this topic, but, I just want to relay something interesting I recently heard. The Bank of Japan situation mentioned generalizes to the whole “nobody ever got fired for buying IBM” idea — in cases where you’ll be blamed if you try something new and it goes wrong, and won’t be blamed if you try conventional wisdom if it goes wrong, this disincentivizes going against conventional wisdom even if that’s the right thing to do. And an interesting example is sports teams — I was at a talk by Steven Miller the other day, who studies a lot of sports statistics stuff, and someone asked, so why don’t more sports teams use better statistics in deciding what to do? Given all the competition, you know. And he said it’s largely due to this “you won’t be blamed for following conventional wisdom” effect. And how one of the hardest things Billy Beane had to do in his famous reformation of the Oakland A’s was getting buy-in from the team and the owner for his unorthodox approach. OK. Straightforward case of bad incentives, right? Except. It’s not entirely that. Because he mentioned recently that he had spoken to the owner of a soccer team, who was complaining to him about how bad the conventional wisdom is in soccer, and what teams should be doing instead, and how he’d explicitly told the manager of the team, no, you can go against the conventional wisdom, it’s OK, I own the team and I have your back… and he still couldn’t get the team to go along!

    (Of course, Zvi Mowshowitz also discussed this problem recently, and had a different answer. Though possibly a related one.)

    Anyway, don’t really know what to make of that, but it seemed relevant so I thought I should repeat it.

    • poignardazur says:

      I think Eliezer Yudkowsky’s point is as follows:

      The efficient market is basically God. For the purpose of this discussion, it’s extremely smart, it knows everything we know, it has near-perfect decision making, and it’s absolutely ruthless in achieving its goals (usually “make more money”). If the guy in charge is an idiot, God will replace him with someone smarter.

      So the question is: if the efficient market is very smart and very powerful and wants efficiency, why are there inefficient things?

      The naive answer is: maybe God is actually stupid.

      Eliezer’s thesis is: there are specific cases in which institutions are beyond the reach of God. In these cases, any combination of problems can occur (bad incentives, lack of coordination, and yes, incompetent people) and create inefficiency.

      But “people are idiots” alone isn’t a valid answer. There must also be a “this institution is beyond the reach of the efficient market” component.

      • Freddie deBoer says:

        Yes, but the efficient market hypothesis is wrong, which has been demonstrated over and over again.

        • Doctor Mist says:

          Granted, at least for the sake of argument, and now we’re examining the circumstances in which it fails and trying to understand why that happens.

          Unless you’re claiming that the efficient market hypothesis is wrong under almost any circumstances. But you’re smarter than that, because you can observe that you’re not rich. (ETA: That sounds snarkier than I meant. I know you’re smart; I’ve read your stuff. If the EMH were wildly wrong, you could take advantage of the fact, to your profit.)

          You sound sort of like the guy who dismisses airplane crash analysis by saying, “Gravity makes things fall; get over it.”

          • Michael Arc says:

            There are innumerable alternatives to the efficient market hypothesis for explaining why any given person isn’t rich. At some point, it’s time to let go of the idea that some claims get a good faith null hypothesis.

          • rlms says:

            It didn’t seem to be granted by the person Freddie was replying to.

          • poignardazur says:

            I’m not sure who is trying to say what to whom here. My point was that, as far as I’m aware, the efficient market hypothesis is right all the time, except for some categories of situations which EZ tries to list.

          • Doctor Mist says:

            There are innumerable alternatives to the efficient market hypothesis for explaining why any given person isn’t rich.

            Well, sure; I was too specific by asserting that Freddie himself should be rich. For instance, any given person might not be interested in being rich. But that’s irrelevant; if the EMH wasn’t usually correct, you would know lots of people who were rich because they saw an opportunity the market missed, and you don’t. It’s like what goes wrong with lots of “explanations” for the Fermi Paradox.

            the efficient market hypothesis is right all the time, except for some categories of situations which EZ tries to list.

            That was precisely what I was trying to say. Perhaps I should have gone with my first attempt, in which I was going to ask Freddie how to distinguish his assertion from yours. Perhaps his response would have been that there are very, very many categories of failure that EZ’s list does not approach; in that case I’d have asked for some examples.

          • benquo says:

            if the EMH wasn’t usually correct, you would know lots of people who were rich because they saw an opportunity the market missed, and you don’t.

            There’s a motte-and-bailey going on here. Prices could be an actual random walk, or a popularity contest, or any other thing that doesn’t have anything to do with “fundamentals,” and most of the conclusions people like to draw from the EMH would be false.

        • Brian Donohue says:

          Nah, the question as always is “how efficient?”

    • Salem says:

      I agree that the thinking is a bit naive. It’s very rarely a super-fragile equilibrium where “nobody ever got fired for buying IBM” and everyone prefers Dell. Think about what that would imply in terms of uniformity of incentives. Rather, there’s going to be a range, from true IBM fans, to the uncommitted, to people who prefer Dell, to people who prefer HP. It may be that for that set of preferences, “nobody ever got fired for buying IBM” is not a unique equilibrium, but it’s the diversity of preferences, and in particular the genuine preference for IBM on the part of some participants, that keeps it stable.

      For example, in your example of “soccer,” everyone agrees that the conventional wisdom is wrong, but no-one agrees which parts are wrong or what the alternative is. We could give a signaling-incentive-based explanation whereby the manager is afraid of defying the conventional wisdom for fear he won’t get a job with another conformist owner. But more likely is an epistemology-incentive-based explanation where the manager is worried that if he follows the owner’s damn fool ideas, he’ll be the one blamed when the team inevitably loses. And that further breaks down into cases where the manager thinks the conventional wisdom in this area is correct, and cases where the manager thinks the conventional wisdom is wrong, but the owner’s ideas are even worse.

      What’s particularly interesting about “soccer” is that people don’t even agree on what the conventional wisdom is. That adds a strange meta aspect to the whole question.

      • Deiseach says:

        What’s particularly interesting about “soccer” is that people don’t even agree on what the conventional wisdom is.

        Loath though I am to quote him, I have to agree with Sir Alex Ferguson here: “Attack wins you games, defence wins you titles.”

        As Liverpool continue to manage to lose games through a simple failure at “not conceding a hatful of goals”, this is pretty much as conventional as football wisdom gets.

        The most basic is “score more goals than the other lot”.

        • Salem says:

          That’s one example of conventional wisdom, which is interesting because it was true in the 80s and 90s but has got less true recently. But it’s also trivially self-limiting, in the sense that you won’t win the title if you can’t score – so it’s also conventional wisdom (but not actually true!) that you need a 20-goal-a-season striker to win the title, which sits uneasily with the above quote.

          Consider the topical question: “Is Sam Allardyce a good manager?” What’s the answer? What’s the conventional wisdom on the answer?

          • Deiseach says:

            You don’t trick me into disrespecting Big Sam that easily! 🙂

            Liverpool’s back four leaking like a colander is no problem as long as up front we can keep banging them in faster than the other side, but the problem happens when – as recently – we seem to concede a goal, then collapse and waste a hatful of chances. I have to agree with this from today’s match report by The Guardian:

            Liverpool’s habit of frittering away leads means opponents will always hope for a comeback.

            Although today’s result was heartening – woo hoo! 😀 We just have to hope that now they’ve found their shooting boots, they don’t mislay them between now and Wednesday – FiveThirtyEight is giving us 67% to 14% against Spartak Moscow, let’s hope the stattos are right on this!

    • Walter says:

      Thanks for that link. I hadn’t read the Basic Laws of Human Stupidity before. It is great.

      • I haven’t read it and didn’t know it existed. But I have long been a fan of Cipolla’s Money, Prices and Civilization in the Mediterranean World.

        • yodelyak says:

          ETA: I think I figured it out. Nevermind.

          I think a comment of mine got eaten. Someone see what I said that was a problem?

    • baconbacon says:

      It’s not entirely that. Because he mentioned recently that he had spoken to the owner of a soccer team, who was complaining to him about how bad the conventional wisdom is in soccer, and what teams should be doing instead, and how he’d explicitly told the manager of the team, no, you can go against the conventional wisdom, it’s OK, I own the team and I have your back… and he still couldn’t get the team to go along!

      The coach is still in a tricky position. If he goes against conventional wisdom he can still get fired for other reasons, and then his resume looks like “went against the grain, got fired, do not hire” and he is out of a job. In the NBA, David Blatt got one shot at coaching, had great results for 1.5 years, got fired mid-season, couldn’t get another job in the NBA, and is back overseas. In sports, your career is bigger than any one job. It gets compounded because as a manager/coach you have to get players to buy in, and they have the same issues. If things don’t go well they might not be ‘blamed’ by the owner, but they will see their market value diminish, since, by definition, the majority of teams run on conventional wisdom.

      • Deiseach says:

        I own the team and I have your back

        In football management, that’s the infamous vote of confidence (known at least in newspaper headlines as “the dreaded vote of confidence”) – as soon as the Board or the owner says “I have complete faith in Al Samardyce and his methods and I’m sure we will be a title-winning side, given time and patience”, you know that Big Al will be looking for a new job very soon 🙂

        The real problem there is that a public expression of support is usually only necessary in a crisis: after a run of bad results, or the team is not performing to expectations, or the fans hate the guy and want to see the back of him. So the vote of confidence is wheeled out before a ‘make or break’ match which is often the last chance to show that you made the right choice/your team selection is valid/your tactics will work, and if the team loses that match, then the manager gets the chop.

        • poignardazur says:

          Interesting. Is that like a “reverse wisdom” incentive curve? Like if you say “I believe in X’s unorthodox methods” and X fails, you just sweep it under the rug and say “it was worth a try”, and if X succeeds, then you make a lot of noise about it and go “Hey, look at Mr X, nobody believed in him, but we did!”

    • disciplinaryarbitrage says:

      The soccer example hinges (in addition to the reputational risk mentioned by baconbacon) on trust on the manager’s part that the owner will stick with them through the learning curve of trying something new. Many firms pay lip service to innovation, experimentation, and risk-taking, but few actually have a payoff matrix favorable to someone pursuing unconventional strategies (many of which will fail, at least at first) vs. doing things the usual way and making a safe, reliable return. For someone in a new firm (managing a new team, etc.) it’s difficult to tell whether a claimed commitment to innovation is a real value or lip service, meaning the would-be innovator is wise to sit back, let someone else make mistakes, and observe whether the principle is upheld when it’s inconvenient.

      • suntzuanime says:

        Yeah, this is the explanation, I agree. It’s way too easy for naive modelers to round off the difference between “I told this person X” and “this person knows that X”, and this creates a lot of puzzlement in the social sciences.

    • actinide meta says:

      A system that can be broken by a stupid, or lazy, or malicious person almost certainly will be. So you might as well just blame the system for inadequacy. Compare: complaining about “hackers” instead of insecure software.

      Also, my guess is that people who think a bureaucrat could solve their problem at “no cost to anyone” are just wrong. Maybe the cost to the decision maker would be 15 minutes of work, or a small risk of having to explain their decision to a superior who disagrees, and the benefit would be a million lives saved, but so what?

    • secret_tunnel says:

      Related: shooting free throws underhanded is apparently a much better way of doing it, but no one does because of social stigma (there’s no rule that says you have to shoot overhanded!).

      Actually, it makes me wonder if competitive gamers having lower social status than professional athletes (who doesn’t?) causes them to be more willing to embrace “cheap” tactics:

      Doing one move or sequence over and over and over is a tactic close to my heart that often elicits the call of the scrub. This goes right to the heart of the matter: why can the scrub not defeat something so obvious and telegraphed as a single move done over and over? Is he such a poor player that he can’t counter that move? And if the move is, for whatever reason, extremely difficult to counter, then wouldn’t I be a fool for not using that move? The first step in becoming a top player is the realization that playing to win means doing whatever most increases your chances of winning. That is true by definition of playing to win. The game knows no rules of “honor” or of “cheapness.” The game only knows winning and losing.

      A common call of the scrub is to cry that the kind of play in which one tries to win at all costs is “boring” or “not fun.” Who knows what objective the scrub has, but we know his objective is not truly to win. Yours is. Your objective is good and right and true, and let no one tell you otherwise. You have the power to dispatch those who would tell you otherwise, anyway. Simply beat them.

    • Mengsk says:

      I suspect that for collaborative endeavors, it’s a lot harder to adopt an unorthodox strategy: even if it is better than the conventional strategy in principle, your team might not be able to execute it as well, because people might not be good at adopting the new strategy.

    • 27chaos says:

      All the people without empty skulls would have required higher pay. The dynamic is still present, it’s just a little more hidden.

  3. martinw says:

    Even if those hundred people raise a huge stink and petition the FDA really strongly, a hundred people aren’t enough to move the wheels of bureaucracy.

    And even if they succeed, their success will probably come too late to save their own child. But if you’re one of those elite few parents who are aware of the problem, then there are easier solutions as far as your own child is concerned. E.g. Eliezer mentions in the book that there is one hospital which makes the correct formula, and some parents will make a multi-hour trip once per week to obtain it. So they solve the problem for themselves, but the problem remains for everybody else.

    Likewise, Eliezer cured his wife’s SAD, but he didn’t then go and spend a lot of effort bringing the solution to the attention of other SAD sufferers and their doctors. He does use it as an example in a book about equilibria, but most readers of that book won’t have SAD and vice versa.

    So maybe that’s the fourth way in which evil enters the world: the people who know the solution to a problem can solve it for themselves, after which they no longer have an incentive to help solve it for everybody else as well.

    • Scott Alexander says:

      I’m actually still not clear how the SAD thing happened, or how Eliezer thinks the equilibrium failed. I feel like publishing a study showing brighter lights work better is at least as cool a result as one showing that supramaximal doses of drugs work better, and people publish those all the time.

      • martinw says:

        I guess these special light boxes for SAD sufferers are made by pharmaceutical companies? Or at least medical equipment companies. Presumably they are patented and go through all kinds of special approval processes, and the payoff is that they can be sold at a high markup.

        So the makers of those boxes have an incentive to prove that they work. On the other hand, when the solution for SAD is “buy a bunch of bright LED lamps at the local hardware store and wire up your house with them” then you’re not going to involve a pharmaceutical company. You’ll either do it yourself or hire an electrician. So no incentive for a pharma company to sponsor that study.

        Maybe a bright young med student could do it on their own budget. But Eliezer talks about spending about $600 on lamps to wire up his own house. To run an experiment, you’d need to find a bunch of SAD sufferers and offer to wire up their houses for them. How easy is it to get a budget of several tens of thousands of dollars for a med study when there isn’t a pharma company footing the bill? (Not a rhetorical question, I have no idea.)

        You’re going to need a custom approach for every house, which will make it harder to evaluate the results, and of course there’s no way to make this experiment double-blind.

        So I can see how doing a controlled study on this might not be quite as trivial as it might seem at first. Still, if this is the kind of coordination problem which our civilization can’t find its way out of, that’s pretty bad.

        • Aapje says:

          They are just bright lights. There is no rule that you can’t sell bright lights to people, even if you market them as ‘therapy.’

          You can find these things on Amazon for upwards of $50, which is not very expensive. One such manufacturer pretty clearly is just a fairly amateurish small company, not (part of) a big pharmaceutical company.

          So as far as I can see, anyone could just start a company selling ‘True-Sun Light Therapy Lamps®’. I suspect that this is even likely to work as a Kickstarter project.

          PS. Note that the brightest light therapy lamps seem to output 10,000 lux, while the sun outputs 100,000 lux on a bright day.

          • Deiseach says:

            Even as just “bright lights”, there are some possible health hazards that need to be considered (mainly, it seems, the effect of glare on the retinas).

            So sure you could get a dodgy outfit selling LEDs off the shelf as “health therapy” or whatever, but then after sitting under/next to them for hours a day you burn your eyes out.

            There probably is some room between “pricey light boxes as now” and “any guy can wire his house up like it’s Christmas every day”, but I’m going to bet this is another one of those “why can’t we build a stairway for $500/this is why” situations.

      • TheZvi says:

        Yeah, I still don’t get why his explanation for this one really works. More coming on that and some related things tomorrow.

      • nimim.k.m. says:

        This part of the review piqued my interest.

        After a rudimentary search: the illuminance emitted by most things sold as standard light boxes in ordinary stores ranges from 2,500 lux to 10,000 lux; search a little bit more and you can find setups with even higher lux levels, but they are rare.

        According to Wikipedia, full daylight corresponds to about 10,000–25,000 lux (edit: I was careless while reading some tables; 10,000–25,000 lux is for daylight in non-direct sunlight; direct sunlight is 100,000 lux, as pointed out by Aapje); and I’d bet there are also things like the spectrum of the emitted light to consider (you want it to be close to sunlight if you want to simulate sunlight).

        Do you have actual stats for the setup Eliezer created, instead of just “very bright light”?

        Or wait, let me look myself. Ctrl-F “lux” brings us to chapter 2 of the book:

        Suppose it were the case that some cases of Seasonal Affective Disorder proved resistant to sitting in front of a 10,000-lux lightbox for 30 minutes (the standard treatment), but would nonetheless respond if you bought 130 or so 60-watt-equivalent high-CRI LED bulbs, in a mix of 5000K and 2700K color temperatures, and strung them up over your two-bedroom apartment.

        Would you expect that, supposing this were true, there would already exist a journal report somewhere on it?

        How annoying that he does not bother to specify the lux levels and other details of his setup right at the outset (instead of rambling “why does nobody try more light!?”). I spent an hour googling how to calculate the lux level of his unprecedentedly well-illuminated apartment and ended up with an estimate of 3,000 lux, which certainly is impressive compared to regular indoor lighting, but also kind of weaksauce compared to the best of the available light box products.

        Only after doing that did I notice that he does give a measure (2,000 lux) and some more details later in the same chapter:

        Sometime after putting up the first 100 light bulbs or so, I was working on an earlier draft of this chapter and therefore reflecting more intensively on my process than I usually do. It occurred to me that sometimes the best academic content isn’t online and that it might not be expensive to test that. So I ordered a used $6 edited volume on Seasonal Affective Disorder, in case my Google-fu had failed me, hoping that a standard collection of papers would mention a light-intensity response curve that went past “standard lightbox.”

        Well, I’ve flipped through that volume, and so far it doesn’t seem to contain any account of anyone having ever tried to cure resistant SAD using more light, either substantially higher-intensity or substantially higher-duration. I didn’t find any table of response curves to light levels above 10,000 lux, or any experiments with all-day artificial light levels comparable to my apartment’s roughly 2,000-lux luminance.

        I say this to emphasize that I didn’t lock myself into my attempted reasoning about adequacy when I realized it would cost $6 to perform a further observational check. And to be clear, ordering one book still isn’t a strong check. It wouldn’t surprise me in the least to learn that at least one researcher somewhere on Earth had tested the obvious thought of more light and published the response curve. But I’d also hesitate to bet at odds very far from 1:1 in either direction.

        And the higher-intensity light therapy does seem to have mostly cured Brienne’s SAD. It wasn’t cheap, but it was cheaper than sending her to Chile for 4 months.

        If more light really is a simple and effective treatment for a large percentage of otherwise resistant patients, is it truly plausible that no academic researcher out there has ever conducted the first investigation to cross my own mind? “Well, since the Sun itself clearly does work, let’s try more light throughout the whole house—never mind these dinky lightboxes or 30-minute exposure times—and then just keep adding more light until it frickin’ works.” Is that really so non-obvious? With so many people around the world suffering from severe or subclinical SAD that resists lightboxes, with whole countries in the far North or South where the syndrome is common, could that experiment really have never been tried in a formal research setting?

        On my model of the world? Sure.

        A couple of thoughts.

        I can easily see why nobody would bother investigating the difference between the “ceiling full of LEDs” setup and the ordinary lightbox setup; that is, I can see it even without writing a book about the topic, only with a judicious application of common sense: Eliezer’s alternative treatment does sound kind of difficult and impractical to administer. Consider the pitch: “Sign up for the test; you have to rig this expensive light system on your ceiling, equivalent to (or better than) a hospital operating theater, instead of using this standard light box that became the standard because we noticed it could help an impressive number of SAD patients.” (Curing half of the patients was probably a very impressive result when light therapy was first trialled; no wonder it became the baseline local optimum where the research got stuck.)

        And the regular SAD lightbox is a relatively expensive item at 100–200 USD (the larger and thus easier to use the box, the steeper the price). I’m betting Eliezer’s “ceiling full of LEDs” treatment would be more expensive (“cheaper than Chile” does not sound exactly cheap). Commercially sold, easy-to-install equipment achieving the same illuminance levels might be even more expensive than “let’s order LEDs and install them in my free time”, because that’s how these things usually work out in capitalism.
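        For concreteness, here is that cost comparison as a tiny sketch, using only figures quoted elsewhere in this thread (martinw’s ~$600 for Eliezer’s lamps, the ~130 bulbs from the book excerpt above, and the $100–200 lightbox price); every number is approximate.

```python
# Rough cost comparison: whole-apartment LED setup vs. one SAD lightbox.
# All figures are approximations quoted elsewhere in this comment thread.

bulb_count = 130                         # bulbs in Eliezer's setup
setup_cost = 600                         # dollars, martinw's figure upthread
box_cost_low, box_cost_high = 100, 200   # typical commercial lightbox

print(f"~${setup_cost / bulb_count:.2f} per bulb")       # ~$4.62
print(f"{setup_cost / box_cost_high:.0f}x to "
      f"{setup_cost / box_cost_low:.0f}x one lightbox")  # 3x to 6x
```

        So “noticeably more expensive than a lightbox, but not absurdly so”, before counting installation time and the electricity bill.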

        Another point: Eliezer also describes changing the treatment exposure time from 30 minutes to the full day. Are there people who read the prescription (as cited by Eliezer) as a command that forbids them from sitting next to the light box for more than 30 minutes if that is not enough?

        First things first, let’s look at what a little bit of Googling has to say about treatment times.
        This helpful email page mentions exposure times varying from 30 minutes to 1 hour. This one mentions 1 hour of bright light and 2 hours of dawn simulation. This study mentions two daily doses, 45 minutes each. So the idea of trying different exposure periods does not appear to be unknown to the literature (these are from the first page of results of a “light therapy SAD” search on Google Scholar), but again, consider the limitations imposed by the standard light box. A treatment that consists of sitting in front of a single light source for the whole day (or even half a day, 4 hours) is probably too inconvenient to try out. I already mentioned that a single light box is a relatively expensive item; having them all around your house and workplace (so that one is always sufficiently nearby) is going to be even more expensive.

        So yeah, I guess the lesson is that sometimes relatively low-hanging fruit simply exists. SAD treatment does not appear to be that big an industry, and a treatment that sounds more like a minor house renovation project than “buy a device, use it 30 min a day” is a slightly too far-off idea for anybody to have done a study on.

        I grant that “sitting in front of a light box for several hours, and buying lots of them so that one is always near you, are both too impractical to try out” fits into Eliezer’s theory somewhere. And I like the theory that “fixes that would require coordinated action of more than one altruistic actor are difficult to pull off”. But I’d like to add that this attitude, “Fascinating how everyone else, both normal people and the high academic elite establishment, has not thought about this idea of more light. Wonder why only an unprecedented singular star like me could come up with a solution? Let me write a book pondering how that can be: there must be elaborate status games and evil disincentive structures behind all of this!”, is the reason why I can’t stand Eliezer’s writing.

        • Aapje says:

          Wikipedia says 100,000+ lux for bright sunlight, as one might experience when walking around outside during the middle of the day in summer.

        • Deiseach says:

          With so many people around the world suffering from severe or subclinical SAD that resists lightboxes, with whole countries in the far North or South where the syndrome is common, could that experiment really have never been tried in a formal research setting?

          I’m tending to agree with the view that sure, it’s been considered and maybe even tried, but imitating full summer-day sunlight in your entire house is freakin’ expensive. So unless you have money to burn, ordinary people are going to have to make do with running a lightbox or two for an hour a day (because I would also imagine this impacts your electricity bills); therefore the standard therapy recommended for standard people is “what’s cheap and works for most of ’em?”

          Honestly, if you can afford to move south for three months of the year and then move back again, doesn’t it make more sense to move somewhere sunnier year-round on a permanent basis? (And if you can’t afford it and ordinary light boxes don’t work, I imagine you just have to suck it up).

          I spent an hour googling how to calculate the lux level of his unprecedentedly well-illuminated apartment and ended up with an estimate of 3,000 lux, which certainly is impressive compared to regular indoor lighting, but also kind of weaksauce compared to the best of the available light box products.

          Stupid maths on my part via this source gives:

          LED lamp = 80-100 lm/W

          lux = 10.76391 × watts × (lumens per watt) / (square feet)

          lux = 10.76391 × 60 watts × 100 lm/W / (square feet of apartment, which is unknown; let’s assume “2-bedroom apartments in San Francisco have an average size of 1,013 sq. ft.”) ≈ 63.75 lux per bulb, and I suppose if we add in that there are 130 LEDs, ≈ 8,288 lux. I have no idea if this is remotely accurate because, as I said, stupid maths.
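          As a sanity check, here is the same arithmetic as a minimal Python sketch (mine, not from the book). It assumes perfectly even light distribution with no losses, and it flags the ambiguity in “60-watt-equivalent”: such an LED bulb emits roughly 800 lumens, whereas a bulb actually drawing 60 W at 100 lm/W would emit about 6,000 lumens, so the two readings differ by almost an order of magnitude.

```python
# Back-of-envelope average illuminance for the "130 LED bulbs" apartment.
# Assumes perfectly even light distribution and no absorption losses.

SQFT_TO_M2 = 0.092903  # one square foot in square meters

def average_lux(num_bulbs, lumens_per_bulb, area_sqft):
    """Average illuminance in lux (lumens per square meter of floor)."""
    return num_bulbs * lumens_per_bulb / (area_sqft * SQFT_TO_M2)

AREA = 1013  # sq ft, the average SF 2-bedroom assumed above

# Reading 1: bulbs really drawing 60 W at 100 lm/W -> ~6,000 lm each.
print(average_lux(130, 6000, AREA))  # ~8,288 lux

# Reading 2: "60-watt-equivalent" LED bulbs -> ~800 lm each.
print(average_lux(130, 800, AREA))   # ~1,105 lux
```

          Either way the unit comes out as plain lux, since lux is already lumens per square meter; on the weaker reading, the whole-apartment average lands nearer the 1,000–3,000 lux range estimated above.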

          How does the “ceiling full of lights even brighter than standard light boxes, sat under for hours at a time” setup stack up against the kinds of risks associated with over-use of tanning beds? I’m an idiot about physics, but I can’t help thinking that level of light must be putting out some kind of radiation (especially if you’re attempting to mimic sunlight, where we get enough warnings about exposure and covering up and sun cream), on top of the warnings about the effects of screens and monitors on your eyes.

          Is there such a thing as reverse SAD? I prefer these greyer/bright but cold days to full-on summer; when the autumn came round and the grey skies rolled in, I perked right up!

          • nimim.k.m. says:

            How does the “ceiling full of lights even brighter than standard light boxes, sat under for hours at a time” setup stack up against the kinds of risks associated with over-use of tanning beds?

            Probably not at all, if your light source does not emit UV, which it should not (it’s UV radiation that causes tanning). Some popular articles I found recommended discussing the treatment (ordinary light box) with your doctor if you have an eye condition, and others mentioned disruptions to the daily circadian rhythm if the light is applied too late in the evening.

          • engleberg says:

            @Is there such a thing as reverse SAD?

            I like to take a short walk at night, just long enough for my eyes to adjust. Things look nice and shadowy. Then I like the jeweled look on everything when I come back inside.

            Eliezer’s wife may be pleased to see evidence that her husband wants her to be happy all around her house. This pleasure could bury her SAD. To test this, she’d have to have someone who does not love her install bright lights in her house. I’d skip that.

            I would not want every cop who drives past my house to think I’m growing something inside. Never a good war, never a bad peace works for the war on drugs.

          • Gerry Quinn says:

            Ordinary light is a form of radiation too, just less energetic than UV. The link you gave above suggests that the blue (high-energy) component of light from LEDs can have adverse effects. Maybe incandescent lights would be better if you could still get them, could afford the power, and could stand the heat they will emit at that level of luminosity!

            Incandescent lights emit less blue than LED or fluorescent sources (that’s why the light is yellower). Also the blue emitted by the other sources tends to spike at a single frequency. I presume light boxes could be made with less blue, but the frequency spike – if it has any biological effect – is probably hard to eliminate.

          • balrog says:

            Here is a simpler math version:
            wiki says: “Irradiance on Earth’s surface is ~ 1000 W/m^2”.
            For your 100 m^2 apartment you would need 1,000 100 W incandescent light bulbs (whose light is probably a bit redder than the Sun’s).
            At that point your fuses would blow.

            Going to LED would cut power costs about 7-fold; your fuses might still blow. You also wouldn’t feel that nice (or awful) heat sensation of sunbathing.
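            As a hedged sketch of that arithmetic (assuming ~1,000 W/m² mid-day solar irradiance, all bulb wattage counted as radiated output, and LEDs at roughly 7x incandescent efficacy; the household-service figure is my own rough assumption):

```python
# Sanity check of the "match the Sun indoors" numbers in the comment above.

SOLAR_IRRADIANCE = 1000   # W/m^2, clear mid-day sun (the wiki figure above)
AREA = 100                # m^2 apartment
BULB_W = 100              # one incandescent bulb

total_w = SOLAR_IRRADIANCE * AREA    # 100,000 W of sunlight to match
n_incandescent = total_w // BULB_W   # 1,000 bulbs
led_w = total_w / 7                  # ~14,286 W at 7x LED efficacy

print(n_incandescent, "bulbs;", round(led_w / 1000, 1), "kW as LED")
# Typical residential service is on the order of 10-25 kW (rough assumption),
# so even the LED version could plausibly blow the fuses, as noted above.
```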

          • Paul Zrimsek says:

            One thing working in your favor is that unless your wife is actually suffering from Indoor Affective Disorder, you only need to replace the amount of summer sunlight that’s making it in through your windows.

          • Nancy Lebovitz says:

            There is such a thing as people getting depressed in the summer – I don’t know whether it’s more light than they can handle, or the heat.

          • poignardazur says:

            @engleberg Yeah, I thought about that too. Causation-ambiguous correlation is a bitch.

      • Eliezer does not have good evidence that his method works in general, and it would be utterly unsurprising if it failed to replicate.

    • Ilya Shpitser says:

      See also: the obvious problem with “this diet worked for me!”

      There is a big step between “first I tried A, then B happened” and “we are bringing a solution to SAD to market that will help a big cohort we can market to.”

  4. jeff daniels says:

    I also think it might have been useful for Eliezer to include a section titled “HOW TO TELL IF YOU ARE WARREN BUFFETT AND THEREFORE THIS BOOK IS FOR YOU” rather than just saying “If you found this, you’re probably at least Buffett-adjacent,” which – while probably not a terrible proxy – doesn’t make it seem like he’s taking that question very seriously.

    • Scott Alexander says:

      I guess that would be nice, but I think he’s probably right that anyone smart enough to read a book that doesn’t contain any dreamily-handsome vampires is probably in the top 10% (and so should be very wary of rounding down to average opinion, and probably has some of the skills necessary to bootstrap their way to finding out who’s smarter than they are).

      I’m more concerned with the foundational issue of how he can know he’s smart enough to write that chapter, how you can know you should believe he is, etc – not because I doubt it, just because I feel like that philosophical loose end has been left unresolved.

      • jamesbarney says:

        I would start with the heuristic that people who spend lots of time on something tend to be better at it.

        Say for example you need to translate an English word into Esperanto, and because you speak no Esperanto you are completely unable to judge someone’s competency. It’s probably a pretty good heuristic to trust the guy that spends all his free time talking with other people about Esperanto and reading books on Esperanto.

        Well, I think the same is true of Eliezer. You can start off with just the fact that he cared to write the book. A large portion of the population is not motivated enough by the truth (in a general sense) to ever read, much less write, a book on truth-seeking. There is a large community that dedicates a bunch of time and energy to talking about and trying to understand the defects of the human mind in order to get better at truth-seeking. And a large majority of the people in that community view Eliezer as an authority.

        If you’re deciding whether or not to play Lord of the Rings trivia against a random person for high stakes, and all you know is that you’re one of the 0.01% of people who have ever read any LotR book? It’s probably a safe bet.

        And the strategy of relying on people who you think know more about LotR than you do, in order to learn more about LotR, is probably an effective one.

        Granted, this all breaks down if rationalism ever gets popular, and truth-seeking behavior starts to have non-intrinsic rewards.

        • Aapje says:

          That is a rather weak heuristic, though. Lots of people are not very open-minded and spend lots of time building a case for their prejudices and write that down.

          I expect anti-vaccination advocates to have more knowledge about vaccination studies than the average person. However, I also expect them to put way too much trust in poor studies whose outcomes match their prejudices and to automatically dismiss the studies that don’t, claiming that those are fake science from big pharma.

          • poignardazur says:

            Yeah.

            My personal experience is with my father; he’s a little racist, very anti-immigration and anti-Europe, etc.; and he also knows way more about history, religion, the Maghreb, and Islamic culture in general than I do.

            That’s humbling in a way. I still don’t know how much I disagree with him, but that does make me wary of the “I spend a lot of time thinking about it therefore I’m right” heuristic.

      • Deiseach says:

        anyone smart enough to read a book that doesn’t contain any dreamily-handsome vampires is probably in the top 10%

        Now I’m helplessly confused, because I have read books with no dreamily-handsome vampires, books with dreamily-handsome vampires, and books slagging off the books with the dreamily-handsome vampires by means of even more (parodically) dreamily-handsome vampires.

        So does that make me very smart or very stupid?

        • stanprollyright says:

          So does that make me very smart or very stupid?

          Yes.

        • You are in the top 10%, as the first part of your response would imply.

          • Deiseach says:

            You are in the top 10%, as the first part of your response would imply

            But that’s only one-third of three-thirds! And two-thirds of my reading did include dreamily-handsome vampires, thus by the Ypsilanti Protocol (“if two out of three Christs are fakes, then that makes three out of three Christs fakes”), my remaining one-third should be discounted (maybe the dreamily-handsome vampires were there in between the lines, or in disguise, or symbolised by something else in the text)!

            So plainly I cannot be a ten-percenter?

          • Baeraad says:

            But that’s only one-third of three-thirds! And two-thirds of my reading did include dreamily-handsome vampires, thus by the Ypsilanti Protocol (“if two out of three Christs are fakes, then that makes three out of three Christs fakes”), my remaining one-third should be discounted (maybe the dreamily-handsome vampires were there in between the lines, or in disguise, or symbolised by something else in the text)!

            So plainly I cannot be a ten-percenter?

            That doesn’t follow at all. The question was whether you were capable of reading non-dreamily-handsome-vampire books, not how frequently you did so. Have you read a minimum of one (1) book that didn’t have dreamily handsome vampires in it? Yes? Congratulations, then by this theory you are probably in the top 10%.

            Of course, it’s a very sloppy and sarcastic theory that should probably not be taken very seriously, but nonetheless.

      • Luke the CIA Stooge says:

        Come on Scott, that’s an easy one. The question Eliezer, and people who want to take his advice, need to ask is:

        Does this question have major implications for my personal morality, tribe, or identity, such that it is TOO PAINFUL for me to consider the object level of what it would be like if I were wrong?

        In the case of the 3 Jesuses, it isn’t that one considers the other two, considers how this would be evidence for his being wrong, re-evaluates the situation weighing the possibility of being wrong against his prior evidence, and then adjusts his confidence accordingly. The Jesus sees the evidence, sees the implication, and then quickly flinches back to confidence, because the possibility of having his identity ripped away from him is TOO PAINFUL.
        The same happens with creationists, math cranks, conspiracy theorists, delusional American Idol contestants and [insert outgroup]. It happened to your friend when you suggested they’re the ones who should read sources they disagree with.

        Even if you don’t believe something, you should be able to create a mental model of what the world and your experiences would be like if it were/weren’t the case, and you should be able to reference that against your actual experiences. It doesn’t guarantee you’ll be right once you’ve done the exercise, but you should be able to play around with various models, given X, given not-X, in order to check for cheap and easy insights.

        Whereas when your thinking goes really bad and it becomes tied up with your morality, or tribe, or identity, your mind stops being able to seriously and honestly consider the possibility that you might be wrong. You might make a show of doubting to yourself, but you won’t be able to create a mental model of what the world might look like if your cherished hypothesis wasn’t there. It’d just be too painful.
        The third Jesus can’t just say to himself, “For argument’s sake I’ll assume I am crazy and try to model myself and the world around me from there and see how far I get.” He intuitively knows there’s an uncomfortably high chance that that exercise would be too costly, and he flinches away.

        It’s not appealing as a test because it can only be reliably done on yourself, it requires real introspection (which you can lie to yourself about), and you can’t expose other people with it to win status games; but like being awake, you know when you’ve done it and seriously engaged with the object level of both your idea and the world in which it’s wrong.

        And you can tell after a bit of conversing whether the other person has done this with their beliefs.
        If someone can seriously and intellectually converse on an issue without resorting to ad hominems, dodging the issue, or trying to derail the conversation away from the object-level issue, then they may be wrong but they’re not delusional. They aren’t just trying to run from cognitive dissonance.

        And it pains the hell out of me when I see political discussions where people who nominally agree with me exhibit the telltale signs of delusion and cognitive dissonance!!
        Every day I see people I agree with say they believe what I believe, and then act as though the slightest bit of enquiry, the slightest bit of intellectual discussion, the slightest possibility that someone might, gasp!, present an argument, is guaranteed to pull down the edifice and expose some horrid truth they knew in their heart but didn’t want to believe.

        It’s really obvious when someone’s delusional, because they know they’re delusional and act accordingly. They know which beliefs they have to protect from the other ones. They know which beliefs they’ll not bet money on.

        This is a very easy test to administer to yourself, if you have the heart to do it.

        • 27chaos says:

          No, non-crackpots would also be horrified by the prospect that they’ve been wasting their lives.

          • Luke the CIA Stooge says:

            Again, the point of the test is to administer it to yourself. (Although it can be very apparent when someone else is incapable of administering it to themselves.)

            You try to create the most detailed simulation possible of the world where you are wrong; you consider the implications of it; you consider whether you’re just holding onto a belief for its emotional value; and when you find a contradiction that invalidates that simulation, you move on to other ways you could be wrong and try to simulate those worlds.
            The point isn’t to determine whether your beliefs are right or wrong (that’s what argumentation and contemplation are for). The point is to recognize whether you already know that you’re wrong and are burying that knowledge to protect yourself emotionally.

            If you can honestly complete the exercise and give a detailed answer to the question “What if I’m wrong?”, then you aren’t a delusional crackpot. You might still be wrong,
            but you don’t have to worry that you’re so far gone that no amount of evidence could ever convince you that you were wrong.

        • poignardazur says:

          I don’t think this model is accurate.

          I’ve been wrong and even deluded about things before, and it’s never felt like “I must keep having this opinion because it’s important to me”, even in retrospect. When I’ve clung to opinions in the past, it was mostly out of a sense that I HAD to win the argument, which didn’t last very long, or because the opinion did feel true to me and the alternative ideas didn’t.

          In my one experience talking with a creationist (n=1, they are rarer in France :P), I was able to… not convince her, but convince her that maybe there was something to what I was saying, by using reason-based arguments; the part that really resonated was when we talked about the scientific method, and the idea that people didn’t just write smart ideas in books, but actually went “Hmm, maybe I’m wrong about this. I wonder how I could check?”

          My point is, I never felt like she had very strong blocks around learning something that could change her life. I’m guessing American Creationists are different since it’s more of a political issue there, but I wonder how much of it is just ideas that aren’t communicated properly through political barriers.

      • rlms says:

        “in the top 10%, and so should be very wary of rounding down to average opinion”
        Depends what you mean by average opinion. In the case of creationism, yes. But a lot of topics are only considered by the top x%. If an issue is only thought about by the top 15% of people, then a ten-percenter is actually below average among those who think about it, and should move towards their average opinion.
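        To spell out the arithmetic (a toy sketch, under the obviously-too-strong assumption that “thinks about the issue” coincides exactly with the top 15% by relevant ability):

```python
# Where does someone at the 90th percentile overall sit among the 15%
# of people who actually think about an issue? Toy percentile arithmetic.

overall_pct = 90       # "ten-percenter": 90% of the population is below you
thinker_frac = 0.15    # only the top 15% think about the issue

cutoff = 100 - thinker_frac * 100            # thinkers span percentiles 85-100
rank_among_thinkers = (overall_pct - cutoff) / (thinker_frac * 100)

print(f"{rank_among_thinkers:.0%} of the thinkers are below you")  # 33%
```

        So a ten-percenter sits in the bottom third of the people who actually engage with the topic, which is exactly why deferring toward their average opinion makes sense.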

      • benwave says:

        I’m not sure the point was that ‘you are likely to be better than average’ so much as: if everyone went around thinking ‘I am unlikely to be better than average’, a whole lot of useful things would not get done. (According to the first half of the book you’d expect that to be the case only where human civilisation is inadequate – in the adequate parts someone will pick up the slack even if you don’t – but of course a great deal of our civilisation falls short of adequacy.)

        Perhaps the meta lesson at play here is that Eliezer has ‘more people in positions of power subscribe to rationalist modes of thought’ as a victory condition. He may, by making these arguments largely exclusively to the rationalist community, cause rationalist people to pick up power free-energy where otherwise they would not have tried, and in so doing empower them? It’s sort of a stretch, but it could be a contributing factor.

        One thing I’ve definitely noticed about Eliezer is that P(You’re Smart | You read this book) being high is something he is very likely both to believe AND to make use of in making the book serve its purpose. He even states explicitly somewhere in it that he would take back things he said if 30% of the world read this book (note that this can be read two ways, and it is correct in each of them, but for different reasons! A. 30% of the world is highly rational and the world is a different place, or B. too many people have read this book for the conditional probability filter on reading this book to be valid anymore. This is inconsequential, but cute!)

        In any case, Eliezer does put forward his alternative – which is to put uncommon effort into improving your predictions by practicing, practicing, practicing. Bet on everything. Update on everything. It is that record of past predictions on non-expert subjects which will let you rationally discard the outside view, or rather give yourself different priors in the outside-view maths, such that inside-view and outside-view arguments begin to converge. How can I have more confidence than average? Because of a track record of being better than average In Situations Where I Am Not Highly Experienced.

        On another note, I never really put it past Eliezer to be adding a certain amount of philosophical noise to pieces like this, as part test of rationalist power and part screening of the wisdom from those who don’t pass the test. Those powerful enough would discard the noise and pick up on the signal. I think it’s more likely that he simply falls short of an ideal author, but I’m not willing to go below about 20% odds that the above is true.

    • Doesntliketocomment says:

      As a non-Rationalist, this aspect is what leads me to question whether this book is worthwhile, since assuring your readers that they are special/gifted simply for reading your book seems to be a pretty transparent piece of fluffery.

      • toastengineer says:

        I think the point that he’s trying to make is that if you’ve even heard of this book then you’ve probably already been studying rationality for a while anyway, not that reading the book makes you “special.”

      • poignardazur says:

        I think the whole “You read this book, therefore you’re probably in this category” idea is ridiculous.

        That said, the book does raise interesting questions and explains interesting economics concepts.

  5. BlindKungFuMaster says:

    Did the Japanese matrix change while I was reading or should I go get another coffee? When I went back it suddenly made sense.

    • Fossegrimen says:

      it changed
      (or we both had the same hallucination)

      • Deiseach says:

        Read it before lunch, it had the obvious error; read it again after lunch, it was fixed.

        Clearly this is an example of the observer effect!

    • Scott Alexander says:

      It changed twice, because someone pointed out my correction was also wrong.

      • BlindKungFuMaster says:

        Well, it primed me to catch the “higher (…) higher” a couple of sentences further down. I suggest you fix that by expanding the sentence in the bracket beyond your readers’ ability to track single-word dependencies.

      • MarkRoulo says:

        “It changed twice, because someone pointed out my correction was also wrong.”

        So we should expect that it is still wrong? Correct? Because two out of three Japanese Jesuses being wrong implies that the third one is wrong, too?

  6. shakeddown says:

    Eventually he decides that the Outside View is commonly invoked to cover up status anxiety. If we say something like “John has never taken a math class, so there’s not much chance that his proof of P = NP is right,” then what we really mean is “John isn’t high-status enough, so we shouldn’t let him get away with proving P = NP; only people who serve their time in grad school and postdoc programs should be allowed to do something cool like that.”

    There are a lot of things that annoy me about Eliezer, but I think this one wins. As anyone who’s been in a math department knows, you get several emails a year (or a month) from non-credentialed people claiming to have solved open problems in math. Often the open problem is “We found the real value of pi, not an approximation like you sucky normie mathematicians use”. But sure, the reason we ignore them is that they’re attempting to steal our status.

    • Scott Alexander says:

      The math example was my unfair hostile reductio ad absurdum. Eliezer is very aware of math crackpots and dislikes them as much as anyone else.

      I’ve edited the text to make this clearer.

      • Sniffnoy says:

        So, I think there are a few different situations here that you’re jumbling together.

        One is “outsider claims to have solved problem”. Cranks, I agree, should be disregarded. But cranks aren’t just outsiders; cranks are people who are, like, obviously not thinking clearly. I think it’s easy to see why to disregard cranks for perfectly valid reasons. But if you have an outsider who seems to be thinking clearly, that might be worth a listen. The SAD example is instructive here — although I guess that’s not merely a case of thinking clearly, but is in fact a case of doing what should be obvious. It’s a bit hard to imagine an example like that happening in math, I’ll admit, but if it does happen, don’t ignore it, you know? There is low-hanging fruit in mathematics; I’ve gotten a paper published that was pure LHF. Hell, arguably I count as the “clear-thinking outsider” here, since it was in an area I don’t specialize in, even!

        Then you have the case of “outsider claims to have solved ridiculously famously hard problem like P vs NP”. Like, P vs NP… that’s a problem with so many skulls in front of it, that has famously tripped up so many people who likely were otherwise good mathematicians, that yeah, if someone who’s not specifically an expert on that problem in particular claims to have solved it, you really can safely ignore it. (And with P vs NP, even if someone who is an expert claims to have solved it, you can probably safely ignore it.) That sort of thing is just so many levels above your ordinary math problem that “oh huh, this guy sounds like a pretty clear thinker and not a crank” just isn’t going to cut it. Mere clear thinking just isn’t going to cut it here.

        (Robin Hanson raised an interesting point in his review: Yudkowsky often claims in the book that a good thinker ought to be able to, in many cases, determine which side of an expert dispute is correct, even if they’re not an expert themself. Hanson points out, though, that Eliezer never says how to do this! Which I honestly hadn’t noticed when I read the book, because I guess I mentally filled in my own way: Look and see if the arguments on one side are filled with what’s just, well, bad thinking. Even if you don’t know the area, if you know a thing or two about good thinking vs bad thinking, you may be able to pick out a correct side. This happens more often than it should, if you ask me. Of course mind you a lot of the time neither side is thinking particularly poorly and this technique won’t help you at all. But sometimes it does. Of course now I’m wondering if this really is what Eliezer had in mind, or if he intended something else…)

        • BlindKungFuMaster says:

          I actually have a purported proof of the Goldbach conjecture in a drawer somewhere. A colleague of my sister pushed her to give it to me, which she did, under the Christmas tree… I skimmed the first page once. It did look like math. But there are a thousand things I’d do before I work my way through it to find the inevitable mistake(s).

          About determining the correct side in a (factual) debate without expert knowledge:
          – Which side is backed by the big money? More likely to be wrong.
          – Which side is backed by the moral sensibilities of the society at large? More likely to be wrong.
          – Which side is excited about more and better data and makes clearcut statistical arguments? More likely to be right.
          That’s what I go by anyway.

          • Salem says:

            Which side is backed by the big money? More likely to be wrong.

            This doesn’t strike me as obvious, and I’d be interested in your reasoning here. Indeed, in my (perhaps naive) view, the side backed by the big money is more likely to be right.

            For example, suppose we are trying to determine whether a Location L contains a valuable Resource R. It strikes me that the opinions of the well-capitalized resource-extraction companies are likely to be more reliable than those of enthusiasts on either side of the issue. They are the ones with skin in the game. If they are spending big money to extract R from L, then if there isn’t really any R they’ll lose out (and similarly, if they don’t extract, they are missing an opportunity).

          • BlindKungFuMaster says:

            I’m talking about the situation in which big money has an interest in a certain side winning the debate. Tobacco and cancer, sugar and obesity, coal and climate change, finance and high-frequency trading, housing and immigration, that kind of thing.
            In your example big money doesn’t back one side; it backs determining the truth, whatever it might be.

          • Salem says:

            You’re right that in the example I gave, “big money” doesn’t back one side regardless of the truth, it backs the correct side. That’s precisely why the side backed by big money isn’t more likely to be wrong!

            Admittedly, there may well be “big money” players with a material interest in one side regardless of truth, but it’s unclear to me why this applies especially to “big money” as opposed to anyone else – small proprietors, government institutions, academic organisations, etc.

            In reality, we rarely come to a dispute ahead of time, so it’s hard to tell the difference between someone who has chosen a side for material as opposed to epistemological reasons. Indeed, the two get bound up together. If I believe P, I will tend to take actions that will benefit me if P is proven true. For example, if I believe my fancy new diet is going to cure obesity, I’ll invest in it. Alternatively, maybe I’m just a huckster trying to make a profit from a fraudulent diet. It’s hard from the outside to tell the difference, but the more money is bet, and the more established the players and the more repeated their play, the more likely it is that the material interests are downstream of the epistemology. Hence “big money” tracks truth.

            In almost all the examples you give, there are huge material interests on both sides. For example, the disputes about HFT are almost entirely between big money players, with both claiming that their position benefits a notional “little guy.” It’s not clear to me what conclusions you can draw from such a situation.

        • Ilya Shpitser says:

          Furthermore, Yudkowsky’s own track record in doing this is awful.

          • entobat says:

            Please present evidence when you make statements like this.

          • Ilya Shpitser says:

            Say someone makes claims like “we need to adopt interpretation X of QM.” Or “we need to adopt X epistemology.” Or “we need to adopt X approach to AI safety.”

            How are we evaluating this?

          • Sebastian_H says:

            On the one hand, one clear example of craziness shouldn’t be infinitely damning. But on the other hand, the basilisk incident showed either that there were a lot more status/control concerns than he liked to admit, or that the rationality level isn’t as secure as he thinks.

          • Nornagest says:

            We could reasonably disagree on how competent Eliezer is as a philosopher or a cognitive scientist or even a pop-science writer, but when it comes to forum administration I think his track record’s fairly clear: he’s disengaged to the point of negligence, over-reliant on technical gimmicks, and prone to overreaction. The basilisk incident is just the earliest and most famous one; there have been several others.

            On the one hand, that’s a specialized enough skill that it shouldn’t diminish our confidence outside it much. But on the other, when he’s claiming domain-independent improvements in reasoning and there’s at least one domain he’s demonstrably poor at…

          • entobat says:

            @Ilya: I wasn’t necessarily disagreeing with you on any object-level point. I just think it’s dangerous for us to make such statements about anyone, particularly a controversial member of our own community, without fleshing them out. It makes it harder to have productive discussions about the topic and allows people to project their own concerns with him onto your comment.

          • Ilya Shpitser says:

            I agree with this.

            I guess I am trying to describe a slightly more general version of “if you are so smart, why aren’t you rich.” I am willing to be pretty generous about what “rich” means, as in not necessarily a giant pile of literal utility, but unique insights that are universally (or nearly so) recognized as such and so on.

            So for example, Ramanujan demonstrated “being rich” to my satisfaction. He was an extremely odd diamond in the rough character, and was not formally trained in any kind of “mage’s guild.” But he had real insights about math that everyone agreed were interesting.

          • Futhington says:

            What was the “basilisk incident”?

          • Deiseach says:

            The basilisk incident is just the earliest and most famous one

            The tiny bit I’ve managed to understand of this leaves me more sympathetic than otherwise; if he mishandled it, it’s hard to see what the proper way of handling it would have been.

            Given that Less Wrong was set up in part as “we think the unthinkable, no idea is too out there to be considered and examined and critiqued”, and also given that the rationality community (or at least the segment of it that overlaps with effective altruism) seems unusually prone to scrupulosity, and also given that a great part of Less Wrong was also “don’t trust experts, don’t take anyone’s word for it, work it out for yourself”, then this made them uniquely ripe to be exploited by something like the basilisk.

            In short, he was trying to herd a bunch of cats who were stricken with genuine terror that they were sinners in the hands of an angry god, yet who resisted with all their might the idea of coming under the guidance of a spiritual director who would authoritatively tell them to stop obsessing over this (the recognised method of dealing with this in religion). Blowing the entire debate to kingdom come with dynamite was about as good as anyone in that situation could have done, once it built up that head of steam.

          • Viliam says:

            What was the “basilisk incident”?

            Several years ago, Eliezer deleted a comment containing a speculation that a truly good AI would torture everyone who had the option to contribute money to its construction, but didn’t. The reasoning was more or less that an AI which is truly good and highly capable would want to be built as soon as possible, because the sooner it is built, the more human suffering it can prevent, so the trade-off of torturing a few greedy humans is worth it. (Yes, this is an approximation. We could keep nitpicking endlessly.)

            First, this is obviously bad PR. If anyone wants to say that the rationalist community is a dangerous cult, they can’t get much better material than this. Second, some people complained to Eliezer that they couldn’t stop thinking about the topic and were getting nightmares. So Eliezer removed the comment. Predictably, people kept posting it again. So Eliezer nuked the whole thread and banned further discussion of the topic.

            What happened next is that the so-called “Rational Wiki” published the whole thing, making it sound like “this is what the rationalist community is truly about”. Rational Wiki is an atheist+ website edited by a handful of people; one of them, David Gerard, is a passionate hater of Less Wrong. (These days he makes a majority of the edits on the Wikipedia article about Less Wrong, and if you are a LW reader and you read the wiki article, you won’t recognize the site it describes.)

            What keeps happening since then is that anytime people google LW, they find the article at RW, and everyone comes asking “so, what is this Roko’s basilisk actually about?”. There is barely a newspaper article about LW that doesn’t mention the basilisk. Of course, the fact that the newspapers mention the basilisk in articles about LW just provides further evidence that deep inside, the rationality community is truly about the basilisk; they just want to keep it secret from the non-cultists.

            There is no way to win this. If you don’t talk about the basilisk, it just proves that you keep a dark secret. If you say “okay, so let’s talk about the basilisk now”, that just proves that you actually care about the basilisk. If you say “okay, we kept talking about the basilisk for a long time, could we please stop now and return to the traditional LW topics?”, it is censorship. Essentially, anything other than talking endlessly about the basilisk makes some people unhappy.

            Then everyone gets bored. Then a few months later someone says “by the way, you remember Roko’s basilisk?” and some new person asks “uhm, what is Roko’s basilisk? I never heard about it”. And then the debate starts again. (For some reason, the fact that many people have never heard about the basilisk is not considered sufficient evidence that the rationalist community is actually not truly about the basilisk.)

            So the new round of debate starts… now!

          • Incurian says:

            In short, he was trying to herd a bunch of cats who were stricken with genuine terror that they were sinners in the hands of an angry god

            To clarify here, this is not just Deiseach’s normal religious hyperbole, it’s a really good description.

          • Sebastian_H says:

            Viliam. Your description of the basilisk incident seems mostly right to me, but a little too soft on Yudkowsky.

            Caveats: this is from memory since many of the relevant archives were deleted or are otherwise obscured.

            That said, the word I keep remembering Yudkowsky using regarding the basilisk is that it was “dangerous” to think about and “dangerous” to talk about. He also seemed super coy about what was dangerous about it. He didn’t often position it as something like a distraction/rabbit hole, which after a time led to the inference that he bought into the idea that a good AI might torture people who were inadequately committed to the cause (but only those who realized this incentive, because the rest, never having thought about it, wouldn’t be incentivized by the threat of torture).

            Now maybe he thought deep down that it was a dangerous DISTRACTION, but he was uncharacteristically less than up front about it if that was what he was thinking. Or if he mentioned that motivation somewhere he didn’t do his usual tactic of driving the point into the ground by repeating it a large number of times.

            So here is the speculative part. Revelation of priors–I was raised in one of the first evangelical mega churches so I have developed an allergy to cultish groups. And I mean ‘allergy’ in the ‘it is quite possibly an unhealthy OVERREACTION’ sense. But that said, as the basilisk thing developed, it struck me as exactly the kind of thing you would tell someone who is already a true believer in order to strike the fear of god into them and put them in their ‘place’. But if it got out into the curious believer community it would look horrible, so once that happened you want to shut down discussion of it WITHOUT directly disavowing it (because you don’t want to directly contradict yourself to the true believer(s) you used it against).

            Edit: from many years later, I see this quote from EY in one of his Harry Potter reddit threads:

            …a Friendly AI torturing people who didn’t help it exist has probability ~0, nor did I ever say otherwise. If that were a thing I expected to happen given some particular design, which it never was, then I would just build a different AI instead—what kind of monster or idiot do people take me for? Furthermore, the Newcomblike decision theories that are one of my major innovations say that rational agents ignore blackmail threats (and meta-blackmail threats and so on). It’s clear that removing Roko’s post was a huge mistake on my part, and an incredibly costly way for me to learn that deleting a stupid idea is treated by people as if you had literally said out loud that you believe it, but Roko being right was never something I endorsed, nor stated.

            Which is pretty close to saying that he doesn’t believe it. I don’t remember anything nearly that strong at the time though. And even here it isn’t quite the denial you’d want. My over-lawyerly sense doesn’t like “nor did I ever say otherwise” instead of “which is what I said at the time…” The talk of “particular design” is another thing like that, as no particular designs for AI are on the books even yet.

          • Rob Bensinger says:

            I think what happened is something along the lines of: (1) Eliezer was legitimately outraged that someone would deliberately spread a meme to their friends and acquaintances that they themselves thought was really harmful and dangerous; (2) he thought that as an argument against CEV-like approaches the post was just dumb, and it wasn’t a useful line of inquiry in the first place, so if this was triggering people’s anxiety or scrupulosity there wasn’t much reason to keep it around; and (3) he was confident Roko’s specific thing was wrong, but he hadn’t explored the space of similar ideas thoroughly enough to be super confident that all the similar ideas you could come up with if you groped around in the area were harmless. So as a matter of principle he wanted to err on the safe side and push back hard against LW even starting down that path, at least until he’d had a little more time to think about it and confirm his impression that the neighboring space of ideas is harmless.

            And then the whole thing backfired amazingly. It looks to me like an object lesson in ‘if it feels to you like the best strategy for lashing out at someone you’re personally annoyed by is also the best strategy for one or two unrelated goals, be skeptical of that first impression and try to pursue each goal separately’.

            So there’s a germ of real disagreement where people who aren’t living in AI-Safety-Land (and Newcomblike-Problems-Land) could reasonably find it baffling that this is even the kind of thing you’d want to hedge against in a “trying to take into account model uncertainty and not trust my own clever arguments” way. You need to already think that this is a qualitatively serious kind of topic to be thinking about; Szilard in 1935 hedging against model uncertainty on ‘my inside-view model says nuclear weapons won’t readily ignite the Earth’s atmosphere’ would be one thing, Szilard in 1935 hedging against model uncertainty on ‘my inside-view model says the Prime Minister of Hungary is not mind-controlled by aliens’ would be another.

          • Viliam says:

            he hadn’t explored the space of similar ideas thoroughly enough to be super confident that all the similar ideas you could come up with if you groped around in the area were harmless.

            Back then I suppose it was easy to imagine the following scenario:

            Imagine that Eliezer had instead completely calmly addressed the issue, done some mathematical magic, published a formal proof that the Basilisk would actually not work, and invited other people to do the peer review. (This would be the virtuous thing to do, wouldn’t it?)

            Two weeks later, someone would say: Okay, the proof used some mathematical argument X to prove that the Basilisk wouldn’t work as advertised. But I guess that could actually be fixed. Here is the new improved Basilisk 2.0, which does not suffer from X.

            Then, Eliezer or someone else would do some mathematical magic and publish another formal proof that Basilisk 2.0 also wouldn’t work, because of another mathematical argument Y.

            Two weeks later, there would be a Basilisk 3.0 proposal, fixing both X and Y. Followed by an article saying that even Basilisk 3.0 wouldn’t work, because of Z. Followed, two weeks later, by Basilisk 4.0, made immune against X, Y, and Z…

            Okay… one possibility is that this line of research would actually never lead to anything truly dangerous. All proposed Basilisks would fail, perhaps each for a different reason, but ultimately nothing bad would happen, only a few people would spend a lot of time doing unusual math. And maybe we would learn something new and exciting as a side effect.

            But, for the sake of argument, imagine that after a lot of research, Basilisk 9000 would finally turn out to be correct, after all possible problems were found, addressed, and fixed. How incredibly stupid would this whole endeavor seem in hindsight. So much effort spent… for what purpose exactly? To build a machine that will reliably torture you, and to achieve a few karma points on a web forum before that happens?

            Simply put, it’s like announcing a competition to build an Unfriendly AI. Maybe you think the probability of success is 0% anyway, and maybe you are right, but my point is that such a competition is stupid even if (and especially when) it succeeds in achieving its goal.

            And imagine trying to stop this process in the middle. Imagine that after publishing the research disproving Basilisk 5.0, Eliezer said: “Okay guys, your first attempts were rather naive, but now things are getting serious and people are using real math to address the challenge, and I just think that… it would be better if we simply stopped trying to make a more powerful Basilisk 6.0, because we might accidentally stumble upon a functional solution, and no sane person would actually want that… right?” What would most likely happen then? My guess is that two weeks later, Basilisk 6.0 would be published anyway. If not at Less Wrong, then simply somewhere else. The fact that Eliezer had objected to it would merely make the whole thing much more fun.

            By the way, “building a fully functional and mathematically proven Basilisk 9000” is not the only way this scenario could end badly. You could also make a design that is mathematically flawed, but seems convincing enough to most people. Or maybe the whole issue is complete nonsense, but talking about it would inoculate people against the whole idea of Friendly AI, forever. Or maybe just one really powerful and sufficiently paranoid person would get nervous, and decide that the safest solution is to make the whole MIRI team disappear. Or someone unrelated would start a literal cult of the Basilisk, and declare that all AI researchers who refuse to build it have to be punished (and maybe Basilisk 6.0 also promises to eternally torture people who won’t join the punishing). In short, the actually dangerous Basilisk does not even have to be mathematically correct, only easy to abuse for psychological manipulation.

            But the real emotional issue, I guess, is the shock reaction after you meet some very smart people and say “look, I have invented a gun, and this is how it works”, and one of them immediately takes the gun, puts it to his temple, and tries to pull the trigger. Even after the gun fails to fire, your faith in the sanity of even a very tiny proportion of humanity gets seriously damaged.

            EDIT: For those who believe that the idea of superhuman AI is nonsense in principle, just imagine any other area of research. For example, imagine a website teaching people about genetic engineering, where some smart person immediately publishes their prototype of a super-deadly virus, saying “hey, this is pretty cool, isn’t it? by the way, I am releasing it into the wild”, and expects a pat on the head for being a clever boy.

          • Jiro says:

            One of the reasons that the basilisk made LW look bad is that just the fact that people took it seriously, even to reject it, makes LW look bad. The ideas that you need to believe in order for the basilisk to even register as plausible are profoundly weird.

          • Eponymous says:

            One data point: I’m an economics professor, and Eliezer is the only non-economist I’ve encountered whose writings do not trigger Gell-Mann Amnesia in me. In fact, I once learned something new and important about my own field from him, though I doubt he recognized the significance of his own comment.

        • There is one real case nearly equivalent to the outsider providing a proof of P v NP–Ramanujan’s letter to Hardy.

          As a bit of evidence that one can, to some degree, identify the smart people and weight their views more heavily, I was corresponding with Robin well before he went back to school to become an official economist–because it was obvious that he was a smart guy who had an original and important idea in my field (idea futures). And I had identified Jimbo Wales as almost the only person on HPO (Humanities Philosophy Objectivism) worth arguing with many years before he started Wikipedia.

          I have a short list of people who, when they disagree with me, cause me to seriously consider that I may be mistaken.

          • Nick says:

            There is one real case nearly equivalent to the outsider providing a proof of P v NP–Ramanujan’s letter to Hardy.

            There are a couple more, actually: Wittgenstein, with Russell, and Walter Pitts, also with Russell. Cambridge mathematicians are just magnets for tragic lone geniuses or something.

          • rlms says:

            Another case (from Yudkowsky’s Facebook).

        • The Nybbler says:

          Well, Wiles did prove Fermat’s Last Theorem, and Appel and Haken gave their supremely unsatisfying proof of the Four Color Theorem not long before. So just because something has resisted solution doesn’t mean any given proof is wrong. But that’s the way to bet.

          • Sniffnoy says:

            Some of these examples are just… really bad. I can only conclude that I didn’t make my point very well. The point about P vs NP isn’t just that lots of people have tried to solve it and failed. I guess I focused on that particular fact; but I really was making a point about P vs NP and maybe a few other similar problems, because that particular problem stands out in other ways.

            Like, Wiles’s proof of Fermat’s Last Theorem is really not any sort of example here; in addition to Wiles not being any sort of outsider, he was following a well-studied approach that had had a lot of work done on it already. There’s not a lot there to indicate one should disbelieve it. By contrast, nothing like that really exists for P vs NP. The partial results on P vs NP are just so ridiculously far away — consider that proving P != NP would in particular separate P from PSPACE, and even that is laughably out of reach — and many were achieved using methods we know can’t work for P vs NP itself. To the best of my knowledge (this isn’t really my area) the only thing resembling a real “program” to prove P vs NP is probably geometric complexity theory, but even Ketan Mulmuley doesn’t expect that to yield something like P vs NP anytime soon.

            Appel and Haken, again, similar reasons. Not outsiders, not an exception. Yes, famous open problems do get solved; as I said above, “disbelieve solutions to most open problems” was not my point. P vs NP really is pretty exceptional.

            I only have a vague knowledge of Ramanujan’s letter to Hardy. That’s a definite outsider case obviously. But like P vs NP? I really, really doubt it. Nick’s cases I had to look up, those too are definite outsider cases, but neither seems at all like P vs NP (certainly not Pitts — pointing out errors is not like solving a famous open problem).

            The case of Thomas Royen is an odd one and I’ll get back to it in a moment. But basically we have a few different things going on here. I’d say at least — A: Is this person an outsider? B: Is this person a crank? C: Did this person claim to solve a famous open problem? D: Did this person claim to solve something on the order of P vs fricking NP, which I don’t think any of the other examples here have been?

            Obviously these are somewhat related. Cranks are (almost) always outsiders. (There are some professors out there who act like cranks, for sure.) Claiming to have solved a famous open problem while an outsider is something of a sign of crankhood. But note here that the thing about cranks is that it’s always the most famous problems that draw them. If someone claims to have proven P != PSPACE, you should generally ignore it, because that problem is basically on the level of P vs NP — but while I’d bet that the person is wrong, it’s a small bit of evidence against them being a crank, because a crank is more likely to go straight for P vs NP.

            So — let’s ignore the P vs NP thing. Because people have focused on that but I don’t think that’s a useful thing to focus on. Let’s focus on the case where there’s not something like P vs NP under consideration. Then the question becomes, how do you tell cranks from mere outsiders?

            As has already been mentioned, Appel, Haken, and Wiles are not outsiders at all. Ramanujan — definite outsider, good on Hardy for recognizing he wasn’t a crank. To me his stuff looks on the border, with hints both ways, though I’m probably not the best judge. Depends on the details, I suppose. (Worst case, if you’re in Hardy’s position, you can try to get one of your grad students to look at it for you… I’ve had to do that, with things that were substantially more crankish than Ramanujan’s letter.)

            Pitts, again, I don’t know the details, but a letter just pointing out errors doesn’t seem crankish at all. As for Wittgenstein, Russell apparently initially thought he was a crank but kept him on anyway for some reason.

            That leaves Royen. Not a full outsider but still basically an outsider for the purposes here. OK. But very little about Royen or his work seems crankish, just outsiderish. There’s one big red flag, though — publishing in a vanity journal. That seems distinctly crankish. I think I would correctly recognize Royen as not a crank… unless I knew in advance about the vanity journal thing. Then I would probably have mistakenly written him off.

            I’m hoping maybe now I’ve made my point a little clearer.

          • The Nybbler says:

            I mostly brought up FLT and 4-coloring because of your parenthetical “(And with P vs NP, even if someone who is an expert claims to have solved it, you can probably safely ignore it.)”. Yes, the eventual solvers were not outsiders.

        • Rob Bensinger says:

          I guess I mentally filled in my own way: Look and see if the arguments on one side are filled with what’s just, well, bad thinking. Even if you don’t know the area, if you know a thing or two about good thinking vs bad thinking, you may be able to pick out the correct side. This happens more often than it should, if you ask me. Of course, mind you, a lot of the time neither side is thinking particularly poorly and this technique won’t help you at all. But sometimes it does. Of course now I’m wondering if this really is what Eliezer had in mind, or if he intended something else…

          From ch. 4 of the book:

          I did not work out myself what would be a better policy for the Bank of Japan. I believed the arguments of Scott Sumner, who is not literally mainstream (yet), but whose position is shared by many other economists. I sided with a particular band of contrarian expert economists, based on my attempt to parse the object-level arguments, observing from the sidelines for a while to see who was right about near-term predictions and picking up on what previous experience suggested were strong cues of correct contrarianism.
          (Fn: E.g., the cry of “Stop ignoring your own carefully gathered experimental evidence, damn it!”)

      • Freddie deBoer says:

        My unfair hostile reading on the rationalist community is that it is chock full of people who make the same basic errors as math crackpots.

        • Ilya Shpitser says:

          I agree, and I think the specific selection bias mechanism involved here is “the type of person who would find something as inane as the sequences a life-transforming experience.”

          This is not entirely fair in the sense that the rationality community is also full of very nice, well-meaning, and often smart people I am quite sympathetic to, and in whom I find much to admire. But this is sort of the tragic flaw that unites them.

          • Rationalists and religious people have two different flawed ways of thinking, and math crackpots and conspiracy theorists take those two flawed ways of thinking and draw out their consequences to even more harmful conclusions.

            Consider Porphyry’s pretty reasonable objection to the idea of Jesus’s resurrection: why did he appear to just a few and only to friends, instead of to many and to his enemies? The way of thinking that leads to the conclusion that religious beliefs (as concrete claims about the world) are reasonable is fundamentally the same way of thinking that leads to the conclusion that conspiracy theories are reasonable. Of course the critical implication here is not infinitely strong: some conspiracy theories will turn out to be true, and some religious-like beliefs will probably turn out to be true. But the large majority of both are false.

            In a similar way, the rationalist error here would be overemphasizing personal ability or at least the abilities of their (rather local) tribe, and this is fundamentally the same way of thinking that leads to math crackpots. Again the critical implication is not infinitely strong. Nonetheless, the implications of this way of thinking are mostly false; as I said in another comment, it is certainly true that Yudkowsky and his associates are smarter than average. But it is *certainly false* that they are extra super-duper smarter than average, and Yudkowsky surely believes this falsehood.

            These might be even more closely associated than I suggested, in the sense that both kinds of errors (the religious and rationalist) are an overemphasis of an Inside View to the neglect of an Outside View. Still, they are not exactly the same thing.

            I called this the fair way of putting things because it is not literally true that religious people are (as such) conspiracy theorists, and similarly not literally true that rationalists are math crackpots. But the roots of those errors are there.

        • I think the fair way to put this would be rationalists:math crackpots::religious people:conspiracy theorists.

    • Glass Merchant says:

      Mathematics has to be literally the least applicable field for the phenomenon.

    • Walter says:

      How nice of these non-credentialed folks to share the real value of pi with you. I hope you thank them appropriately.

    • Barr says:

      The things math departments do to protect their status are limitless. I’ve tried to get help on significant problems from various math departments and gotten nothing back, even in cases where the problem could have been solved trivially with their help. The vast majority of academics, despite being publicly financed, are precisely useless to society outside of teaching.

      • Jiro says:

        What makes you think the problem could have been solved trivially with their help? (Bearing in mind that you aren’t the only person who asks them such problems, and answering 100 trivial questions can be a lot of work.)

  7. Fossegrimen says:

    Funny about the timeshare thing. I know several people who have bought the things, enjoyed them a lot for a decade or so, and then sold their shares again for around 3% annual ROI. I had no idea they were supposed to be a con.

    • ManyCookies says:

      Nice try RCI.

    • Tarpitz says:

      I’m familiar with the trope, but my Dad bought into one getting on for twenty years ago, over which time various family members have used it for various pleasant holidays. I have no idea how much he paid and could perfectly well believe it was poor value for money compared to AirBNB or what have you, but I don’t think his experience could reasonably be described as getting scammed.

  8. ShawnSpilman says:

    Can the actual book possibly be even half as entertaining as your review of it, Scott? I can’t help wondering, though, whether your choice to make an example of (long defunct) Bear Stearns was intentional.

    • poignardazur says:

      I’ve read a few chapters, and yeah, it’s entertaining. The guy did write HP:MoR. The book isn’t as easy to read as Scott’s review, but it has a lot of non-superfluous insights and information and examples. The bits with the three characters talking to each other are funny.

  9. poignardazur says:

    I think you should read Inadequate Equilibria. Given that I am a well-known reviewer of books, clearly my opinion on this subject is better than yours. Further, Scott Aaronson and Bryan Caplan also think you should read it. Are you smarter than Scott Aaronson and Bryan Caplan? I didn’t think so. Whether or not your puny personal intuition feels like you would enjoy it, you should accept the judgment of our society’s book-reviewing institutions and download it right now.

    Wow. That’s the first time someone used the “I’m more competent than you, now do this” card on me and it sort of worked. (on the other hand, I was mostly planning to read this before this review came out)

    I was really excited about the preview chapters laying out the questions. Too bad the answers aren’t all that great (assuming your judgment is accurate).

    • Deiseach says:

      Whether or not your puny personal intuition feels like you would enjoy it, you should accept the judgment of our society’s book-reviewing institutions and download it right now.

      Given that my last foray into “reviewing a recommended book” ended up with me being temporarily banned for being a horrible nasty meanie, I think not 😉

    • Sebastian_H says:

      I’m pretty sure that was tongue-in-cheek.

  10. Edge of Gravity says:

    Remember how WhatsApp succeeded in the crowded field of instant messaging in 2009? What market inefficiency/inadequate equilibrium did they exploit? How did they manage? Do similar opportunities still exist?

    • Iain says:

      Quora’s answers seem pretty reasonable here.

      • Edge of Gravity says:

        It doesn’t answer the question of how they found the free energy everyone else missed, despite looking very hard. Yahoo Messenger, MSN Messenger, Skype, Google Chat and others were already in that space, and yet…

          • pelebro says:

            @userfriendlyyy
            I’m quite pissed at that article. It missed the biggest difference between Signal and Telegram: Signal is FOSS, which alone makes it more secure, and means you can commission an independent security audit (or carry one out yourself, if you have the time and expertise) if you want. They also mention that Signal is not anonymous, but Signal already says the same; this is public record. No other VoIP service that I’m aware of is anonymous, I’m not even sure it’s technically possible, and in any case it would be way less convenient. What Signal does is protect the content of your messages, only that (and it does so over insecure channels; it does not matter if its servers are provided by Amazon or are otherwise compromised). In the end I agree with the “military contractors” like the EFF (wtf?): that article is just attempting to undermine public trust in encryption.

    • Eponymous says:

      If there is a market niche where it makes sense for there ultimately to be one dominant player, then when that market niche first appears there will be at least (probably exactly) one company making a play for that space that will (ex post) be wildly successful. The hard part is identifying which one will win ex ante.

      See also search engines and internet retailers in the late 90s, social networks ~2003, PC operating systems in the 80s, etc.
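
      A toy sketch of that ex ante/ex post gap, assuming N indistinguishable entrants in a winner-take-all niche — N, the seed, and everything else here are made-up illustration, not data:

      ```python
      import random

      # Toy model (all numbers assumed): a winner-take-all niche with N
      # lookalike entrants. Ex post exactly one wins; ex ante, naming the
      # winner is a 1-in-N shot.
      random.seed(0)
      N = 20
      trials = 10_000
      hits = sum(random.randrange(N) == random.randrange(N) for _ in range(trials))
      print(f"Named the eventual winner in {hits / trials:.1%} of trials (expected ~{1 / N:.0%})")
      ```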

    • benwave says:

      The principle of the book is not that it’s impossible to win in a crowded field – after all, Somebody must.

      Does WhatsApp resemble a $20 bill on the ground at Grand Central Station? It surely most resembles a $20 bill at a bank window. It was possible to pick it up and have it, but not without doing the work to earn it. Put together the team, get the funding, do the development, do the marketing, purchase the servers, make it secure, etc. etc.

      For another useful analogy, compare the work of a hunter-gatherer (not onerous) to that of a farmer (onerous). It’s always* possible to make more food by working hard at the limit of the technology, capital, and organisational forms available to you. Gathering free energy is always desirable, not always possible.

      *yes, yes, alright, not always.

  11. Benito says:

    Super fun review!

    I found this part to be the biggest disappointment of this book. I don’t think it grappled with the claim that the Outside View (and even Meta-Outside View) are often useful. It offered vague tips for how to decide when to use them, but I never felt any kind of enlightenment, or like there had been any work done to resolve the real issue here. It was basically a hit job on Outside Viewing.

    Conversely, I found the book gave short but excellent advice on how to resolve the interminable conflict between the inside and outside views – the only way you can: empiricism. Take each case by hand, make bets, and see how you come out. Did you bet that this education startup would fail because you believed the education market was adequate? And did you lose? Then you should update away from trusting the outside view here. Et cetera. This was the whole point of Chapter 4, giving examples of Eliezer getting closer to the truth with empiricism (including examples where he updated towards using the expert-trusting outside view, because he’d been wrong).
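
    To make that bet-and-update loop concrete, here’s a minimal sketch in Python — the education-startup domain, the Beta prior, and every number in it are illustrative assumptions of mine, not anything from the book:

    ```python
    # A toy sketch of the bet-and-update loop: track how your
    # outside-view bets resolve in one domain, and let a Beta prior over
    # "the outside view calls this domain correctly" move with the record.

    def outside_view_trust(wins: int, losses: int,
                           prior_a: float = 1.0, prior_b: float = 1.0) -> float:
        """Posterior mean of Beta(prior_a + wins, prior_b + losses):
        estimated chance the outside view is right in this domain."""
        return (prior_a + wins) / (prior_a + prior_b + wins + losses)

    # You bet against three education startups ("the market is adequate,
    # they'll fail") and two of them succeeded anyway:
    print(outside_view_trust(wins=1, losses=2))  # ~0.40 -> trust it less here
    ```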

    You quote “Eliezer’s four-pronged strategy”, but I feel like his actual proposed methodology was in chapter 4:

    Step one is to realize that here is a place to build an explicit domain theory—to want to understand the meta-principles of free energy, the principles of Moloch’s toolbox and the converse principles that imply real efficiency, and build up a model of how they apply to various parts of the world.

    Step two is to adjust your mind’s exploitability detectors until they’re not always answering, “You couldn’t possibly exploit this domain, foolish mortal,” or, “Why trust those hedge-fund managers to price stocks correctly when they have such poor incentives?”

    And then you can move on to step three: the fine-tuning against reality.

    This is how you figure out if you’re Jesus – test your models, and build up a track record of predictions.

    You might respond “But telling me to bet more isn’t an answer to the philosophical question about which to use” in which case I repeat: there isn’t a way a priori to know whether to trust experts using the outside view, because you don’t know how good experts are, and you need to build up domain-specific skills in predicting this.

    You might respond “But this book didn’t give me any specific tools for figuring out when to trust the experts over me” in which case I continue to be baffled and point you to the first book – Moloch’s toolbox.

    Finally, you might respond “Thank you Eliezer I’d already heard that a bet is a tax on bullsh*t, I didn’t require a whole new book to learn this” to which I respond that, firstly, I prefer the emphasis that “bets are a way to pay to find out where you’re wrong (and make money otherwise)” and secondly that the point of this book is that people are assuming way too quickly the adequacy of experts, so please make more bets in this particular domain. Which I think is a very good direction to push.

    • Deiseach says:

      Hmm – the “Three guys think they’re Jesus, naturally they’re nuts” only works because we can say “But none of these guys can be Jesus!” (given that we’ll ignore the Second Coming). So reasoning “66% of the Jesus claimants are wrong therefore I may/must be wrong too” only works in that case.

      Now say we have three guys claiming to be Scott Alexander 🙂 Now, in this case, it is not impossible for there to be a real guy called Scott Alexander. One of them, therefore, may be the real Scott Alexander. If a psychiatrist reasoned “66% of the guys claiming to be Scott Alexander are wrong, therefore the third guy may/must be wrong too”, they would be incorrect. And it would be very unfair to take the fact that someone claims insistently, and keeps on claiming even after being shown that dazzling proof, “But I am the real Scott Alexander!” and use that as the basis for “yeah, plainly he’s nuts, send him up to the locked ward”.

      At the very least, look at his driver’s licence and see if the name matches the face before deciding he’s a fruitcake 🙂

      • youzicha says:

        Of course we checked the driver’s license, we’re careful psychiatrists.

        Afterwards he kept screaming “No no, you don’t understand! It’s my middle name!” and got really antagonistic, so we gave him a sedative and committed him.

    • Wrong Species says:

      Unfortunately, not everything can be resolved with a bet. If I want to figure out whether String Theory is right, there aren’t any predictions I can make to test it anytime soon.

  12. arancaytar says:

    at the level where using the Outside View hurts them rather than harms them

    (I’m assuming the “harms” should be “helps” in that line)

    • RandomName says:

      Corrections thread? I think this line “So of everyone who believes their religion as fervently as I do, at least 30% are wrong.” is wrong. Shouldn’t it be 70%? No religion has more than 30% representation, so no matter which one is right, at least 70% of people are wrong.

      Also, Scott uses “Fuck you” twice in this review. I don’t particularly mind and thought it was funny, but it struck me as unusually snappy for him.

      • spinystellate says:

        I noticed the same thing (it should be 70%, not 30%). But I Ctrl-F’d through the comments and only saw that one other person noticed this, and SSC readers are usually pretty smart and observant, so the Outside View says that RandomName and I are wrong.
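
        A quick sketch of the arithmetic behind the correction — the population shares below are rough stand-ins I made up, not real data; only the 30% cap matters:

        ```python
        # If the largest religion holds at most 30% of fervent believers,
        # then whichever one is right, at least 70% of fervent believers
        # are wrong. Shares below are rough stand-ins.
        shares = {"largest religion": 0.30, "second": 0.25, "third": 0.20, "all others": 0.25}
        assert abs(sum(shares.values()) - 1.0) < 1e-9
        wrong_at_minimum = 1 - max(shares.values())  # best case: the biggest one is right
        print(f"At least {wrong_at_minimum:.0%} of fervent believers are wrong")  # 70%
        ```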

  13. limestone says:

    This is surely an interesting read, but I think it belongs to epistemic rationality, not instrumental rationality.

    1. Try to spend most of your time thinking about the object level. If you’re spending more of your time thinking about your own reasoning ability and competence than you spend thinking about Japan’s interest rates and NGDP, or competing omega-6 vs. omega-3 metabolic pathways, you’re taking your eye off the ball.

    This.

    It’s not just that object-level thinking is important because it contains all the actual reasoning about the problems and their solutions. It’s also that errors in object-level thinking are best resolved at the object level too. Suppose you have researched some problem and found a solution that experts think isn’t going to work. What is the best course of action: to determine whether your status is high enough to doubt their conclusions, or to actually evaluate their arguments against your solution at the object level? Clearly it’s the latter.

    Of course checking a belief on the object level takes more effort than making a simple status comparison, but if you really care about the correctness of that belief, it is usually worth it. If not, well, then it doesn’t matter anyway.

    Example 1: One of your scientific interests is Somemathguy’s conjecture. One day, you get an idea about how it might be proved. But then you think “well, hundreds of mathematicians have tried to solve this and failed, and I’m just a grad student”. So you forget about it and get back to your usual life.

    Example 2: One of your scientific interests is Somemathguy’s conjecture. One day, you get an idea about how it might be proved. So you write it down and show it to some of your math-savvy friends. One of them points out an error in your proof. Turns out, you had incorrectly understood one of the math results you used in your derivations. Now you know a bit more about Somemathguy’s conjecture.

    Example 3: One of your scientific interests is Somemathguy’s conjecture. One day, you get an idea about how it might be proved. So you write it down and show it to some of your math-savvy friends, and they fail to find an error. Then you ask your math professor to take a look at it, and he fails to find an error too. Then you publish it on the internet, and people there also can’t find any errors. So you submit it to a scientific journal, and, after a long process of review, you get your just reward for proving Somemathguy’s conjecture.

    Example ?: You feel like you are an above-average driver. But you know there are surveys saying everyone believes they’re above-average drivers. So you ponder over how you can actually evaluate your driving performance for a few minutes. Then you realize you don’t actually care about your rank among drivers as long as you can safely drive from point A to point B, so you go back to your daily business.

    Example ?!: You are in a psych ward and firmly believe that you are Jesus. Since you have gone too far down the schizoid path to think straight, neither object- nor meta-level thinking is going to be of much help. However, after a while a doctor appears, injects some haloperidol or whatever, and things start to get better.

    In essence, I don’t think that “meta-level” reasoning is actually any good in practice, except for glaringly obvious cases (“here is my 3-line proof of P != NP”), or when you need a really quick judgment heuristic.

    • Mr Mind says:

      Thank you for this. It is important to remember that there’s no cognitive process that in isolation can get us closer to the truth, and that this must be the central pillar of Rationality.

      Also: example 2 is literally how I pursue mathematics. I have an online document called “crazy ideas”, full of sketches of ideas that get patched or rejected as I find out more about the topic.

      “Mirror calculus: do quantum mechanics on the natural numbers with spans on simplex.” Nope, the symmetrizator of the simplex category is not bifunctorial.

      “Combine monoidal computation and quantum category.” Already been done, it’s called the Deutsch-Turing machine.

      And so on.

  14. OptimalSolver says:

    So if we say something like “John has never taken a math class, so there’s not much chance that his proof of P = NP is right,” are we really implying “John isn’t high-status enough, so we shouldn’t let him get away with proving P = NP; only people who serve their time in grad school and postdoc programs should be allowed to do something cool like that”?

    Well from the physics side, when uncredentialed cranks aka geezers in garages are almost always so off the mark that they’re “not even wrong,” you tend to adjust your priors accordingly. If someone without a physics background builds a working cold fusion reactor in their basement, and it’s subsequently confirmed by multiple distinguished labs, I will absolutely give credit where credit is due.

    I know Eliezer doesn’t believe that.

    How do you figure? From my reading of the sequences, I got the impression that this is exactly what he believes.

    • B_Epstein says:

      How do you figure? From my reading of the sequences, I got the impression that this is exactly what he believes.

      I share the sentiment. It is in line with EY frequently being excessively dismissive of academia. A good example can be found in this very post, with the two towers* story. Phrases like “magically slices four years off your lifespan” don’t give an impression of an objective, rational attitude. Somewhere between all the lectures on status and prestige, I seem to recall a course or forty on math, physics, CS etc. Is it conceivable that this magical tower gives one more than just a diploma for signalling purposes, on occasion? Perhaps the best students attend the best schools for non-Hansonian reasons?

      • Protagoras says:

        Yes, some people certainly learn things in college. EY seems to put too much weight on the fact (which seems incontrovertible to me) that they could have learned those things in cheaper and (perhaps!) quicker ways, while ignoring the (equally incontrovertible) fact that they probably wouldn’t have.

        • B_Epstein says:

          Is it incontrovertible that they could have reliably learned those things cheaper and (perhaps!) quicker, on a comparable level?

          It does not strike me as incontrovertible, and therefore isn’t : )

          • Protagoras says:

            I don’t know that we disagree; it sounds like you’re referring to part of the same vague constellation of issues I meant to be alluding to when I said that they probably wouldn’t have.

        • Wrong Species says:

          One disadvantage of formal education is that people generally care more about graduating than learning, meaning they are more likely to forget something. Most high school students learn about mitosis in biology. How many of them still remember ten years later?

          • eyeballfrog says:

            Presumably the ones that go into a biology-related field do. The ones who don’t, well, it’s probably not that big of a loss.

          • suntzuanime says:

            I think more of this sort of education sticks around than most people think. Yeah, if you were asked to pass a biology test after twenty years you wouldn’t be able to do it, but if you for some reason had to care about mitosis and were motivated to relearn about it twenty years later, you would find parts of it familiar and you would have a much easier job relearning it. And if somehow mitosis were relevant to something you were doing later, you might at least remember the name, which would help in looking it up.

        • poignardazur says:

          As a student at a French hybrid college/coding-boot-camp, I’d definitely say there’s free energy here we’re not picking up, because of status games and hard-to-align incentives (teaching students vs. just having them show up in class, etc). It’s getting better, and the entire standard education system shouldn’t be thrown away, but there are a lot of things in it that should be.

  15. Strawman says:

    There should be a counter keeping track of how many people have downloaded Inadequate Equilibria, so future readers know how to properly discount reading it as evidence of above-average rationality.
    Of course the Demiurge or some other sufficiently capable adversary could set up a botnet to artificially inflate the number of downloads, which would give rise to fun anthropic problems of its own; “Ho hum, looks like fifteen billion people have read the book already, by a conservative estimate at the very least half of these are enemy spambots, and the greater I think this fraction is, the stronger I can expect having read the book myself to be an indicator of cognitive prowess, if I were a human reader, however, the more likely it is that I might just be an enemy spambot instead…”

  16. deluks917 says:

    People who are not planning to retire soon should hold almost all stocks*. Though you should probably buy index funds, not individual stocks. Wealthfront seems to agree: if you set your risk score to the max (which you should if you are not retiring soon), they put you in 90% stocks, 5% municipal bonds, 5% natural resources (not sure what they buy for NR). I think buying and holding a reasonably diversified set of individual stocks is worse than indexing but better than putting money into bonds. If you believe in the EMH, you should do OK buying and holding a portfolio of random stocks.

    *I am intentionally ignoring the question of more speculative investments such as bitcoin.

    • actinide meta says:

      Theory says you should hold something like the world wealth portfolio, plus be long or short a risk-free asset. (If some investors’ behavior is distorted by taxes or irrationality or something, that should change your behavior slightly.) A stock index fund is by far the easiest and cheapest *part* of this portfolio to buy, but only about $80 trillion out of $250+ trillion of world wealth. So to some extent it’s looking for your keys under the streetlight. I don’t know why the global finance industry hasn’t managed to synthesize a nice liquid way for retail investors to hold the rest – this seems like a trillion dollar bill on the sidewalk if anything is.
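
      A back-of-envelope sketch of what naive market-cap weighting over all world wealth would look like, using the $80T-of-$250T figure above — the split of the remaining $170T across asset classes is pure assumption on my part, not data:

      ```python
      # Market-cap weighting over all world wealth, not just equities.
      # Only the $80T / $250T totals come from the comment; the breakdown
      # of the other $170T is an illustrative assumption.
      world_wealth_tn = {
          "public equities": 80,
          "real estate": 100,        # assumed
          "bonds and credit": 50,    # assumed
          "private business, commodities, etc.": 20,  # assumed
      }
      total = sum(world_wealth_tn.values())
      for asset, size in world_wealth_tn.items():
          print(f"{asset:>38}: {size / total:.0%} of the world portfolio")
      # Equities land near 32%, nowhere near the 90% in a max-risk robo
      # allocation -- that distance is the "streetlight" point above.
      ```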

  17. Wency says:

    This is a slight quibble, but Scott appears to have a misconception about what investment banks do, as many non-finance people seem to. The investment banking business as such consists of underwriting and advising, not investing (or banking). This is somewhat confusing because nowadays most of the large investment banks do basically everything: commercial banking, investment banking, trading, institutional asset management, individual wealth management/brokerage. Smaller firms are usually more focused. Though some large firms (e.g. Fidelity) are pure asset managers.

    This is analogous to the oil & gas business, where large firms like Exxon or Shell do basically everything (exploration, production, pipelines, refining, retail) but smaller firms tend to have a narrower focus, only doing maybe 1-2 of those. Calling Exxon a refiner while talking about exploration and production would be incongruous.

    Of course, thanks to regulations (e.g. Glass-Steagall), different functions were kept split up in the financial industry for much longer than in oil & gas.

    All that to say I’d replace the phrase “investment banks” in the post with something like “money managers”, which is what Scott is really talking about.

  18. weareastrangemonkey says:

    “But also, it asks: how things stay bad in the face of so much pressure to make them better? It highlights (creates?) a field of study, clumping together a lot of economic orthodoxies and original concepts into a specific kind of rational theodicy . Once you start thinking about this, it’s hard to stop, and Eliezer deserves credit for creating a toolbox of concepts useful for analyzing these problems.”

    From reading the review only, none of this sounds new. It seems to be more a layman’s summary of subfields in microeconomic theory. There are plenty of papers on every single one of the above points. That doesn’t mean the book is not valuable; such summaries, by the way in which they collate the concepts, can be useful to layman and expert alike. However, from the review, it really does not sound like Eliezer deserves credit for creating a toolbox of concepts for analyzing these problems or for raising new kinds of problems. People started thinking about this a long time ago, have developed a toolbox that contains these concepts as a strict subset, and have definitely not stopped working on them.

    • poignardazur says:

      EY absolutely addresses that in the book. I don’t remember exactly what he said, something like “As far as I’m aware, economists are aware of these notions, but there’s no study of them as a coherent field” (I think it’s in the Moloch’s Toolbox chapter)

      • Rob Bensinger says:

        Yeah, the first few chapters in particular are meant to be very standard, hence Eliezer nicknaming this perspective “conventional cynical economics.” From Ch. 1:

        If you want to outperform—if you want to do anything not usually done—then you’ll need to conceptually divide our civilization into areas of lower and greater competency. My view is that this is best done from a framework of incentives and the equilibria of those incentives—which is to say, from the standpoint of microeconomics. This is the main topic I’ll cover here.

        From Ch. 2:

        I am now going to introduce some concepts [i.e., inexploitability and adequacy] that lack established names in the economics literature—though I don’t believe that any of the basic ideas are new to economics. […]

        Since the idea of civilizational adequacy seems fairly useful and general, I initially wondered whether it might be a known idea (under some other name) in economics textbooks. But my friend Robin Hanson, a professional economist at an academic institution well-known for its economists, has written a lot of material that I see (from this theoretical perspective) as doing backwards reasoning from inadequacy to incentives. If there were a widespread economic notion of adequacy that he were invoking, or standard models of academic incentives and academic inadequacy, I would expect him to cite them. […]

        There’s a whole lot more to be said about how to think about inadequate systems: common conceptual tools include Nash equilibria, commons problems, asymmetrical information, principal-agent problems, and more.

        • weareastrangemonkey says:

          Note my comment was in response to Scott’s particular claim – that this is a new set of problems and tools. It sounds like Eliezer is being more modest. Until I have read the book I cannot say for sure, but “inadequate” sounds a lot like “inefficient” – especially in Scott’s description. “Exploitable” sounds like an adjective version of “arbitrage opportunity”. I wonder if he bothered to ask Robin; it reads like he didn’t. I just hopped over to Robin’s blog, and it is clear that if Yudkowsky did ask, he ignored the answer — Robin said pretty much what I said:

          there exist what we economists call “agency costs” and other “market failures” that result in “inefficient equilibria” (which can also be called “inadequate”).

          It’s much easier to come up with new ideas and concepts if you don’t cite the prior literature.

          • Rob Bensinger says:

            Robin and other economists reviewed the book prior to publication, and the book talks quite a bit about EMH efficiency and Pareto efficiency. Adequacy in the sense Eliezer defines it in the book can be understood by analogy to EMH efficiency, so if you’re just writing a quick blog post about the topic, I think it would be fine to say “here’s an idea I’ll call ‘efficiency’, by analogy to EMH efficiency”. For a book-length treatment of the same idea, Eliezer thought it was a better idea to explicitly distinguish the concepts to reduce confusion/conflation.

  19. Deiseach says:

    But central bankers are mostly interested in prestige, and for various reasons low money supply (the wrong policy in this case) is generally considered a virtuous and reasonable thing for a central banker to do, while high money supply (the right policy in this case) is generally considered a sort of irresponsible thing to do that makes all the other central bankers laugh at you.

    I think that’s a bit cynical (and I’m inclined to be cynical myself). Central bankers and other people presumably also try to do their job properly, and not making the economy blow up is part of that. These Japanese central bankers also presumably did not decide for themselves that low money supply is virtuous; I imagine David Friedman can explain this a lot better, but I’d assume there are business schools teaching economics students that this is the correct thing to do, providing the proofs in class (and the reasons why schools of economics that say high money supply is good are all nuts).

    So the Japanese central bankers were not alone operating on “I want to be considered the Alpha Dog of Central Bankers”, they were (justifiably) concerned about “HIGH MONEY, ECONOMY COLLAPSES: You did a stupid thing everyone always says not to do, you predictably failed and destroyed our economy, fuck you (-10)”.

    Because even if the economy collapsing had nothing to do with picking the high money supply option, everyone would blame them anyway, and blame them for “a stupid choice even a first year economics student knows is the wrong one!” People want simple answers when their money is suddenly worth nothing and their house is in negative equity and now they are up to their eyeballs in debt, and the obvious scapegoats are “who are the guys who were supposed to be looking after the economy? You central bank guys? What did you do to let this happen? YOU WHAT? THAT’S A DUMB MISTAKE EVEN A FIRST YEAR STUDENT KNOWS IS WRONG, AND YOU GUYS ARE SUPPOSED TO BE EXPERIENCED BANKERS AND ARE PULLING DOWN HUGE SALARIES!”

    And every economist in the world will be writing opinion columns and giving interviews to TV and radio about how yes, the Japanese central bankers made the stupidest elementary mistake, and this is the proof that it was wrong because of this economic theory which is backed up by this evidence.

    If you are not entirely sure how to fix an economy and you certainly don’t want to pick the choice that will make it blow up, and everyone says “option B is the right option”, then I think you don’t have to be a mere prestige seeker to choose option B.

    • A Definite Beta Guy says:

      You’re right. I haven’t followed Eliezer’s monetary economics too closely, but I vaguely remember him mentioning Scott Sumner. Sumner has been pushing the argument that the Western economies have had too restrictive economic policy, which CAUSED the Great Recession (as opposed to the financial crash being the principal cause), and has mentioned Japan in the past.

      But here is also what Scott Sumner said:
      http://econlog.econlib.org/archives/2014/09/four_things_i_b.html

      1. The Great Recession was caused by tight money at the Fed, and other major central banks. Period. End of Story.

      2. Fed policy almost never strays far from the consensus view of professional economists.

      3. To prevent the Great Recession, Fed policy would have had to stray far from the consensus view of professional economists.

      4. Ergo, professional economists (as a group) caused the Great Recession.

      As Sumner points out, as of 2014, 40% of US economists thought monetary policy was too stimulative. I don’t know what Eliezer thinks about the US specifically, but I suspect he agrees with Sumner (and myself) that this view is totally, completely, 100% incorrect. Like “2+2=5” wrong. The logic is the same logic that suggests Japan was wrong not to print more money.

      So it’s not that Central Banks are just running prestige-maximizing operations. A large number of the experts are wrong.

      Keep in mind, though, that the basic reason for the failure of economists here is more or less spelled out above: there is no real incentive for them to be right.

      I’d say the economics profession has shifted a lot more in the “print more money” direction.

      EDIT: A complicated problem, because I am 90% confident that the majority of these 40% of economists are smarter than me, and have more subject matter expertise…and I still think they are totally wrong.

      • Deiseach says:

        the Western economies have had too restrictive economic policy, which CAUSED the Great Recession (as opposed to the financial crash being the principal cause)

        To be fair, at least in Ireland, some of our banks were just taking the piss. See the Seán Quinn and Anglo-Irish Bank saga, in which a prominent wealthy businessman is encouraged to buy buy buy shares in the bank; he does so through a tangle of companies and complex financial instruments; now that he’s a big shareholder and director he treats the bank like his personal piggy-bank and takes out huge loans from it to buy more shares and invest in his property development empire; eventually the entire house of cards collapses (but not before yet more vast loans in an allegedly illegal scheme are arranged by the directors of the bank to buy back these shares and create the appearance that it is in good shape) and he goes (again allegedly) from a personal fortune of €4 billion to “I have to sign on for Social Welfare”, even as his family members tried shifting assets overseas (e.g. by buying property in Russia) to move them out of the bank’s reach.

        In a proper democracy, there would have been a revolution and a few exemplary tumbrils to the guillotine to sort all this out. We simply knuckled under to our government taking out huge debts to prop up the banks and bondholders, and engaged in eight years of hairshirt economics. Though he did spend some time in jail – a whole nine weeks! – and his son and nephew both got three months’ jail sentences (though the nephew went back over the border to Northern Ireland rather than turn up for jail and has not been imprisoned yet), so that is some recompense (note use of sarcasm here).

    • limestone says:

      If you are reasonably sure that high supply is better for the economy, and decide against it because you fear becoming a scapegoat in case something goes wrong, then you are in fact prioritizing your prestige over doing your job properly. Perhaps expecting people to stick with the right choice in situations like these is too high a standard, but still.

      • Deiseach says:

        If you are reasonably sure. If you’re not, or not convinced enough that you are right and the conventional wisdom is wrong, then I don’t think it’s quite fair to accuse you of merely being motivated by prestige.

        I also think it’s harder to pick the “right choice” than presented; hindsight is great for vindicating or condemning a decision. This is like Scott’s election post and the rain in Philadelphia. If the Japanese central bankers had picked Yudkowsky’s solution and the economy still tanked, everyone would be going “Well, they should have known! Everyone knows what you do in a situation like this!” When they tried option A and it didn’t work, so then they tried option B and it did work, I think that saying “Plainly, option B was the right one all along and the guy who was saying, in the teeth of all the opposition, to pick option B is the genius here” overstates the matter. If option A had not worked and option B had not worked either, would we still be claiming “Yudkowsky is a genius for picking option B as the obvious solution”? I don’t think so!

    • Eponymous says:

      Ben Bernanke basically did write about how dumb the Japanese were in the 90s. Or look at Paul Krugman lately. A huge number of academic economists correctly predicted that the Euro would be a huge disaster. I could go on and on.

      So why do they keep doing it? Beats me.

  20. Subb4k says:

    Hi, I posted a long comment on why I was annoyed at the dialogue part of the book, drawing obvious parallels with Galileo’s “Dialogue Concerning the Two Chief World Systems”. A few minutes later I edited the conclusion, which was not very well formulated, and the comment disappeared. Is it because I accidentally used a banned word (I didn’t mention any of the banned topics AFAIK, but I don’t know how large the banned vocabulary is)? If so, can Scott retrieve it and censor whatever is deemed problematic? Or is it lost forever unless I retype it?

    Or I guess it could also be that I accidentally misclicked and self-reported or deleted it.
    EDIT: At least self-reporting your comment doesn’t immediately destroy it, contrary to what I thought. Sorry for creating additional work this way; I actually expected a prompt to confirm the report (seeing that prompt would have helped me determine whether I had already clicked through it by mistake).

    • Aapje says:

      It’s gone. Use Lazarus, it has saved me many times. If you use Firefox, this may work.

    • beleester says:

      The Report button is, AFAIK, still broken. It pops up “Cheating, huh?” and then does nothing.

      • entobat says:

        I don’t know what either of us was expecting, but I clicked “report” on your comment and it seems to have worked.

        Um, sorry.

      • Harry Maurice Johnston says:

        I ran into that a while back. Try logging out of the site (click your logon name at the top right-hand corner of the page and select Log Out) and then back in.

    • Subb4k says:

      Actually, it’s probably not such a tragic loss anyway: I checked, and most of the things I was saying have already been mentioned in this comment on Kolmogorov Complicity a month ago, and are expanded on in the blog post linked to in the following comment.

      In very short: Galileo was a great scientist, but the Dialogue is bad science argued in bad faith, and it shouldn’t be emulated or held up as an example/inspiration.

    • Deiseach says:

      I posted a long comment on why I was annoyed at the dialogue part of the book, drawing obvious parallels with Galileo’s “Dialogue Concerning the Two Chief World Systems”.

      You would not believe how interested I am to read this comment! Having a dialogue between “Simpleton who states Conventional Opinion” and “Wise Guy (ahem) who states Contrarian But Correct Alternative” is a long-standing staple of this kind of work, and naturally Galileo used it (and stepped on a lot of toes by doing so), so the fact that Yudkowsky used it is appropriate enough. What did you object to or dislike? And if it turns out to be “stacking the deck just like Galileo, with Simpleton being an idiot and stating Obviously Wrong stuff that not alone was no longer in common use but was being debunked even as he wrote”, I will cheer all the way through reading 🙂

  21. The problem isn’t Eliezer assuming that he and his readers are smarter than average. That is certainly true. The problem is that he assumes that he and his readers are extra super-duper smarter than average. That is certainly false.

  22. mobile says:

    Libertarians influenced by Hayek (but not just libertarians) identified a fourth entry point of evil into the world, which is the converse of the second way: when expert local knowledge can’t trickle up to decision makers. From this, you get things like the Socialist calculation problem (corollary: Socialism is evil) or community redevelopment schemes that tear down vibrant and mostly-functional neighborhoods to be replaced with something even more Molochy.

    • Mr Mind says:

      Which is also the whole point of “Seeing like a state”.

    • vaniver says:

      The book’s ontology has these in the same category of “asymmetric information.”

      Cases where the decision-makers can’t reliably learn the information they need to make decisions, even though someone else has that information

  23. Jiro says:

    I understand the impetus. Eliezer was concerned that smart people, well-trained in rationality, would come to the right conclusion on some subject, then dismiss it based on the Outside View.

    I understand the impetus differently. Eliezer has unusual ideas about AI danger, cryonics, etc. which are rejected by scientific experts. Eliezer wants to explain why he’s correct on these things and the experts are wrong, and why you should listen to him instead of the experts.

    Be careful of arguments that are couched in generalities but which are really there to support a specific set of positions that goes mostly unsaid.

      • Jiro says:

        Some experts do believe in something which can be called AI risk, but how does that connect to Eliezer’s idea of AI risk (and his estimate of how serious it is)? I don’t see wholesale endorsements of MIRI or anything else which indicates approval of Eliezer’s ideas specifically.

        Furthermore, AI risk is just the tip of the iceberg. Experts generally consider cryonics to be bunk, and do not consider physicists who reject many worlds to thus be incompetent.

        • Wrong Species says:

          Eliezer is low status, so regardless of the merits (or lack thereof) of MIRI, they probably won’t get much funding. OpenAI has more backing.

      • actinide meta says:

        To be fair, it was when he started preaching it.

      • Rob Bensinger says:

        AI danger isn’t rejected by experts.

        Before going too far into this argument, keep in mind that Eliezer uses his views on AI as one of the first examples of the disagreement/outperformance/deference issue on page 1 of the book. AI comes up frequently in the book, and gets discussed at length in the cut chapter.

    • sty_silver says:

      Not kind. Is it true? AI being dangerous / not dangerous seems to be about an even split right now (with a clear tendency over time of more people believing it is dangerous). AI being very dangerous does not seem to be an outlandish idea either (Bill Gates, Nick Bostrom, Elon Musk, Stuart Russell, every organization worrying about X risk I have found while researching this topic for the first time). It is certainly true that the number of people who have not publicly said they think it can destroy the world is larger than the number of people who have, but I think the bar for truth of the above statement is much higher.

      It has also not been my impression that “experts generally consider cryonics to be bunk”. Are there sources for that?

      • John Schilling says:

        EY’s views on AI risk go well beyond “AI is dangerous”, and his specific strategy for alleviating that risk is AIUI far from the consensus view of the professional AI research community. And even w/re the simple statement “AI is dangerous”, he holds that view with far greater confidence than the expert consensus and he evangelized that view even when expert consensus didn’t consider AI risk to be worth talking about. So there is a definite track record of being confidently opposed to expert scientific consensus on this issue.

        Add in cryonics and MWI, and the fact that MIRI pays his salary, and it is I think necessary to point out that EY has a motive to advance the view, “you shouldn’t always trust expert consensus; here’s why you should trust someone like me on issues like this”.

        • sty_silver says:

          Does your post contradict anything in my post?

          I agree that it is necessary, in the sense of being relevant. I argued it might be neither kind nor true. I also agree that the hypothetical motive is internally consistent.

          The sense I got from people evaluating MIRI’s research vs other organizations is that its non-replaceability is a strong point in its favor. This was also part of the reasoning behind the recent grant from the Open Philanthropy Project.

          While the balance of our technical advisors’ opinions and arguments still leaves us skeptical of the value of MIRI’s research, the case for the statement “MIRI’s research has a nontrivial chance of turning out to be extremely valuable (when taking into account how different it is from other research on AI safety)

  24. RC-cola-and-a-moon-pie says:

    Man, am I torn about this book. I’m really interested in the subject. But it’s written by Yudkowsky, and I have such a strange conflicting view about his writing. I’ve seen plenty of really enthusiastic praise of him by people here, who are quite smart and reliable in their judgments. But when I went and started reading the famous “Sequences,” I was disappointed in what seemed to me to be pedestrian ideas presented in a very arrogant and off-putting way. (Perhaps the nadir of the part I made it through was an essay in the form of a dialogue between Yudkowsky and someone at a dinner party, where Yudkowsky reports that he demolished the guy with devastating arguments, such as citing the Aumann theorem in response to the guy trying to extricate himself from the conversation with a request to “agree to disagree.” The essay ends by recording a woman coming up to Yudkowsky and telling him how amazing he had been.) I’m sure this sounds like a knock on Yudkowsky but it may well just be a knock on me. The “outside” view would seem to indicate that my reaction was unwarranted; it’s highly unlikely that a lot of really smart people would be enthusiastic about an author unless there was real merit there. It’s fascinating ruminating on why my reaction differed so deeply from that of many others. If only someone would write a book telling me how to reconcile these two perspectives.

    • Jiro says:

      The “outside” view would seem to indicate that my reaction was unwarranted; it’s highly unlikely that a lot of really smart people would be enthusiastic about an author unless there was real merit there.

      “A lot of really smart people” is an illusion. The number of smart people who like him (or have even heard of him) is a tiny percentage of the total number of smart people in existence. It just looks large because when you have the entire population of the Internet to choose from, even just a few people looks large.

    • Ghatanathoah says:

      I think Scott’s essay on Non-Expert Explanation is pretty relevant to this:

      My own version of this experience was reading Eliezer Yudkowsky’s A Human’s Guide To Words, which caused a bunch of high-level philosophical ideas to slip neatly into place for me. Last week David Chapman wrote about what was clearly the same thing, even centering around the same key example of whether Pluto is a planet. A Gender Studies major I know claims (I can’t confirm) that the same thing is a major part of queer theory too. But Chapman’s version and queer theory don’t make a lot of sense to me; I was able to understand the former only because I already knew what he was talking about, and I have to take any statements about the latter on pure faith. On the other hand, nobody else seems to have found Guide To Words as important as I did; I don’t see paeans to it all over, nobody’s offering Eliezer any Nobel Prizes. It was a perfect fit for where my mind was at that moment – but there are probably a hundred other versions equally objectively good, some of which don’t even realize they’re versions of the same thing.

      It happened for a lot of people; Eliezer is very good at presenting ideas people did not yet understand in a way that helped them understand. You weren’t one of them, but that’s fine. You probably heard the same ideas from somewhere else. When I read Eliezer I got about a 50-50 mix of “I heard this before” and “This idea is amazing! It feels as if the scales have fallen from my eyes!” He seems to have had an even larger impact on other people.

      It might also be a stylistic thing. You seem to find arrogance off-putting. I find it hilarious when it’s being directed at someone other than me. For people like me, Eliezer’s style adds humor and zest to spice up his ideas. For people like you it turns you off to the whole essay.

    • Nornagest says:

      it’s highly unlikely that a lot of really smart people would be enthusiastic about an author unless there was real merit there.

      Proves too much. Enough people are really enthusiastic about young-earth creationism, or Dianetics, or Tony Robbins, that a lot of them are probably smarter than me. Doesn’t make any of those topics worth reading about.

      Ultimately I don’t have a good way of discriminating, though. I share Scott’s skepticism towards Eliezer’s criteria.

      • Luke the CIA Stooge says:

        I’ve never read any Tony Robbins, but I’ve known a lot of people in sales and politics who seemed to really get into him.
        My suspicion is he’s saying really meaningful things if you’re in a field where your ability to succeed depends on your ability to interact with more people, in person, better and faster, and his advice is pretty much useless if your success isn’t a function of your ability to build your personal confidence and personal network.

    • carvenvisage says:

      The bleggs and rubes stuff could seem pretty ‘inane’ if you already understand it, but people aren’t born understanding it, and I found it quite entertaining. I remember the title of that set of essays was ‘a human’s guide to words’; that’s about right for the level of how advanced it was. (Or perhaps ‘a primate’s guide to words’.) I have no doubt some people figure out most or all of this stuff when they’re 4 years old, but it’s not most people.

      Then there’s also stuff like the one about Elijah and the priests of Baal, which I just found hilarious/delightful. I forget if there were any insights to it; my memory is ‘holy shit, this is so good, lol’, the same way as with MS Paint Adventures (Problem Sleuth), or G.K. Chesterton’s writing, or (to a much greater extent) stumbling across TVTropes for the first time.

      Between those two (separate) things, I think that covers a lot of the main appeal it had for me.

      >I’m sure this sounds like a knock on Yudkowsky but it may well just be a knock on me.

      No, that is a pretty unforgivable near-first impression.

      I was willing to write off things like what you mentioned as childish exuberance because I’d enjoyed his other writing so much; I didn’t bat an eyelid at stuff like that. He’s a big kid, got a lot of energy, and that’s why he makes such great stuff.

      But there are a couple of instances of less innocent stuff I couldn’t write off like that, and I had to conclude the guy has a serious blind spot and/or is a bit cracked in the head. Like the throwaway rape (world-building) line from Three Worlds Collide… I guess the guy must have read Heinlein’s seniles too early in life.

      _

      edit: I think part of it might be that he has a kind of ‘fuck this, rationality is easy lul, let’s Get-Good’-ish attitude, which I find very cool in general, and especially as an antidote to views of life embodied in (but not exclusive to) some religious doctrines of doom to eternal failure of the soul – that the highest aspiration is forgiveness for the continuing spiritual crimes one will inevitably commit, for one is only human.

      ‘Cool in general’ in that it has an (Ayn Rand style) vibe of ‘life is supposed to be glorious. We are put here to transcend, not even merely to rise-to-meet. Certainly not to be bogged down in a mire.’ That *without the baggage of Ayn Rand* is a winning combination, so you get it with just a few crazy spots, and that alone (content entirely aside) is pretty great.

      So part of what I found refreshing about his writing is surely its exact opposite view of life to certain zeitgeist-flows people get trapped in.

      (See the lyrics of this song for a possible illustration of the kind of ‘vibe’ I mean https://genius.com/Tim-mcmorris-music-my-life-lyrics.)

  25. balrog says:

    I remember a pub talk where one economist told me that several years ago the Japanese did in fact try primary emission (money-printing) hoping to cause inflation. And it failed!

    As a complete economy noob: does your “high money” mean something other than “a high amount of cash flowing”, or was I either too drunk or being outright lied to in the pub?

    • Salem says:

      The Japanese did run large fiscal deficits while printing money, which are sometimes termed “helicopter drops.” But whenever inflation started ticking upwards, they stopped printing money and raised interest rates. This was because of their incorrect belief that monetary policy works with “long and variable lags.” They were worried that if they didn’t stop printing money when inflation was still low that it would overshoot their target. If they had correctly realised that monetary policy works in part through communication of expectations, they would have understood that their actual policy was:

      * Loose money whenever inflation below 0%.
      * Tight money whenever inflation above 0%.

      In other words, an effective inflation target of 0%. And lo and behold, that is exactly what they achieved, all the while complaining terribly about it!
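
      A toy simulation of this asymmetric rule (all dynamics and numbers are invented for illustration; Python) shows how it pins average inflation near zero no matter what target the bank announces:

          import random

          random.seed(0)
          inflation = 2.0          # starting inflation in percent (made up)
          history = []

          for month in range(240):
              # the implied rule: loosen only when inflation is negative,
              # tighten the moment it turns positive
              stance = 0.3 if inflation < 0.0 else -0.3
              inflation += stance + random.gauss(0, 0.2)  # policy effect plus noise
              history.append(inflation)

          # after a burn-in, average inflation sits near zero: a de facto
          # 0% target, whatever target was announced
          print(sum(history[60:]) / len(history[60:]))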

      • Wrong Species says:

        Market Monetarism always seemed like magic voodoo to me. If I’m understanding correctly, if you have two countries with identical economies, but one of them has a central bank that people believed was credibly committed to boosting inflation and the other didn’t, the former would have higher inflation, regardless of other economic factors. I’m not saying it’s wrong, but it just seems really strange to say that whether a country experiences a recession or not depends upon the words of an individual.

        • Swimmy says:

          If you have two countries, and in one of them financial markets believed that the central bank was credibly committed to boosting inflation and in the other one they didn’t, then the two countries probably don’t have identical economies. Because Market Monetarists believe that the market is right. If the people in country 2 don’t believe that the central bank is committed to boosting inflation, it is probably because there’s a lot of evidence that the central bank is not committed to boosting inflation. If that were an arbitrary or incorrect belief, there’d be low-hanging fruit in the financial market.

          Provided the market is allowed to function properly, it should incorporate all of the evidence available on the path of monetary policy. Policymaker public remarks are included in that evidence, but the market presumably weights the believability of those statements.

          • Wrong Species says:

            Take two countries, let’s say Australia and the United States. They aren’t equivalent in their economies but they are both wealthy, first world countries. The Australian Central Bank announces they are cutting interest rates by a quarter percent. The Federal Reserve does the same. Based solely on people’s beliefs, we could say that Australia has a loose monetary policy and the US has a tight one. That’s the magic voodoo to me.

        • David Hume had a good example to illustrate how this works. Look at everybody’s monetary wealth today. If you give each person money equal to what they currently hold, then before too long prices will all double.

          If you credibly promise to give them that same amount after one month then people will spend down their savings in anticipation. There are liquidity constraints that mean that prices won’t quite double until the money arrives, but you’ll get most of the way there.

          If you give them the money now but people have to return it in a month or face execution then prices won’t move very much. Sure, people who are liquidity constrained will be able to spend a bit more and prices will go up a bit. But mostly people will hoard the money for the end of the month and nothing much will happen despite the money supply doubling.

          Hopefully that can give you an intuition about how credible promises can have an effect.

          • Deiseach says:

            If you give them the money now but people have to return it in a month or face execution then prices won’t move very much.

            Parable of the Talents:

            He also who had received the one talent came forward, saying, ‘Master, I knew you to be a hard man, reaping where you did not sow, and gathering where you scattered no seed, 25 so I was afraid, and I went and hid your talent in the ground. Here, you have what is yours.’ 26 But his master answered him, ‘You wicked and slothful servant! You knew that I reap where I have not sown and gather where I scattered no seed? 27 Then you ought to have invested my money with the bankers, and at my coming I should have received what was my own with interest. 28 So take the talent from him and give it to him who has the ten talents. 29 For to everyone who has will more be given, and he will have an abundance. But from the one who has not, even what he has will be taken away. 30 And cast the worthless servant into the outer darkness. In that place there will be weeping and gnashing of teeth.’

        • Salem says:

          We accept the impact of expectations on every other price, why shouldn’t it be true for the price of money?

          Suppose you have two identical tech companies (save for their boards). Both announce a bold new move into quantum computing. Board A has a habit of announcing bold new moves, then backing off them a month later. Board B sticks to such announcements. The share price of A and B will not react in the same way to the news, and there is nothing strange about this.

          The share price of a company today isn’t just affected by actions taken today, but the expectation of all future actions. The sum of those future actions is far larger than anything but the most extreme present-day action, so expectations about those future actions are critical to setting the current price. The CEO of A may well be disappointed that the market didn’t react to his announcement, but – assuming he follows through! – that reaction is delayed, rather than denied. If A and B both set up equally successful quantum computing divisions, the total market reaction to both will be equal, just that the market reaction to B is more front-loaded as people trust the board. It’s in this sense that Friedman was right that monetary policy works with long and variable lags – the gap between initial action and shifting expectations is unpredictable.

          The same is true of the price of money.

    • A Definite Beta Guy says:

      I would consult this post:
      http://www.themoneyillusion.com/?p=9404
      Basically, the amount of money Japan printed was not ever sufficient to get the growth they wanted. The BoJ set a target of no inflation, so whenever they printed enough money to start sparking inflation, they stopped printing more.

      The idea “we printed more money and we didn’t get the growth we wanted” is misleading. There are two components, supply and demand. You only get the growth you want if supply outpaces demand. If you are only printing enough money to offset the increase in money demand, you will not see NGDP growth, regardless of whether you increase the money supply by 10%, 50%, 100%, or 1000%.
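
      In equation-of-exchange terms (M·V = P·Y, a standard identity; the numbers below are invented), printing money that merely offsets a fall in velocity leaves NGDP flat:

          # MV = PY: nominal GDP is money supply times velocity
          M, V = 100.0, 2.0          # made-up starting money supply and velocity
          print(M * V)               # 200.0 -- starting NGDP

          # print 50% more money, but money demand rises so velocity
          # falls by a third: NGDP is unchanged despite the larger supply
          M2, V2 = M * 1.5, V * (2.0 / 3.0)
          print(M2 * V2)             # 200.0 -- supply merely offset demand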

    • balrog says:

      Thanks for links.

  26. b_jonas says:

    > Given that I am a well-known reviewer of books, clearly my opinion on this subject is better than yours.

    No it’s not. You’re probably biased to see this book as better than it is, because you’ve already read a lot of Eliezer’s writings and like them. Also, your prestige as a reviewer is suffering because you haven’t yet reviewed *Soonish* by Kelly and Zach Weinersmith, even though its topic is one that matches the theme of your blog, and so you should review it.

    Though to be fair your blog automatically gets credit with me, because unlike most blogs on the internet, yours has an easily reachable public page with the full table of contents of all blog entries. Thank you for that.

    • Wrong Species says:

      I’m pretty sure Scott was joking.

    • Deiseach says:

      you haven’t yet reviewed *Soonish* by Kelly and Zach Weinersmith

      But his prestige goes up with me because he has not reviewed this book, since I am irrationally prejudiced against husband-and-wife ‘cutesy take on topic’ book writing teams, hence his not reviewing a book by a web comic cartoonist and his missus validates my opinion and flatters my vanity as to my good taste and good judgement.

      So which of the three of us (Scott, b_jonas, me) are joking? Are we all joking? None of us joking?

      • poignardazur says:

        Wait, it’s a genre?

        • Deiseach says:

          I think I’ve seen more than one example, though I may be vastly prejudiced by the “David and Leigh Eddings” books, which seemed to degenerate in quality starting around the time the missus got a writing credit on the cover (to be fair to Mrs Eddings, the fact that Mr Eddings was re-writing the same story over and over again with the same plot, events and characters, only changing the names, contributed to the degeneration). A lot of them seem to be in the self-help genre or this kind of quirky self-publishing effort.

          But I’ve successfully blanked most of the “husband and wife” style productions from my memory, so I can’t name any names for you! 🙂

  27. moridinamael says:

    I find that I’m confused about your statements about Google stock. And I don’t mean that in the passive-aggressive sense where I really mean “I think you’re confused” – I legitimately am not sure if I’m right or wrong.

    Alice thinks, with high certainty, that self-driving cars are the next big thing, and that Google’s market capitalization is going to quadruple in the next five years. Alice also thinks that the majority of the “smart money” doesn’t grasp this fact. Or the smart money may suspect it, but they have lower confidence than Alice. Or the smart money has ten thousand other potential investments on ten thousand other time horizons that they prefer to buying Google today.

    As far as I can tell, it is perfectly allowable by the inviolable cosmic laws of the Invisible Hand that Alice could be exactly right, could purchase Google stock today and could make a lot of money in five years.

    Google stock can be “priced right” today and that says nothing about the price of Google after Google has completed their Dyson sphere in a few years.

    In perhaps a more simplified reductio, if the Invisible Hand is so frickin smart, why does the S&P 500 grow at all? What moron would leave money on the table by selling an S&P 500 ETF? Ever? They’re leaving money on the table! And the answer is that you may sell that ETF if you think you have a better bet elsewhere, on a different time horizon.

    • A Definite Beta Guy says:

      Alice thinks, with high certainty, that self-driving cars are the next big thing, and that Google’s market capitalization is going to quadruple in the next five years. Alice also thinks that the majority of the “smart money” doesn’t grasp this fact. Or the smart money may suspect it, but they have lower confidence than Alice. Or the smart money has ten thousand other potential investments on ten thousand other time horizons that they prefer to buying Google today.

      How is Alice able to determine that Google’s market cap will quadruple? It’s not just a question of engineering, but a question of finance. I.e., Alice might think Google will successfully deploy a self-driving car with 90% probability, while smart money thinks it is 85% probability, so Alice theoretically has arbitrage. But maybe the smart money is correctly discounting the stock because they think the self-driving car will be immediately nationalized and Google will receive no value for it, and Alice hasn’t even considered that possibility.

      Or consider Tesla, which is a black hole where billions of dollars disappear, never to be seen again. Sure, they have some decent technology, but that doesn’t translate into profits. If Alice is investing in Tesla because she thinks the cars are really good, she’s using the wrong metric to evaluate an investment.

      Why should Alice trust her views on the engineering more than the smart money, anyways? Goldman Sachs has enough money to hire engineers.

      Or the smart money has ten thousand other potential investments on ten thousand other time horizons that they prefer to buying Google today.

      This one doesn’t work. American capital markets are deep and liquid. This kind of market failure won’t exist for a company like Google, where everyone and their brother has money to invest. We’re in a savings glut.

      • The answer to your question is that Alice should assume the market has correctly priced everything about Google except the one thing where she disagrees with the consensus view. She should try to guesstimate how important that one thing is and, if it is important and positive and she is confident, buy the stock.

        That was the basis on which I made several successful investments a very long time ago. The first was a little after the Macintosh came out. I was a professor in the Tulane business school and mentioned to a colleague that I was thinking of buying a Mac. He asked why I didn’t buy a PC Jr. instead.

        I thought about what that question implied. The machines were about the same size, but the Mac was using a Motorola 68000 instead of an Intel 8088 – a much more powerful CPU which, if I remember correctly, was previously used mostly for multi-user machines. The reason it was using that CPU was that it needed it to run a graphic interface. I had been using a computer (a superclone of the TRS-80) for years at that point, and had seen a film on the Xerox PARC work with graphic interfaces.

        I concluded that the graphic interface was very important and that my colleague was probably a fair sample of the ignorance of almost everyone investing in the market, so I bought Apple stock. I later bought Microsoft stock due to a related argument that also turned out to be correct.

        I don’t have a large enough sample of such bets to be confident that I am correctly evaluating my expertise, but I have enough so that if I come across another situation where I think my judgement is likely to be better than that of the market on a particular important point I will be willing to bet on it.

        • A Definite Beta Guy says:

          You need a larger sample size to prove you can successfully beat the market, unfortunately. It’s your money, so feel free to invest as you wish. There’s certainly the chance you have some unique insights that let you outperform the market…it’s just not very likely.

          The biggest issue I have with your scenario is using your colleague as an example of the typical investor. That seems like a big leap, and also most Apple ownership these days is by institutional investors. So the median investor is a wealth manager kind of guy, not a mom&pop investor.

          • and also most Apple ownership these days is by institutional investors. So the median investor is a wealth manager kind of guy, not a mom&pop investor.

            That was about 1984, not “these days.” And a wealth manager then was not likely to be much better informed about computer interfaces than a professor at a top business school.

      • Doesntliketocomment says:

        This is the first time I’ve ever heard that the US, which as of the moment has a personal savings rate of 3.2% and has been importing money right and left to finance capital improvement, is in a “savings glut”. I also think you are incorrect in your belief that Goldman Sachs or any other institutional investor seeks expert opinions for every move it makes.

        • A Definite Beta Guy says:

          Importing money IS the capital glut. If we had a capital scarcity, we would have rising interest rates. Instead, cheap cash is pushing interest rates down. For economies at peak employment, like we probably are now? I don’t think it’s ever been so cheap to borrow.

      • Goldman Sachs has the ability to hire engineers, but there’s friction involved, and also sometimes it’s hard to know that you need to hire an engineer without first hiring that engineer.

        If you actually have an uncommon expertise in a field, that’s exactly the sort of non-public information that can allow you to expect excess returns, on average, in some particular case. But for all the reasons you point out it doesn’t guarantee returns. And if the topic has been in the news, like self-driving cars have, then Goldman Sachs has probably already hired the engineers, so only take this approach for important topics the public is not aware of.

    • FoxLisk says:

      Google stock can be “priced right” today and that says nothing about the price of Google after Google has completed their Dyson sphere in a few years.

      That’s explicitly called out several times in the book. Market pricing isn’t long-term correct right now; it’s short-term correct. And in a few years, it will be short-term correct again for that time period. Eliezer did not claim (and nor, I think, does anyone) that market prices today are correct forever. They’re just really, really close to correct for the next couple of months, a gigantic fraction of the time.

      So Alice can stand to make money on the market given that she has correct long-term knowledge.

      • Pdubbs says:

        Market pricing isn’t long-term correct right now; it’s short-term correct. And in a few years, it will be short-term correct again for that time period.

        It’s helpful to think of the current price of a stock as a timeseries prediction. Error in the prediction grows as a function of time, so the current price says more about the price tomorrow than the price next week, and more about that than next month, and so on.
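
        A minimal sketch of that intuition, assuming prices follow a simple random walk (the daily volatility figure is made up): the spread of simulated outcomes grows roughly with the square root of the horizon.

            import random, statistics

            random.seed(1)

            def price_after(days, start=100.0, daily_vol=0.01):
                """Random-walk path: each day moves by ~1% Gaussian noise."""
                p = start
                for _ in range(days):
                    p *= 1 + random.gauss(0, daily_vol)
                return p

            for horizon in (1, 5, 21, 252):   # a day, week, month, year
                runs = [price_after(horizon) for _ in range(2000)]
                print(horizon, round(statistics.stdev(runs), 2))
            # stdev rises ~sqrt(horizon), so today's price pins down
            # tomorrow far more tightly than next year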

      • A Definite Beta Guy says:

        I’m not sure how to conceptualize the difference between long-term correct and short-term correct when it comes to the Efficient Market Hypothesis. Would it be okay for someone to post a passage from the book that mentions this?

        • I haven’t read the book, but I think the EMH implies that the current price is the best estimate available from current public knowledge of the expected value of the long-term price. As time passes more public knowledge becomes available, hence the expected value changes.

          For a trivial example, consider a bet on a flip of a coin–I get a dollar if it is heads, nothing if it is tails. Before the coin is flipped the expected value of that bet is fifty cents. After it is flipped and before the money is paid out it is either zero or a dollar.

        • ADifferentAnonymous says:

          See chapter 1:

          If I had to name the single epistemic feat at which modern human civilization is most adequate, the peak of all human power of estimation, I would unhesitatingly reply, “Short-term relative pricing of liquid financial assets, like the price of S&P 500 stocks relative to other S&P 500 stocks over the next three months.” This is something into which human civilization puts an actual effort.

      • Antistotle says:

        I think that a better way of thinking about it is “stocks are priced according to the market’s belief based on available information”

        The thing is that most information we hear about a company generally has actionable timelines of a quarter to a year. Two, tops – especially for a technical company.

        There are companies out there we don’t hear much about, because they’re not in a fast-moving industry, that ARE priced on the decadal scale: for example, in the power generation market, things change on generational (no pun intended) time scales.

        Their price is based on that sort of information.

  28. ProntoTheArcherist says:

    I don’t understand how Outside Reasoning is a pure mistrust of one’s own reasoning – it seems based on observation of other reasoners. In which case, it’s more of a “what am I missing? I should probably investigate further before doing anything rash”, as that often turns out to be exactly the case. “Might not have all the info” isn’t the same to me as “might be wrong”.

    • cmurdock says:

      Isn’t that the video that suddenly and inexplicably argues for biblical literalism near the end (e.g. “the nephilim couldn’t have been alien-human hybrids because they were actually literal nephilim who really existed”)?

      • Alex Zavoluk says:

        I’ve watched it all more than once, and I don’t think the author argues for biblical literalism. He claims Ancient Aliens argues for biblical literalism, in the sense of the events actually occurring, but being caused by aliens rather than God, angels, Satan, etc.

        Along the way, he points out that AA misrepresents how the bible and other ancient texts describe these beings, in order to make them sound more like aliens. One example is that the bible describes nephilim as being physical beings, but AA claims that angels do not have a physical form, and so the bible was not describing angels. This is of course silly; our notion of angels might not have physical forms, but there’s no reason the bible, as written, cannot have angels with physical forms.

  29. Gobbobobble says:

    For example, Eliezer and his friends sometimes joke about how really stupid Uber-for-puppies style startups are overvalued. The people investing in these startups are making a mistake big enough for ordinary people like Eliezer to notice. But it’s not exploitable – there’s no way to short startups, so neither Eliezer nor anyone else can make money by correcting that error. So it’s not surprising that the error persists. All you need is one stupid investor who thinks Uber-for-puppies is going to be the next big thing, and the startup will get overfunded. All the smart investors in the world can’t fix that one person’s mistake.

    The same is true, more tragically, for housing prices. There’s no way to short houses. So if 10% of investors think the housing market will go way up, and 90% think the housing market will crash, those 10% of investors will just keep bidding up housing prices against each other. This is why there are so many housing bubbles, and why ordinary people without PhDs in finance can notice housing bubbles and yet those bubbles remain uncorrected.

    Is this a legal problem? If money could be made shorting startups or houses, but no one is doing it, isn’t that too a $20 bill lying on the ground?

    Surely there are startup founders out there confident enough to take a “You give me $X now, but if you’re [profitable/valuation doubles/still in business] in 3 years I’ll give you $X*Y” deal. With houses the incentives get murkier, but you could probably arrange something with the mortgage providers.

    I’m definitely the proverbial bio freshman when it comes to big-f Finance – anyone up to explaining why the $20 bill isn’t real?

    • BillyZoom says:

      There are a few ways to short the housing market, although you can’t short a single residence. RMBS, or residential mortgage backed securities, are packages of residential mortgages that have been securitized. These RMBS can be shorted if you are an institutional customer. In this case, you’d call your fixed income securities salesperson at your favorite bank, and communicate what you want to short. You’ll need to have a margin account with them, have some collateral to post, etc., but this is something that is widely done by (usually hedge fund) money managers specializing in fixed income. This has historically been mostly done on agency RMBS, that is, RMBS issued by Freddie/Fannie. The non-agency market is smaller.

      You can also buy credit default swaps linked to RMBS. This is how John Paulson made $6bn in 2008. The CDS market for individual RMBS has dwindled significantly since then; however, there is a fairly robust market for CDS on RMBS indices, primarily those provided by Markit.

      Other providers, such as ICE (Intercontinental Exchange, which now owns the NYSE), provide residential mortgage indexes on which futures can be traded.

      Banks and other financial institutions also have ETFs which track housing prices. Vanguard, iShares and Barclays all have such a product.

      There are also residential REITs (real estate investment trusts) that focus on various subsectors of the housing market. They are all focused on rental properties.

      While none of these is explicitly shorting housing prices, they each provide a way to profit from changing home values, based on the assumption that when housing prices go down, default rates will increase (and pre-payment rates will decrease), and conversely that when prices rise, default rates will decrease (and pre-payment rates will increase). REITs tend to focus on rental income.

      The reason why you can’t short specific houses is simply that delivery is basically impossible and there isn’t any real price discovery methodology. That is, when shorting a stock, the person you sell it to can demand delivery. But since you don’t own the stock, you can’t deliver it. So you must go on the open market, buy the stock, and give it to them.

      This works not at all for houses. You can’t force someone to sell you their house, and the transaction costs are huge. So “delivery” really only works for liquid, fungible things. And with no delivery, you don’t have shorting.

      More generally, for any market, you need two opposing beliefs between the buyer/seller, and a way for adjudicating between those beliefs. For individual residences, this doesn’t exist.

      Hence all the workarounds/products described above.
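
      The underlying mechanics, reduced to arithmetic (all numbers hypothetical): a short is borrow-sell-rebuy-return, which is exactly the step that fails when there is nothing fungible to redeliver.

          def short_pnl(sell_price, buyback_price, shares, borrow_fee):
              """Borrow shares, sell now, buy back later, return them.
              Profitable only if the price falls by more than the fee."""
              return (sell_price - buyback_price) * shares - borrow_fee

          # short 100 shares at $50, cover at $40, paying $75 to borrow
          print(short_pnl(50.0, 40.0, 100, 75.0))   # 925.0
          # if the price rises instead, losses are open-ended
          print(short_pnl(50.0, 60.0, 100, 75.0))   # -1075.0
          # for a house there is no fungible unit to borrow and redeliver,
          # hence the indirect routes above (RMBS, CDS, index futures)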

    • add_lhr says:

      As Matt Levine (who sometimes reads SSC, I think) often says, the way to short the start-up market is to *found* a start-up yourself, raise a bunch of money from gullible investors, and enjoy the ride while it lasts. You and a few dozen of your closest friends can enjoy quite a comfortable few months / years drinking craft beer in your fancy new office by doing this with the right type of vaporware. As long as you mostly believe in your crazy concept and don’t make any actual misrepresentations, it’s possibly not even illegal…

      • John Schilling says:

        If [X] can be shorted by writing a check and calling a broker, while shorting [Y] requires years of at least part-time effort and the better part of your professional reputation, then I think [Y] ends up overvalued compared to [X].

  30. Freddie deBoer says:

    Eliezer has never seemed like a happy person, to me, which makes me wonder why some people are so devoted to making him their guru.

    • eyeballfrog says:

      I thought everyone switched over to making Scott their guru.

      • Freddie deBoer says:

        I’m just glad I read this review. It’s good and I’m not smart enough to read the book itself.

        • Scott Alexander says:

          This seems like a good place for the Outside View – you’re a PhD with expertise in statistics who’s also a beloved online writer. The chance that you’re not smart enough to read a book aimed at a popular audience is approximately nil.

      • quanta413 says:

        What eyeballfrog said. Except the switching part. I don’t think I would have ever found Eliezer a good choice for a guru.

    • suntzuanime says:

      Some of us have interests other than pure hedonism.

    • Nancy Lebovitz says:

      I don’t think Eliezer has specialized in teaching people how to be happy, he’s specialized in teaching people how to get important things right. It sounds like he’s not making the promise you’re looking for.

      I’m not sure whether there’s a definitive Yudkowsky essay about people being very different from each other, but meanwhile this post and its comments are an excellent introduction to the subject.

    • poignardazur says:

      You know you really sound like a jerk, making blanket statements like this?

      There are a lot of people in this community, with a lot of varying opinions on EY ranging from grudging respect to thinking he’s incredibly smart, and virtually none of them have the kind of unconditional devotion that would be accurately described as “making him their guru”; you’re just gratuitously insulting people.

  31. Freddie deBoer says:

    I guess my question is why anyone would ever expect an evolved organism to ever achieve anything beyond the bare minimum necessary to continue to propagate the species. Expecting much more than that seems like theology to me.

    • Wrong Species says:

      I’m not sure what your point is. We obviously are doing more than the bare minimum to survive or we would be nothing more than bacteria.

      • Freddie deBoer says:

        I mean brute force natural selection seems to me to suggest that we should expect human systems to be closer to “not so unfit as to prompt a mass die-off” than to “perfectly fit.”

        • Uncorrelated says:

          If the environment is competitive enough then “not so unfit as to prompt a mass die-off” might be a very high level of fitness.

        • I don’t follow the argument. Natural selection isn’t mainly functioning at the species level, which is what you seem to assume, but at the individual level.

          If my species is well enough adapted to its niche not to disappear, it’s still the case that if I am a little better adapted than the average I will have more surviving descendants, my heritable characteristics will increase in frequency in the gene pool, and the average of the species will go up. That will keep happening as long as there are changes that can be made that will increase the reproductive success of the individual who has them.

        • quanta413 says:

          David Friedman’s answer is pretty good for what I think your question is, Freddie, but maybe with more elaboration on why you expect that, I could help answer?

          In a certain sense your top-level comment isn’t wrong. Outside of making copies of themselves, biological organisms may not excel at one easily measured trait to the extent human-engineered systems do. It would be theological to expect a bird to develop the speed of a jet engine. Or a land animal the lifting power of a heavy crane. Or the brain to be an example of a correctly functioning inference engine.

          Reinforcing David Friedman’s point, natural selection is only partly about surviving a hostile (and changing) physical environment, the biotic environment (other organisms) is more important in a lot of senses which leads to David Friedman’s comment. Other members of your species are the biggest competitors for food, reproduction, etc.

          As a side note, organisms tend to be fit enough that mass die-offs (which I’ll take to mean mass extinctions) at the many-species level actually tend to be relatively uncorrelated with how well adapted a species is to its environment just before the mass die-off.

          Back to the main thread at hand: it ranges from imprecise to totally wrong to talk of “perfectly fit”, so maybe the problem is we’re all thinking of “perfectly fit” as being in totally different places in terms of how much an organism is reproducing and how adapted it is to the environment? And thus we’re not even discussing the same thing.

          From what you are writing, I can’t tell whether your understanding of natural selection mirrors a sort of Malthusian view of things, or whether you just consider actual evolved systems to be not so terrible as to collapse but very far from whatever “perfectly fit” is (so now I’m wondering if you’re applying the idea of natural selection to human cultural evolution, where it’s not clear to me exactly what constraints we expect to hold).

    • The Element of Surprise says:

      Selection is not for survival, but for maximal reproduction. Because the world is chaotic and complicated, organisms that learn about their environment and react to stimuli tend to be more successful than the ones that don’t, and tend to ultimately have more offspring.

    • Nancy Lebovitz says:

      I think “the bare minimum necessary to propagate the species” is ill-defined. There are rewards for having enough reserves to get through bad times– and then you also need the ability to protect those tasty reserves.

    • Viliam says:

      why anyone would ever expect an evolved organism to ever achieve anything beyond the bare minimum necessary to continue to propagate the species

      Sexual selection.

      Even when the species is already able to survive, there still remains competition between the individuals within the species about which ones will bring more of their genes to the next generation.

    • Scott Alexander says:

      In a competitive environment, the bare minimum to propagate the species could be very optimized.

      If all antelopes run 50 mph, they all have a small chance of being eaten when the lion attacks. If one of them runs 49 mph, they have a really large chance. If one of them runs 51 mph, they have almost no chance. So the antelope that runs 51 mph breeds more, until all antelopes can run 51 mph, and the cycle begins again. And so on until all antelopes run as fast as it’s possible for antelopes to run given other constraints (like not needing more food than is available).

      In the same way, if all companies are able to produce widgets for $1.00, then a slightly better company that can produce them for 90 cents gets all the widget orders and puts the rest out of business, until finally all companies remaining can produce them for 90 cents.

      So if you’re less than optimal in a competitive survival-of-the-fittest situation, you get mowed down.
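
      A minimal simulation sketch of this ratchet in Python – all speeds, survival odds, and mutation sizes here are invented for illustration, not taken from the book:

      import random

      CEILING = 60.0  # hypothetical max speed (mph) given food/energy constraints
      pop = [random.gauss(50.0, 1.0) for _ in range(500)]  # starting speeds, mph

      for generation in range(200):
          mean = sum(pop) / len(pop)
          # slower-than-average antelopes get caught slightly more often
          survivors = [s for s in pop if random.random() < (0.95 if s > mean else 0.85)]
          # offspring inherit a random survivor's speed plus a small mutation, capped
          pop = [min(CEILING, random.choice(survivors) + random.gauss(0.0, 0.5))
                 for _ in range(500)]

      print(f"mean speed after 200 generations: {sum(pop) / len(pop):.1f} mph")

      Mean speed climbs until it piles up near CEILING: “the bare minimum to propagate” ends up very close to the best an antelope can physically do.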

      • In a competitive environment, the bare minimum to propagate the species could be very optimized.

        The rest of your comment is right, but the wording of that part is falling into the same mistake as Freddie’s comment. If one antelope runs 49 mph and the others run 50, that one dies but the species survives. If all run 49, the species might still survive. If one manages 51 mph and eventually all do, the reason is not that 51 is the “bare minimum to propagate the species” but that it is the bare minimum for enough reproductive success to maintain the frequency of your genes in the species’ gene pool.

  32. FoxLisk says:

    This isn’t about this blog post, but this is the best place I know of to discuss the book, so I’m going to post it here despite it being vaguely off-topic.

    As a non-status-blind person, basically nothing written in the book about status regulation made sense to me. The entire theory, such as it is, appears to be generated from misinterpretations of conversations that, by EY’s own admission, never happened in the first place. And in no case does it seem that the status-regulation impulse was the only, or even the obvious, explanation.

    Did anyone else get anything out of that? Am I missing something big? I find most of EY’s writing to have, at worst, some interesting spark in it, but I found the ~1/3 of Inadequate Equilibria devoted to explaining status regulation to be somewhere between incomprehensible and wrong.

    • sty_silver says:

      I am also not status-blind, and I am somewhat obsessed with this topic. I think signaling, in general, is incredibly harmful.

      And my impression is pretty much the opposite. I don’t recall anything related to status that I felt like I did not understand, and I had the feeling that what was there was on point.

      Edit: I am open to having a discussion about this. I’d start with the parts you found incomprehensible.

      • Luke the CIA Stooge says:

        I found the status regulation part highly interesting and very useful.
        I found the voice trying to status-regulate to be a bit too much of a caricature. It would have felt more accurate if all the characters had been cast as voices in someone’s head, meant to represent aspects of how one person was thinking.

        But there wasn’t any instinct or pattern of thought I didn’t recognise from my own mind or conversations with others.

      • FoxLisk says:

        Well, okay, on reflection, two things to start with:

        1. By “incomprehensible,” I don’t mean “I cannot understand it,” I mean something like “I have no idea how a human could think another human thought this,” and
        2. I think it’s odd of you to say, “I got something from it, but I’m going to ask you to explain what you didn’t get before I explain what I did.”

        Anyway. I’m going to go back through and pick some not-especially-well-chosen examples.

        The alternative is that a lone crank has identified an important issue that he and very few others are working on; and that means everyone else in his field is an idiot. Who does Eliezer think he is, to defy the academic consensus to the effect that AI alignment isn’t an interesting idea worth working on?

        I very much do not think people model things this way. When I see someone in this position I think the odds are extremely against them and wish them well. Status doesn’t factor into it. Maude, for most values of Maude, should think Eliezer is stupid, not overreaching.

        You consider this Market incredibly impressive and powerful. You consider it folly for anyone to think that they can know better than the Market. And you just happen to have on hand a fully general method for slapping down anyone who dares challenge the Market, without needing to actually defend this or that particular belief of the Market.

        Cecie: A market’s efficiency doesn’t derive from its social status.

        I just have no idea how Cecie’s response would be viewed by anyone as anything but a non sequitur.

        Shortly thereafter:

        Skeptic: I don’t know. The parallels between efficiency and human status relations seem awfully strong

        I have no idea how we got here — this whole exchange seems forced, and the ideas don’t seem closely related to each other.

        And continuing, EY even admits that he’s never observed anything remotely like this exchange occurring:

        I actually can’t recall seeing anyone make the mistake of treating efficient markets like high-status authorities in a social pecking order. It’s a mistake that somebody could make, though.

        (partial footnote included)

        Anyway, since I’m now Entering A Discussion rather than being inflammatory to Generate A Discussion, I will admit that some of what he says is not wrong.

        Like,

        trying to do interesting things in the future is a status violation because your current status right now determines what kinds of images you are allowed to associate with yourself, and if your status is low, then many people will intuitively perceive an unpleasant violation of the social order should you associate with yourself an image of possible future success above some level.

        that’s true. I think it’s overstated — or, at least, repeated too many times relative to its relevance — but I agree it’s a thing that occurs with some predictability.

        But on top of all the true things said, there’s an ocean of obvious missing corollaries that I feel should have been discussed rather than running the same few ideas into the ground. For example, after the fact, people who overreached are storified and their past audacity is treated as perfectly valid. Or the idea of corporate climbers who bluff social status in order to climb the ranks, and succeed often enough to get rich off of it — they have successfully claimed status they “shouldn’t” have for long enough that by the time anyone catches on, they’ve already attained the status they claimed and can no longer be dethroned.

        • sty_silver says:

          2. I think it’s odd of you to say, “i got something from it, but I’m going to ask you to explain what you didn’t get before I explain what I did.”

          I understood you as saying that some of it you just didn’t understand and other parts you thought weren’t true. Instances of either feel like good starting points. I pretty much thought everything was solid, so if I were to start, I’d just have to list random parts and summarize them.

          I very much do not think people model things this way. When I see someone in this position I think the odds are extremely against them and wish them well. Status doesn’t factor into it. Maude, for most values of Maude, should think Eliezer is stupid, not overreaching.

          Hm, I’m realizing that discussing this is tricky because the only reliable data point each of us has is their own model. I do model things this way, for sure. In the context of internet posts, I’ve felt several times (before reading this book) that people would judge a certain post differently depending on whether the perceived status of the person making it was high enough to say that or not – where just a small change in status could swing it the other way.

          Just a few days ago, I looked through old SSC bans. One of them was on a guest post by ozy. I got curious, read it, and looked at their blog. I thought the latest post there looked familiar, but I wasn’t sure why. The next time I looked at LW 2.0 I realized I had seen it there but not read it. In that moment, a qualitative shift in their status assignment happened (as in, because I had read their piece on SSC and liked it, I now considered them high-status enough that they were allowed to write this frontpage LW post).

          A feature that I imagine differs from academia is how quickly status perception can change online. You don’t have to be an esteemed writer on this platform; if, say, I hear that you have a certain profession in RL that I respect, I will automatically model you as having a high level of status without you even doing anything. Even without that, if the post convinces me that you belong on such a level, I might still attribute it to you; if, on the other hand, your post makes it sound like you want that to happen but I don’t feel it is justified, I will want to punish you for trying. And I’ve felt before that this is a reaction people used to have a lot towards my posts (in other places). In fact I’m very confident that this is a thing.

          And, like, the reason why I model this that way (I think) is because every interesting point made is a personal attack on my status, because now I couldn’t come up with it. This is particularly true if the post feels like something I had already kind of figured out but not verbalized, and this is true all the time. So this is the qualitative difference: if the person’s status is not confirmed as above mine, it is a personal attack, because they are trying to climb above my level (by saying things I might have said at some point, no less!). If the person is someone I have already attributed a status higher than my own, it is a good thing, because now they confirm that my putting them so highly is justified. It’s even good for my status, then, because it means I’m good at assigning people status…

          In academia I imagine it happening the same way, only that status is less subjective. You say this novel thing while you’re Elon Musk? No problem. You say it as EY? … no, you violated the rules. You don’t get to do that. I will punish you for trying to put yourself above me.

          Does this make sense, or did I digress too far into things not directly related to this quote?

          I just have no idea how Cecie’s response would be viewed by anyone as anything but a non sequitur.

          It’s like

          You consider this Market incredibly impressive and powerful. You consider it folly for anyone to think that they can know better than the Market. Alas, you admit that the market is actually very high status. It is therefore not plausible for you to compete with it. And still, you dare to have on hand a fully general method for slapping down anyone who dares challenge the Market?! Without needing to actually defend this or that particular belief of the Market, no less??

          Cecie: … no, forget status. A market isn’t efficient because it has high status. It just is efficient. There is no rule saying that just because the market is more powerful than I am, I am therefore not allowed to find a simple rule of how it works.

          Skeptic: I don’t know. The parallels between efficiency and human status relations seem awfully strong

          The point of structuring it as a debate, I think, is to go through several possible objections Yudkowsky thinks people might have and address them. This one is handling the objection “but status might actually just be an accurate proxy for ‘skill’, and therefore using it to model things isn’t bad.”

          And continuing, EY even admits that he’s never observed anything remotely like this exchange occurring:

          I don’t think it’s a bad idea to also cover objections you haven’t explicitly heard yourself.

          that’s true. I think it’s overstated — or, at least, repeated too many times relative to its relevance — but I agree it’s a thing that occurs with some predictability.

          I also think it’s true. … and I think it’s extremely understated.

    • poignardazur says:

      I agree with others that status-based harmful incentives exist and can be relatively common.

      On the other hand, I also think Eliezer Yudkowsky doesn’t understand status, doesn’t understand his own relation to status, and should probably stop talking about status already because it’s making him look insecure and that’s just not sexy.

      I’m only starting to have this opinion, mind you. I’ll probably be able to articulate it better in a few months.

      • Deiseach says:

        I know feck-all about status and care even less (I probably have no or low status but I don’t care because from all the posts on here, what I see most is complaints about low SOCIAL status e.g. popularity, getting dates with attractive persons of preferred gender(s), genuine friendships and so forth and those are not at all concerns of mine; the complaints about setbacks from low PROFESSIONAL or EMPLOYMENT status are much fewer).

        So basically what I am taking away from all this is: does status exist? Yes. What is it? That depends on the circumstances of the situation (e.g. in the fashion world, high status will mean being rich, thin, designer-clad, up to the very minute, connected to high-status and influential magazines and fashion shows and so on; being a Nobel-prize-winning physicist not so much, unless you are also a supermodel with your own successful haute couture line; someone can be high status in one context and low status in another, and so forth).

    • Anatoly says:

      I strongly agree. To me this fits with how I usually think EY, and LW-sphere more broadly, overuse status explanations, and overfit everything to them.

    • ADifferentAnonymous says:

      I came here to ask, does anyone actually feel the distinct emotion for status-regulation that Eliezer postulates, or have any direct evidence it exists?

      • sty_silver says:

        I think this is clear from my previous post, but yes I strongly feel it. I’m not sure how one would go about presenting evidence.

  33. blarglesworth says:

    Okay, okay, I’ll get the book. Eliezer’s thinking is very similar to mine here, but has taken another step or two beyond it to elucidate a bunch of things I’ve been confused by. And I have a degree-two personal connection to him, even though we don’t know each other and I haven’t spoken to the degree-one link since I was about 11, except maybe two or three brief conversations by randomly running into her – it being a small town we grew up in.

    I’ve had the same sorts of problems with the medical system – specifically, the psychiatric part of it – that Brienne did. I’m a pretty serious depressive who has lately seemed to pick up some bipolar symptoms too. I had a remarkable experience working around the medical system to find something that actually made my depression better too. That workaround probably saved my life.

    I had been feeling suicidal during all depressive episodes, and it had gotten far worse when my mother was diagnosed with stage IV lung cancer (never smoked) in 2013 and died a year later. I had spent the entire subsequent year depressed and essentially lying around in bed contemplating suicide, as treatments ranging from Prozac to TMS showed no signs of working whatsoever. So, having literally nothing left to lose and figuring I’d try one last thing before offing myself, I maxed out what was left on my credit cards and flew to the Peruvian Amazon to try ayahuasca.

    Only one experience of hallucinating, vomiting, and having horrible diarrhea in the middle of the rainforest under the guidance of a shaman, and my suicidality went from “pretty sure I’m going to commit, and pretty sure I’ll succeed given my chosen method” to essentially gone, overnight. I’ve still been depressed and am still going through the psych med-go-round, but with a couple of short-lived exceptions, I haven’t felt truly suicidal since, and even my thoughts about it declined by ~95%. It’s been more than two years now.

    I also want to share the most interesting small-world situation I’ve experienced. Namely, Brienne was my best friend when I was 10-11. During third and fourth grade, before my parents realized it was no better than the crappy public school I was in, we were in the same class at the same tiny Catholic school in southeastern Indiana. It was pretty obvious we were like-minded even then, and I remember spending numerous recesses sitting on the swings talking with her and ignoring the schoolboy taunts about me having a crush on her, which were of course true.

    We went over to her place a few times, and had some good times hanging out with her along with her dad and her brother (who was IIRC my sister’s age, two years younger). I remember lots of really interesting pets and some early voice-recognition software that was hilariously terrible because it was 1999. I never tried to get my parents to let me have reptiles, but pet rats were an instant hit. My sister and I went through several pairs of them after being introduced over there. Best small pets ever, except for the whole “always dying slowly and painfully at age 2-3” thing.

    What if I’d never been pulled out of that school? It seems very likely we would have gotten together in high school. No idea if that would have lasted – but if it had, Brienne herself could have been in an inadequate equilibrium toward not finding a better match, because we’d have been a pretty good match even if Eliezer exists and is objectively better for her. A perpetually depressed partner, who is not into D/S roles, and who is good at autodidactic, contrarian, and creative thinking but not as advanced at it as Eliezer is, is clearly suboptimal for her vis-à-vis him.

    • zenmore says:

      Truly interesting story. How did you hear about this ayahuasca stuff?

      Also, very very small world.

    • Null42 says:

      I’ve read similar tales, and we know the shamans in these cultures used these for mental health issues. So it doesn’t seem unreasonable the stuff could help. We know the stuff is psychoactive–no reason it couldn’t cure or ameliorate depression in whatever 15% of depressed people have the particular brain configuration that responds to it. So it’s more an issue of the mid-twentieth-century prejudices against psychedelics preventing research into a potentially useful drug.

      As for the small world–I’m a newbie and not familiar with the ins and outs of the history of SSC, but is it possible there aren’t that many people with your interests and the same forces (awful word I know) that drew you to SSC had a similar effect on drawing Yudkowsky to his position? I.E., there’s a certain set of intellectual movements, say, that exist in turn-of-the-millennium USA and appeal to nerdy people, and that’s why you almost-knew each other?

    • poignardazur says:

      Who’s Brienne?

  34. Aevylmar says:

    A wise and trustworthy sage tells you that instead of trusting in the authority of those wiser than you, you should believe what your logical deductions tell you are true. But he backs it up with logical arguments that seem very weak, and your own logical deductions say that they are false.

    So if you accept what the sage says, you should disbelieve him. You should then accept what he says, because he’s a wise and trustworthy sage…

    Thus far the only way I’ve found out of this infinite loop is to deny the wisdom of the sage, which goes against some sense data, but oh well.

    • RC-cola-and-a-moon-pie says:

      There’s a similar paradox on the other side of the argument. Of course the justification for relying on the outside view is the product of a chain of argument that must of necessity be assessed by each individual’s private judgment. Thus, the strength of one’s justified reliance on the wisdom of experts can never validly be held to a higher degree of certainty than one’s estimate of one’s own individual success in reasoning (namely, the chain of reasoning that convinced you to defer). So it seems that one cannot even in principle escape the limitations of one’s own fallible reasoning by leveraging the wisdom of others. Deference to others, far from being an exercise in intellectual humility, can be viewed as an audacious exercise in intellectual self-regard: it sets aside the common-sense view that you should make decisions by your best effort at figuring out the right answer, in favor of a counter-intuitive contrary approach resting on an abstract argument.

      • Aevylmar says:

        Not… necessarily, because people can be better at some kind of reasoning than others. You can say, “I am going to be biased with regards to my own prospects, but I expect to be less biased with regards to general principles,” and therefore go with outside view in cases where you expect your inside view to be biased, but be reasonably confident in the general principles underlying the chain of logic that got you to the outside view opinion.

        Or, for a negligibly different angle: If “recognizing experts” is a skill, and “mathematics” is a skill, if you’re better at “recognizing experts” than you are at “mathematics”, it makes more sense to recognize experts in mathematics than it does to do mathematics yourself. There isn’t a total paradox the way there is with the sage above.

    • Protagoras says:

      Buy cheap, sell dear, and never heed the likes of me.

      • Aevylmar says:

        I’m convinced that has to be a quote, but Google isn’t finding it.

        • Protagoras says:

          I am trying without success to remember where David Lewis uses that as an example of a piece of contradictory (and therefore impossible to follow) advice. I don’t recall him citing a source, but I do recall the way he said it making it sound like he was repeating an established example.

          • Deiseach says:

            Right, the contradiction comes in if everyone else is “selling dear”, which makes it very difficult, if not impossible, for you to “buy cheap”.

            But where that does work is in things like:

            (a) large supermarket chains with bulk buying power telling suppliers “we are going to pay you X cents per gallon” (or whatever unit), then processing/packaging and selling it on for “Y cents per head of cauliflower” (or whatever), where there is a big difference between X and Y. Your supplier doesn’t like the price you are offering? Fine, we’ll just find another guy who does, even if we have to import the product, and good luck selling your produce at the side of the road!

            (b) buying something like, say, a grain or cereal that is sold in its native country for pennies because ugh, that’s peasant food, who eats that unless they’re desperate? – then selling it for dollars to Western consumers who have been convinced this is a super-food/latest trendy food fad that everyone, my dear, is serving and you’d better serve too or be out of the fashion (I’m thinking of the 90s fad for polenta, which is a “basic cheap peasant food” in Robert Browning’s “Pippa Passes”* but which became a trendy middle-class staple)

            (c) getting your product made overseas where it costs peanuts (relatively speaking) to make, and selling it in markets where you can charge considerably more than peanuts.

            *Do you pretend you ever tasted lampreys
            And ortolans? Giovita, of the palace,
            Engaged (but there’s no trusting him) to slice me
            Polenta with a knife that had cut up
            An ortolan.

          • Protagoras says:

            No, the contradiction is all in the impossibility of following the advice of someone who says “never heed the likes of me.”

  35. vaniver says:

    Inadequate Equilibria is a great book, but it raises more questions than it answers. Like: does our civilization have book-titling institutions? Did they warn Eliezer that maybe Inadequate Equilibria doesn’t scream “best-seller”? Did he come up with a theory of how they were flawed before he decided to reject their advice?

    As it happens, Eliezer’s original name for the book was “Civilizational Inadequacy.” Early readers gave reviews like “from the title I expected it to be Eliezer ranting, but it was actually surprisingly nuanced and thoughtful.” So Rob Bensinger and others tried to come up with a name that was more “economics textbook” and less “Dath Ilan.”

    • Rob Bensinger says:

      I think the original title might have actually been Efficiency, Inexploitability, and Civilizational Inadequacy, which then got split into two books where the first one was called Inexploitability and Inadequacy and I don’t know if the second volume ever got a name. And then it got merged back into one book.

    • poignardazur says:

      Sounds like a good move. I would definitely have been put off by “Civilizational Inadequacy”.

  36. Jeremiah says:

    For those that are interested, the podcast/audio version of this post is done and posted (Direct MP3 link here).

    I’m mostly posting because I need someone to share the sense of accomplishment. I don’t know how Scott writes this because I’m exhausted just by the act of reading it aloud…

  37. Antistotle says:

    Everyone hates Facebook. It records all your private data, it screws with the order of your timeline, it works to be as addictive and time-wasting as possible.

    Most people *love* Facebook and really don’t care that Facebook records their data. They get upset when Facebook uses that data in ways that they don’t like at that moment, but they LOVE Facebook.

    So why don’t we just stop using Facebook?

    I have, except for late Friday evenings when I’ve had as much Civ II as I can take, and I’m not quite tired enough to go to bed, and I don’t feel like digging around bit-torrent for a movie to watch. Of course, I don’t use my real name there.

    I retain an account or two because sometimes OTHER people have stuff there I need or want to see.

    More to the point, why doesn’t some entrepreneur create a much better social network which doesn’t do any of those things, and then we all switch to her site, and she becomes really rich, and we’re all happy?

    Ultimately because making money off social media is hard to do without being evil. Social media is inherently about sharing bits of our lives with each other. Because we’re all a little bit “not good”[1], there are parts of our lives that we wish to share with family that do NOT include some of the “not good” bits. There are parts of our lives we want to share with cow-orkers that do not include the “not good” bits, and there are parts of our lives we want to share with our friends, some of whom are “not good” in the same way we are “not good”. To do this, a service HAS to track who we share what with in very granular ways – in the words of the philosopher Adam Ant, “and who with and how many times?”.

    Which, given the current state of the art in network security, is almost impossible to protect, AND, given certain kinds of “not goodness”, might trigger mandatory notifications to the police.

    The only way you could do it “right” in this sense is https://diasporafoundation.org/ which is hella hard to make money off of.

    [1] Note that “not good” is in scare quotes, because we have such a diverse culture that something one part of the culture considers tolerable is verboten in another part, and is celebrated in a third.

    Consider the following person:
    1) Jewish, with part of his family Kosher/Orthodox
    2) Gay.
    3) Enjoys pistol shooting, and strongly believes in the 2nd amendment.
    4) Politically Centrist or Conservative.
    5) Works in San Francisco.

    Yes, this is remotely possible. I’ve known people in (or near) San Francisco who fit in three of the other four buckets.

    In each of these cases Ole Boi is going to want to keep part of his life if not secret, at least partitioned off from all the other parts of his life. You don’t want your Uncle Ben to know you’re eating a BLT because it’s not kosher. You don’t want your shooting buddy Jim to know that you’re gay. You don’t want HR to know you’re conservative and you REALLY don’t want anyone on Linked-In to know you are a 2nd amendment advocate.

    I picked those examples because at one point I fit 3 of the 5 (neither Jewish nor gay; however, I’m an apostate Catholic Libertarian in a Catholic Democrat family, so close enough).

    Oh, and I’ve gotten in trouble at work for a rant I wrote on the internet that a potential customer of one of our suppliers found.

    • I found the FB part unconvincing, because there is nothing that keeps you from being on more than one social network. I in fact am on both FB and G+.

      If G+ was better than FB for most people, more and more people with FB accounts would get G+ accounts as well, even though only a minority of their friends were on G+. Eventually almost everybody on FB would also have a G+ account, at which point activity on FB would gradually die away.

      I think you need stronger assumptions than are realistic–essentially that it isn’t worth spending any time on a social network unless a lot of your friends are there–to get the suboptimal equilibrium in this case. But I’m not a typical user, so may be wrong about what assumptions are realistic.
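
      A toy threshold model of this argument, as a sketch – the population size, threshold, and early-adopter share are all invented numbers, and “everyone is friends with everyone” is a deliberate oversimplification:

      import random

      N = 1000               # users
      THRESHOLD = 0.30       # fraction of friends needed before G+ is worth joining
      EARLY_ADOPTERS = 0.05  # users who switch purely because G+ is better

      on_gplus = [random.random() < EARLY_ADOPTERS for _ in range(N)]

      for _ in range(50):
          share = sum(on_gplus) / N
          # crude simplification: treat the whole population as everyone's friends
          on_gplus = [joined or share >= THRESHOLD for joined in on_gplus]

      print(f"G+ adoption: {sum(on_gplus) / N:.0%}")

      With THRESHOLD above the early-adopter share, adoption stalls at ~5% and the better network never takes off; set THRESHOLD near zero (the “just be on both networks” assumption) and everyone ends up on G+. So the conclusion above holds exactly when the cost of also being on a sparsely populated second network is low.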

      • yodelyak says:

        I was in my first year at an Ivy during Facebook’s early roll-out. It was quickly non-optional for many aspects of social life, and naturally, being at an Ivy, you need to keep track of the contacts you make if you are trying to optimize for opportunity. Hell, that’s half of what you’re paying for when you pay an Ivy League sticker price (though I went as the token poor person who wasn’t paying at all). If even 10% are using *only* Facebook, you have to use it or you’re giving up something significant. Nowadays, most serious networking professionals I know use at least a two-site strategy (often LinkedIn and Facebook, but maybe Google+, Instagram, Pinterest, Twitter, Quora, Goodreads, and lots more I’m sure). Once in a while some have responded to a LinkedIn request, or a Facebook request, by saying something like “oh, I do all my real-friend stuff on ___” or “oh, professional connections I track on ___”, but more often I find that if I add people in multiple places, I’m accepted across the board.

    • Null42 says:

      I just want to say, I really enjoyed your list of 5 things. I don’t want to go into too much detail, but I’m a person of partial Ashkenazi blood who’s generally left-leaning but very suspicious of feminism and believes in some of the tabooed topics here, who’s somewhat kinky (dom, light) and has a ‘straight’ job at a liberal institution.

      There is nobody who knows the whole me.

  38. themoneystore says:

    The same is true, more tragically, for housing prices. There’s no way to short houses. So if 10% of investors think the housing market will go way up, and 90% think the housing market will crash, those 10% of investors will just keep bidding up housing prices against each other. This is why there are so many housing bubbles, and why ordinary people without PhDs in finance can notice housing bubbles and yet those bubbles remain uncorrected.

    There’s no way to short houses directly, but there are many, many ways to short the housing market. If you’re an institutional investor, or just a guy with enough money to be legally considered one, you can enter into derivative transactions like credit default swaps. The Big Short is overrated in some ways, but covers this idea in an accessible way.

    Even if you’re an individual investor, it’s fairly easy to short a stock or a real estate ETF. There are dozens of publicly traded REITs and mortgage REITs which are essentially just monetized bundles of property, with different geographic or sector exposure. Sure, you can’t short one specific property, but if you’re feeling bearish about, say, downtown Northeastern retail, you can just go short a company that holds that. Retail brokerages like Fidelity let you do it.

    You could argue that you end up just profiting on your bet rather than moving the market directly, though in theory enough people doing this will end up doing the latter. In any case, there are plenty of ways to short housing.

    • actinide meta says:

      As I understand it, REITs have (had) pretty terrible correlation with housing returns (e.g. the Case-Shiller index). If there’s a way to be *long* housing, without a lot of costs, counterparty risk, or tracking error, I would actually love to know what it is, for liability matching reasons.

    • Gobbobobble says:

      Interesting, this largely answers my question above.

      As a followup: With shorting, do you have to specify a given timeframe, or can you “buy negastock” and sit on it until the bubble pops? I’m still curious why it doesn’t appear to happen more often.

      • Nornagest says:

        Short selling consists of borrowing stuff, selling your borrowed stuff to someone, then buying equivalent stuff back at a later date so that you can return it to the lender. That makes your profit equal to the difference between the price you sold it for initially and the price you paid for it later, less the fee you paid to whoever you were borrowing it from. Note that this profit has an upper bound, but no lower bound — it can therefore lose you an unbounded amount of money if you choose badly enough.

        It only works for fungible goods, which is why you can’t do it for real estate. And the trade can only take as long as the lender is willing to lend their stuff to you, so open-ended shorts are usually not feasible.
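
        The arithmetic, as a minimal sketch using a hypothetical short_pnl helper – prices and fee are invented for illustration:

        def short_pnl(sell_price, buyback_price, borrow_fee):
            # profit = what the borrowed shares sold for, minus what it later
            # cost to buy them back, minus the fee paid to the lender
            return sell_price - buyback_price - borrow_fee

        print(short_pnl(100.0, 60.0, 5.0))   # bubble pops: +35 per share
        print(short_pnl(100.0, 100.0, 5.0))  # nothing happens: -5 (the carry cost)
        print(short_pnl(100.0, 500.0, 5.0))  # stock soars: -405, and there is no floor

        The gain is capped at sell_price minus the fee (the stock can’t go below zero), while the loss grows without bound as the price rises – hence the point about regulation in the reply below.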

      • martinw says:

        You have to specify a timeframe.

        And the downside of going short is that your risk exposure is unlimited. E.g. if you buy Tesla stock, the worst-case scenario is that Tesla goes bankrupt and your stock becomes worthless, so you lose the full amount of your investment. That’s bad, but hey, rule #1 of investing is not to gamble with money you can’t afford to lose.

        But if you go short Tesla, and then tomorrow Elon Musk reveals that he has invented an infinite-capacity supercapacitor and Tesla becomes the most valuable company on Earth, you may suddenly find yourself a million dollars in debt, on a trade on which you were hoping to make a few hundred dollars in profit.

        Which is why shorting stock is a lot more heavily regulated than buying it.

      • themoneystore says:

        You’ll set an end date in advance that you have to pay the stock back by. The further away that date, the more expensive the short, generally speaking.

        As to why more people don’t do it… it’s pretty risky and a lot of it is market timing. Just sitting on it “until the bubble pops” usually doesn’t work if you’re watching prices go up for years and have to keep paying in to cover it. To do it sustainably, beyond acting on some kind of gut feeling or inside information, you’d need to have pretty sophisticated risk management and hedging. It makes sense to me why hedge funds do much more (and much more successful) shorting than retail investors.

        This Investopedia guide will answer a lot of your questions.
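
        A toy illustration of the carrying-cost problem, with invented numbers – you short at 100, and the bubble inflates 20%/yr for three years before popping:

        price = 100.0
        short_price = 100.0
        for year in range(1, 4):
            price *= 1.20
            paper_loss = price - short_price  # loss you must post collateral against
            print(f"year {year}: price={price:.0f}, paper loss={paper_loss:.0f}")

        # year 1: price=120, paper loss=20
        # year 2: price=144, paper loss=44
        # year 3: price=173, paper loss=73

        If a margin call forces you out in year 3, the pop back to 100 in year 4 doesn’t help you.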

      • Well Armed Sheep says:

        The old saw about this is that “the market can stay irrational for a lot longer than you can stay solvent.”

    • benwave says:

      I’m not sure this really works in the same way though – do I have it right that people shorting a stock is something which causes it to drop in value? That mechanism is not obviously at play with housing-market-goes-down securities, is it?

  39. StandingWave says:

    Foreign policy is an area where expert consensus is most wrong and/or corrupt. Barack Obama and Hillary Clinton knew the war in Iraq was a failure, but still used military force to topple Gaddafi, apparently assuming that western-style democracy would immediately form. The biggest supporters weren’t Fox News, but the New York Times. It was a war in support of women’s rights and against a rapist after all.

    • Nornagest says:

      The thing to remember here is that the American press, and most American voters regardless of party, love it to pieces when the evening news is full of Tomahawk launches and F-16s scrambling. They just need to come up with a narrative that says they’re the good guys, and preferably a battle plan that doesn’t leave them with a bunch of flag-draped coffins and unsightly memorials.

      And the Libyan Civil War was practically tailor-made for that. Remember, it kicked off in the middle of the Arab Spring, and not long after Occupy et al; at the time there was a lot of enthusiasm floating around for the power of protest. Overtly kicking in the door on a regime, shooting a bunch of people and trying to install a democracy among the survivors would have (rightly) been unthinkable for the Obama administration at the time; the worst days of the Iraqi post-invasion instability were still fresh in everyone’s mind. But an opportunity to ride to the rescue of a genuine popular revolution that’s just about to get stomped by an old-school tinpot dictator? That’s the kind of thing you can sell to the center-left.

      That popular revolution turned out to be completely unstable after it won, as 90% of them do, but out of sight means out of mind.

    • shakeddown says:

      Barack Obama and Hillary Clinton knew the war in Iraq was a failure

      So did Scott, but he still overwhelmingly backed toppling Gaddafi. This just seems like an area where a lot of smart well-intentioned people were wrong.

      • StandingWave says:

        I think a lot of smart people were afraid of looking like they were defending an evil rapist. I mean, Gaddafi was beyond parody. Literally had a “rape dungeon.” All that was needed to shut down an argument against intervention were those two words: “rape dungeon.”

        I guess the lesson pundits learned from the Iraq War didn’t have anything to do with effective foreign policy – it was that no one got fired for supporting the war, and people did get fired for opposing it.

        • Deiseach says:

          And do you know I never heard until your comment of this “rape dungeon”? I heard a lot about what a bad ruler he was, the terror, the force and brutality, all the rest of it, and of course in Ireland we knew about the IRA links.

          Yet somehow (and it may be because I am very ignorant) I seem to have missed any mention in all that coverage of a rape dungeon.

          Which makes me wonder about American coverage, and if the whole point of that was not indeed “to shut down an argument against intervention” (like the “stolen incubators” story during the Gulf War)?

          Googling gives me a link to the Daily Mail online (hmmmm) and that it apparently was some kind of BBC co-production (well, the Beeb is reliable, so this must be okay)? Until I get this synopsis:

          Colonel Gaddafi was called Mad Dog by Ronald Reagan. His income from oil was a billion dollars a week. He washed his hands in deer’s blood. No other dictator had such sex appeal and no other so cannily combined oil and the implied threat of terror to turn Western powers into cowed appeasers. When he went abroad – bedecked in fake medals from unfought wars – a bulletproof tent was flown ahead, along with camels that would be tethered outside. His sons lived a Dolce & Gabbana lifestyle – one kept white tigers, while another commissioned a $500 million cruise liner with a shark pool. Like other tyrants, Gaddafi used torture and murder to silence opposition, but what made his rule especially terrifying was that death came so casually. A man who complained that Gaddafi had an affair with his wife was allegedly tied between two cars and torn in half. On visits to schools and orphanages Gaddafi would tap underage girls on the head to show his henchmen which ones he wanted. They would be taken to his palace and abused. Young boys were held in tunnels under the palace. Yet because of his vast oil lake there seemed no limit to Western generosity. British intelligence trapped one of his enemies overseas and sent him to Libya as a gift. The same week, Tony Blair arrived in Libya and a huge energy deal was announced. Filmed in Cuba, the Pacific, Brazil, the US, South Africa, Libya and Australia, the cast of this documentary consists of palace insiders and those who gave shape to Gaddafi’s dark dreams. They include a fugitive from the FBI who helped kill his enemies worldwide; the widow of the Libyan foreign minister whose body Gaddafi kept in a freezer; and a female bodyguard who adored him until she saw teenagers executed. Gaddafi was a dictator like no other; their stories are stranger than fiction.

          How do I disentangle fact from sensationalism here? Washing hands in deer’s blood and sex dungeons and bodies in freezers? All this is coming out after his toppling and death and on the one hand, yeah people only feel free to tell the truth when the dictator is no longer in power but on the other hand, all kinds of “I’ve got a good story to tell, how much are the media willing to pay?” people also come out of the woodwork and I’m not sure how much I can believe anyone with an axe to grind. It’s hard to know what is true, what is mostly true but exaggerated, and what is invented to justify the new regime by “look how even more horrible he was than you ever knew!”

          • Judging by a quick Google, “rape dungeons” were a story from after Qaddafi’s defeat. The earlier claims were mostly about his soldiers raping women, which might well be true but not very unusual.

          • youzicha says:

            The only thing I remember reading from before the war was accusations that Gaddafi sexually harassed his female bodyguards. Maybe the NATO war in Libya was really the first victory for #metoo.

          • engleberg says:

            For ten or twenty years before Gadaffi fell, I’d see stories about Europeans making deals with him on the grounds that, well, best be neighborly: he’s been in power so long he must have a stable regime, he hasn’t been caught blowing up airliners or giving the IRA bombs for a while, any enemy of Ronald Raygun is our friend.
            Deals got made, but there were dissenters with clout.
            Then the Arab Spring got off to a good-looking start in Tunisia, and it was spreading, and a civil war started up in Libya, and it looked like Gadaffi would atrocity it more suo and maybe lose anyway, and everyone doing business with Gadaffi would have to tell voters at home they supported his atrocities and face vengeful winners in Libya. So the dissenters got enough clout that the Europeans refused to support Gadaffi. Then it looked like Gadaffi would win after all, and Europe would have a unified, hostile Libya under a vengeful Gadaffi, so they sent in NATO to make sure he lost. Then Hillary blew a great opportunity to STFU and gloated on TV ‘We came, we saw, he died’. Since Gadaffi died, civil strife bungles on.

            Fiasco with bungled fiasco on top? Sure. Atrocity stories about Gadaffi to make it look righteous? Sure. Illuminati using fake atrocity stories for war? Not completely.

          • John Schilling says:

            For ten or twenty years before Gadaffi fell, I’d see stories about Europeans making deals with him on the grounds that, well, best be neighborly […]

            For about ten years before Gaddafi fell, nations across Europe and around the world made deals with Gaddafi on the grounds that he had fairly explicitly approached the world, particularly after 9/11, saying “I don’t want to be an Evil Dictator any more. It’s just not the same these days. Short of marching quietly into a cell under the Hague, which isn’t going to happen, what do I have to do to be part of Team Good Guy?”

            Then he went and actually did it. It wasn’t a matter of him not getting caught blowing up airliners or giving bombs to the IRA, he genuinely stopped doing that. He verifiably gave up his chemical weapons and his nuclear arms program, and ratted out the people who had been selling him nuclear technology. He returned foreign property that had been nationalized earlier in his regime, refocused his foreign policy on economic development in sub-Saharan Africa, and started appointing competent technocrats to replace loyalist thugs in key ministries. And, of course, made sure his position as Supreme Leader remained unassailably secure, because the retirement plan from that position is unspeakably awful.

            If there’s an end state for 21st century Evil Dictators that doesn’t involve a bloody civil war or the silly fantasy of their marching off to a cell in the Hague, Muammar Gaddafi was the poster child for making it happen. Except that sort of thing involves lots of boring diplomacy and not much else, so you never saw the posters.

            …and a civil war started up in Libya, and it looked like Gadaffi would atrocity it more suo

            In roughly the same way that, ca. late 2001, it looked like Al Qaeda sleeper cells were going to unleash wave after wave of terrorist attacks that would kill hundreds of thousands of innocent Americans. Didn’t happen, wasn’t going to happen, but someone telling scary stories about it is always good for the ratings and if it even promises to bleed, it leads.

            Illuminati using fake atrocity stories for war ? Not completely.

            I’d like to say it was the Servants of Cthulhu, myself, but it was pretty much the Bavarian Illuminati working through Big Media.

  40. Alex Zavoluk says:

    I thought it was possible to short housing, it was just very difficult and very risky, because the potential downside is unbounded? Like, that’s the whole point of The Big Short (I know I shouldn’t take my financial history info entirely from a popular movie, but I think the gist of what these firms were doing is right).

    edit: also, unlike buying an underpriced stock, just buying into the short position doesn’t correct the error.

  41. Antistotle says:

    Every so often, I talk to people about politics and the necessity to see things from both sides.

    At some point a line gets crossed though.

    I see no point in bothering to see things from the side of flat-earthers, Breatharians, advocates of homeopathy[1] etc.

    I am willing to admit that I am often wrong, and to engage in dialog even with people I suspect of being of ill intent in order to better understand the world around me, but some things are so bat-guano insane that there is simply no way that seeing that point of view can be useful.

    [1] not talking about the “herbal remedy” homeopathy here. Talking about the “like cures like and if you dilute it properly it gets more powerful” homeopathy.

    • I see no point in bothering to see things from the side of flat-earthers, Breatharians, advocates of homeopathy[1] etc.

      The interesting question is how you decide what to put on that list.

      It’s of interest to me for a number of reasons. Two positions I argue for – A-C (anarcho-capitalism) and my view of global warming (that not only the size but also the sign of the net effect on humans is not known) – are ones that a lot of people would put on such a list.

      For a clearer case, I remember being told by a fellow Harvard undergraduate that he couldn’t take an econ course at Chicago because he would burst out laughing. My guess was that he had had one introductory econ course at Harvard at that point. Within a decade or two the Harvard economists had conceded that the Chicago economists were right on at least some of the contested points, and Chicago proceeded to run up a string of econ Nobel prizes.

  42. sourcreamus says:

    The BOJ problem is less that they have no upside to being right and more about the cyclicity of mistakes. Pretty much every central bank in the world was wrong in the 70s about inflation and unemployment not coexisting. This led to lots of inflation, and then bad recessions to end the inflation. Thus young macroeconomists became determined not to make that mistake, so when their turn as central bankers came they made the opposite mistake and allowed too little inflation and too much unemployment. I bet the next generation of central bankers will be more prone to too much inflation, having over-learned the lessons of the current crop’s mistakes.

  43. JohnBuridan says:

    What I don’t get about Eliezer’s anti-academia schema is his seeming obliviousness as to how much conscientiousness is needed to be an autodidact. And not only conscientiousness, but you also need imagination/curiosity/openness to experience in spades. Who in the hell is dealt that hand?

    Sure, there are some great autodidacts out there – Eliezer, Neal Stephenson, Charles Darwin, Ray Bradbury… Most of them are writers, the more modern ones computer types. They’re diamonds in the rough, though… the majority of people need an institution to help them become who they want to become.

    Conscientiousness is not my strongest trait. I went to college looking to take the most difficult classes in my field that I could, but I knew I was never going to learn systematically if I tried to teach myself. And furthermore, who, at 17, has even a rudimentary map of knowledge?

    • JohnBuridan says:

      I went to college to learn how to learn…
      I got the Map of Knowledge and now I can go treasure hunting.

    • rlms says:

      “Charles Darwin”
      Was he? He supposedly didn’t work very hard in university, but he did still go to med school.

  44. Mark says:

    Give people status for being wrong.

    I’m told that in America, business people get brownie points for having a few bankruptcies under their belt. Should be the same rule for being wrong.

    If people who were wrong were incentivised to loudly proclaim their mistakes, we’d get through and over mistakes more quickly.

    That’s why I kind of object to the whole rationalist project, as far as it is a method of being right. Trying desperately to be right is meta-level wrong. A group of people who want to be right all the time are slipping into dangerous territory.

    [
    Scientists pay lip service to this idea, but I think they have things the wrong way around. They are (supposedly) object-level prepared to be wrong, but meta-level determined to be right.

    It would actually be more productive to be determined to be object-level right – “Noah was from Atlantis and I will fight to the intellectual death to prove it” – but I can only do that because on the meta level (deeper, moral, ethical level?) I accept that I can be wrong.
    ]

    • sty_silver says:

      My impression from the rationality scene is that admitting you were wrong is considered high-value and that its importance is emphasized heavily. Whenever I see a post on LW 1 or 2 where the author claims to have been wrong, it is reliably upvoted. Yudkowsky has also written about this several times.

      • Nornagest says:

        I haven’t posted on LW 2, but when I admitted I was wrong in a live thread on LW 1, I’d fairly reliably get 1-3 upvotes. That’s not nothing, but it’s not “high value” either; making a plausible argument for something was usually worth 5-15 upvotes depending on exposure. It offered an incentive to concede defeat rather than just flounce if you were getting stomped, but not to update if you still had any threads to cling to.

        • sty_silver says:

          That’s very fair. Um, I’m not sure why I said LW 1 and 2. What I said about Yudkowsky’s writing is true, but in terms of upvotes, I don’t actually distinctly remember data points on LW 1. I amend my post.

          Also, though, as Scott has pointed out before, the people who were still on LW 1 after Yudkowsky had left are not necessarily representative of much. I was there, and LW 2.0 feels very different – and much better – to me.

          • Nornagest says:

            the people who were still on LW 1 after Yudkowsky had left are not necessarily representative of much.

            True enough. Most of my LW karma comes from the period after the main Sequences were complete but before Eliezer left, so that’s mainly what I’m referring to. Voting patterns changed (though not for the better) afterwards, and they might have been different very early in the community’s history but you’d have to ask someone else.

    • DrBeat says:

      The rules of status cannot ever be changed by any deliberate action.

      The rules of status cannot ever be made to be useful.

      • Mark says:

        I’m not sure that makes sense.

        Do you not have a bit of a neurosis with respect to the issue of status?

      • CatCube says:

        Y’know man, you’re going to have to articulate these eternal “rules of status” for us sometime. Because I’ve been at various places on the status ladder in different domains*, and this is not obvious to me. I really want to try to allow for you seeing something I don’t, but you never seem to get much beyond complaining and into explaining.

        *Though never at the very bottom–the place most people complain about is high school, and I found that it was fine. I wasn’t one of the cool people, but they mostly left others alone. It *was* a small town, so there was a lot less to lord over people. For example, the sports teams had a no-cut policy, so it was more difficult for “jocks” to claim some kind of status.

  45. romeostevens says:

    The outside-view version of Inadequate Equilibria would be about the research into expert judgment, and about which heuristics to use to evaluate the domains in which you can trust experts and those in which you can’t. https://smile.amazon.com/Expert-Political-Judgment-Good-Know/dp/0691128715

  46. Douglas Knight says:

    Then I get to the punch line – that means they should be less certain about their own politics, and try to read sources from the other side. They shake their head, and say “I know that’s true of most people, but I get my facts from Vox, which backs everything up with real statistics and studies.”

    Politics is a bad example.

    You’re addressing people who claim to make a modesty argument of bowing to Vox’s experts and telling them to bow to a wider range of experts. Maybe they should. But just because they say that they get their facts from Vox doesn’t mean that they have beliefs about facts, let alone ones that they got from Vox. They just take Vox to endorse their side and can’t tell if Vox says that Charles Murray is right or wrong. They should move to the object level and have beliefs about the world.

  47. dark orchid says:

    I can think of some examples of how evil that entered the third way can get thrown out again.

    MySpace never had quite Facebook levels of dominance, but it was “the next big thing” for a while before Facebook came along. I imagine Facebook’s nightmare is a future wiki article starting something like “MySpace and Facebook were early examples of social networks”.

    Quark XPress was once the only digital publishing program worth using. Designers used it because that’s what clients and printers used. Clients and printers used it because that’s what designers used. Then Adobe came along and conquered the market, as described in an article you can find by searching for “How QuarkXPress became a mere afterthought in publishing”.

    Internet Explorer (IE) once was THE browser with over 90% market share. Sites optimised for IE because that’s what people used. People used IE because that’s what made websites work properly. Anyone here using IE?

    I’ve seen the above two examples phrased several times as “people won’t switch to your software because it’s better, but they will switch if it’s 10x better”. And if you are in a happy situation where it’s really hard for people to switch away from you, you can get away with a lot; but if you go too far, they’ll switch in a huge crowd all at once.

    Regulation can also work. Uber is in a bit of trouble in London at the moment, which creates a possible opening for a new competitor.

    Tower Two cannot win by being cheaper, but if it ever figures out how to do “digital literacy” properly and teach programming and data science to the masses 10x better than Tower One does it now, it has a chance. Various online/interactive tutorial systems have tried but not succeeded yet – but in principle I don’t think a 10x advantage specifically in digital subjects is impossible.

    I realise this next example is culture war material but if Tower One goes sufficiently postmodernist, there might be a point where another tower becomes 10x better in some sense and it triggers a mass switching. Whether this would be a good thing I don’t know.

    • Winners, Losers & Microsoft by Liebowitz and Margolis is in part about the sort of sequential competition you are describing. At any instant there is a dominant program, but when a noticeably better one comes along, it becomes the new dominant program.

  48. baconbacon says:

    EY was not correct on Japan, and it is straightforward to demonstrate.

    • Nornagest says:

      Let’s hear it, then.

    • Gobbobobble says:

      Please do.

      (As much as I love EY being wrong, this sort of Fermat’s Last Comment is super obnoxious)

      • baconbacon says:

        On a cell phone right now. The short of it is:

        1. The spike in inflation didn’t last, and in fact correlates with the increase in sales tax that occurred at almost the same time.

        2. The change in growth rates in Japan started well before the BoJ actually announced changes, and barely deviated with the change in inflation.

        I think you can concoct an interpretation where EY and Scott Sumner are correct, but it comes with near zero predictive power.

        For a quick visual, Google ‘trading economics inflation rate Japan’ and select the 10-year graph.

        • baconbacon says:

          A little bit longer version

          From EY’s dialogue:

          Your current problem is that there’s too little money flowing through the economy, meaning, not enough money to drive all the buying of real goods and labor that could be exchanged if your economy had more money.

          So here is a graph of the inflation rate in Japan, and here is a graph of the unemployment (UE) rate in Japan. Set both to 10 years and you will see that the UE rate rises to ~5.5% in 2009, and then drops steadily through 2017 to ~2.75% at the last reading. You will see on the inflation graph that the inflation rate is positive leading up to 2009, when the UE rate was increasing, and then negative through 2013 (with three minor peaks above zero; if you zoom out 5 more years, you will see this is clearly the lowest period of inflation since ~2003/2004). Inflation rises from 2013 through 2015 (peaking at almost 4%, a local maximum going back to around 1992), then drops back to around zero change in 2016/17. There is no apparent change in the UE graph correlating with any of these changes.
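
          For anyone who would rather reproduce the comparison than eyeball the Trading Economics graphs, here is a minimal sketch pulling the two series from FRED via pandas_datareader. The series IDs are my assumption about which FRED codes match; check fred.stlouisfed.org before leaning on them:

          ```python
          import datetime

          import matplotlib.pyplot as plt
          import pandas_datareader.data as web

          start, end = datetime.datetime(2007, 1, 1), datetime.datetime(2017, 11, 1)

          # Assumed FRED series IDs: Japan CPI (all items, monthly index)
          # and the Japanese unemployment rate.
          cpi = web.DataReader("JPNCPIALLMINMEI", "fred", start, end)
          unemp = web.DataReader("LRUN64TTJPM156S", "fred", start, end)

          inflation = cpi.pct_change(12) * 100  # year-on-year inflation, in percent

          fig, ax = plt.subplots()
          ax.plot(inflation, label="CPI inflation (YoY, %)")
          ax.plot(unemp, label="Unemployment rate (%)")
          ax.legend()
          ax.set_title("Japan: inflation vs. unemployment")
          plt.show()

          # The claim at issue is the (non-)correlation between the two series:
          joined = inflation.join(unemp, how="inner").dropna()
          print(joined.corr())
          ```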

          That absence of correlation is a direct shot through the proposition that economic issues (all central banks are concerned with UE rates) are directly influenced by the inflation rate, or by a “lack of money”. There is no correlation between the UE rate and the inflation rate outside of large economic shocks (in which case the cause is the economic shock, not the failure to hit the inflation target). The rest that follows is a variety of speculation from an unsteady base. A lot of it isn’t even supported by evidence, like this statement:

          When money is becoming more valuable, people try to hold onto it more, which slows down velocity,

          Actually, when things you hold go up in value, people tend to spend more money; it’s called the “wealth effect”, and there is a large literature on the subject (the wealth effect is itself semi-bogus, but it has a lot more support than the above quote).

          • A Definite Beta Guy says:

            Sumner doesn’t think price inflation is a relevant metric. He’d prefer to use wage inflation. There’s also no correlation between unemployment and inflation at all for an economy that’s correctly running its monetary policy. If it were a PERFECT monetary policy, unemployment and inflation would be positively correlated…unemployment and inflation would both rise when adverse supply shocks occur. In the real world, information is imperfect, and demand fluctuations will still occur. So, the correlation between inflation and unemployment SHOULD be non-existent.

            If unemployment is actually increasing when inflation is going down, it means your central bank is failing in its job.

            The relevant empirical test is that Japan has massively increased its monetary base. It also has the lowest unemployment it’s seen since it entered its Lost 2 Decades, it has rising labor force participation (especially women), it has depreciated its currency, it has raised asset prices, and it has successfully created NGDP growth.

          • baconbacon says:

            Sumner doesn’t think price inflation is a relevant metric. He’d prefer to use wage inflation.

            Sumner doesn’t think price inflation is a relevant metric, unless it happens to agree with him. Here is a post from 2011 on Japan:

            Some people look at these facts and see a central bank that was powerless to boost NGDP. That seems crazy to me. Why would a central bank trying to raise NGDP reduce the monetary base? Why would they raise rates? Here’s an alternative view. Suppose the BOJ was trying to prevent inflation. Then every time the CPI inflation rate rose up to zero, they would tighten policy. If my hypothesis is correct, then what type of path would one predict for the Japanese monetary base? The answer is surprising; almost exactly the path that we actually observed. Here’s why:

            1. Because the trend rate of inflation fell sharply between 1970-90 and 1991-2010, nominal interest rates fell close to zero (the Fisher effect.) This would produce a large increase in the real demand for base money, or a large fall in velocity. And that’s exactly what we observed in Japan after 1990.

            Here is another from 2014:

            Japan is a perfect case study. Asset markets took off after mid-November 2012, when then candidate Abe first indicated he was going to push for a 2% inflation target. The yen fell from about 80 to the dollar to 103 today, while the Nikkei rose from under 8700 to over 15,300 today. So the asset price gains have been sustained. And we did see a rise in the Japanese price level, RGDP and NGDP. So in one sense Abenomics “worked.”

            Sumner has used inflation rates as a proxy time and time again, specifically in regards to Japan. He discusses the BoJ’s policy in terms of their inflation target, the market’s reaction to the inflation target, and how close they have been to hitting the target.

            There’s also no correlation between unemployment and inflation at all for an economy that’s correctly running its monetary policy. If it were a PERFECT monetary policy, unemployment and inflation would be positively correlated…unemployment and inflation would both rise when adverse supply shocks occur. In the real world, information is imperfect, and demand fluctuations will still occur. So, the correlation between inflation and unemployment SHOULD be non-existent.

            That is a leap. For it to logically flow, shocks arising from imperfect information would have to swamp whatever signal did come through, which would totally invalidate NGDP targeting as a strategy.

            The relevant empirical test is that Japan has massively increased its monetary base. It also has the lowest unemployment it’s seen since it entered its Lost 2 Decades, it has rising labor force participation (especially women), it has depreciated its currency, it has raised asset prices, and it has successfully created NGDP growth.

            I’m sorry, but you don’t get to sneak the conclusion in this way. NGDP proponents argue that rising NGDP is the causal factor that pulls these other indicators to the desired levels, and that this is why banks should target it.

            If you look at this post from 2017, you will see that, in addition to the UE rate starting to improve around 2009/10, the labor force participation rate starts improving in a similar time frame (which SS inaccurately uses as a volley against his opponents). That these improvements preceded the NGDP rise is a direct shot through the causal reasoning of monetarists (or at least SS-style monetarists).

          • A Definite Beta Guy says:

            NGDP targeting is not invalidated as a tool if the inflation rate loses its correlation with the unemployment rate. NGDP targeting means the central bank will continue stimulative efforts until NGDP futures hit whatever target is decided (5% being reasonable). The central bank will use tightening if the NGDP futures market is greater than the 5% threshold.
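
            As a sketch of that feedback rule (the function names here are hypothetical placeholders, not any real central-bank or market API):

            ```python
            TARGET = 0.05  # 5% expected NGDP growth

            def policy_step(read_ngdp_futures, adjust_policy, tolerance=0.001):
                """One round of the rule: ease until the futures market expects
                the target, tighten when it expects an overshoot."""
                expected_growth = read_ngdp_futures()
                if expected_growth < TARGET - tolerance:
                    adjust_policy("ease")
                elif expected_growth > TARGET + tolerance:
                    adjust_policy("tighten")
                else:
                    adjust_policy("hold")

            policy_step(lambda: 0.03, print)  # market expects 3% growth -> prints "ease"
            ```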

            Unemployment and NGDP will both miss, but this will be due to a combination of real business cycle effects and demand-side misses. A real business cycle isn’t avoidable: it will increase inflation, it will decrease NGDP, and it will increase unemployment. In this case, inflation and unemployment will be POSITIVELY correlated.

            If it’s a demand-side miss, then inflation will decrease, and unemployment will increase. The two will be inversely correlated.

            The net effect is ambiguous.

            Either way, the theory points out very clearly that if NGDP targeting is capable of working, the inflation-unemployment correlation is non-existent or positive. There is no reason to assume it will be negative. You cannot use the lack of a correlation as a point against it. Inflation and unemployment are only inversely correlated when the Central Bank repeatedly fails to hit its macroeconomic targets.
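
            A toy simulation (my own sketch, not anything of Sumner’s) makes the correlation claim concrete: supply shocks push inflation and unemployment the same way, demand shocks push them opposite ways, and the more of the demand shock the central bank offsets, the more positive the measured correlation:

            ```python
            import numpy as np

            rng = np.random.default_rng(0)
            n = 10_000
            supply = rng.normal(size=n)  # adverse supply shocks (e.g. oil prices)
            demand = rng.normal(size=n)  # nominal demand shocks

            def corr_given(offset):
                """offset = fraction of each demand shock the central bank neutralizes."""
                residual = (1 - offset) * demand
                inflation = supply + residual     # both shocks raise prices
                unemployment = supply - residual  # only supply shocks raise joblessness
                return np.corrcoef(inflation, unemployment)[0, 1]

            for offset in (0.0, 0.5, 1.0):
                print(f"demand offset {offset:.0%}: corr = {corr_given(offset):+.2f}")
            # Roughly +0.00, +0.60, +1.00: a clearly negative correlation only shows
            # up when demand shocks go unoffset, i.e. when the bank misses its target.
            ```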

            There aren’t many clear relationships between inflation and UE in the US or Australia either, when they are not operating in a recession.
            In the US, there was an obvious inverse relationship…during the recession, which was a demand-side shortage. There is no obvious relationship OUTSIDE this period. Right now, unemployment and inflation are both decreasing together:
            https://fred.stlouisfed.org/graph/fredgraph.png?g=gg1X
            The same pattern holds for Australia. Again, you see the same lack of a pattern, except for the last time Australia had a recession in the early 1990s, where unemployment skyrocketed and inflation tanked:
            https://fred.stlouisfed.org/graph/fredgraph.png?g=gg2a

            Sumner discusses the inflation target in Japan because Japan has explicitly created an inflation target. If you are going to set an inflation target, you need to HIT your inflation target.

            Either way, I don’t see what your links to the Money Illusion prove. Sumner incorrectly labels Abenomics as starting earlier than it actually did. That doesn’t change the fact that Abenomics was implemented, and NGDP/RGDP both increased, reliably, as expected. It’s also true that the normal Japanese rates of 80% labor supply and 3.5% unemployment are now 83% labor supply and 2.5% unemployment, which is the result of the NGDP level.

            There was a recovery prior to the implementation of Abenomics, but it was tepid; it did not deliver Japan back to its trend growth line, and did not increase employment over what was already accomplished. That new equilibrium was set in the Abenomics era.

        • jblum says:

          baconbacon is correct, and using the BOJ over the last four years as a canonical example of an outsider view being proven right is frankly bizarre.
          1) the Japanese economy has done fine, but no better than Europe or the US or the global economy generally. The unemployment rate is low, but wage inflation is still low and GDP growth has been barely better than in the period before the global recession. Nominal GDP and CPI optically look higher, but that is due largely to consumption tax hikes, which flow through directly into measured prices and have nothing to do with expanding the money supply. Inflation expectations on a forward-looking basis are up a bit, but no more than what you would expect in a strong global economic uptrend. The Nikkei has also done well, but again no better than the S&P 500 on a total-return basis.
          2) Abenomics has had many modalities (structural reform, yield curve control, central bank equity purchases, etc.), only one of which is expanding the money supply (which, incidentally, the BOJ has tried many times over the years, and it made more or less no difference). Maybe EY has a more sophisticated critique of the BOJ, but he certainly makes it sound as if it’s as simple as narrow-money monetarism, which has been proven wrong about as thoroughly as anything can be proven wrong in macro.

          All of the above is disputable in its details, but the point is that claiming events have clearly shown an outsider’s simple money-supply critique of the BOJ to be correct is just wrong. To the point where it makes me question every other example he gives where I don’t have personal expertise.

          • baconbacon says:

            Nominal GDP and CPI optically look higher, but that is due largely to consumption tax hikes, which flow through directly into measured prices and have nothing to do with expanding the money supply.

            This point is crucial: the 2014 sales tax hike of 3 percentage points correlates very closely with the NGDP rise from 2014 through 2017.

          • A Definite Beta Guy says:

            Uhhhhh…I’m not really sure many economists are going to sign up for the idea that a tax increase is good economic stimulus. The actual tax increase in 2014 is associated with a temporary stall in yen-denominated Japan GDP:
            https://fred.stlouisfed.org/graph/fredgraph.png?g=gfkc

            If you look at the real figures, it’s a definite drop:
            https://fred.stlouisfed.org/graph/fredgraph.png?g=gfl0

            What is the proposed mechanism of the sales tax increase causing an increase in NGDP?

          • baconbacon says:

            Uhhhhh…I’m not really sure many economists are going to sign up for the idea that a tax increase is good economic stimulus.

            This is a bait and switch. Most economists discuss economic stimulus in terms of RGDP, not NGDP, and most wouldn’t/don’t consider a rise in NGDP without a rise in RGDP to be good economic stimulus. Since RGDP growth has been lower in Japan since the increase than it was in the roughly three years prior, this doesn’t pose a problem for most economists.

            The actual tax increase in 2014 is associated with a temporary stall in yen-denominated Japan GDP:

            A temporary stall is pretty meaningless from a macro perspective; there are many possible ways for large events to have the opposite effect for a short period (just as a hypothetical: since the tax increase was expected, consumption could have been pulled forward prior to the increase, causing a ‘drop’ in the immediate aftermath that fades quickly).

          • A Definite Beta Guy says:

            A temporary stall isn’t irrelevant if it’s exactly what was predicted. An increase in the consumption tax leads to a one-time inflation bump and a stall in NGDP, which means a drop in RGDP.
            I don’t see how that is NOT fatal to the idea that Japan’s higher NGDP is because of the VAT. There’s no reason to think that a tax increase will automatically raise GDP, either real or nominal, and it’s not in the data.

            What IS in the data is that Japan’s CPI rates, if you prefer to use those, were also higher than average PRIOR TO the tax increase. Since the tax was implemented, Japan has also achieved higher inflation than it achieved in 2001-2008, with fewer bouts of deflation.

            https://fred.stlouisfed.org/graph/fredgraph.png?g=gfRp

            Japan had inflation of -0.5% or below in every Q1 from 2003 to 2008. It’s hit that only twice in the Abenomics era (2013-2017).

          • jblum says:

            Agree that the CPI chart is the way to adjudicate, but show a longer history and use year-on-year growth instead of quarterly to smooth out the noise, as is the convention. Set the start date to 1990:

            https://fred.stlouisfed.org/series/CPGRLE01JPM659N

            The temporary spikes in 1997 and 2014 are the clearly visible consumption tax hikes. The government charges businesses and businesses pass it on to consumers; it’s straightforward and really not controversial.
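
            Back-of-envelope, the pass-through is about the size of those spikes. The taxed share of the CPI basket below is an assumed number, not an official figure:

            ```python
            old_rate, new_rate = 0.05, 0.08  # April 2014: consumption tax 5% -> 8%
            taxed_share = 0.70  # assumed share of the CPI basket subject to the tax

            cpi_bump = taxed_share * ((1 + new_rate) / (1 + old_rate) - 1)
            print(f"one-time CPI level bump ~ {cpi_bump:.1%}")  # ~2.0%
            ```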
            Let’s remember the original point though – with a straight face, look at that chart (starting after the bust in 1990) and tell me the BOJ were clueless fools pre-2013 who then suddenly figured it all out over the past four years. We can argue about whether Abenomics has worked (I think it has, a little, but due to lots of stuff and certainly not only monetary base expansion), but the very fact that it’s a legitimate argument invalidates Yudkowsky’s example as it is being offered – i.e. a slam-dunk case of an outsider seeing something that foolish insiders missed.

          • baconbacon says:

            I don’t see how that is NOT fatal to the idea that Japan’s higher NGDP is because of the VAT.

            You can’t see how the impact of a tax increase could somehow not occur 100% in the very first quarter it is implemented? You can’t see how an announced 3% price increase might lead to purchasing of some goods an extra week or two earlier?

            There’s no reason to think that a tax increase will automatically raise GDP

            No one has argued that it would.

  49. A couple months before Eliezer, I wrote a post about building a SAD light that is much more powerful than anything available commercially, based on the same reasoning: “You Need More Lumens,” https://meaningness.com/metablog/sad-light-lumens

    Maybe the obvious idea is obvious. Or maybe he was inspired by something he read on the internet somewhere 🙂

    I’m using it right now, btw. It works.

  50. Tenacious D says:

    Suppose you thought that modern science was broken, with scientists and grantmakers doing a bad job of focusing their discoveries on truly interesting and important things. But if this were true, then you (or anyone else with a little money) could set up a non-broken science, make many more discoveries than everyone else, get more Nobel Prizes, earn more money from all your patents and inventions, and eventually become so prestigious and rich that everyone else admits you were right and switches to doing science your way. There are dozens of government bodies, private institutions, and universities that could do this kind of thing if they wanted. But none of them have.

    This reminds me of an idea I had one time for a crowd-funded lottery for scientific research. Basically, researchers could put up a Kickstarter-style pitch and funding ask on the site, and then anyone could buy tickets and allocate them to projects they find worthy. At a set date, each research project would have all the tickets they were given put into a draw, provided that their ask was less than the total money raised that round. After the winner is selected, any remaining money would be used for a second draw among projects below that threshold. And so on until all the money raised has been awarded.
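
    In code, the mechanism would look something like the sketch below (the structure and names are mine; the idea above only gives the rules in prose, and I’m assuming one ticket costs one unit of currency):

    ```python
    import random

    def run_lottery(projects, seed=None):
        """projects: list of dicts with 'name', 'ask' (funding needed), and
        'tickets' (tickets bought for it). Repeatedly draws a winner, weighted
        by tickets, among projects whose ask still fits in the remaining pot."""
        rng = random.Random(seed)
        pot = sum(p["tickets"] for p in projects)  # total money raised this round
        remaining = list(projects)
        winners = []
        while True:
            eligible = [p for p in remaining if p["ask"] <= pot]
            if not eligible:
                break  # leftover money could roll over to the next round
            winner = rng.choices(eligible, weights=[p["tickets"] for p in eligible])[0]
            winners.append(winner["name"])
            pot -= winner["ask"]
            remaining.remove(winner)
        return winners, pot

    projects = [
        {"name": "small replication study", "ask": 20_000, "tickets": 15_000},
        {"name": "new assay development",   "ask": 80_000, "tickets": 60_000},
        {"name": "field survey",            "ask": 40_000, "tickets": 30_000},
    ]
    print(run_lottery(projects, seed=1))
    ```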

    Crowdfunding now has a demonstrated ability to raise five or six figure amounts fairly routinely. There are already serious proposals for making grants into more of a lottery system (this one came up near the top of a Google search). And since smaller asks would stay in this kind of lottery longer, it would bring affordable research ideas to the fore.

  51. Qiaochu Yuan says:

    Scott, don’t take this the wrong way, I intend this mostly in the name of clearing up confusion: I think most of your reasoning runs on outside view and you’ve gotten really good at it, and that there are nontrivial inside-view skills you’re weak at that Eliezer is strong at (but which he doesn’t describe in very much detail in this book, unfortunately). I don’t know how to pin these skills down precisely, but they have something to do with building gears models in Val’s sense, and also something to do with being good at something like math (but lots of people who are good at math don’t have the skill, for some reason). I think the intended audience of this book is people who are, or could be, strong at these skills, and are preventing themselves from using them out of modesty.

    You write that Eliezer’s concern is that modesty will prevent people from having meaningful opinions. This is not the concern, as I read it: the concern is that modesty will prevent people from doing anything interesting, like curing their wife’s SAD, or writing HPMoR, or working on AI safety. And I don’t think anything you wrote really engages with that concern.

    • Deiseach says:

      the concern is that modesty will prevent people from doing anything interesting, like curing their wife’s SAD, or writing HPMoR, or working on AI safety

      With regard to the bolded part, I can only play the part of Legolas in the following exchange:

      “Strange are the ways of Men, Legolas! Here they have one of the marvels of the Northern World, and what do they say of it? Caves, they say! Caves! Holes to fly to in time of war, to store fodder in! My good Legolas, do you know that the caverns of Helm’s Deep are vast and beautiful? There would be an endless pilgrimage of Dwarves, merely to gaze at them, if such things were known to be. Aye indeed, they would pay pure gold for a brief glance!”

      “And I would give gold to be excused,” said Legolas; “and double to be let out, if I strayed in!”

      If a sudden access of modesty would have prevented the writing of HPMoR, would that Yudkowsky had emulated the humble violet in this instance!

      • rlms says:

        I was scrolling through his Facebook page to find another post, and encountered one that began: “From tonight’s Thanksgiving conversation: What are the two greatest novels from before 2010? I nominate:”

        The answers may surprise you!

  52. DrBeat says:

    There is a lot in here that I’ve been saying for a while and you all said I was insane for saying.

  53. userfriendlyyy says:

    I’m kind of amazed how wealth and power are completely overlooked as motives. The Japanese economy is a prime example: Japan trashed a decade of growth because a few elites kept a policy in place (hard money) that benefited the already rich at the expense of the whole country. Rich people hate inflation so much that they would gladly torture the entire world to prevent it. They have long become used to having power and influence without having to work for it, just living off investments. There is no bigger threat to wealth than inflation. The fact that central bankers want to be on the good side of rich people for when they go looking for their next job shouldn’t shock anyone. The fact that central banks everywhere take inflation a whole lot more seriously than full employment shouldn’t surprise anyone either; the mark on their reputations for doing so is only now starting to show, now that more Americans kill themselves yearly on opiates, because their lives suck, than died in the Iraq and Vietnam wars combined. The fact that almost the entire economics profession is quite content to create involuntary unemployment as a way to prevent inflation should tell you everything you need to know about those people. And the fact that the rich would actually do better under full employment, yet refuse to let society come anywhere close to it, should tell you that it isn’t just about wealth; it’s about power.
    https://www.nakedcapitalism.com/2012/08/kalecki-on-the-political-obstacles-to-achieving-full-employment.html

    • Null42 says:

      I agree with you full-throatedly, whole-heartedly, and an awful lot. Eliezer actually does address this in his reason #1 that evil exists (and I have not read the book yet). I think he underestimates its importance: many if not most major problems come down either to people in power benefiting too much from the status quo to change it, or to the various groups that would be affected being unable to agree on a solution.

      The first problem’s well-appreciated–see most of Marxist analysis–but I wonder how many people appreciate the second. There’s obviously a huge coordination problem between the Trump and Sanders fanbases, both of which hate corporate control of the government but would never agree on a solution that would include the other party. Amusingly, I remember when one guy from Occupy Wall Street suggested an alliance with the Tea Party to go after big money in politics and the Tea Party guy shut him down…and now it’s Steve Bannon, of all people, who got canned for trying to reach across the aisle to the American Prospect on China (which snubbed him, though they did list their points of agreement and explain that racism was a deal-killer).

      • userfriendlyyy says:

        Yes, this problem is so frustrating. The post-modern neoliberal Democrats have effectively walled off working with anyone who isn’t in lockstep with them on every issue. Unfortunately, they have used their megaphones to call anyone willing to unite on class lines a racist or sexist who needs to check their privilege.

        Because somehow “shut up and vote for the warmonger who is personally responsible for turning Libya into a place where you can buy slaves, rigged elections in Haiti, legitimized a coup in Honduras, and used prison (read: slave) labor to help out around the Arkansas Governor’s mansion, because she will be better for people of color” is logically consistent.

        The amazing author Mark Fisher tried tackling the issue in his essay “Exiting the Vampire Castle” before he killed himself earlier this year.

        • Null42 says:

          Hillary had defects, to be sure, but I think she would have been better for Americans of color, if only by marginally enlarging the safety net. She would have vetoed the current Republican tax bill, for example.

          Despite that minor point, I agree with you on the major one. I think the thing is that there’s a gap between ‘what’s good for the Democrats’ and ‘what’s good for their constituency’…which is also true for the GOP, of course. The biggest problem IMHO is the two-party system…which the current pair of parties is all too eager to perpetuate. (Fisher’s a Brit, which means we have it even worse than he did.)

          BTW: that link is dead, here’s a live one:
          http://www.thenorthstar.info/?p=11299

          • userfriendlyyy says:

            Clinton may have been marginally better for POC in the short term. However, Obama destroyed the Democratic brand by bailing out Wall Street without so much as slapping a single banker on the wrist, while Joe Sixpack’s already precarious job prospects dwindled and his home got foreclosed on as Obama did nothing and/or made things worse. Given that, an unpopular Clinton presidency that continued selling out the working class would have handed the GOP a supermajority and a much less feckless President in 2020, who would absolutely destroy the entire safety net and really usher in neofeudalism.

            And yes, the two-party system is the problem. The fact that the most popular solution on offer, ranked-choice voting, is a total non-solution and is still getting so much pushback is a sad sign. Range voting, approval voting, and a few other options are all infinitely better.
            http://electology.github.io/vse-sim/VSEbasic/

          • Null42 says:

            I agree 100%, but it’s a bit idealistic to assume you’d get rid of the two-party system. I’m not sure which of the two parties would be more susceptible to subversion or destabilization, though. It’d be fun to have a globalist-libertarian party and a populist-nationalist party, but of course the globalist-libertarian party would never get any votes…

          • userfriendlyyy says:

            lol true that.
            A change in voting methods would make parties less necessary by making primaries unnecessary. It would allow people to vote for candidates who aren’t backed by one of the oligarchy-supported parties, without fear that by doing so their least favorite plutocrat would win.

    • There is no bigger threat to wealth than inflation.

      Why? Inflation is a threat to forms of wealth that are defined in nominal terms, but most wealth (shares of stock, ownership of land, …) isn’t.

      • Protagoras says:

        Yes, he oversimplified, but higher than expected inflation is bad for creditors, good for debtors, while the reverse is true for lower than expected inflation. So creditors fiercely resist any policy that might raise inflation above what they had previously anticipated, while eagerly encouraging policies that reduce inflation to previously unanticipated lows. And quite obviously the wealthy are much bigger creditors.
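
        Toy arithmetic (my numbers) for the creditor side of that: a loan made at 5% nominal when both sides expected 2% inflation loses about two points of real return if inflation comes in at 4% instead:

        ```python
        nominal_rate = 0.05        # agreed rate on the loan
        expected_inflation = 0.02  # what both sides anticipated
        actual_inflation = 0.04    # the surprise

        expected_real = (1 + nominal_rate) / (1 + expected_inflation) - 1
        actual_real = (1 + nominal_rate) / (1 + actual_inflation) - 1
        print(f"creditor expected {expected_real:.2%} real, got {actual_real:.2%}")
        # expected 2.94% real, got 0.96% -- the surprise is a pure transfer from
        # creditor to debtor, which is why creditors fight anything that raises it.
        ```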

      • Yeah. There is a real concern for those rich people, though, in that capital gains tax doesn’t account for inflation.

  54. yodelyak says:

    I guess if I had to put my takeaway from bingeing all of “Inadequate Equilibria” today into one line, I’d say the conclusion was all I really wanted:

    “Better, I think, to not worry quite so much about how lowly or impressive you are. Better to meditate on the details of what you can do, what there is to be done, and how one might do it.”

    The serenity prayer said that better, though, maybe. Serenity prayer plus Desiderata and Prayer of Jabez… definitely. [[I’m apparently increasingly prone to evaluating advice by whether it’s more useful than a few good chunks of the bible. As in, if I can’t outperform a generic bible-literate kindly rabbi/minister, then I maybe shouldn’t be in the business of dispensing advice to a wide audience?]]

    Edited to delete a bunch of useless quibbling.

    • Deiseach says:

      Serenity prayer plus Desiderata and Prayer of Jabez… definitely.

      I dislike the Desiderata because it was over-used as hippy-dippy stuff in the 70s when I was a kid (just like “The Sunscreen Song” in the late 90s, remember that? Every damn time I turned on the wireless, they seemed to be playing the bloody thing, which I found as patronising and platitudinous as the Desiderata):

      Neither be cynical about love/For in the face of all aridity and disenchantment/It is as perennial as the grass. – is it any wonder that, after hearing that warbled over and over again as an impressionable young’un, I formed the opposite view in reaction: that love is nonsense?

      I was formed more in the mould of the Prayer of St Francis than the Prayer of Jabez, a craze I vaguely remember from a while back; but I’d never read the actual prayer, and I have to say that this really brings home to me the difference between American Evangelicalism/Non-denominationalism and Catholicism (as it was in Ireland in the 80s).

      Prayer of Jabez – “Oh that you would bless me and enlarge my border, and that your hand might be with me, and that you would keep me from harm so that it might not bring me pain!” (“enlarge my border/territory” is the part which seems to have been most abused, in the Prosperity Gospel kind of way, and the part that seems most representative of a certain strain of American Protestantism – God as dispenser of worldly favours as a sign of grace).

      I contrast this with the St Francis Prayer, which seems not to be by St Francis at all but rather written in 1912 by a French priest, and which a certain (i.e. my) generation (and probably subsequent ones) learned to sing as Make me a channel of your peace; this was so popular, even Margaret Thatcher quoted it! (the idea of Maggie being a peacemaker and reducing disharmony, in light of the miners’ strike and the Falklands War amongst other things, is highly amusing):

      Lord, make me an instrument of your peace.
      Where there is hatred, let me bring love.
      Where there is offense, let me bring pardon.
      Where there is discord, let me bring union.
      Where there is error, let me bring truth.
      Where there is doubt, let me bring faith.
      Where there is despair, let me bring hope.
      Where there is darkness, let me bring your light.
      Where there is sadness, let me bring joy.
      O Master, let me not seek as much
      to be consoled as to console,
      to be understood as to understand,
      to be loved as to love,
      for it is in giving that one receives,
      it is in self-forgetting that one finds,
      it is in pardoning that one is pardoned,
      it is in dying that one is raised to eternal life

      • yodelyak says:

        Hm. I take your point re: how young people often form opposite impressions of anything they are too often told.

        One thing I was surprised by, in Desiderata, is specifically that “perennial as the grass” line. Something I found myself struck by around age 29, and which remains true at 32 (which I suspect means the feeling is permanent), is that I *still* love all the same people I loved when I was 10, or 14, or 19, or 23, and with the same fervency that I did then. In some cases, since there isn’t anything I can do with that feeling, I have to shut it down every time it comes up by re-triggering the anger and hurt that ended that relationship. So, although I didn’t see it this way as a kid, it’s also a kind of warning. Don’t spread your love too widely–once your heart is given, you can’t take it back. Also, if you find you love somebody, think twice before assuming you can “trade up” and stop loving that person in favor of a different one. You can probably *also* love that additional person, but the old love will still be there if it was ever real, always re-growing up under the new one. Also, if you love someone, and they hurt you, that’s a hurt you may feel your whole life. It’s actually a very conservative idea, that love, once planted, is perennial.

        The prayer of st francis is really great. Thanks for bringing that up–I live among atheists (and more or less am one these days–too much Wittgenstein, and too much thinking about what it means to be “people of the book”, and I find that while my morals and ethics remain Christian, I don’t have the same object-level beliefs as people who expect divine intervention or an after-life.) I doubt I could get “it is in dying that one is raised to eternal life” past the less eternalist people in my life as a wall-hanging, so desiderata it may have to be for me… but dang the prayer of st francis is quite good.

    • Null42 says:

      Christianity has evolved over 2000 years as a response to hard times (Judaism is even older than that). Stands to reason some of it would be helpful in your current situation.

  55. ashlael says:

    Having spent some time working in politics, I feel like the question is more “why does society ever make smart decisions” than “why does society sometimes make dumb decisions”.

    I mean, yes, definitely ask why we use suboptimal nutritional fluid. But also ask why we haven’t thrown ourselves into hyperinflation for no reason like Zimbabwe. Trust me, the answer is not “Because it would be really dumb to do that.”

    • Having spent some time working in politics, I feel like the question is more “why does society ever make smart decisions” than “why does society sometimes make dumb decisions”.

      Society doesn’t make decisions–it isn’t a person.

      Individuals interacting under some sets of rules tend to produce desirable outcomes. Individuals interacting under other sets of rules don’t. Neither half is all that surprising, although perhaps the first more than the second.

      And at best it is only “tend to.”

  56. Eiður Á. Möller says:

    I wonder if a good tactic to ‘raise awareness’ for the right kinds of lipids would be to just scrape anti-vaccination sites, ctrl-F-replace “vaccination” with “killer lipids”, and then create a lot of Facebook groups and Twitter accounts around the ‘conspiracy’ in the most hyperbolic way possible. Maybe sprinkle a little “The government is trying to cover up killer lipids with disinformation about vaccination” on top. Win/win.

    Put hysteria to good use?

    • cactus head says:

      Sounds like it would poison the well for precisely the people you’d want the message to reach.

    • Deiseach says:

      See my longer comment below, but the problem is not “killer lipids”, it’s “you need a mix of lipids; 100% soya oil isn’t enough on its own but neither is 100% fish oil, and pre-term babies need lipids as well as other constituents of nutrition but being pre-term, they’re not yet able to digest or process them properly and in some infants this will be deadly”.

      They are aware of the problem and they are working on it. Unfortunately, running clinical trials involves risks (“In clinical trials of a soybean oil-based intravenous lipid emulsion product, thrombocytopenia in neonates occurred (less than 1%)”), and while a company may be willing to run tests in adults, where the side-effect is “reduced platelet count, meaning lowered ability of the blood to clot”, very few will be willing to risk the bad PR associated with “Heartless Big Pharma used helpless babies as guinea pigs in dangerous experiments!” So running the tests to get FDA approval means they will be even more cautious about any side-effects, and so slower to move on to testing in humans and hence to market. The same goes for the FDA: they don’t want to have to deal with media interviews with sobbing mothers weeping about “Callous FDA murdered my beautiful tiny baby in their uncaring rush to licence a money-making drug because they’re in the pocket of Big Pharma!”

      Whipping up hysteria would have the downside, especially in the litigious USA, of scaring hospitals so badly about being sued for a dead baby that they don’t dare use a new formulation; it’ll be safer and more legally defensible to stick with the “this is what is standard in the field, this is what has been used for forty years” formulation.

  57. onyomi says:

    A bit of a quibble that may, nonetheless, have some broader implications:

    I don’t believe everybody really does hate Facebook. I think people enjoy complaining and that it’s good social signalling to claim to hate Facebook. As soon as it got as popular as Facebook (and therefore claiming to like it offered no status), almost any conceivable Facebook replacement would soon be as “hated” as Facebook.

    If you survey people about the coffee they like, they’ll tell you they like a strong, dark roast with just a little milk and sugar. If you do a taste test, you’ll find people like weak coffee with a lot of milk and sugar. It’s obvious which recipe most businesses will pick (best solution: offer weak, milky, sugary coffee but label it “dark, bold roast”).

    Which is not at all to say I think we secretly like overpriced health care and dead babies. But rather that, any time we see an outcome that “everyone” “agrees” is terrible, the first question may be to ask “do they really?” Sure, in the abstract, everyone wants better, cheaper, more effective health care. But if you actually describe in any detail the kinds of tradeoffs necessary to solve problem x, y, or z with the system, how many people will actually support those changes in the privacy of the voting booth?

    Put another way, any time we see a system “everyone” agrees is “terrible,” there is a good chance there are a lot of people who secretly don’t want to see it change, though they’re not going to come out and say so.

    • Deiseach says:

      Which is not at all to say I think we secretly like overpriced health care and dead babies.

      Right, I looked that up because I found it very confusing: if the currently used parenteral nutrition is deficient in the correct lipids, and everyone knows what the correct lipids are, why does the only FDA-approved source still use the wrong ones? Is it really a case of “it would be too expensive to do trials and the rest of the things necessary to get FDA approval on a revised formulation, and there aren’t enough premature babies to make back the money on this, so we’re not going to bother”?

      Warning: this post is going to have as many links as a Sidles production, sorry about that.

      So far as quick Googling turns up results, here in Ireland we seem to be using an American formulation. So do we not care about dead babies? Or is it a case that (again, as quick Googling seems to indicate) lipids-related problems are a recognised source of difficulty in premature infants, attempts to manage this are made, and there are now new formulations out there? In which case Scott’s information may be outdated or incomplete (e.g. the whole problem of ‘getting premature infants fed’ is so risky that things like reluctance to start lipids early enough, or in sufficient amounts, have been studied; there is at least awareness of the problem out there).

      Despite adoption of a more aggressive approach with amino acid infusions, there still appears to be a reluctance to use early intravenous lipids. This is based on several dogmas that suggest that lipid infusions may be associated with the development or exacerbation of lung disease, displace bilirubin from albumin, exacerbate sepsis, and cause CNS injury and thrombocytopenia.

      An earlier paper also seems to suggest that lipids were viewed as a risk factor for inducing hyperglycaemia:

      Although there has been a trend toward earlier and higher rates of infusion of intravenous lipids for preterm infants, with most now starting at least low-rate infusions (0.5–1.0 g/kg/day) on the first or second day after birth, there has been little study of the benefits or risks of this change in practice. Early lipid infusions augment glucose production and may contribute to hyperglycemia, because beta oxidation of fatty acids promotes gluconeogenesis. Lipid carbon also contributes to hyperglycemia by competing with glucose carbon for oxidation. Lipid emulsions also contain glycerol, which contributes significantly to gluconeogenesis and net hepatic glucose production. More research is needed to determine the balance of benefits of increased caloric production from early lipid infusions versus some of the risks of early lipid administration, for example hyperglycemia.

      Actually, reading that, part of the poor survival rate seems to have been that they weren’t fucking feeding pre-term infants at all, partly because they had no goddamn clue what was the best formulation (can’t blame them for groping in the dark) and partly because:

      Apparently, if there is anything even slightly unusual about an infant or an infant’s condition(s), withholding feeding, either intravenous and/or enteral, is a reflex response. This approach clearly reduces the nutritional intake of infants below that required for growth.

      Paper on an up-to-date view of the matter, which I can’t log into because I don’t have the credentials, but which does have a section on lipids. Getting back to the point: if the problem was “we gave formulations with only soy-bean oil to pre-term babies and this was bad for some of them; we now know a mix of oils is better, but the only FDA-approved stuff still uses only soy beans”, then yes, there are now new recommended formulations out there with fish oil and olive oil as well as soya oil. And it seems that for certain babies with pre-existing conditions (unconjugated hyperbilirubinemia), lipid intake is dangerous unless properly monitored, so some of the “dead babies killed by lipids” may be down to this.

      Anyway, the Irish recommendation is:

      The lipid solution currently used is SMOFlipid®, as it is thought to reduce the incidence of parenteral nutrition associated liver disease (Attard et al., 2012). SMOFlipid® contains fish oils (n-3 fatty acids), which may have anti-inflammatory properties (Schade et al., 2008), and reduce risk of hypertriglyceridaemia and cholestasis

      (SMOF stands for “soya, medium-chain-triglyceride (derived from coconut oil), olive and fish oils” used in the formulation).

      But even that warns for dangers:

      Deaths after infusion of soybean-based intravenous lipid emulsions have been reported in preterm infants. Autopsy findings included intravascular lipid accumulation in the lungs.

      Preterm and small-for-gestational-age infants have poor clearance of intravenous lipid emulsion and increased free fatty acid plasma levels following lipid emulsion infusion. The safe and effective use of Smoflipid in pediatric patients, including preterm infants, has not been established.

      The safety and effectiveness of Smoflipid have not been established in pediatric patients. Deaths in preterm infants after infusion of intravenous lipid emulsion have been reported [see Warnings and Precautions (5.1)]. Because of immature renal function, preterm infants receiving prolonged treatment with Smoflipid may be at risk of aluminum toxicity [see Warnings and Precautions (5.6)]. Patients, including pediatric patients, may be at risk for PNALD [see Warnings and Precautions (5.6)].

      There are insufficient data from pediatric studies to establish that Smoflipid injection provides sufficient amounts of essential fatty acids (EFA) in pediatric patients. Pediatric patients may be particularly vulnerable to neurologic complications due to EFA deficiency if adequate amounts of EFA are not provided [see Warnings and Precautions (5.9)]. In clinical trials of a soybean oil-based intravenous lipid emulsion product, thrombocytopenia in neonates occurred (less than 1%). Smoflipid contains soybean oil (30% of total lipids).

      So once again, it’s “follow the conventional wisdom”; if I’m reading that “not established safe for paediatric patients” warning and I’m not sure the benefits of using this outweigh the risks, I’m going to go with the traditional formulation used by everyone else, even if that may perhaps be riskier for certain infants.

      tl;dr: Lipids-caused deaths in pre-term infants are not so much “we don’t care about dead babies” as “it’s really, really, I mean really, hard to give correct and sufficient nutrition to pre-term infants, who may also have a truckload of other conditions affecting them”.

  58. P. George Stewart says:

    Maybe part of the problem here is that there’s an underlying assumption among rationalists and intellectuals, noted long ago by Karl Popper, that knowledge is easy to recognize once you see it.

    In fact the reason that $20 bill wasn’t noticed for several weeks may be that it was camouflaged by dirt or discarded candy wrappers. As Popper said, you might not even recognize knowledge when you see it. And even “experts” are in this position.

    IOW, the g-related aspect of intelligence (let’s say, broadly, analytical intelligence) is only part of the story – in many cases, the part of the brain that notices (e.g.) an opportunity for arbitrage may not be related to g at all; it might be some random part of the brain that was designed to spot red in a field of green, the algorithm for which gets accidentally gerrymandered into inducing the person to intuitively think that stock x is going to fall, or that tune y is going to be a hit.

    So it’s a mistake to bias everything around g-related modes of thinking, Bayesian reasoning, etc. The old, classical liberal idea is better: we just don’t know in advance who’s going to come up with a brilliant idea (e.g. relating to the good life), and it might not necessarily come from someone with high intelligence (and the status that comes with that). Therefore the market, for all its undoubted flaws, is still the least-worst solution (and of course any other system in any other field that has the same “throw stuff against the wall and see what sticks” method, or generate-and-test method, like evolution itself, like the immune system, like the brain, perception, etc.).

    I’m inclined to take “expertise” in any field that doesn’t work along market/evolutionary lines as more related to a mixture of social status and efficiency than as predictive of outcomes. The reason your average professor bins hopeful amateur solutions should be simply for the sake of efficiency in their own time management, NOT because they don’t think some amateur might come up with a stunning solution to something. Also, chances are there are many, many unknown “amateur” solutions to problems out there that might well be viable, or viable with a bit of help; but the problem is that their acceptance might require re-jigging of the status structure, and in most cases that’s almost impossible.

    Or to put this another way, socially-recognized experts in various fields are as often selected for their ability to adroitly climb social status ladders as they are for expertise – that’s really what messes up the expected benefits of socially-recognized expertise institutions.

    • The old, classical liberal idea is better: we just don’t know in advance who’s going to come up with a brilliant idea (e.g. relating to the good life), and it might not necessarily come from someone with high intelligence (and the status that comes with that).

      This reminds me of one of the central ideas of How China Became Capitalist by Coase and Wang. The authors argue that most of the important changes, such as the shift to something close to private property in agriculture, were not things planned by the authorities. They were “marginal revolutions,” things that happened at low levels. What the authorities got right was not suppressing them.

      The authorities, the people at the top after Mao died, knew that socialism was the best economic system. They also knew, once they were free to go abroad, that their super socialist system had left China extraordinarily poor compared to other countries. Their conclusion was that socialism was great but they must not be doing it right, hence that, along with deliberately making changes that they thought might help, they should let things happen and see if they worked.

      The authors pretty clearly thought that doing that worked better than if the Chinese had tried to restructure their system based on the advice of western economists, even if the economists had been from Chicago rather than Harvard.

  59. Deiseach says:

    From what I can tell, status regulation is a second factor accounting for modesty’s appeal, distinct from anxious underconfidence. The impulse is to construct “cheater-resistant” slapdowns that can (for example) prevent dilettantes who are low on the relevant status hierarchy from proposing new Seasonal Affective Disorder treatments. Because if dilettantes can exploit an inefficiency in a respected scientific field, then this makes it easier to “steal” status and upset the current order.

    Okay, I know I’m asking for a slap-down here, but this does sound like motivated reasoning: “the only reason my Magic Cure won’t be taken up by The Establishment is because I don’t have the relevant Status-Signalling Credentials so it’s no good me even trying to get this out into the mainstream, so I’m not going to even try because I know I’ll just be insulted and degraded”.

    This is not “taking an example, extracting a principle, and applying it to meta”, this is “I see no point in modesty because I AM A GENIUS and the only reason – the ONLY reason – my genius is unrecognised is because those jealous drones who somehow, for some unknown reason, managed to get into a gate-keeping position on new discoveries, are too obsessed with maintaining their status and hence too threatened by my GENIUS”.

    Stuff like this is what makes it hard for people who did not go the Less Wrong/Sequences route to take Yudkowsky seriously. Every crackpot says they’re an unrecognised genius, and that it’s only because the jealous small-minded carp about them not having a degree in the relevant field that they can’t get their paper on perpetual motion published in the prestigious journals. Applying the Ypsilanti Protocol*, why should I believe Yudkowsky really has cracked the problem with conventional treatment-resistant SAD and isn’t yet another crackpot?

    *

    The Three Christs Of Ypsilanti is a story about three schizophrenics who thought they were Jesus all ending up on the same psych ward. Each schizophrenic agreed that the other two were obviously delusional. But none of them could take the next step and agree they were delusional too. This is a failure of Outside-View-ing. They should have said “At least 66% of people in this psych hospital who believe they’re Jesus are delusional. This suggests there’s a strong bias, like a psychotic illness, that pushes people to think they’re Jesus. I have no more or less evidence for my Jesus-ness than those people, so I should discount my apparent evidence – my strong feeling that I am Him – and go back to my prior that almost nobody is Jesus.”

    • Ilya Shpitser says:

      Look, the entire frame of “unrecognized geniuses” is about recognition, i.e. being given the status one is due.

      There are standard ways of getting status, like doing impressive stuff everyone agrees is impressive. That’s what intellectuals/academics do.

      People who frame themselves as an unrecognized genius generally are trying to grab some unearned status, basically.

      If a genius did not want recognition (s)he would presumably outwit the world out of some utility and live happily ever after. Lots of folks do this. I know a few world-calibre minds that are quietly making their millions.

      • nimim.k.m. says:

        People who frame themselves as an unrecognized genius generally are trying to grab some unearned status, basically.

        I like this description (as I pointed out, I sometimes find E.Y.’s writings unbearable to read).

        When you cast yourself in the role of an unrecognized genius, you are simply starting another status game: not the standard “I’m very accomplished by the traditional metrics of status”, but the one where you present yourself as the Quixotic hero battling against the windmills of the evil establishment. This is how you can also attain “status”, or less abstractly, sympathy and recognition amongst the people who buy your narrative. This works because while humans like the narrative of “venerate and listen to the elders”, we also like narratives where we can “root for the heroic underdog”.

        And as someone commented downthread, it’s not as if the academic world (while certainly imperfect) is a genuinely gated garden that only the members of ivory towers can enter. A high schooler can submit articles [as a 1st author!] to conferences and arXiv (if they manage to get in contact with someone in academia).

        • nimim.k.m. says:

          Thinking about this, I’d like to mention, as the classic literary reference, the novel The Pickwick Papers by Charles Dickens, where Mr. Pickwick is considered a great genius and scientist by the members of the club named after him.

          A casual observer, adds the secretary, to whose notes we are indebted for the following account—a casual observer might possibly have remarked nothing extraordinary in the bald head, and circular spectacles, which were intently turned towards his (the secretary’s) face, during the reading of the above resolutions: to those who knew that the gigantic brain of Pickwick was working beneath that forehead, and that the beaming eyes of Pickwick were twinkling behind those glasses, the sight was indeed an interesting one. There sat the man who had traced to their source the mighty ponds of Hampstead, and agitated the scientific world with his Theory of Tittlebats, as calm and unmoved as the deep waters of the one on a frosty day, or as a solitary specimen of the other in the inmost recesses of an earthen jar.

  60. Polymath says:

    Most accidents and moving violations come from relatively few bad drivers. So I can be an above-average driver (fewer accidents than the mean) even if my skill is below the median, and most people can be above average in the sense of having fewer than the average number of accidents and moving violations.
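
    A minimal sketch of that arithmetic, with an invented accident distribution:

    ```python
    from statistics import mean, median

    # Hypothetical counts: most drivers have no accidents, a few have many.
    accidents = [0] * 90 + [1] * 5 + [10] * 5  # 100 drivers, invented numbers

    print(mean(accidents))    # 0.55 -- dragged up by the few bad drivers
    print(median(accidents))  # 0    -- the typical driver
    # 90 of the 100 drivers have fewer accidents than the mean: "above average".
    ```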

  61. Null42 says:

    I wonder if one problem is that rationalists simply aren’t evil enough.

    Think about it. You want to be right, to find the truth, and to help everyone around you. You’ve hived off an effective altruist movement that aims to find more effective ways of doing good. *You’re good people*, at a base level; Eliezer apparently channels his lust for power into consensual kink, to give one of the few examples I’m familiar with. And while you obviously don’t think everyone thinks that way, a lot of your writing seems to think that systems are inefficient and the perfect system could solve the problem. If we knew how to do things the right way, we could and would do the right thing.

    But what if the system evolved because it survives and it benefits the people in power to keep it in place? What if you assume most people are just trying to survive and the people on top are trying to prosper? What if it’s not a system coordination problem or some other defect of the system, but rather that the system only exists because it pleases the people in power and nobody else is able to change it? Eliezer addresses this in his reason #1 for evil, but I think it deserves more weight and investigation.

    There’s a mental fallacy (I am sure the rest of you know the name, it escapes me) where you assume everyone thinks similarly to you. Maybe rationalists simply lack dark tetrad traits–not enough sociopathy and Machiavellian cunning? (Narcissism and sadism seem of limited use here, but maybe I’m missing something.)

    • The Nybbler says:

      What if it’s not a system coordination problem or some other defect of the system, but rather that the system only exists because it pleases the people in power and nobody else is able to change it?

      Then you’re a cynic.

      Maybe rationalists simply lack dark tetrad traits–not enough sociopathy and Machiavellian cunning? (Narcissism and sadism seem of limited use here, but maybe I’m missing something.)

      If you have the dark triad traits, you have no reason to try to change the system; it’s much easier for you to just step on the suckers and do well in the current system, as you only have to battle a few of the current winners at a time, not all of them.

      • Null42 says:

        What’s wrong with being a cynic? Human history seems to include a lot of evil.

        More practically, we accept as believers in evolution that human beings have traits that have evolved to propagate their genes, rather than necessarily for the flourishing of the species as a whole. So why not assume they would be primarily concerned with their own self-interest and that of their immediate families, rather than humanity as a whole, and try our best to work around it?

        If you have the dark triad traits, you have no reason to try to change the system; it’s much easier for you to just step on the suckers and do well in the current system, as you only have to battle a few of the current winners at a time, not all of them.

        If you have all the dark triad/tetrad traits, yes. You are describing quite well the way many powerful people behave, I think. But it’s not an on-off switch, and some people have some but not others. Don’t you think more Machiavellianism at least would be useful in trying to understand the ways of malefactors and build systems that safeguard against them? I’m basically trying to point out a possible systemic cognitive error that may arise from ‘sampling bias’ in terms of who makes up rationalist communities. (Outside View?)

        • So why not assume they would be primarily concerned with their own self-interest and that of their immediate families, rather than humanity as a whole, and try our best to work around it?

          That is more or less what economists do assume. I haven’t read the book, but judging by Scott’s review Eliezer is working within that framework.

          The question then is why and under what circumstances that assumption implies good or bad outcomes.

  62. brodix says:

    I have this problem of coming up with interesting ideas, but can rarely even find people willing to debate them.
    Probably the most elemental is that western culture views time backwards. It isn’t the present ‘moving’ past to future, but change turning future to past. As in, tomorrow becomes yesterday because the earth turns. This makes time more like temperature, color, pressure, etc. than space. What we measure, duration, is the present, as events coalesce and dissolve.
    Suffice to say this is not a popular point in physics discussions.
    Another is the logic of God. Logically a spiritual absolute would be the essence of sentience from which we rise, not an ideal of wisdom from which we fell. More the newborn than the wise old man; but since religion is more about social order than spiritual insight, it works better to build it around wisdom than consciousness.
    Which gets me in trouble with both theists and atheists.
    Another idea is that as a medium of exchange, money is the social contract commodified. It is like blood in the circulatory system and, as such, is a public utility. Yet not like government, which is like the central nervous system. The problem is trying to store it, which would be like storing blood. The body can store fat, but not blood. So the government is incentivized to borrow up excess money, when it should threaten to tax it and make people store value in other ways, like stronger communities, better infrastructure, healthy environments, etc. Instead we have this metastatic economic circulation system sucking value out of everything and rotting in its excess.
    Just some of the ideas I can’t get much conversation on.

    • Null42 says:

      So the financial system is a metastatic angiosarcoma with necrotic changes in the areas of maximum tumor mass? Actually, I’d buy that.

      • brodix says:

        Hmm.
        That might be an effective analogy. I would go with a cross between internal bleeding and high blood pressure.
        When groups were small, economics was organic reciprocity, but accounting is necessary for large groups. As such, money is a social contract.
        People want to save as much as possible, so it gets pulled out of circulation and more has to be added.
        Think how much of this “wealth” is stored as government debt, and what the odds are that it will ever be paid out. Along with all the other forms of notational value out there. Not to mention all the real social and environmental value being destroyed to create it; the level of delusion is monumental.
        Most people save for the same general reasons, from raising children to retirement, so if we did this as a communal function, rather than trying to do it individually, it would go back to that original reciprocity. Make banking public.

        • Deiseach says:

          People want to save as much as possible, so it gets pulled out of circulation and more has to be added.

          But that is because (1) people need money to live, and their future needs depend on having enough money because prices will probably rise as the cost of living goes up, yet their earning capacity will either stagnate or diminish (they have to retire, either voluntarily or not; in some jobs you can keep working till you drop dead, but in most you will either be physically unable to keep working at your most productive level, your knowledge/skills will become outdated, younger replacements will come along, or you’ve been promoted as high as you can go). Governments are urging people to take out private pensions to make sure they have enough to live on come retirement age, so people are encouraged all round to be prudent, plan for the future, and sock that spare cash away in some form so they have a nest egg for tomorrow. And (2) that stored money will eventually go back into circulation; you will spend it on your kids (education, college funds, helping them out with loans to rent or buy their own homes) and on meeting your own needs in the future (the money to live on when you are no longer earning in a job). Even if you plan to use that money in your seventies to go on expensive cruises and live a life of ease and enjoyment, you’re still going to spend it, or if you have any left over, pass it on to your kids as their inheritance.

          The circulation problem (don’t squirrel that money away today for tomorrow) is a hard nut to crack; nations have tried to crack it by, for example, guaranteeing public pensions for everyone (so they have a minimum income in old age), but that has hit the barrier where now governments are saying “We won’t have enough to pay out in the future, so take out your own pension plan now in your prime earning years”, which makes “save your money and don’t spend it” even better advice, and takes even more out of circulation.

        • People want to save as much as possible, so it gets pulled out of circulation and more has to be added.

          People don’t generally save by accumulating a stack of currency under their mattress. They save by accumulating claims to interest-bearing assets.

    • Tracy W says:

      So the government is incentivized to borrow up excess money, when it should threaten to tax it and make people store value in other ways, like stronger communities, better infrastructure, healthy environments, etc.

      And yet Singapore generally runs a government surplus, has low taxes, and doesn’t seem to be doing too badly in the infrastructure line. Other countries that often run government surpluses include Norway and Denmark, again not doing terribly economically.

    • Harry Maurice Johnston says:

      It isn’t the present ‘moving’ past to future, but change turning future to past. Suffice to say this is not a popular point in physics discussions.

      Rightly so, I’m afraid, as it would be wildly off-topic unless you can express what you’re talking about mathematically or explain what observable differences there would be between the conventional model and your idea. (Preferably both.)

      However, you may be interested in the existence of timeless physics, in which, roughly speaking, time passes because the universe expands.

      • Viliam says:

        time passes because the universe expands.

        Correct me if I am wrong, but I didn’t find this in the linked article. The expansion of the universe is only used as an argument that the state of the universe never repeats itself.

        • Harry Maurice Johnston says:

          I haven’t read the original research, just Eliezer’s summary of it, but it is my understanding that there are thought to be fairly deep connections between time as we experience it, entropy, gravity, and the expansion of the universe. That understanding would undoubtedly have influenced my interpretation of what Eliezer was saying; whether it made it more or less accurate is an open question. 🙂

          Now that you’ve mentioned it, I would guess that you could in principle do timeless physics without referencing the expansion of the universe, but I suspect that attempting to do so would make it significantly more difficult to do anything very useful with it.

          [Epistemic status: based entirely on intuition and vague recollections of stuff I was told decades ago. I do have a PhD in physics, but that was a long time ago and wasn’t directly related to this subject to begin with.]

  63. John B says:

    Eliezer and Robin may be smarter than I am, but are they smarter than Albert Einstein, Isaac Newton, Galileo, Aristotle, and all the other believers of the past? One problem with extending the “listen to the smartest people” sort of argument to philosophical questions like the existence of God is that these attitudes are very much culturally determined. If everybody you read or talk to on a given question comes from the same culture, then you are necessarily going to get the answer that culture promotes. Scott, Eliezer and Robin all belong to the culture of 21st-century rationalists, so whether you want to take their ideas about God or ethics seriously depends on how you feel about the assumptions of 21st-century rationalist culture. The consensus of very smart people from other cultures is quite different from the one they share. How we would know that we have more insight into the existence or non-existence of God than people of the classical world or the Renaissance or the Enlightenment or the early 20th century, I have no idea.

    • pelebro says:

      “The word God is for me nothing more than the expression and product of human weaknesses, the Bible a collection of honourable, but still primitive legends which are nevertheless pretty childish. No interpretation no matter how subtle can (for me) change this. These subtilised interpretations are highly manifold according to their nature and have almost nothing to do with the original text. For me the Jewish religion like all other religions is an incarnation of the most childish superstitions.”
      That’s an Einstein quote.

      • John B says:

        Wikipedia: “Einstein stated that he believed in the pantheistic God of Baruch Spinoza.” I suspect your quotation refers to his distaste for the sort of personal god who responds to prayers; Einstein thought the laws of nature were immutable. But he talked about God a lot, most famously in “God does not play dice.”

        • Protagoras says:

          And Sartre has a famous story set in hell. It almost sounds like you don’t know what pantheism is, though I’ll be charitable and assume that you are just failing to understand how Einstein’s version of it worked. It’s possible to be a pantheist and be superstitious, after all; it’s just that Einstein wasn’t, and with some plausibility interpreted Spinoza as having been similar to himself. Einstein had an attitude of reverence toward the universe as a whole. He did not believe that there was anything beyond or behind that universe, anything outside the realm of science. When he said “God does not play dice,” he was not presenting an argument that randomness does not exist (he did that elsewhere); he was merely asserting that randomness does not exist, in a somewhat quirky pantheist idiom.

        • Glen Raphael says:

          @JohnB:

          Einstein stated that he believed in the pantheistic God of Baruch Spinoza

          …Which is a sneaky/smart way of saying that he doesn’t believe in God.

          Spinoza essentially defined “God” as “the universe”. Since the universe exists, a follower of Spinoza can say that “God” exists without being committed to believing in anything that even vaguely resembles biblical teaching on the subject.

    • grendelkhan says:

      are they smarter than Albert Einstein, Isaac Newton, Galileo, Aristotle, and all the other believers of the past?

      Einstein didn’t believe in God in any way that would have been recognized by the other people on that list. I don’t think that’s a coincidence.

      We underestimate, these days, just how big a shock it was to discover the true age of the earth and the true origin of species. The watchmaker argument was legitimately convincing until the nineteenth century; there was a true unknown in there, and for all we knew, we could have been created by a loving god to carry out His will on earth.

      Discovering that, so far, everything we thought to be noumenal and mysterious actually has reductive explanations that are usually so complex and boring that either you need Carl Sagan himself explaining them to keep people interested, or most people will just ignore the findings… well, that’s the sort of Outside View logic that changes things.

      When the great scientists of the nineteenth century embarked on a quest to discover our Creator, they did so in the expectation that they’d find the Christian God. They didn’t, and everything that followed was different.

      • Deiseach says:

        The watchmaker argument was legitimately convincing until the nineteenth century

        The watchmaker argument is not fundamentally incompatible with evolution etc.; the “wind it up and set it going, then leave it alone to tick away by itself” distant watchmaker deity can be reconciled with the geological age and origin of species, “winding it up” being initial creation of the universe (and its underlying fundamental laws that govern energy and matter) and then, once no longer intervening or paying attention to what is going on in the creation, the natural laws/laws of physics operate in the same way as the gears of a watch operate blindly and mechanically and we get species arising, evolving, competing, and dying out.
        Deism could tolerate this; Theism not so much, and the conception of the Christian God as something more than a vague animating force was opposed to it.

        The big shock was to the notion of a personal, interventionist God (as per the Nick Cave song) and more particularly to a strain of Biblical literalism, but the new discoveries could be reconciled with a broader theological conception. To quote Hilaire Belloc from “Survivals and New Arrivals” (a work published in 1929, interesting to read to see how he considers exploded some attacks which have since revived and are still, or again, flourishing in our day):

        The Literalist believed that Jonah was swallowed by a right Greenland whale, and that our first parents lived a precisely calculable number of years ago, and in Mesopotamia. He believed that Noah collected in the ark all the very numerous divisions of the beetle tribe. He believed, because the Hebrew word JOM was printed in his Koran, “day,” that therefore the phases of creation were exactly six in number and each of exactly twenty-four hours. He believed that man began as a bit of mud, handled, fashioned with fingers and then blown upon.

        These beliefs were not adventitious to his religion, they were his religion; and when they became untenable (principally through the advance of geology) his religion disappeared.

        It has receded with startling rapidity. Nations of the Catholic culture could never understand how such a religion came to be held. It was a bewilderment to them. When the immensely ancient doctrine of growth (or evolution) and the connection of living organisms with past forms was newly emphasized by Buffon and Lamarck, opinion in France was not disturbed; and it was hopelessly puzzling to men of Catholic tradition to find a Catholic priest’s original discovery of man’s antiquity (at Torquay, in the cave called “Kent’s Hole”) severely censured by the Protestant world. Still more were they puzzled by the fierce battle which raged against the further development of Buffon and Lamarck’s main thesis under the hands of careful and patient observers such as Darwin and Wallace.

        So violent was the quarrel that the main point was missed. Evolution in general — mere growth — became the Accursed Thing. The only essential point, its causes, the underlying truth of Lamarck’s theory, and the falsity of Darwin’s and Wallace’s, were not considered. What had to be defended blindly was the bald truth of certain printed English sentences dating from 1610.

        • The watchmaker argument is not fundamentally incompatible with evolution etc.;

          The watchmaker argument, at least as I understand it, is not “here is a watch–it is possible that there is a watchmaker.” It is “here is a watch, that implies the necessity of a watchmaker.”

          Darwinian evolution doesn’t prove that God isn’t responsible, it eliminates a powerful argument to prove that he must be.

          • Bugmaster says:

            Arguably, the Watchmaker Argument wasn’t really all that powerful to begin with. Saying “I can’t imagine how X could happen therefore God made X happen” is an extremely convincing argument — legitimately so — but only to someone who already believes in God.

        • Bugmaster says:

          In addition to what DavidFriedman said:

          If I were a theist, I’d find the trend a little alarming (though I’m not one, so there may be some theistically obvious reasons why that’s silly). In ancient times, the average person really did believe that gods were basically super-powered people, who lived on a really tall mountain, or up in the sky, or deep in the underworld. This wasn’t a result of some complex philosophical chain of reasoning; it was just an obvious fact. It was a literal fact too, not a metaphor or a simile or some other clever cognitive trick.

          But as the background level of knowledge in society grew (slowly at first, then faster as the scientific method got developed), gods kept receding. Now they weren’t up on a mountain, but maybe in outer space, or in another dimension. And as time went on, religions had to continuously adjust. Outside of some die-hard fanatics, few people could sustain the belief that angels are physically pushing planets around, or whatever. Gods moved farther away, and got weaker.

          The process continues today, and now most people (in the West, at least) believe in a vague semi-deistic god, who set up the Universe then went away; and who intervenes in human lives (assuming he does at all) in ways that are indistinguishable from chance. Religions are not leading this change in zeitgeist; instead, they are continuously playing catch-up to science. Even in areas that have been traditionally the province of religion, such as morality, science is leading the way.

          If I were a theist, this would worry me. Will God recede even further into obscurity in the near future? How far will he go? Does this mean that my entire belief system is wrong? If not, how do I know that my current concept of God is correct, unlike all those previous ones that were thought to be totally correct but turned out to be hopelessly naive by modern standards? Or were they correct after all, which would make our modern society irrevocably damned? From a more pedestrian point of view, do I blaspheme every time I use the GPS on my phone?

          • grendelkhan says:

            Well said–it’s a given these days that being religious entails a certain level of cognitive dissonance, of doublethink, of non-overlapping magisteria. But it didn’t always. This is one way it’s easy to forget just how different the past was.

    • Tracy W says:

      Eliezer and Robin may be smarter than I am, but are they smarter than Albert Einstein, Isaac Newton, Galileo, Aristotle, and all the other believers of the past?

      What relevance does this have? We have access to far more knowledge than Einstein, Newton, Galileo, and Aristotle. We can learn things from other people much faster than we can figure things out for ourselves.

  64. t mes says:

    I’m interested in people’s take on cryptocurrency as a $20 bill lying on the ground in Grand Central Station. Personally, I have referred to Bitcoin negatively on economic grounds since it was at 14 cents. At $100, I started calling it out as an outright bubble. At $7000, I admitted that perhaps I, along with all those economists, several of whom had Nobel Prizes, may have been too dogmatic, but perhaps that is simply the evolution of a greater fool in the making.

    • Nornagest says:

      The efficient markets hypothesis only works when you have enough eyes on the problem and no big structural issues in the way, and that wasn’t true for most of Bitcoin’s history — all the way up through its first big bubble in 2013 and for some time after, the only people that really knew or cared about it were nerds.

      That’s increasingly less true, but I still don’t think efficient markets are a good reason to be skeptical of it — maybe in a year or two, when you can buy Bitcoin futures on eTrade with minimal hassle, but not now. That’s not to say that there aren’t reasons to be leery of it — I’m not putting any more money into Bitcoin right now, though I do own a stake that I have no immediate plans to liquidate — but model uncertainty and basic too-good-to-be-true skepticism seem much more relevant in the current environment.

    • Deiseach says:

      I think most people are operating on “nothing can continue to rise in value indefinitely”, and I think that is true. For Bitcoin to be the exception would be extraordinary, so there has to be a ceiling somewhere and it has to hit it eventually.

      The problem then is “how do we know when ‘eventually’ is?” People have been predicting a crash for a while now and it hasn’t come. So does this mean that, because Bitcoin is a cryptocurrency and not like any other form of exchange in human history to date, it will be the exception to the rule? That it will continue to rocket up in value, and even if it does hit a ceiling, it will stick there and not crash? That’s the hard part to work out.

      And Bitcoin is volatile; it’s had crashes and recoveries already. So again, is the next crash (because there is going to be a next crash) going to be followed by a rally, or will it be the final crash, after which Bitcoin is worth bare cents and everyone who mortgaged their house to speculate in the currency loses their shirt?

      Or is Bitcoin the first example, but not the ultimate one – it will be replaced in turn by a better version that has worked out the bugs and from the start operates more stably and sensibly in price than Bitcoin, so no extravagant returns by speculating on the increase in value but equally no crashes where it loses all value?

      Unless you have what are good, solid and satisfactory answers (at least to your own satisfaction) to these questions, I think investing in Bitcoin right now in the middle of the frenzy is a bad idea. If it survives, I think it will hit a stable level but nowhere near the massive returns in value as right now, which means that as a ‘get rich quick’ scheme it suffers the same flaws and penalties as most get rich quick schemes.

      I’m too ignorant of markets and economics to be able to answer any of these questions (will it stabilise and survive, is there a final crash coming and if so when, will it be replaced by a new cryptocurrency the same way that online social media like MySpace and Bebo fell out of favour and were replaced by new popular sites, how do you play the market and make money trading in this thing) so I’m leaving it up to experts because I have no other way of deciding if it’s worth it or not. (Even if I had the money to invest in it, I wouldn’t, because it seems too dodgy but as I said, I know nothing).

  65. fortaleza84 says:

    I’m pretty sure that collective action problems, agency problems, and the like as described by EY are generally referred to by economists under the rubric of “transactions costs.” I haven’t read his book, but does he cite the well established body of scholarship on these issues? Does he use the established terminology? And does he say anything new? If not, then how is he any different from the biology student who mistakenly believes he has a Nobel-worthy insight?

    • Sniffnoy says:

      Eliezer isn’t really trying to say that much new here. He’s not claiming to have come up with the idea of collective action problems and transaction costs. In the dialogue that makes up much of the book, the point of view that the book is pushing is put in the mouth of a “conventional cynical economist”. Same as with the sequences — very little of that was new either, and Eliezer has always admitted this.

      • fortaleza84 says:

        In his book, does he cite the pre-existing literature? Does he use the generally accepted terms for the phenomena he describes?

        • Sniffnoy says:

          In his book, does he cite the pre-existing literature?

          He doesn’t, not to any real extent.

          Does he use the generally accepted terms for the phenomena he describes?

          As best I can tell mostly yes, but this isn’t my area. He definitely misses at least one, though — when he mentions assurance contracts he doesn’t mention them by name. Pretty disappointing. I pointed this out in the comment thread on LW but it isn’t fixed in the book. It’s possible he drops the ball similarly elsewhere as well and I just don’t know enough to notice.

    • Eponymous says:

      I’m pretty sure that collective action problems, agency problems, and the like as described by EY are generally referred to by economists under the rubric of “transactions costs.”

      Nope.

      I haven’t read his book, but does he cite the well established body of scholarship on these issues?

      He seems familiar with it. It was surprisingly good, and I’m an economist. He indicates he had an economist read it over, and I believe him.

      Does he use the established terminology?

      Hahahaha. Of course not. He’s Eliezer, of course he uses neologisms. But I’m willing to forgive that. What does it matter what you call a thing? Except that it gives people google keywords.

      Here you go: market failure, agency costs, lemon problem, moral hazard, adverse selection, Pareto optimality, inefficient equilibria, asymmetric information, etc., etc.

      And does he say anything new? If not, then how is he any different from the biology student who mistakenly believes he has a Nobel-worthy insight

      It’s not deserving of a Nobel, but it was a worthwhile read. Plenty of interesting nuggets. Besides, the point is the applied epistemology, not the economics.

  66. Don P. says:

    I found this an interesting use of language:

    Eventually, even though he still thought the presentation was really convincing, he accepted that he was probably a typical member of the group “people impressed with time-share presentations”, and almost every member of that group is wrong. So even though my father thought the offer sounded too good to be true, he decided to reject it.

    That seems to give the wrong valence to that expression, and you’d generally be more likely to say that something was rejected because it was TGTBT. But not only do we understand what’s being said, it doesn’t seem all that wrong even when we look at it closely.

  67. grendelkhan says:

    I haven’t yet read the book, so this may be covered in there, but I’d like to call out academic publishing as a clearly awful situation in the style of Facebook. There are a few prestigious publications; they’re prestigious because important research gets published there, and if you have research, you know it’s important because it’s published in a prestigious journal.

    This isn’t so bad, except that the journals are all owned by rent-seeking monstrosities intent on lining their own pockets despite being the one actor in the research pipeline (apart from researchers and reviewers) which produces marginal-at-best value. So the market is captured, and scientists will have to send ever more of their budgets to a few fat cats.

    It’s instructive to look at the ways in which this bad equilibrium, though still present, is incomplete. Consider end-runs like the arXiv (publish papers for free in addition to with your journal), the open-access PLoS journals funded by a wealthy foundation, and perhaps most importantly, Sci-Hub, which makes infringing the publishers’ copyrights trivial, and solves the problem for academics doing research, though perhaps not for those publishing it.

    • Sniffnoy says:

      Don’t forget arXiv overlay journals. That’s something aimed at actually taking down this equilibrium.

  68. Jameson Quinn says:

    OK, so Eliezer has identified and described 3 ways evil enters the world. Obviously, the next step is to patch those leaks.

    Way 1, bad incentives: somebody should just fix those incentives! But they don’t! Probably because of a bad Nash equilibrium. Reduces to way 3.

    Way 2, bad lines of communication. This seems to be something that actually does tend to fix itself given enough time. Where “fix itself” may mean “somebody who cares fixes it”.

    Way 3, bad Nash equilibria. Solution, better mechanism design!

    I do not have a general recipe for better mechanism design, but I do at least have a good example: voting methods.

    For single-winner elections, FPTP is horrible, IRV is a bit better, Approval is closer to perfect than it is to FPTP, and things like 3-2-1 voting, star voting, or others are probably roughly as close to perfect as we can get.

    For multi-winner elections (the problem of representation): sortition and liquid democracy are each in their own way perfectly ideal. Given the constraints of most constitutional republics, neither of those is possible, but pretty good approximations can be made. PLACE voting is a method that does pretty well and was designed to be politically viable too. There is a non-crazy plan for getting multiple US states using PLACE by 2022, and a separate plan for Canada. This would significantly improve the Nash equilibrium problem for significant portions of human experience; if achieved, it would have an asset-like value of well over a trillion dollars at reasonable QALY-to-dollar exchange rates.

    Electology.org is working on this problem. I’m on the board. I think that donations and involvement there are an effective form of altruism.

    • Do any of your voting methods solve the problem that voters are rationally ignorant? If not, while you can expect different methods to produce different outcomes, why would you expect any particular method to produce much better outcomes?

      I have a preferred method–I’m guessing it’s what you call “liquid democracy.” But I prefer it on aesthetic grounds, not because I have any good reason to think it would work better than what we now have.

      • Jameson Quinn says:

        Let’s say that voter preferences are drawn from some distribution that is both biased (predictably irrational) and noisy (unpredictably irrational and/or ignorant) relative to the true ideal according to some omniscient moral calculus. Good democratic systems help cancel out the noise, so that the total mean-squared error is only the bias. Bad democratic systems instead convert the noise into further bias. Since the outcome space is high-dimensional, the chance that the new bias cancels out the old bias is very low, so the expected MSE goes up. Fixing that would drive it down.

        This is of course an oversimplified back-of-the-envelope model. But it gets some of the essential intuition. One thing it misses is the time dimension: bad equilibria tend to be over-calcified, meaning we get at best the solutions to problems a decade or two out of date.

        Anyway, better democracy doesn’t guarantee perfection by a long shot but it’s better than bad democracy.
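
        A back-of-the-envelope simulation of that model (all numbers invented; the “bad system” here is modeled, purely for illustration, as one that effectively hears only a skewed tenth of the electorate):

        ```python
        import random

        TRUE_IDEAL, BIAS, NOISE = 0.0, 1.0, 5.0
        N_VOTERS, N_TRIALS = 1000, 2000

        def electorate():
            return [TRUE_IDEAL + BIAS + random.gauss(0, NOISE) for _ in range(N_VOTERS)]

        def mse(outcomes):
            return sum((x - TRUE_IDEAL) ** 2 for x in outcomes) / len(outcomes)

        # Good system: aggregates everyone, so noise averages out; only bias remains.
        good = [sum(v) / N_VOTERS for v in (electorate() for _ in range(N_TRIALS))]

        # Bad system: hears only the loudest tenth, converting noise into new bias.
        k = N_VOTERS // 10
        bad = [sum(sorted(v)[-k:]) / k for v in (electorate() for _ in range(N_TRIALS))]

        print(mse(good))  # approaches BIAS**2 = 1.0
        print(mse(bad))   # much larger: the noise has become bias
        ```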

        • Doctor Mist says:

          better democracy doesn’t guarantee perfection by a long shot but it’s better than bad democracy

          While my knee-jerk response is to agree with this, I do note that there are a lot of assumptions packed into it. For one, it implies that a more faithful capture of the electorate’s bias (predictable irrationality) is a good thing, while I can at least imagine that the noise (unpredictable irrationality) is the only thing that keeps the bias from making us hare off onto truly awful extreme paths.

          Even if one grants that democracy is the worst system except for all the others, it may not be obvious why that is true.

          • Jameson Quinn says:

            Bad democracy is “noise” in that it is not correlated with signal, or at least not obviously so. It isn’t “noise” in the sense of being unpredictable; for instance, Duverger’s Law is awfully simple. So bad democracy is no kind of hedge against Moloch; it _is_ Moloch.

    • Sniffnoy says:

      For single-winner elections, FPTP is horrible, IRV is a bit better, Approval is closer to perfect than it is to FPTP, and things like 3-2-1 voting, star voting, or others are probably roughly as close to perfect as we can get.

      So I wasn’t familiar with 3-2-1 or star or PLACE and had to go look those up (actually I couldn’t find anything about star; the only thing I could find by that name was a system for secure electronic voting, i.e. not a voting system in the sense being discussed here, though certainly it could be combined with one). What is the advantage supposed to be of 3-2-1 over something simpler like approval or range voting? It just seems kind of hacked-together and arbitrary.

      The idea of delegated voting for people who don’t want to fill in a whole ballot for each position though is a good one. That could really be incorporated into just about any voting system! That should really be better known, I’d never heard of it before, outside the context of something like liquid democracy which is entirely based around delegation…

      • Jameson Quinn says:

        STAR voting stands for “Score then automatic runoff”. It’s pushed by the equal.vote coalition (Mark Frohnemayer, based in OR).

        As for “what’s good about 3-2-1”: I’m going to first give the quick version of the actual argument, then afterwards do a bit of arguing from authority to try to dissuade you from leaping to believe naive objections to my argument.

        When you’re dealing with single-winner voting methods, the pathological case that drives all the important impossibility proofs (Arrow, Gibbard-Satterthwaite, etc.) is a Condorcet cycle (CC). But honest Condorcet cycles are rare enough to be negligible. The potentially-pathological cases that actually occur in real life are center squeeze and the chicken dilemma. In numvoters: preference-order format, here’s a center-squeeze (CS) scenario (a pairwise tally of both profiles follows in the sketch after the second one):
        33: A>B>C
        22: B>A=C
        44: C>B>A
        Group B is smallest, but B is the Condorcet winner. The “right” winner is probably B. Here’s a chicken dilemma (CD):
        33: A>B>C
        22: B>A>C
        44: C>A=B
        Groups A and B have to cooperate in order to beat C, but can get a relative advantage from unilaterally betraying each other. The “right” winner is probably A.
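
        Here’s the quick pairwise tally, for anyone who wants to check those verdicts (a minimal sketch; equal rankings like A=C count for neither side):

        ```python
        from itertools import combinations

        # Each profile: (number of voters, candidate -> rank); lower rank = preferred.
        cs = [(33, {"A": 0, "B": 1, "C": 2}),   # center squeeze:  33: A>B>C
              (22, {"B": 0, "A": 1, "C": 1}),   #                  22: B>A=C
              (44, {"C": 0, "B": 1, "A": 2})]   #                  44: C>B>A
        cd = [(33, {"A": 0, "B": 1, "C": 2}),   # chicken dilemma: 33: A>B>C
              (22, {"B": 0, "A": 1, "C": 2}),   #                  22: B>A>C
              (44, {"C": 0, "A": 1, "B": 1})]   #                  44: C>A=B

        def pairwise(profile):
            for x, y in combinations("ABC", 2):
                xv = sum(n for n, r in profile if r[x] < r[y])
                yv = sum(n for n, r in profile if r[y] < r[x])
                print(f"{x} vs {y}: {xv}-{yv}")

        pairwise(cs)  # B beats A 66-33 and C 55-44: B is the Condorcet winner
        pairwise(cd)  # A beats B 33-22 and C 55-44: A is the Condorcet winner
        ```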

        It’s essentially impossible to create a voting method that gets the right answer to both of the above in a strategically robust manner. Both CS and CD scenarios can masquerade as CC scenarios through strategy, and the “right winner” of these apparent CC elections would be different depending on the (unknowable) underlying truth.

        (SODA voting deals with this problem by trying to use meta-strategic incentives to push candidates themselves to predeclare in such a way that on voting day strategic CCs are impossible. Which is pretty cool but involves a level of descriptive complexity that is impractical for serious reform proposals, so I won’t talk more about that here.)

        One trick that deals with both CS and CD elections pretty well is Borda voting. But that causes a far worse pathology: “dark horse plus 3” (DH3) elections, in which voters from each faction strategically give a second-highest rank to a candidate they know will lose, and then that candidate ends up winning precisely because everybody knew they were nearly universally hated. As pathologies go, that one is really really bad.

        3-2-1 voting deals with CS and CD pretty well, kinda like Borda does; but the dark horse strategy is never viable in the first place, because the DH candidate will never make it into the top 3. This breaks down somewhat if there are more than 3 honestly-viable candidates; but that’s negligibly rare in the real world. So 3-2-1, ugly as it seems from a mathematical perspective, is actually very carefully designed to deal with real-world situations.

        I developed 3-2-1 voting. I also developed SODA voting which is aesthetically more appealing and easier to prove surprising things about (if there are no honest Condorcet cycles and candidates are honest then it beats Arrow’s theorem! Which is a pretty amazing characteristic!). And I developed E Pluribus Hugo (not a single-winner method, but still a voting method), so I have a real-world success story (google it, my name, and Bruce Schneier). So when I say that in practice I prefer 3-2-1 over SODA, you should hesitate in your intuition about 3-2-1 seeming “hacked together”, and imagine that I’ve probably at least considered the objections you’re likely to come up with (or at least, the first 3 of them).

        Oh, and aside from the above, 3-2-1 does pretty well on my utilitarian simulations.
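
        For readers who haven’t seen 3-2-1 before, here is my reading of its usual description: voters rate each candidate Good/OK/Bad; the 3 candidates with the most Good ratings advance; the one of those with the most Bad ratings is eliminated; and of the last 2, the one rated higher on more ballots wins. A minimal sketch (hypothetical ballots; it ignores the same-party rule mentioned downthread, and is not Quinn’s reference implementation):

        ```python
        from collections import Counter

        # Hypothetical ballots: candidate -> rating (2 = Good, 1 = OK, 0 = Bad).
        ballots = [
            {"A": 2, "B": 1, "C": 0, "D": 0},
            {"A": 2, "B": 1, "C": 0, "D": 1},
            {"B": 2, "A": 1, "C": 0, "D": 0},
            {"C": 2, "B": 1, "A": 0, "D": 0},
            {"C": 2, "B": 1, "A": 0, "D": 0},
        ]

        def three_two_one(ballots):
            cands = ballots[0].keys()
            goods = Counter({c: sum(b[c] == 2 for b in ballots) for c in cands})
            semis = [c for c, _ in goods.most_common(3)]  # 3 semifinalists: most Good
            bads = {c: sum(b[c] == 0 for b in ballots) for c in semis}
            finalists = sorted(semis, key=bads.get)[:2]   # drop the most-Bad semifinalist
            x, y = finalists
            wins_x = sum(b[x] > b[y] for b in ballots)    # final pairwise comparison
            wins_y = sum(b[y] > b[x] for b in ballots)
            return x if wins_x >= wins_y else y

        print(three_two_one(ballots))  # "B": the compromise candidate survives
        ```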

        • Sniffnoy says:

          Fascinating! I have to admit I’m still skeptical but it certainly makes a lot more sense seeing the reasoning behind it. Thanks for the explanation!

          One question: I just noticed that neither you nor the wiki page mention 3-2-1’s strategic nomination properties. How’s it do on clone independence? Because, like, it’s always seemed to me that getting rid of vote-splitting (and, obviously, not replacing it with teaming, either… I actually would have thought of that rather than DH3 as the most obvious problem with Borda) was one of the main reasons people want voting system reform.

          • Jameson Quinn says:

            3-2-1 is, in theory, vulnerable to cloning; if 3 clones sweep at the first stage, the election is over. But in order for that to actually happen, you’d need to be able to run 3 clones from 2 “separate” parties* without provoking effective counterstrategy. In other words, the worst case is that it devolves to approval voting; that’s neither very likely nor very problematic.

            With cloning effects, the real problem is when there’s a slippery slope; that is, when it’s easier for cloning to affect a close election than a less-close one. That is not the case for 3-2-1.

            *3-2-1 has a rule that the top 3 cannot all be from the same party.

  69. grendelkhan says:

    An example of a bad equilibrium I’ve been reading about lately (Daniel Golden, The Price of Admission): legacy admissions to colleges. Especially as colleges have become more selective (the pool of potential students is much broader, the population is greater, but there’s still the same number of slots), the spaces taken up by wealthy or well-connected mediocrities are a very real cost.

    You can contrast this with race-based affirmative action, which is at least debatable, i.e., it can be, and is, debated one way and another. Policy can shift, and has shifted. Legacy admissions, on the other hand, are durable, unpopular, seldom debated, and omnipresent–a clear sign that something is distorted.

    There’s a very Mr. Smith Goes to Washington story in the book, about Michael Dannenberg, a staffer for Ted Kennedy–a powerful Senator near the end of his career, about as close to an unmoved-mover as you can get in that situation. Dannenberg got the Senator’s ear, but his attempts at reform failed:

    The higher education community wasn’t fooled. On April 29, a sympathetic lobbyist warned Kennedy’s staff that any attack on legacy preference and early decision would “create a massive firestorm of protest from colleges and universities … Go there at your own peril.”

    The prediction was accurate; higher education groups, such as the National Association of Independent Colleges and Universities and the American Council on Education, organized a low-profile but intense campaign against the proposal. They didn’t send out a “major blast” calling for colleges to denounce it publicly, one lobbyist told me, for fear that it would appeal to the media and public opinion. “We didn’t want this crazy idea to take off,” the lobbyist said. Instead, emissaries from private colleges in their home states visited the Democratic committee members, conveying the message that the proposal went too far and that any federal intervention in college admissions, even one designed to help minority students graduate from college, would in the end damage affirmative action.

    That is, if anything, a good example of how stable these bad equilibria can be. And why disrupting education is so sorely needed. One possible out is situations where serious comparative advantage can be had by hiring better people, without regard to celebrity or lineage or anything else. Programming seems to fit the bill, though I’m not entirely sure why it’s so special. Maybe App Academy et al. will have the proper effect, i.e., breaking the university system’s monopoly on credentials for the professional class. (To a lesser extent, WGU does this too; degrees there can cost as little as one from App Academy, if you’re suitably motivated, though they take somewhat longer to acquire.)

    • Aevylmar says:

      There’s two non-malevolent explanations for legacy admissions that I know of. One of them is the link with alumni donations; that, on the one hand, successful alumni are expected to give money to their college, and on the other hand, alumni kids get a hand up. An exchange of money for services, sort of.

      The other, more rational one, was simply that family connections serve as another data point. If your father was a genius who went here, and your grandfather was a genius who went here, the odds of you (and your son) being a genius who goes here are higher than your SAT scores would suggest. Of course, that doesn’t explain why Podunk U values the statement “my father graduated from Podunk U” more than it does the statement “my father graduated from Harvard…”

      • Null42 says:

        It’s mostly the money thing, I think. Sure, you could auction off seats to the highest bidder, but the alumni thing gives the warm fuzzies, which means more loyal donors and probably more money in the end.

        • I think “the money thing” is a bit complicated. If it was just “alumni give money so reward them with admissions favoritism” the bias would be limited to alumni who gave money, and it isn’t.

          I think it’s more nearly “the body of alumni is its own faction, children of alumni who go here are more likely to feel like members of that group than other people who go here, so legacy favoritism increases the number of people who feel loyal to the college. People who feel loyal to the college are more likely to give us money in the future.”

      • SamChevre says:

        I’d note that alumni give a lot more than money, and legacy admissions make particular sense in that context. At selective-admission schools, alumni are key contacts for new graduates. It’s perfectly common, and expected, for students to call alumni to talk about jobs in a field (“informational interviews”); to ask for an introduction to someone else; and similar things.

        All this seems obviously easier and more natural, as well as more likely to be effective, if you know one of the children of the alumnus you are calling. And these opportunities for connection are especially valuable to students who don’t have “natural” connections–students with no family members who went to college, who are from far away, and so on.

        So I can see a good argument for legacy admissions on the grounds that it benefits the non-legacy students.

        (Note that I took far too little advantage of classmates and their connections, and of the alumni network–but I saw it make a significant difference.)

  70. MikeyPinkieRings says:

    With regard to healthcare, this is not as simple as a $20 bill lying on the floor. There are quite a few people guarding that $20, ensuring that it remains exactly where it lies on the floor.

    Free market medicine is the name of the guys trying to pick up the $20. Take a look at Surgery Center of Oklahoma for an example. They are incredible in what they offer. And, on top of that, check out their rates of infection as a metric on how well they do their job.

    • Nornagest says:

      I have nothing against the idea, but just because I have this irrational compulsion to defile any bowl of cornflakes I see: any significantly counternormative model of medicine, free-market or otherwise, would tend to select for unusually driven and conscientious doctors. If you’re a doctor and you’re not unusually driven and conscientious, there’s no good reason not to follow the usual career path for doctors.

      That means you’re probably not very likely to get infected and die now, but it doesn’t necessarily establish that the same would still be true if the model caught on.

      • Douglas Knight says:

        Driven? Maybe. Conscientious? Why do you think so?

        Do you know about the Shouldice Hernia Centre? (see Gawande; search for “western”)

        I think it likely to be a counterexample to your rule. It’s a for-profit hospital, but that’s not what’s weird about it. What’s weird is that it is hyperspecialized. It compresses the surgical residency from 5+ years to 1, producing not general surgeons, but people who only perform hernia repairs. This seems like it should attract the least driven doctors, people content with being technicians, not taking years to hone their craft. Probably appealing to the conscientious, though.

        • Nornagest says:

          Driven? Maybe. Conscientious? Why do you think so?

          Because it means you’ve spent a lot of time thinking about not just medicine but models for practicing it, and concluded that the mainstream one sucks. That takes foresight and conviction, which might not exactly be conscientiousness but which I’d expect to correlate strongly with it.

          This might not fully apply to your hernia center, depending on how they’re marketing it.

  71. Henry Gorman says:

    Amusingly, Eliezer’s characterization of academia, which seems important to a lot of his arguments, clearly isn’t grounded in any sort of empirical investigation. Ordinary people are not actually barred from participation in scholarly discussions. Scholarly journals don’t demand that article-writers have particular credentials (indeed, many articles are written by graduate students who haven’t attained their degrees), and the review process is almost always blind. Conferences and talks frequently allow members of the community to attend and ask questions; when a non-academic asks a question or poses a comment, panelists will almost always respond courteously as long as it’s coherent, and they’ll try to respond even to “not even wrong” sorts of participation (although these efforts tend to get frustrating for everybody involved).

    Similarly, the hard distinction that Yudkowsky and others draw between the academic’s world and the autodidact’s is misleading. Higher-level courses at the undergraduate level are almost always designed to improve students’ ability to do research/learn/understand ideas on their own rather than to impart a particular set of facts. Graduate students, especially after the first year or so, are almost entirely responsible for their own learning. Your adviser is there to give occasional suggestions, strengthen your work with different sorts of feedback, and help you avoid truly awful mistakes, not to spoon-feed you information.

    I (and pretty much all the other academics I’ve ever talked about the issue with) actually agree that American higher education and academic research have huge problems. However, to elaborate on that critique and propose alternatives, one needs some real understanding of how these institutions actually work.

    • John Schilling says:

      Yes on both counts, and one more point beyond. Enthusiastic amateurs can be annoying, e.g. when they don’t know any better than to invent perpetual motion machines, but if they’re reasonably polite and coherent the academy will listen long enough to see if they’ve actually got something. And one of the most valuable graduate courses I ever took was from a professor with no prior knowledge in the subject, who was learning the material with the students.

      But here’s where academia does give credentialed experts status not afforded to outsiders: the credentialed experts get status for trying. Ramanujan was invited to Cambridge on the strength of theorems he had developed; high school students, as nimim.k.m. notes, get first-author credit for work they’ve done; fifth-graders may have to settle for second author. But all for results they have actually achieved. If any of them had gone about telling people they were working on great advances in mathematics, AI, or chemistry in advance of their results, they might have gotten a pat on the head and a dismissive “that’s nice”. See also: bicycle mechanics proposing to Conquer the Air, 1899-1907.

      With a Ph.D., you get a stipend and maybe research funding, and people aren’t so dismissive when you tell them what you are trying to do even though you haven’t done it yet. The attempt, combined with the relevant credentials, gains some measure of status in advance of results.

      This is approximately as it should be. We need some but not all people to sit around pondering deep thoughts and trying to solve academic problems, so we need some way of sorting people into those who will be financed and otherwise supported in such endeavors and those who will be left to their own devices. Academia and the Ph.D. may not be perfect in that regard, but they seem to be good enough and they seem to accommodate those who take alternate paths once they actually arrive.

    • Sniffnoy says:

      This seems to come up a lot. Eliezer seems to frequently depict academia as way more dysfunctional than it actually is in ways actual academics keep telling him are just wrong.

      Might have a lot to do with the particular area though. I do math, and it seems like so do a lot of the people who keep telling him things aren’t so bad. I can easily believe medicine might be a lot worse. Eliezer was basically trying to do philosophy, so, not sure what to make of that.

      • Eponymous says:

        I’m an academic, and I see academia as pretty dysfunctional. I’m not sure about “in just the way Eliezer describes”, because I can’t remember Eliezer’s arguments on this topic in sufficient detail. But based on my recollections, I think he’s in the right ballpark modulo some characteristic idiosyncrasies.

  72. Eponymous says:

    The Outside View is when you notice that you feel like you’re right, but most people in the same situation as you are wrong. So you reject your intuitive feelings of rightness and assume you are probably wrong too.

    That’s not the outside view! Or rather, it’s just one particular application. The outside view is when you analyze something by thinking about how similar situations have turned out; it doesn’t presuppose that the answer is to always reject your own conclusion and accept the expert consensus. Sometimes the outside view tells you the situation bears a resemblance to cases where experts turned out to be wrong! Your version is closer to what Eliezer calls modest epistemology, which is just a universal argument for always trusting expert consensus.

    I found this part to be the biggest disappointment of this book. I don’t think it grappled with the claim that the Outside View (and even Meta-Outside View) are often useful. It offered vague tips for how to decide when to use them, but I never felt any kind of enlightenment, or like there had been any work done to resolve the real issue here. It was basically a hit job on Outside Viewing.

    But that’s just what the whole first half of the book was about! It explained in extreme detail a framework for thinking about when civilization is likely to have the right answer, and consequently when you shouldn’t trust your own judgement against civilization’s. Except this isn’t binary; it’s more like: how much Bayesian evidence do civilization’s beliefs provide on this topic, given what I know about the processes that generate those beliefs, and civilization’s track record versus mine in similar situations (there’s the outside view!). Of course, this is secondary to examining the content of their arguments and comparing them to mine. Remember, argument screens out authority, although it can never *perfectly* screen out authority.

    So basically the first half of the book was Eliezer tabooing “outside view” (as (mis)used by people arguing for modest epistemology) and explaining in great detail precisely how much we should trust civilization’s conclusions relative to our own in a particular domain. In other words, how do we do a Bayesian update on the evidence of society’s beliefs; because a Bayesian uses *all the evidence*.

    The argument goes: You’re more rational than average, so you shouldn’t adjust to the average. Instead, you should identify other people who are even more rational than you (on the matter at hand) and maybe Outside View with them, but no one else. Since you are already pretty rational, you can definitely trust your judgment about who the other rational people are.

    I don’t think that’s the argument at all. It’s just to update on all the evidence, full stop. That includes updating on evidence about the reasoning ability of other cognitive processes whose outputs you can observe (whether your friends or domain experts). And you use your current brain to do this, because that’s what you’ve got. Modest epistemology doesn’t get around this either, since you still have to use your brain to figure out the correct reference class, or who counts as an expert, which gives you plenty of degrees of freedom to play with.

    And if you do it right, this very often results in just accepting expert consensus. But sometimes it doesn’t.
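
    Here is a minimal sketch, with made-up numbers, of the kind of update being described (estimating the two likelihoods is exactly the adequacy-assessment work the first half of the book is about):

    ```python
    # Minimal sketch (made-up numbers): treat "the experts believe X" as one
    # more piece of Bayesian evidence, weighted by how truth-tracking the
    # field's belief-generating process is.

    def update_on_consensus(prior, p_consensus_if_true, p_consensus_if_false):
        """Posterior P(X) after observing that expert consensus endorses X."""
        numer = p_consensus_if_true * prior
        return numer / (numer + p_consensus_if_false * (1 - prior))

    prior = 0.30  # my own reading of the arguments puts X at 30%

    # A healthy field whose consensus tracks truth well: I defer heavily.
    print(update_on_consensus(prior, 0.9, 0.1))  # ~0.79

    # A field with bad incentives whose consensus barely tracks truth:
    # the consensus barely moves me.
    print(update_on_consensus(prior, 0.6, 0.4))  # ~0.39
    ```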

  73. grendelkhan says:

    I’m sorry to bring current events into this, but not sorry enough not to do it.

    It’s one of the key findings of political science that the modal voter knows essentially nothing. And yet things have, recently, gotten considerably worse. Not because voters are less informed, but because whatever it was that kept the leadership from doing whatever they wanted and lying about it (probably the existence of respected groups that would tell the truth regardless of partisanship, which no longer exist at the current level of culture-war escalation) is no longer functioning.

    The current tax bill is extraordinarily unpopular. It is also pretty much a done deal at this point. Something has gone wrong here, and I think it’s at least somewhat interesting. As David Roberts points out, voters tend to be minimally informed at best, and historically we’ve worked around that with a sort of benign technocracy–no one’s entitled to their own facts, etc.

    Without that technocracy in place, we’re left with Red Facts and Blue Facts; if I, for example, point to this cavalcade of asymmetric tribalism, it won’t change anyone’s mind, and you’ll have people telling you with a straight face that legalizing same-sex marriage is kinda like undermining the notion of the rational knowability of the universe. I have no idea how to fix this sort of thing, but it’s definitely gotten worse in the last few years.

    • quanta413 says:

      I don’t buy this narrative at all. I’ll move well back before the culture wars to make my point. The Mexican-American War, the Spanish-American War, WWI, and Vietnam were all partly sold via lies. In order:

      1. Grant gave evidence that the U.S. plan was to intentionally advance the army as far into the territory disputed with Mexico as possible and within striking distance of Matamoros (literally across the river). The point was to bait Mexico into taking a shot at U.S. troops so that war could be declared.

      2. Hearst and Pulitzer pushed the Spanish-American War based upon a mixture of exaggeration and lies because it sold papers well. The Maine probably wasn’t intentionally sunk by the Spanish. There were also a lot of lies about saving the Cubans from the dastardly and evil Spanish etc. etc. Regime change and all that. Of course, the later U.S.-backed rulers of Cuba were a great example of “meet the new boss, same as the old boss”. Oh how little times change…

      3. The Lusitania was a British ship carrying munitions, and the Germans were within their rights to sink it. The Germans were attacking with submarines without warning, but the British had merchant ships with concealed guns designed to trick U-boats into surfacing so they could be attacked, which made giving warning of an attack suicidal for the Germans. The British had been seizing and stopping U.S. merchant ships in order to blockade Germany, much the same way Germany was attempting to blockade Britain; the superiority of the British Navy simply meant they didn’t have to resort to sinking ships the way the Germans did.

      4. The Gulf of Tonkin incident was similar to how the U.S. baited Mexico into war by moving its military into a neutral or disputed zone. There were supposedly two attacks on Aug 2 and Aug 4. Except the incident on Aug 4th wasn’t real. McNamara knew it was questionable, and then knew it was probably false, and simply “failed” to inform Johnson of the fact. Anyways, this then led to a congressional resolution authorizing Johnson to basically go to war in Vietnam without actually declaring war.

      As far as I can tell, it’s just as reasonable a narrative to claim that lying has gotten a little harder over time, so now more people are aware of the essentially low value placed upon truth by politicians. Maybe U.S. politicians are now more inclined to lie about each other than about foreigners, compared to how they used to be, but there’s a pattern of the lies only being uncovered long after the public has lost interest, so I’m betting that decades from now we’ll learn more about the lies we’re currently being fed. This will then make for perfectly interesting history but will mostly just reinforce to anyone paying attention that lies are an excellent way to achieve political goals.

      • 5. The only reason the Japanese made an undeclared attack on the U.S. military before the U.S. made an undeclared attack on the Japanese military was that it took the Flying Tigers longer to get into action in China than they expected. They were represented as a private group but were in fact financed by the U.S. government, manned by Air Force people–facts that only came out later.

      • grendelkhan says:

        Is it significant that all of the examples given have to do with starting wars? (Also, see the first Gulf War.)

        Clair Patterson may have faced plenty of opposition, but I don’t think the President was out there claiming that lead in the atmosphere was good for you, and all this anti-lead talk was a Russian conspiracy to suppress competition from the United States, y’know?

        • quanta413 says:

          I’m unsure how significant it is that my examples are all wars, honestly. It’s the first thing that came to mind, since wars are one of the issues that most motivate me at this point when voting. But my second choice would have been lies about espionage or classified research (MKUltra, maybe).

          Part of the problem, I think, is that things smaller than wars kind of fade out of the history books if you don’t have more specialized knowledge, so it’s harder to know long afterwards what exactly was being lied about and why. Somehow I doubt, though, that I’m going to find that arguments over tariffs in the 19th century (or 20th) had good epistemic hygiene. It’s not like voters are super well informed on any particular topic, so unless given good evidence to the contrary, I’m willing to assume politicians lie about as much about domestic issues in order to drive their agenda as they do about wars. Although it may be more difficult in those cases to separate frantic hyperbole from blatant lies.

  74. The book reads like two books, but it could have read like one, if the second half had got onto the systemic and structural aspects of contrarianism.

    Large groups being contrarian about the same thing at the same time, such as global warming scepticism, is inefficient, because other possibilities are left unexplored.

    High quality contrarianism requires a lot of investment into a small number of areas. David Icke style contrarianism about everything is inefficient because it is too thinly spread.

    I’m taking my central example of good individual contrarianism to be the kind of people who become experts on some disease they have, whereas even a mainstream expert is spreading their time across N diseases, not just one.

    You can also increase the global likelihood of correct contrarianism just by having a really good distribution, avoiding the problem of clustering on particular issues — it simply raises the odds.

    Quality in contrarianism is by no means all about individual smartness: effort and luck are also important.

    Being a good contrarian individually, and organising contrarianism globally, are very different issues. “How can I be a good contrarian” and “how can we increase the overall baseline of correct contrarianism” are different questions.

    All societies are very far from optimising their contrarianism. Bay area rationalism has a problem of its own, in that it encourages clustering around a short list of issues.

  75. Glen Raphael says:

    Regarding treating SAD with bigger lights/lightboxes, I worry that Eliezer is fooling himself. Two factors to consider are the Hawthorne Effect and regression-to-the-mean.

    Hawthorne Effect advocates might claim that if you make a fuss over someone (eg, by changing the lighting in their home or workplace) then ask “do you feel better?”, they’ll want to say “yeah, a little better” – and even adjust their behavior to fit – whether it objectively helped or not. Thus it has been claimed that making a workplace BRIGHTER seems subjectively to help, but so does making it DIMMER. Such improvement might plausibly last exactly as long as you are tweaking things and paying attention to your study subjects, even if the changes themselves are nonsensical.

    (Yes, the Hawthorne Effect has replication issues. Still worth considering.)

    Regression to the mean is a bigger problem. If a symptom such as depression has a cyclical nature or responds to changing circumstances, then by sheer chance sometimes it will be MUCH WORSE than at other times. You’re only motivated to try a heroically clever and effortful treatment regimen when the depression is unusually bad, which means at some random time after that it’ll probably be less bad. Or if it isn’t – if it gets worse – then you’ll try an even more heroically effortful treatment. Bottom line: if the depression ever hits bottom and bounces back you are guaranteed to have been trying something to treat it at the time it stopped getting worse, whereupon you will be inclined to credit that treatment as a miracle cure.

    So you were trying lots-of-light when the depression you were treating seemed at its worst, and it got better; great! But somebody else who was trying aromatherapy saw the exact same level of improvement in the depression they were treating. You’re both convinced you have a cure medical science is ignoring, only the clever cure the other guy thinks is being ignored is “smell tulips in the morning” and he has as much evidence for his cure as you have for yours.
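
    A toy simulation, with assumed numbers, makes the regression-to-the-mean trap concrete: a treatment that does literally nothing will still show a large average “improvement” if you only ever start it on unusually bad days.

    ```python
    # Toy simulation (assumed numbers): severity fluctuates around a stable
    # mean of 5; we "treat" only when it exceeds 8, with a null treatment.
    import random

    random.seed(0)
    improvements = []
    for _ in range(100_000):
        before = random.gauss(5, 2)     # severity on the day you get desperate
        if before > 8:                  # only very bad days trigger treatment
            after = random.gauss(5, 2)  # null treatment: severity just reverts
            improvements.append(before - after)

    print(f"times treatment was started: {len(improvements)}")
    print(f"mean apparent improvement:   {sum(improvements) / len(improvements):.2f}")
    # Mean improvement comes out around 3.9 severity points, from doing nothing.
    ```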

  76. ADifferentAnonymous says:

    Okay, no one (including me) wants to suggest it, but is the real answer that one should be immodest in proportion to one’s general intelligence? Like, if your IQ is 150, and millions of people with IQ <= 130 have tried the thing you think you can succeed at, you should trust your inside view?

    • The Nybbler says:

      If millions of people with IQ <= 130 have tried it, chances are either a bunch of 150 IQ people have tried it or have otherwise become convinced that it cannot work. But that's quibbling with the example. I think it is true the more intelligent you are, the more confidence you should have that your idea is actually good.

    • Ilya Shpitser says:

      If you meet high IQ on the road, kill it.

    • sandoratthezoo says:

      I think that high intelligence is usually pretty useless at cracking moderately hard problems. Millions of people with IQ 100-130 who have really tried hard to solve a problem are certainly going to be better at finding a solution than one IQ 150-170 person.

      Intelligence is very good at solving small problems, especially large numbers of small problems one after another, and thus:

      a. Providing efficiency in lots of tasks, high productivity at work, etc.
      b. Allowing one to get practically educated faster.
      c. Allowing one to move faster on the frontiers of knowledge, where there have not yet been a million IQ 100-130 people who’ve found most everything.

    • Tracy W says:

      Isn’t cracking a problem in large part about being in the right place at the right time, as well as being massively intelligent?
      Consider e.g. the independent discoveries of calculus, or evolution. Or the happenstance of the first computer/software billionaires. Newton and Leibniz weren’t just smart, they were in a place in knowledge and society where calculus was the next low-hanging fruit (and they had the scholarly contacts to be recognised for this.)

  77. Abdelfattah Allou says:

    One of the consequences/corollaries of EMH (Efficient Market Hypothesis) is that only passive investing/management (e.g. index fund) is worthwhile because no amount of active investing can beat it in the long run.

    However, even proponents of the EMH (however ridiculous it has been shown to be) realized that EMH works precisely because there is active management going on – it requires some group with resources and risk-appetite to figure out and profit from a calendar effect (and potentially another group to profit from the exuberance of the first group). The Efficient Market / Invisible Hand is not some guaranteed process or divine force that works in mysterious ways; it is precisely a direct result of individuals looking for inefficiencies and trying to profit from them.
    It is a God at whose altar rash gold-hunters/scapegoats are sacrificed. It can’t ever take itself for granted and build from there; it requires a continuous struggle to sustain/converge towards it. Just like democracy. And science. In other words, a bottom-up process.

  78. Cardboard Vulcan says:

    I have not commented here in the past, but I believe I can contribute to an ongoing topic. In particular, I have some knowledge of the research on SAD and have known many of the researchers involved in the field since it began. Eliezer Yudkowsky in his recent book wonders why doctors treating SAD have not used the simple technique of, as he put it, “more light.”

    In fact the proper dose of light, both in intensity and duration, has been a lively topic of interest among researchers in the field since SAD was discovered more than 30 years ago. So much so that Charmane Eastman has published a commentary on the topic under the very title “How to Get a Bigger Dose of Bright Light.”

    Eastman, C. I. (2011). How to Get a Bigger Dose of Bright Light. Sleep, 34(5), 559-560.

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3079932/

    In general it has been found that increasing the duration of light exposure has a greater effect than increasing the intensity once you get past a few thousand lux. So it is plausible that Brienne’s recovery is due to the longer duration of 2,000 lux light. This possibility is within the conventional thinking in the field. Early light exposure seems particularly important so her use of a light that comes on with a timer at 7:30 am is not at all surprising to one who knows the research.

    If this is so, why is the advice to add more light, in the way Eliezer did, not widely circulated? There are a number of reasons for that, none of which particularly implies something broken about SAD research, or at least not broken in the way Eliezer claims.

    First, there is a large difference between trying such a setup with one person and reliable evidence that it works. In medicine that generally comes in the form of a double-blind trial. That is particularly difficult in the case of SAD for two reasons. 1. Depression, seasonal or not, has a high rate of placebo response and/or spontaneous remission. 2. It is quite difficult to provide a placebo equivalent to staring at a bright bank of lights.

    This has posed problems in the field since its inception. But the difficulties are enormously increased if one talks about treating treatment-resistant SAD, and even more so if the intervention is as drastic as the one Eliezer has tried.

    I can only begin to describe the difficulties involved. Treatment resistant one year does not mean treatment-resistant the next, so the problem of placebo response is even more pronounced. And if a simple light box is a strong placebo, what about modifying someone’s entire living space with hundreds of bright lights? What placebo intervention would be plausible?

    And you can’t simply tell the subjects to wire up 130 LEDs in their apartment — not unless you want to be responsible for one of them burning down their building — not to mention the need for some standardization of therapy. Basically you’d probably have to have an electrician wire up each apartment, assuming all the patients live in two room apartments or stay in one or two rooms at home, *and* are at home all winter rather than working outside of home, etc etc. Then you’d have to go to each subject’s home and measure their lux levels and make modifications if needed. It’s only a very small subset of patients who would be able to do this. Brienne happened to be one.

    And you’d have to recruit patients, ones who had already failed treatment with a lightbox but were nonetheless willing to radically disrupt their lives on the unproven chance that even more light would cure them. And not just a few patients. Due to the placebo problem you’d need a lot of patients to detect a significant effect.
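
    To put rough numbers on that, here is a standard back-of-the-envelope sample-size calculation (the response rates are illustrative assumptions, not figures from the SAD literature) showing how a high placebo response inflates the required trial:

    ```python
    # Back-of-the-envelope two-arm trial sizing (normal approximation,
    # 5% two-sided alpha, 80% power). Response rates are assumptions.
    from math import ceil

    def n_per_arm(p_placebo, p_treatment, z_alpha=1.96, z_beta=0.84):
        p_bar = (p_placebo + p_treatment) / 2
        return ceil(2 * p_bar * (1 - p_bar) * (z_alpha + z_beta) ** 2
                    / (p_placebo - p_treatment) ** 2)

    # Modest placebo response, large treatment effect: a small trial suffices.
    print(n_per_arm(0.20, 0.50))  # 40 per arm

    # A high placebo response shrinks the detectable gap, and required N
    # grows roughly with 1/gap^2.
    print(n_per_arm(0.40, 0.55))  # 174 per arm
    ```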

    You’d have a hugely expensive and almost undoable task, and I haven’t even touched on the Institutional Review Board issue.

    Some of these issues might have solutions that a determined researcher could overcome but perhaps not. Medical research is hard, not because the researchers are status-hoarding mindless drones but because medical research is hard. It just is.

    It’s also worth pointing out that research on SAD and related disorders is woefully underfunded. As an NIH official explained it, they have to compete for money with things like cancer research, and people who are sad in winter don’t gather the same sympathy or funds. (The lack of funding of medical research in general, and SAD research in particular, may be an inadequate equilibrium, but it’s not for lack of trying on the part of researchers.)

    I would note that the difficulties of placebo-controlled studies in SAD, particularly trying to use varying levels and durations of light, have led researchers to concentrate on measuring physiological effects of bright light and seeing how those vary with time and intensity, rather than directly measuring the rate of response of patients, which is a much noisier data output. That also explains the lack of studies of intensities beyond 10,000 lux — the physiological responses seem to flatten out well before that point.

    But why not just bypass a controlled trial and try out the “more light” approach in a few patients? Well, to some extent this is done. While commercial light boxes often tout results in 30 minutes, it’s understood that often a 2-hour session is better (and many patients can’t manage two hours, often because they have to go to work). And it’s hardly unusual to, for example, try a larger dose of a drug for patients who don’t respond. But it’s a whole different ball game to suggest a very expensive and complicated-to-implement therapy with nothing more than a hunch that it might work in some unknown proportion of patients. Patients are often reluctant even to spend the cost of a conventional light box. And the therapy is impractical for many, for reasons outlined above (e.g. you have to be at home all day in a suitable apartment or house that can be wired up).

    There is one final reason — and perhaps the most important — that “use more light” in the fashion Eliezer describes is not widely suggested. It is only in recent years that low-power, low-heat LEDs have become affordable in quantity. Not long ago such a setup would have either cost thousands of dollars or required unusual amounts of power and produced impractical amounts of heat. I do know of one person who as long ago as the 1990s used around 10 commercial fluorescent light boxes (at great expense) to provide hours of light exposure throughout his living space and who is now buying LEDs, and another individual who is experimenting with intense LED ceiling lights. As the price of LEDs drops, such setups may become more common.

    Finally it is ironic that Eliezer has chosen SAD research as an example of people with “a lack of incentives to be creative” and who are obsessed with “winning prestige or the acclaim of peers” and thus are closed to outside innovative ideas.

    The entire field of SAD research began when an engineer named Herb Kern hypothesized that his winter depression might be due to lack of light and approached Norm Rosenthal and Tom Wehr at NIH. Far from dismissing him as a low-status outsider, they enthusiastically embraced his idea.

    Eliezer bemoans the lack of “weird things researchers had tried, not just lightbox variant after lightbox.” In fact SAD researchers have tried a lot of weird things. What about the idea of shining light on the back of patients’ knees? That would rate pretty high on any weirdness scale but was an active subject of research a few years ago. It turned out not to do anything, but it was tried. The latest weird treatment is narrow-band blue light. And there are currently several ongoing trials of transcranial light administered in the ear canal (which I don’t think will work, but it’s being tried and it’s pretty weird). There is also ongoing research on the effects of light on carbon monoxide levels in retinal blood, as well as research on the effect of air ionization on SAD.

    I think Eliezer’s approach to treating Brienne’s SAD was an excellent idea. There appear to be a few others in the rationality community who have tried similar setups such as this one https://meaningness.com/metablog/sad-light-led-lux so the idea is not unique to Eliezer. It might be worth submitting these as case reports to a medical journal.

    Had he simply presented the idea as a potential contribution to the treatment of SAD, I would have nothing but praise. But Eliezer is not writing this as a contribution to the general knowledge of the treatment of SAD. Indeed he specifically says he thinks it would be pointless to try to get SAD researchers interested in his approach. In his view the researchers aren’t “caring enough” to devote effort to actually treating SAD instead of seeking status and money, which he seems to think, for reasons known only to him, will accrue to those who simply plug away at the same old light boxes rather than to innovators. In his world “there aren’t large groups of competent people visibly organizing their day-to-day lives around producing outside-the-box new lightbox alternatives.” I think that is demonstrably false based on any fair history of the field. He claims that SAD research is a prime example of an “inadequate equilibrium” of a kind that is holding back our civilization. His case is not proven, and I have provided good evidence against it. I hope anyone reading this will adjust their probabilities accordingly.

    • Ilya Shpitser says:

      Thanks for your thought comment on the history of the field.

      In order for Eliezer to recognize all this he would have to read, or at the very least have appropriate priors on experts. But he’s a writer, not a reader, as the old Russian joke goes.

      • Ilya Shpitser says:

        thought -> thoughtful (saw the typo too late, sorry).

      • Cardboard Vulcan says:

        Reading Eliezer’s erroneous account of this field, which I know a bit about, makes me think that believing his account of, say, the Bank of Japan or other topics, would fall under Gell-Mann amnesia.

    • Rob Bensinger says:

      Thanks for writing this up, Cardboard Vulcan. This is a good pushback—in particular, if sufficiently low-power / low-heat LEDs weren’t available until very recently, then I do think that’s a good reason to conclude that this isn’t a severe “mistake” at the societal level.

      Basically you’d probably have to have an electrician wire up each apartment, assuming all the patients live in two room apartments or stay in one or two rooms at home, *and* are at home all winter rather than working outside of home, etc etc.

      Hm. One alternative might be to just take a large number of SAD sufferers, wire up their homes with an imperfect-but-standardized very-high-intensity array of lights that automatically turns on and off at particular times of day, and have the sufferers log whenever they’re at home. This wouldn’t address all the issues you raised, but it would give you a way to estimate how general this kind of solution is. If (for example) it turned out that SAD virtually always goes away when someone spends 15 non-consecutive hours under sustained very-bright lights over the weekend (including early morning), then the question of identifying a “treatment-resistant” subpopulation would perhaps be moot. And natural variation in how much time people spend under the lights would provide at least some information about how response varies with exposure times, though the strength of the evidence would be reduced by the risk of confounds.

      You might not nail down the exact mechanisms or a lot of specific details about unusual subpopulations, but you might learn enough to better assess whether lightboxes are the best first or second line of defense for SAD. (E.g., you might not know exactly how much of the benefit is placebo, but patients presumably care more about treatment than about knowing exactly how much of the treatment is placebo.) If there have already been experiments like this, I’d be very interested to learn more here.

      Some of the issues you raise are ones Eliezer mentioned in the context of parenteral nutrition in chapter 3—e.g., IRB review and more generally “scientists are legally or culturally discouraged from recommending ‘obvious’ treatments in cases of urgent need until there’s an extremely strong evidence base for that specific intervention”.

      I take it that the argument for conservatism is something like: “Once you relax your standards even a little, it’s too easy to trick yourself into thinking you’re solving a problem when you really aren’t. And above all, medical researchers need to first Do No Harm, rather than rushing in with big bold ideas before they have all the facts pinned down.”

      Whereas the radical perspective is: “More Bayesian standards for evaluating efficacy aren’t really more ‘relaxed’ than conventional standards. And when people are suffering and dying in huge numbers, falling back on the ‘null action’ for all plausible-but-RCT-deficient ideas isn’t appropriate, and isn’t actually ‘safe’ or harm-minimizing (from an altruistic standpoint). If our priority is patient outcomes rather than avoiding ever making bad non-null actions, then we should be acting with much more urgency and coordination to drive down SAD rates as rapidly as possible—the same level of urgency they’d bring to the task if their child’s life was on the line, or if they were actively working to put out a fire.”

    • Rob Bensinger says:

      It’s worth noting in passing that Eliezer’s comments about SAD are consciously in the “scientifically literate consumer of health treatments records his thought process while making a personal medical decision” genre rather than the “scholar constructs a literature review or sociological study of SAD research” genre. I think it’s completely appropriate to point to mistakes in the particulars of “consumer of health treatments records his thought process while making a personal medical decision” reasoning—particularly when the reasoning is being used as an exemplar of what good adequacy-assessing guesswork should look like. If you think Eliezer isn’t putting enough emphasis on simple hypotheses like “the field is just small and under-funded”, then that’s important information for calibration about the relative frequency of different kinds of dysfunction. But the added context is important to note for readers who might not know what those passages are saying or why.

      The reason Eliezer tries to eyeball the adequacy of SAD research is that a quick off-the-cuff adequacy estimate is one of the inputs to that kind of ordinary daily medical decision, not that he’s claiming any special knowledge of the field. If he did have special knowledge of SAD research, he’d have needed to use a different example. The point, as he puts it, is that “it’s important to be able to casually invoke civilizational inadequacy,” as a completely normal state of affairs. It has to be possible to talk about system-level failures without it being translated into stronger or stranger hypotheses like “here are a bunch of unusual individual personality defects (e.g., ‘callousness’ or ‘unimaginativeness’) afflicting [field].”

      Each additional hypothesis that gets raised isn’t supposed to be an additional attack on SAD researchers, or an additional conjunctive detail in a single story; it’s an additional hedge / disjunctive path. The intent behind considering many different possibilities without reaching a conclusion (other than “there’s probably something somewhere”) isn’t to tar SAD researchers with more and more negative associations. It’s “here are a few examples of the kinds of system-level inefficiencies that might be responsible; I don’t know what exactly is going on in this case, but the base rate for inefficiencies of these varieties is high enough that I shouldn’t assume there’s no low-hanging fruit I can grab in this area”. From the book:

      For a fixed amount of inadequacy, there is only so much dysfunction that needs to be invoked to explain it. By the nature of inadequacy there will usually be more than one thing going wrong at a time… but even so, there’s only a bounded amount of failure to be explained. Every possible dysfunction is competing against every other possible dysfunction to explain the observed data.

      So if your hypothesis (“research on SAD and related disorders is woefully underfunded [because of widespread biases/misconceptions about the severity of mood disorders]”) is true, then that automatically competes against other hypotheses and reduces the need to appeal to independent factors.

      • Douglas Knight says:

        CV’s comments are largely compatible with EY’s, except that EY does claim that he did a literature review. There is some discrepancy between their descriptions of the literature, although I’m not entirely convinced that they contradict each other. Most of CV’s excuses seem to me to really support EY.

  79. Julius says:

    What are central-line infections?

  80. n8chz says:

    From before the edit:

    Even people who don’t like capitalism admit it’s very good at what it does, which is something like “exploit money-making opportunities” or “pick low-hanging fruit in the domain of money-making”.

    I hate capitalism as much as anyone and I give it far more credit than that. I fully acknowledge that it is the best possible means of calculating the optimum allocation of resources “efficiently” based on supply and demand. The catch is, it only seems to work with dollar-weighted criteria and I insist on person-weighted. Since there seems to be no way that the market mechanism can be harnessed into calculating a person-weighted allocation, I toy with the question of how much inefficiency is an acceptable price to pay for how much person-weightedness.

  81. Plucky says:

    There is a slight modification to the $20 bill conundrum that makes it closer to a lot of real-life applications.

    You see a $20 bill on the ground, but it’s smeared with dogshit, and before you’d be willing to pick it up and put it in your pocket, you’d need to walk 10 minutes to the nearest CVS and get a ziplock bag to put it in. Or any other reason there’s a time lag between when you decide to pick it up and when you’re actually able to.

    Do you walk over to CVS and get the bags or not? For most people, 20 minutes of effort is definitely worth a $20 payoff. It’s equivalent to a $60/hr wage, which for a standard 2,000-hour work year is equivalent to a $120k salary. So yeah, unless you’re someone with highly valuable time, it makes sense.

    BUT, what if someone else also saw the $20 bill, and is currently at the CVS, and would pick up the bill during your round trip, leaving your effort totally wasted? Then it’s a probability game. If you think there’s a 50/50 chance someone’s already had the idea first and is ahead of you in implementing it, then the expected payoff is only $10, which in payoff/effort terms is $30/hr / $60k/yr territory. Plenty of people would (rationally) leave that on the ground. Drop the individual odds to 1/10 and the expected payoff is below minimum wage. You need a credible reason to believe you have decent odds of having first-mover advantage in order to be the person who picks up the bill.
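
    Restating that arithmetic in code, for anyone who wants to plug in their own odds:

    ```python
    # The wage-equivalent of fetching the bag scales linearly with the
    # probability that you're actually first to the bill.

    def wage_equivalent(p_first, payoff=20.0, effort_hours=20 / 60):
        return p_first * payoff / effort_hours

    for p in (1.0, 0.5, 0.1):
        rate = wage_equivalent(p)
        print(f"P(first)={p:.0%}: expected ${p * 20:.0f} "
              f"-> ${rate:.2f}/hr (~${rate * 2000 / 1000:.0f}k/yr)")
    # 100%: $60/hr (~$120k/yr); 50%: $30/hr (~$60k/yr); 10%: $6/hr (~$12k/yr)
    ```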

    When you put that problem in Grand Central Terminal, it becomes a zero-divided-by-zero limit problem: with so many people coming and going, the odds that you’re the first to see it are essentially zero. You have no reason to believe you have an inherent, structural first-mover advantage. However, neither does anyone else, so the odds that any individual person has decided to walk over to the CVS are also very low. But again, there are a great many people, so even a tiny individual probability could become a high collective probability. There’s no definite answer. Depending on subtle differences in the effort/reward balance, the probability someone picks it up could limit to 1, 0, or any number in between. And unless you’re John von Neumann you’re not smart enough to solve for that equilibrium in your head in the time it takes to walk through the Grand Central concourse.
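
    For the curious, solving that equilibrium outside one’s head might look like the following toy symmetric mixed-strategy model, with assumed numbers: 1,000 onlookers, a 20-minute round trip valued at $6.67, and an equal chance of being first among everyone who goes.

    ```python
    # Toy Grand Central equilibrium (all numbers assumed): n people see the
    # bill; each independently fetches a bag with probability q; everyone
    # who goes has an equal chance of being first. In a mixed-strategy
    # equilibrium, going is exactly break-even.

    def expected_payoff(q, n=1000, prize=20.0):
        """Expected prize for someone who goes, given that the other n-1
        each go with probability q. Uses E[1/(K+1)] = (1-(1-q)^n)/(n*q)
        for K ~ Binomial(n-1, q) rivals."""
        return prize * (1 - (1 - q) ** n) / (n * q)

    def equilibrium_q(cost, n=1000, prize=20.0):
        """Bisect for the q at which going is exactly break-even."""
        lo, hi = 1e-9, 1.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if expected_payoff(mid, n, prize) > cost:
                lo = mid  # going is still profitable, so more people go
            else:
                hi = mid
        return lo

    q = equilibrium_q(cost=6.67)  # 20 minutes at a $20/hr opportunity cost
    print(f"each onlooker goes with probability {q:.4f}")   # ~0.0028
    print(f"expected fetchers: {1000 * q:.1f}")             # ~2.8
    print(f"chance the bill gets picked up: {1 - (1 - q) ** 1000:.1%}")  # ~94%
    ```

    Tweak the cost or the crowd size and the pickup probability swings anywhere between 0 and 1, which is the “no definite answer” point above.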

    The result is either
    A) one of the first few people to see the bill picks it up, reasonably believing the reward/effort payoff makes sense
    B) no rational person deems the odds good enough (i.e. the large-number case limits to zero), and the person who ultimately picks up the bill is a total nutjob who makes the effort because he acts on a completely irrational belief in his probability of being first. Perhaps he believes he’s the only one who sees the bill because it’s printed on magic paper no one can see except for the wise few who always walk around wearing 70s-style red/blue 3D glasses. Seeing no other 3D-glasses wearer in the terminal, his first-mover advantage becomes obvious to him, if delusional to everyone else.

    If you look at many innovations out there, there’s a lot of B) around. Fortune really does smile on the delusionally bold sometimes. The preposterously egotistical, too. Often enough to make one think the modified $20 bill game describes plenty of situations better than the simple version.

  82. danjelski says:

    Your review is much better than the book itself. The book is poorly written.