Contra Hallquist On Scientific Rationality

I.

Topher Hallquist recently wrote a long article calling Less Wrong and the rationalist community “against scientific rationality” and accusing them of “crackpot tendencies”.

The piece claims to be about “the Less Wrong community”, but mostly takes the form of a series of criticisms against Eliezer Yudkowsky for holding beliefs that Hallquist thinks are false or overconfident. In some respects this is fair; Eliezer was certainly the founder of the community and his writings are extremely influential. In other respects, it isn’t; Margaret Sanger was an avowed eugenicist, but this is a poor criticism of Planned Parenthood today, let alone the entire reproductive rights community; Isaac Newton believed that the key to understanding the secrets of the universe lay in the dimensions of Solomon’s Temple, but this is a poor critique of universal gravitation, let alone all of physics. I worry that Hallquist’s New Atheism background may be screwing him up here: to critique a movement, merely find the holy book and prophet, prove that they’re fallible, and then the entire system comes tumbling to the ground. Needless to say, this is not how things work outside the realm of divine revelation.

On the other hand, it seems like the same argument that suggests Hallquist shouldn’t say such things would suggest I shouldn’t care much about arguing against them. I wish I lived in a universe where this was true, but “guilt by association” is a thing, the Internet has more than its share of people who have conceived this deep abiding hatred toward all rationalists, and “crackpot” and “anti-intellectual” are especially sticky accusations around these parts. Past experience tells me if I let this slide then at some point I’m going to be mentioning I’m interested in rationality and the automatic response will be “Oh, those are those anti-intellectual crackpots who hate science” and nothing I say will convince them that they are wrong, because why listen to an anti-science crackpot? Some things need to be nipped in the bud.

Also, a lot of Hallquist’s criticism is genuinely wrong and unfair. Also also, I like Eliezer Yudkowsky.

This is not to say that Eliezer – or anyone on Less Wrong – or anyone in the world – is never wrong or never overconfident. I happen to find Eliezer overconfident as heck a lot of the time. I have told him that, and he has pointed me to his essay on how if you really understand what confidence means in a probabilistic way, then you keep track of your uncertainty internally but don’t worry too much about the social niceties of broadcasting how uncertain you are to everyone. My opinion of this is the same as my opinion of most other appeals to not needing to worry about social niceties.

If Hallquist had made this reasonable critique, I would have endorsed it. Instead, I find his critique consistently misrepresents Eliezer, most of the ideas involved, and the entire Less Wrong community. I am going to fisk it hard, which I don’t like to do, but which seems like the only alternative to allowing these misrepresentations to stand. If you want to skip the (very boring) fisking, avoid part II and go straight from here to part III.

II.

So, to start with, I count four broad Eliezer-critiques:

1. Eliezer believes there’s an open-and-shut case for the Many Worlds Interpretation of quantum mechanics.
2. Eliezer believes that some philosophical problems are really easy, and philosophers are morons for not having settled them already.
3. Eliezer is certain that cryonics will work.
4. Eliezer believes that “dietary science has killed millions of people” and cites borderline-crackpot Gary Taubes.

A.

The first critique, about quantum mechanics, is potentially the strongest. Yudkowsky’s confidence in his preferred Many Worlds interpretation is remarkably strong for a subject where some of the world’s top geniuses disagree. On one hand, it is no stronger than that of many experts in the field – for example, Oxford quantum pioneer David Deutsch describes the theory’s lukewarm reception as “the first time in history that physicists have refused to believe what their reigning theory says about the world…like Galileo refusing to believe that Earth orbits the sun” and calls arguments against it “complex rationalizations for avoiding the most straightforward implications of quantum theory”. On the other, perhaps one could argue that a level of confidence appropriate in an Oxford professor is inappropriate in a self-taught amateur. I don’t know anything about quantum mechanics and don’t want to get into it.

Neither does Hallquist; he admits that Many Worlds is a reasonable position, but makes a different accusation. He says Yudkowsky has failed to encourage readers to investigate other views:

Years ago in college I wrote a book debunking claims by Christian apologists claiming to “prove” that Jesus rose from the dead using historical evidence. At the very end of the book I included a little paragraph about how you shouldn’t just take my word for anything and should do your own research and form your own conclusions. In retrospect, that paragraph feels cheesy and obvious, but seeing the alternative makes me glad I included it.

Yudkowsky could have, after arguing at length for the many worlds interpretation of quantum mechanics, said, “I recommend going and studying the arguments of physicists who defend other interpretations, and when you do that I think you’ll see that physicists are screwing up.” That might have been reasonable. Many physicists accept many worlds, and I can accept that it’s sometimes reasonable for a dedicated amateur to have strong opinions on issues that divide the experts.

But in the very first post of his quantum physics sequence, Eliezer warns:

Everyone should be aware that, even though I’m not going to discuss the issue at first, there is a sizable community of scientists who dispute the realist perspective on QM. Myself, I don’t think it’s worth figuring both ways; I’m a pure realist, for reasons that will become apparent. But if you read my introduction, you are getting my view. It is not only my view. It is probably the majority view among theoretical physicists, if that counts for anything (though I will argue the matter separately from opinion polls). Still, it is not the only view that exists in the modern physics community. I do not feel obliged to present the other views right away, but I feel obliged to warn my readers that there are other views, which I will not be presenting during the initial stages of the introduction.

Okay, but that’s not exactly the same thing as telling his readers that they need to read other books and do other research and see whether he’s right or wr…

Go back and look at other explanations of QM and see if they make sense now. Check a textbook. Alternatively, check Feynman’s QED. Find a physicist you trust, ask them if I got it wrong, if I did post a comment. Bear in mind that a lot of physicists do believe MWI.

That’s from this comment. So Yudkowsky clearly did the thing Hallquist is accusing him of not doing.

This is to his credit, but in my opinion supererogatory. Despite his anecdote about the paragraph in his book, Hallquist does not end his own essay on Less Wrong with “There are many people who disagree with me on this, be sure to check some pro-Less Wrong views to get the opposite side of the story.” Nor does he include such a sentence in the vast majority of his voluminous blog posts and writings. I don’t blame him for that – I don’t use such disclaimers either. There is a general conversational norm that to assert something is to say that you believe it and will provide evidence for your belief, not to say “You must believe me on this and may never go find any other sources to get any other side of the story.” You listen to me, you get my perspective, if for some reason I think you might not be aware that other perspectives exist I’ll tell you that they do, but I’m not going to end every blog post with “But be sure to read other sources on this so you can have your own opinion”.

Nevertheless, that is the standard Hallquist demands of Yudkowsky. And Yudkowsky meets it. And Hallquist ignores him doing so.

B.

Here’s philosophy. Note that I’m editing liberally to reduce length; I don’t think I’ve removed any essential parts of the argument but you might want to go over there and check:

In another post, Yudkowsky lodges a similar complaint about philosophy:

Philosophy is just not oriented to the outlook of someone who needs to resolve the issue, implement the corresponding solution, and then find out – possibly fatally – whether they got it right or wrong. Philosophy doesn’t resolve things, it compiles positions and arguments. And if the debate about zombies is still considered open, then I’m sorry, but as Jeffreyssai says: Too slow! It would be one matter if I could just look up the standard answer and find that, lo and behold, it is correct. But philosophy, which hasn’t come to conclusions and moved on from cognitive reductions that I regard as relatively simple, doesn’t seem very likely to build complex correct structures of conclusions.

I agree that progress in academic fields is sometimes slowed by the irrationality of the participants. People don’t like admitting being wrong. Unfortunately, knowing this isn’t much help unless you’ve discovered a magic formula for overcoming this flaw in human nature. More on that later. But thinking irrationality is the only reason why progress is slow ignores the fact that often, progress is slow because the questions are just really hard

[…]

The reason I’m saying all this is because when philosophers act like they’re not really trying to resolve debates, it’s because they know such attempts have a track record of not working. That doesn’t mean we will never put philosophy on a solid footing, but it does mean that anyone who shows up claiming to have done so single-handedly deserves a fair dose of skepticism.

The zombie debate is as good an example as any of this, so let’s talk about that. David Chalmers’ claim is that there could exist (in other possible worlds with different psychophysical laws–not the actual world) beings that are physically identical to us, but who lack consciousness. The intuition that such “zombies” are possible leads Chalmers to a view that at least looks a lot like epiphenomenalism (the belief in a separate mental realm affected by, but which does not affect, the physical realm).

Epiphenomenalism strikes a lot of people as crazy–me included! But Chalmers realizes this. So in The Conscious Mind he tries to do two things (1) argue his view is not quite epiphenomenalism (2) argue that some of the apparent advantages of certain other views over epiphenomenalism are illusory.

Does he succeed? I don’t know. But what makes me sympathetic to Chalmers is the sense that what he calls the hard problem of consciousness is a real problem, and alternative solutions aren’t any better. And Yudkowsky, as far as I can tell, isn’t one of those people who says, “the so-called ‘hard problem’ is a fake problem.” He agrees that it’s real–and then claims to have a secret solution he’ll sell you for several thousand dollars.

I think it’s enormously unlikely that Yudkowsky has really found the secret solution to consciousness. But even if he had, I don’t think anyone could know, including him. It’s like an otherwise competent scientist refusing to submit their work for peer review. Even top experts are fallible–and the solution is to have other experts check their work.

My response is going to sound mean and kind of ad hominem. That’s not really where I’m going here, and I promise I’m going somewhere a little more subtle than that. So, keeping that in mind – Hallquist once said on his blog:

I seem to have some philosophy-related skills in abundance. I’m good at spotting bad philosophical arguments. I was exposed to Plantinga’s modal ontological argument at ~12 years old, and instantly noticed it could just as well “prove” the existence of an all-powerful, necessarily existent being who wants nothing more than for everything to be purple. My experience with philosophy professors suggests that sadly, the knack for seeing through bad arguments is far from universal, even among the “professionals.”

What is the difference between Hallquist believing that he disproved one of the world’s most famous philosophers when he was twelve years old, and Eliezer believing that he solved the problem of consciousness when he was thirty-something?

Likewise, in another blog post Hallquist wrote about how Aquinas’ Arguments Just Suck and have no redeeming value:

Now, the broad point here is that while these arguments are supposedly derived from Aristotle, there doesn’t seem to be some secret Aristotelian assumption that would make them work. They’re just plain old bad arguments. I feel comfortable saying this, because respected living philosophers often give arguments that just stink, and being a contemporary of those philosophers I’m confident that the issue isn’t some peculiarly 21st century assumption.

Remember that Aquinas’ arguments convinced nearly all the brightest people in the Western world for five hundred years, and that many PhD philosophy professors still believe them with all their heart. Hallquist says they just suck and that there is no chance he might just be missing something or be misunderstanding their terms. In fact, he was so sure that there was no chance he, as a modern reader, was misunderstanding things that when a philosophy professor wrote a book claiming that modern people’s contempt for Aquinas is based on a misunderstanding, Hallquist read twenty pages of it, decided there was no chance that the remainder could possibly contain anything to change his mind, and stopped.

(I read the entire book, and if I’d stopped at page twenty I would have saved myself several valuable hours of my life I could have spent doing something productive, BUT THAT’S NOT THE POINT!)

Presumably, if Aquinas’ arguments are really stupid, but everyone believed them for five hundred years, this would imply there is something wrong with everyone. It might be worth asking “When someone makes a stupid argument, is there anything we can do to dismiss it in less than five hundred years?” But take that step, and you’re venturing into exactly the territory Hallquist criticizes Eliezer for entering.

So what is my point beyond just an ad hominem attack on Hallquist?

Philosophy is hard. It’s not just hard. It’s hard in a way such that it seems easy and obvious to each individual person involved. This is not just an Eliezer Yudkowsky problem, or a Topher Hallquist problem – I myself may have previously said something along the lines of “anybody who isn’t a consequentialist needs to have their head examined”. It’s hard in the same way politics is hard, where it seems like astounding hubris to call yourself a liberal when some of the brightest minds in history have been conservative, and insane overconfidence to call yourself a conservative when some of civilization’s greatest geniuses were liberals. Nevertheless, this is something everybody does. I do it. Eliezer Yudkowsky does it. Even Topher Hallquist does it. All we can offer in our own defense is to say, with Quine, “to believe something is to believe that it is true”. If we are wise people, we don’t try to use force to push our beliefs on others. If we are very wise, we don’t even insult and dehumanize those whom we disagree with. But we are allowed to believe that our beliefs are true. When Hallquist condemns Yudkowsky for doing it, it risks crossing the line into an isolated demand for rigor.

The most charitable reading I can give Hallquist is that his gripe is not with Yudkowsky’s hubris in holding strong beliefs, but with an apparent reluctance to publish them. Once again, from the end of the quoted material:

What makes me sympathetic to Chalmers is the sense that what he calls the hard problem of consciousness is a real problem, and alternative solutions aren’t any better. And Yudkowsky, as far as I can tell, isn’t one of those people who says, “the so-called ‘hard problem’ is a fake problem.” He agrees that it’s real–and then claims to have a secret solution he’ll sell you for several thousand dollars.

I think it’s enormously unlikely that Yudkowsky has really found the secret solution to consciousness. But even if he had, I don’t think anyone could know, including him. It’s like an otherwise competent scientist refusing to submit their work for peer review. Even top experts are fallible–and the solution is to have other experts check their work.

So perhaps the difference between Hallquist (and the rest of us) and Yudkowsky is that the latter doesn’t believe in peer review and openness because he “hasn’t published” his “secret” solution to consciousness?

Hallquist’s only source for Eliezer having such a solution is the Author Notes for Harry Potter and the Methods of Rationality, Chapter 98, where somewhere in the middle he says:

I am auctioning off A Day Of My Time, to do with as the buyer pleases – this could include delivering a talk at your company, advising on your fiction novel in progress, applying advanced rationality skillz to a problem which is tying your brain in knots, or confiding the secret answer to the hard problem of conscious experience (it’s not as exciting as it sounds).

Even if, as Hallquist claims, he has private information that this isn’t a joke, this hardly seems like the central case of Eliezer having a philosophical opinion. One might ask: has Eliezer ever written about his other philosophical beliefs, the ones that seem most important to him?

The Less Wrong Sequences are somewhere between 500,000 and a million words long (for comparison, all three Lord of the Rings books combined are 454,000, and War and Peace is 587,000). Eliezer may be one of the most diligent people alive about publicizing his philosophical opinions in great detail. In some cases, he explicitly frames his Sequence work in terms of getting feedback – for example, in the same comment on the Quantum Physics Sequence I linked earlier, he writes about his approach to double-checking his quantum work:

I myself am mostly relying on the fact that neither Scott Aaronson nor Robin Hanson nor any of several thousand readers have said anything like “Wrong physics” or “Well, that’s sort of right, but wrong in the details…”

And this works – far from being unwilling to debate academic philosophers about his opinion of Chalmers’ epiphenomenalism, he argues the subject with David Chalmers himself in the comments of that Less Wrong post, which is a heck of a lot more than I’ve ever done when I disagree with an academic philosopher.

On the other hand, since he placed a single sentence in an HPMOR Author’s Note about a solution to the hard problem of consciousness which he hasn’t written about, Hallquist accuses him of being against publicizing his work for review or criticism. Once again, I think these complaints are startlingly unfair and a remarkably flimsy ground on which to set out to tarnish the reputation of an entire community.

C.

Hallquist’s next complaint is that Eliezer is strongly in favor of cryonics; he starts by noting that Eliezer criticizes Michael Shermer for mocking the role of molecular nanotechnology in cryonics, then writes:

Full disclosure: I’m signed up for cryonics. But the idea that nanomachines will one day be able to repair frozen brains strikes me as highly unlikely. I think there’s a better chance that it will be possible to use frozen brains as the basis for whole brain emulation, but I’m not even sure about that. Too much depends on guesses both about the effects of current freezing techniques and about future technology.

Eliezer, meanwhile, is sure cryonics will work, based, as far as I can tell, on loose analogies with computer hard drives. Faced with such confident predictions, pointing out the lack of evidence and large element of wish-fulfillment (as Shermer does) is an eminently reasonable [reaction]. A “rationalism” that condemns such caution isn’t worthy of the name.

I can’t stress enough the enormous difference between trying to do some informed speculation about what technologies might be possible in the future, and thinking you can know what technologies will be possible in the future based on just knowing a little physics. Take, for example, Richard Feynman’s talk “There’s Plenty of Room at the Bottom”, often cited as one of the foundational sources in the field of nanotechnology.

Today, the part of Feynman’s talk about computers looks prophetic, especially considering the talk was given several years before Gordon Moore made his famous observation about computer power doubling every couple of years. But other things he speculates about are, to say the least, a long way off. Do we blame Feynman for this?

No, because Feynman knew enough to include appropriate caveats. When he talks about the possibility of tiny medical robots, for example, he says it’s a “very interesting possibility… although it is a very wild idea.” He doesn’t say that this will definitely happen and be the secret to immortality. And some futurists, like Ray Kurzweil, do say things like that. That’s the difference between having a grasp of the difficulty of the topic and your own fallibility, and well, not.

Yet Yudkowsky is so confident in his beliefs about things like cryonics that he’s willing to use them as a reason to distrust mainstream experts.

I hate to keep getting into these tu quoque style arguments. But I do feel like once you have signed up for cryonics, you lose the right to call other people crackpots for signing up for cryonics for a slightly different reason than you did. If you think molecular nanotechnology is highly unlikely but whole brain emulation isn’t, please at least be aware that from the perspective of everyone else in society you are the equivalent of a schizophrenic guy who believes he’s Napoleon making fun of the other schizophrenic guy who thinks he’s Jesus.

But Hallquist’s main criticism isn’t just that Yudkowsky believes in cryonics, or nanotechnology, or whatever. It’s that he’s overconfident in these things. As per the quote, he’s “sure cryonics will work” and says the equivalent of “this will definitely happen and be the secret to immortality”.

At this point, it will probably come as no surprise that Yudkowsky has never said anything of the sort and has in fact clearly said the opposite.

One of my favorite results from the Less Wrong Survey, which I’ve written about again and again, shows that people who sign up for cryonics are less likely to believe it will work than demographically similar people who don’t sign up (yes, you read that right) – and the average person signed up for cryonics only estimated a 12% chance it would work. The active ingredient in cryonics support is not unusual certainty it will work, but unusual methods for dealing with moral and epistemological questions – an attitude of “This only has like a 10% chance of working, but a 10% chance of immortality for a couple of dollars a month is an amazing deal and you would be an idiot to turn it down” instead of “this sounds weird, screw it”. Once you think of it that way, signing up doesn’t mean “I’m sure it will work” but rather “I’m not 100% sure it won’t.” Thus, Eliezer’s former co-blogger Robin Hanson, who frequently joined forces with Eliezer in passionate appeals for their mutual readers to sign up for cryonics, gives his probability of cryonics working at 6%.

I don’t have as accurate a picture of Eliezer’s beliefs. The best I can do is try to interpret this comment, where he describes the success of cryonics as depending on a chain of at least three probabilities. First, the probability that the core technology works. Second, the probability that cryonics organizations don’t go bankrupt. Third, the probability that humankind lasts long enough to develop appropriate resurrection technology.

Eliezer gives the first probability as 80-90%, the second probability as “outside the range of my comparative advantage in predictions”, and the third probability as “the weakest link in the chain” and as “something I’ve refused to put a number on, with the excuse that I don’t know how to estimate the probability of doing the ‘impossible’”. So his probability of cryonics working seems to be 80-90% * something-he-doesn’t-know * something-he-thinks-is-near-impossible. If we are willing to totally ignore his request not to put numbers on these, perhaps something like 90% * 75% * 10%? Which would land him right between Robin Hanson’s 6% and the survey average of 12%.
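
Making that guess concrete (and to be clear, the 75% and the 10% are my illustrative numbers, not Eliezer’s – he pointedly declined to give two of the three):

0.90 * 0.75 * 0.10 ≈ 0.07, or roughly a 7% overall chance of cryonics working.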

I think Eliezer is overconfident about the first of his three probabilities, the one about the core technology working. But Hallquist ignores this relatively modest issue and instead chooses to sensationalize Eliezer into being “sure cryonics will work” and “[thinking] this will definitely happen”, when his actual probability is unknown but probably closer to 6% than to 100%. Once again I think this complaint is startlingly unfair and a remarkably flimsy ground on which to set out to tarnish the reputation of an entire community.

[my own disclaimer: I am not signed up for cryonics and don’t currently intend to, but I respect people who are]

D.

Oh God, we’re going to have to get into diet again. You can skip this part. Seriously, you can.

Eliezer writes that “dietary scientists ignoring their own experimental evidence have killed millions and condemned hundreds of millions more to obesity with high-fructose corn syrup” and links to this article by Taubes.

I do not want to defend Gary Taubes. Science has progressed to the point where we have been able to evaluate most of his claims, and they turn out to be a mixture: 50% wrong, 25% right but already well-known enough that he gets no credit for them, and 25% right but poorly enough known at the time that I do give him credit. This is not a bad record for a contrarian, but I subtract points because he misrepresented a lot of stuff and wasn’t very good at what might be called scientific ethics. I personally learned a lot from reading him – I was able to quickly debunk the wrong claims, and the correct claims taught me things I wouldn’t have learned any other way. Yudkowsky’s reading of him seems unsophisticated and contains a mix of right and wrong claims. But Hallquist’s reading seems to be a prime example of reversed stupidity not being intelligence. He writes:

The use of the diet example is even more embarrassing than the other claims I’ve looked at so far. The line about “dietary scientists ignoring their own experimental evidence” links to an article by Gary Taubes. Taubes champions the diet claims of Robert Atkins, who literally claimed that you could eat unlimited amounts of fat and not gain weight, because you would pee out the excess calories. This, needless to say, is not true.

After reading two of Taubes’ books, I haven’t been able to find anywhere where he addresses the urine claim, but he’s very clear about claiming that no amount of dietary fat can cause weight gain. How Taubes thinks this is supposed to be true, I have no idea. His attempted explanations are, as far as I can tell, simply incoherent. (Atkins at least had the virtue of making a coherent wrong claim.)

I’m startled by Hallquist’s claim that he read two Taubes books and couldn’t find Taubes’ explanation for why people don’t gain weight on a high-fat diet. Taubes lays out this mechanism very clearly as the major thesis of his book.

Taubes believes the human body is good at regulating its own weight via the hunger mechanism. For example, most Asian people are normal weight, despite the Asian staple food being rice, which is high-calorie and available in abundance. Asians don’t get fat, because they eat a healthy amount of rice and then stop. This doesn’t seem to require amazing willpower on their part; it just happens naturally.

In a similar vein is one of Taubes’ favorite studies, the Vermont Prison Experiment, where healthy thin prisoners were asked to gain lots of weight to see if they could do it. The prisoners had lots of trouble doing so – they had to force themselves to eat even after they were full, and many failed, disgusted by the task. Some were able to eat enough food, only to find that they were filled with an almost irresistible urge to exercise, pace back and forth, tap their legs, or otherwise burn off the extra calories. Those prisoners who were able to successfully gain weight lost it almost instantly after the experiment was over and they were no longer being absolutely forced to maintain it. The conclusion was that healthy people just can’t gain weight even if they want to, a far cry from the standard paradigm of “it takes lots of willpower not to gain weight”.

Other such experiments focused on healthy thin rats. The rats were being fed as much rat food as they wanted, but never overate. The researchers tried to trick the rats by increasing the caloric density of the rat food without changing the taste, but the rats just ate less of it to get the same amount of calories as before. Then the researchers took the extreme step of surgically implanting food in the rats’ stomachs; the rats compensated by eating precisely that amount less of normal rat food and maintaining their weight. The conclusion was that rats, like Asians and prisoners, have an uncanny ability to maintain normal weight even in the presence of unlimited amounts of food they could theoretically be binging on.

Modern Westerners seem to be pretty unusual in the degree to which they lack this uncanny ability, suggesting something has disrupted it. If we can un-disrupt it, “just eat whatever and let your body take care of things” becomes a passable diet plan.

I sometimes explain this to people with the following metaphor: severe weight gain is a common side effect of the psychiatric drug Clozaril. The average Clozaril user gains fifteen pounds, and on high doses fifty or a hundred pounds is not unheard of. Clozaril is otherwise very effective, so there have been a lot of efforts to cut down on this weight gain with clever diet programs. The journal articles about these all find that they fail, or “succeed” in the special social science way where if you dig deep enough you can invent a new endpoint that appears to have gotten 1% better if you squint. This Clozaril-related weight gain isn’t magic – it still happens because people eat more calories – but it’s not something you can just wish away either.

Imagine that some weird conspiracy is secretly dumping whole bottles of Clozaril into orange soda. Since most Americans drink orange soda, we find that overnight most Americans gain fifty pounds and become very obese.

Goofus says: “Well, it looks like Americans will just have to diet harder. We know diets rarely work, but I’m sure if you have enough willpower you can make it happen. Count every calorie obsessively. Also, exercise.”

Gallant says: “The whole problem is orange soda. If you stop drinking that, you can eat whatever else you want.”

Taubes’ argument is that refined carbohydrates are playing the role of Clozaril-in-orange-soda. If you don’t eat refined carbohydrates, your satiety mechanism will eventually go back to normal just like in Asians and prisoners and rats, and you can eat whatever else you want and won’t be tempted to have too much of it – or if you do have too much of it, you’ll exercise or metabolize it away. When he says you can “eat as much fat as you want”, he expects that not to be very much, once your broken satiety mechanism is fixed.

Taubes is wrong. The best and most recent studies suggest that avoiding refined carbohydrates doesn’t fix weight gain much more than avoiding any other high-calorie food. However, the Clozaril-in-orange-soda model, which is not original to Taubes but which he helped popularize, has since gained further ground and is now arguably the predominant model among dietary researchers. It’s unclear what exactly the orange soda is – the worst-case scenario is that it’s something like calorically-dense heavily-flavored food, in which case learning this won’t be very helpful beyond current diet plans. The best-case scenario is that it’s just a disruption to the microbiome, and we can restore obese people to normal weight with a basic procedure which is very simple and not super-gross at all.

But whether or not you agree with it, this Clozaril-in-orange-soda story is indisputably Taubes’ model; Hallquist seems to miss this and instead makes vague gestures towards a discredited 70s theory that calories are excreted as ketones in the urine.

Because Hallquist doesn’t understand Taubes’ main point, his criticisms miss the mark. He wrote a five-part Less Wrong series on Gary Taubes, in which he tries to figure out what Taubes’ theory of obesity is and, as best I can tell, somehow ends up simultaneously saying Taubes is dishonest for accusing mainstream researchers of thinking diet is a matter of willpower, and saying Taubes is silly because diet really is just a matter of willpower. If you don’t believe me, read the post, where he says:

Taubes goes on at great length about how obesity has other causes beyond simple calorie math as if this were somehow a refutation of mainstream nutrition science. So I’m going to provide a series of quotes from relevant sources to show that the experts are perfectly aware of that fact.

Which he does. But then later he says:

So what’s going on here? I think the answer lies in Taubes’ eagerness to portray mainstream nutrition experts as big meanies who blame fat people for being fat…

But this puts Taubes in a bind: now if he says how much we eat has an effect on our weight, he’s a big meanie too. It doesn’t work for him to say fat people can’t help overeating because of something wrong with their metabolism, and this in turn causes them to gain weight, because he’s committed himself to the principle that blaming behavior equals blaming a character defect. So instead, we get wild rhetoric about how stupid the experts are with no coherent view underneath it.

A more sensible approach would’ve been to emphasize that akrasia is an extremely common problem for humans, and that people who don’t suffer from akrasia in regards to diet probably suffer from akrasia about something else. But that wouldn’t have made for as an exciting of a book.

So unless I am reading this wrong, he thinks the correct answer is to say that we should blame behavior, but that we should do it in a nice way where we say kind things about how everybody has willpower problems and it’s nothing to be ashamed of. I think he thinks he’s agreeing with mainstream consensus here, but mainstream consensus has already moved on to “Screw willpower, STOP DRINKING ORANGE SODA”.

Next, Hallquist attacks Taubes’ claim that “We don’t get fat because we overeat; we overeat because we’re getting fat”, saying that he’s “trying to be charitable” but “surely it wasn’t meant to be taken literally” and that he’s “playing with meanings” just so he can “portray nutrition experts as big meanies”.

But when you ask those nutrition experts – for example, Dr. David Ludwig, MD, PhD, Professor of Nutrition at Harvard – you find them writing articles in the Journal of the American Medical Association with titles like Increasing Adiposity – Cause Or Consequence Of Overeating?, where they say things like:

Since [the early part of the century], billions of dollars have been spent on research into the biological factors affecting body weight, but the near-universal remedy remains virtually the same, to eat less and move more. According to an alternative view, chronic overeating represents a manifestation rather than the primary cause of increasing adiposity. Attempts to lower body weight without addressing the biological drivers of weight gain, including the quality of the diet, will inevitably fail for most individuals…a focus on total diet composition, not total calories, may best facilitate weight loss.

And although the journal article is relatively balanced and might be dismissed as just a scholarly investigation of a weird idea, the same authors wrote up the same argument for the New York Times in a more obviously persuasive fashion.

Hallquist, again thinking he’s defending a consensus position, attacks Gary Taubes’ claim that government guidelines promoting low-fat diets were responsible for the increase in sugar and refined carbohydrate consumption. He calls this “a huge red flag”, says Taubes is “unaware of [reality] or tries to hide it from his readers”, accuses him of “irresponsible rhetoric”, and wonders “how anyone reading this could avoid suspecting something was up.”

Meanwhile, let’s go to more experts writing in the Journal of the American Medical Association – we’ll keep Ludwig but add Dr. Dariush Mozaffarian, MD, MPH, Dean of Tufts University School of Nutrition – explaining why the new dietary guidelines have done a sudden about-face, removed previous restrictions on fat, and added more restrictions on refined carbohydrates:

With these quiet statements, the DGAC report reversed nearly 4 decades of nutrition policy that placed priority on reducing total fat consumption throughout the population. In 1980, the Dietary Guidelines recommended limiting dietary fat to less than 30% of calories. This recommendation was revised in 2005, to include a range from 20% to 35% of calories. The primary rationale for limiting total fat was to lower saturated fat and dietary cholesterol, which were thought to increase cardiovascular risk by raising low-density lipoprotein cholesterol blood concentrations. But the campaign against saturated fat quickly generalized to include all dietary fat. Because fat contains about twice the calories per gram as carbohydrate or protein, it was also reasoned that low-fat diets would help prevent obesity, a growing public health concern.

The complex lipid and lipoprotein effects of saturated fat are now recognized, including evidence for beneficial effects on high-density lipoprotein cholesterol and triglycerides and minimal effects on apolipoprotein B when compared with carbohydrate. These complexities explain why substitution of saturated fat with carbohydrate does not lower cardiovascular risk.

Most importantly, the policy focus on fat reduction did not account for the harms of highly processed carbohydrate (eg, refined grains, potato products, and added sugar)—consumption of which is inversely related to that of dietary fat…Based on years of inaccurate messages about total fat, a 2014 Gallup poll shows that a majority of US residents are still actively trying to avoid dietary fat, while eating far too many refined carbohydrates.

Hallquist reserves special mockery for Taubes’ claim that the government’s low-fat mania reached such intensity that nutritional initiatives promoted soda:

This portrayal of mainstream nutrition science is as false as Atkins’ claim about peeing out excess calories. Besides the obvious – who on earth ever believed Coca-Cola was a health food?

Once again, the United States’ top nutrition scientists, explaining the new consensus dietary guidelines in the New York Times:

The “We Can!” program, run by the National Institutes of Health, recommends that kids “eat almost anytime” fat-free salad dressing, ketchup, diet soda and trimmed beef, but only “eat sometimes or less often” all vegetables with added fat, nuts, peanut butter, tuna canned in oil and olive oil. Astoundingly, the National School Lunch Program bans whole milk, but allows sugar-sweetened skim milk. Consumers didn’t notice, either. Based on years of low-fat messaging, most Americans still actively avoid dietary fat, while eating far too much refined carbohydrates.

Hallquist critiques Yudkowsky’s worry that dietary scientists have been too soft on high-fructose corn syrup. Once again, the latest dietary guidelines almost halve the permitted amount of HFCS relative to the last set of guidelines.

Hallquist derides Yudkowsky’s claim that dietary science has “killed millions”. An article by one of Britain’s top doctors, in the British Medical Journal, which by the way is the third highest-impact medical journal in the world, asks Are Some Diets Mass Murder? and argues that

[A] consequence of the fat hypothesis is that around the world diets have come to include much more carbohydrate, including sugar and high fructose corn syrup, which is cheap, extremely sweet, and “a calorie source but not a nutrient.” More and more scientists believe that it is the surfeit of refined carbohydrates that is driving the global pandemic of obesity, diabetes, and non-communicable diseases. They dispute the idea that we get fat simply because energy in exceeds energy out, saying instead that the carbohydrates “trigger a hormonal response that drives the portioning of the fuel consumed as storage as fat… The successful attempt to reduce fat in the diet of Americans and others around the world has been a global, uncontrolled experiment, which like all experiments may well have led to bad outcomes

If indeed there were serious flaws in the dietary guidelines for the past thirty years – and obesity kills about 370,000 people per year – then if the issues corrected in the latest guidelines and freely admitted by modern scientists made the problem even 10% worse, the “millions of deaths” figure is not an exaggeration.
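
To spell the arithmetic out (using the round, illustrative numbers above rather than any precise estimate):

370,000 deaths/year * 10% excess * 30 years ≈ 1.1 million deaths – and the figure scales linearly, so a 20% excess would put it above two million.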

Taubes is wrong about a lot of things. There is a lot of room for someone to criticize Taubes, and indeed, I have done so repeatedly. Hallquist tries to criticize Taubes, but fires so indiscriminately that he manages to reserve some of his strongest condemnation for claims of Taubes which are actually true and widely recognized as such.

Hallquist writes of Yudkowsky:

Remember, this is one of Yudkowsky’s go-to examples for why you shouldn’t trust the mainstream too much! And it’s not just wrong, it’s wrong in a way that could have been caught through common sense and basic fact-checking. But I guess common sense is just tribalistic bias, and who needs fact-checking when you’ve got superior rationality? The nicest thing you can say about this is that, when he encourages his followers to form strong opinions based on the writings of a single amateur, he’s only preaching what he practices.

I am usually in favor of being nice to people who get things wrong, because things are hard and goodness knows I am wrong often enough. But I am not in favor of being nice to people who get things wrong and are smug and mean to everyone else about them, because punishing defectors is the only way things ever get done in this world. So:

Topher. You seriously do not understand Taubes. You somehow read his book while by your own admission missing the entire mechanism he was trying to explain. You then go on to call a bunch of propositions ludicrous, idiotic, not-even-wrong, et cetera, when those propositions are widely acknowledged as true by the scientific community you think you are defending. You are nevertheless setting yourself up as an expert and trying to explain these subjects to other people. Many people told you this when you first posted on LW, and you ignored them and kept trying to do it. Mozaffarian, Ludwig, Friedman, etc. are the United States’ top nutritional scientists, and they are telling you this. I am telling you this. Everyone is telling you this, and you are putting your fingers in your ears and shouting “EVERYTHING IS SO OBVIOUS, I CAN’T BELIEVE OTHER PEOPLE GOT THIS WRONG, IT’S ALL SO EASY, EVERYONE EXCEPT ME IS AN IDIOT.”

Eliezer Yudkowsky has had some pretty silly ideas about diet. I know this because when he has them, he comes to me and asks me if they are correct, and I tell him. At one point, he bought and sent me a book he was interested in so that I could review it and tell him if it made sense. I told him it was wrong, and he listened. If you had asked me if you were right in your criticisms of Taubes, I would also have reviewed them and explained them to you. You didn’t, because you were so certain that you had to be right that you didn’t need to consult with anybody else, despite the fact that you are an amateur with no medical knowledge.

Thus is it written: “Why do you look at the mote in Eliezer Yudkowsky’s eye, and ignore the beam in your own?”

The last I heard about Eliezer’s dietary philosophy was his OKCupid profile, where under “Food” he wrote: “Flitting from diet to diet, searching empirically for something that works.”

SUCH OVERCONFIDENCE. SO CERTAINTY. VERY ANTI-SCIENCE.

III.

Okay, now that I’ve gotten my nitpicks out of the way, what about the actual meat of Hallquist’s criticism?

Hallquist claims that Less Wrong is fundamentally anti-science. All of his criticisms of Eliezer Yudkowsky were to show examples of him behaving in anti-science ways, but he also thinks that Eliezer comes right out and admits it:

Now that I’m thousands of words and about as many tangents into this post, let me circle back to something I said early in the post: pointing out the flaws in mainstream experts only gets you so far, unless you actually have a way to do better. This isn’t an original point. Robin Hanson has made it many times. (See here for just one example.) But I want to emphasize it anyway.

It’s the main reason I’m unimpressed with the material on LessWrong about how the rules of science aren’t the rules an ideal reasoner would follow. This is a huge chunk of Yudkowsky’s “Sequences”, but suppose that’s true, so what? We humans are observably non-ideal. Throwing out the rules of science because a hypothetical ideal reasoner wouldn’t need them is like advocating anarchism on the grounds that if Superman existed, we’d have no need for police.

I think this is more than a superficial analogy. To borrow another point from Hanson, most of us rely on peaceful societies rather than personal martial prowess for our safety. Similarly, we rely on the modern economy rather than personal survival skills for food and shelter. Given that, the fact that science is, to a large extent, a system of social rules and institutions doesn’t look like a flaw in science. It may be the only way for mere mortals to make progress on really hard questions.

Yudkowsky is aware of this argument, and his response appears to mostly depend on assuming the reader agrees with him that physicists are being stupid about quantum mechanics–that, combined with a large dose of flattery. “So, are you going to believe in faster-than-light quantum ‘collapse’ fairies after all? Or do you think you’re smarter than that?” asks one post.

This is combined with an even stranger argument, an apparent belief that it should be possible for amateurs to make progress faster than mainstream experts simply by deciding to make progress faster. Remember how the imagined future “master rationalist” complains “Eld scientists thought it was acceptable to take thirty years to solve a problem”? This is a strange thing to complain about. Either you have a way to make progress quickly or you don’t, and if you don’t, you don’t have much choice but to accept that fact.

Back in the real world, wishing away the difficulty of hard problems doesn’t make them stop being hard. This doesn’t mean progress is impossible, or that it’s not worth trying to improve on the current consensus of experts. It just means progress requires a lot of work, which most of the time includes first becoming an expert yourself, so you have a foundation to build on and a sense of what mistakes have already been made. There’s no way to skip out on the hard work by giving yourself superpowers.

I agree that you don’t make progress faster just by “wishing away the difficulty” or “giving yourself superpowers” or “deciding to make progress faster”.

On the other hand, if Yudkowsky thought that becoming more rational was a matter of “wishing away the difficulty”, he wouldn’t have written a larger-than-Lord-of-the-Rings introduction to the subject. He would have just wished.

Developing and learning an Art Of Thinking Clearly isn’t just “wishing away the difficulty” of settling on true ideas faster, any more than developing and learning rocket science is “wishing away the difficulty” of going to the moon. Thinking clearly is super-hard, but perhaps it is a learnable skill.

Rocket science is a learnable skill, but if you want to have it you should probably spend at least ten years in college, grad school, NASA internships, et cetera. You should probably read hundreds of imposing books called things like Introduction To Rocket Science. It’s not something you just pick up by coincidence while you’re doing something else.

If thinking clearly is a learnable skill, where are the grad schools for it? Where are the textbooks? Not in philosophy programs – Hallquist and I both agree about that. What all of this “only domain-specific knowledge matters” talk effectively implies is that “thinking clearly” is so easy you can pick it up by coincidence while working on pretty much anything else – something we believe about practically no other skill. If you trusted a rocket scientist who had never read a single rocket science textbook to be any good at rocket science, you’d be insane, but we routinely trust the subjects we most need to think clearly about to people who have never read a How To Think Clearly textbook – and I can’t blame us, because such textbooks, or at least good evidence-based textbooks of the same quality as the rocket science ones, simply don’t exist.

The Sequences aren’t an assertion that you can wish away a problem. They’re a cry for textbooks.

But Hallquist has a counterargument:

The big difference between what [scientists] do and what Yudkowsky advocates is that probability theory is much less useful here than a good knowledge of cell biology.

If we want to get all hypothetical, we can imagine some kind of theorizing contest between a totally irrational person with an encyclopaedic knowledge of cell biology and a very rational person who knows nothing at all about the subject. Who would win? Well, who cares? Whoever wins, we lose. We lose because I want the people working on curing cancer to be good at both cell biology and thinking clearly, to know both the parts of science specific to their own field and the parts of science that are as broad as Thought itself. I have seen what happens when people know everything about cell biology and nothing about rationality. You get AMRI Nutrigenomics, where a bunch of people with PhDs and MDs give a breathtakingly beautiful analysis of the complexities of the methylation cycle, then use it to prove that vaccines cause autism. By all means, know as much about methylation as they do! But you’ve also got to have something they’re missing!

I want people who know as much about the methylation cycle as the Nutrigenomics folks, while also understanding the idea of privileging the hypothesis. I don’t want to defy experts, I want to give experts better tools.

In fact, even that framing isn’t quite right. Every day I have patients come to me and ask questions like “are benzodiazepines safe and effective?” or “is therapy better than SSRIs?” or “will this drug increase my risk of dementia?” or “does untreated bipolar increase my risk of converting to rapid-cycling?” or a host of other questions. And I ask my mentor, who’s one of the top psychiatrists in the state, and he gives me a nice, straightforward answer, and then I ask my mentor at the other hospital I go to, and he’s also one of the top psychiatrists in the state, and he gives me precisely the opposite answer. And when I mention to either of them that the other guy disagrees, they just assure me that if I do the research myself I’ll find that their point of view is obviously and self-evidently correct. And meanwhile, my patients are pressing me for answers and telling me that if I get this wrong it will ruin their life. And I can’t say “Wait fifty years until enough studies are done to be totally sure.”

“Don’t worry too much about learning rationality, just listen to the experts” is all nice and well up until the point where someone hands you a lab coat and says “Congratulations, you’re an expert!” And then you say: “Well, frick.” And when that day comes you had better already have learned something about the Art Of Thinking Clearly or else you have a heck of a steep learning curve ahead of you.

Hallquist says that Less Wrong is “against scientific rationality”. Well, we’re “against scientific rationality” in the same sense that my hypothetical Soviet who says “We need two Stalins! No, fifty Stalins!” is against Stalinism as currently implemented. It is in the right direction. But it needs to go further. This is why all of the posts Hallquist finds to support his assertion that Less Wrong is “against scientific rationality” are called things like Science Isn’t Strict Enough. I’m “against scientific rationality” insofar as when my patients demand answers to semi-impossible questions and say their lives depend on it, I want to have scientific rationality on my side, and another tool, and a third tool if I can think of it, and as many extra tools as it takes before I stop being terrified.

If you don’t trust the quantum mechanics sequence to make the point for you – and maybe you shouldn’t – I explain my own version of this revelation in the highly-Eliezer-inspired The Control Group Is Out Of Control. Science is what hands us an unusually well-conducted meta-analysis proving that psi exists with p < 1.2 * 10^-10, crowning fifty years of parapsychological research that finds positive results about as often as not. Bayes is what tells us that parapsychology makes no sense, has an ungodly level of Kolmogorov complexity, and is going to require a heck of a lot more than a good meta-analysis before we accept it. In that sense, “switching allegiance from Science to Bayes” isn’t some cataclysmic event where we forsake Galileo thrice before an onyx altar; it’s something we all do already under the right circumstances. The point is figuring out how to formalize it, so that we don’t mess up and dismiss a result that’s counterintuitive but true. I respect Yudkowsky’s decision not to use an example like this because if he used this example people would assume he was only talking about parapsychology and real science is totally safe, but I think he was going for the same principle.
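
To put rough numbers on that (purely illustrative ones of my own, not anything from the meta-analysis or from Eliezer):

posterior odds = prior odds * Bayes factor. If your prior odds on psi are something like 10^-20, then even granting the meta-analysis a wildly generous Bayes factor of 10^8 in psi’s favor, the posterior odds are still 10^-20 * 10^8 = 10^-12 – overwhelmingly against, which is why it takes “a heck of a lot more than a good meta-analysis”.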

I have immense respect for Topher Hallquist. His blog has enlightened me about various philosophy-of-religion issues and he is my go-to person if I ever need to hear an excruciatingly complete roundup of the evidence about whether there was a historical Jesus or not. His commitment to and contribution to effective altruism is immense, his veganism puts him one moral tier above me (who eats meat and then feels bad about it and donates to animal charities as an offset), and his passion about sex worker rights, open borders, and other worthy political causes is remarkable. As long as Topher isn’t talking about diet or Eliezer Yudkowsky’s personal qualities, I have a lot of trust in his judgment.

But these things I like and respect about Topher are cases where he’s willing to go his own way. He views open borders as a pressing moral imperative even though you’ll have a hard time finding more than a handful of voters, sociologists, or economists who support it. He’s signed up for cryonics even though 99% of the population think that makes him crazy. He donated to fight AI risk way back when it was hard to find any AI experts willing to endorse the cause, and so gains extra credibility and moral authority now that many of them have. Heck, I even respect his ability to put down a terrible Aquinas book on the twenty-somethingth page, whereas I trudged all the way through.

And – and this is a compliment, so I hope he takes it as one – I wish he would try to help spread his own good qualities. We need more people who are able to evaluate difficult moral and intellectual arguments and come to apparently-bizarre but in-fact-very-important conclusions, even when there is not a knock-down scientific argument proving them correct quite yet.

And a necessary consequence of having people who are able to go beyond the things that have knock-down scientific proofs, and go beyond the things that everyone by consensus agrees to be true, and who are able to discuss weird ideas like effective altruism and cryonics and the Singularity, is that occasionally some people will venture too far and say something genuinely out of line (remember: decreasing your susceptibility to Type I errors will always increase your susceptibility to Type II errors, and vice versa!). When this has happened in the rationality community, I have tried again and again to politely but firmly correct these people.

I would like to have Topher as a partner in this effort, but instead, I find him to be trawling the entire corpus of everything people in the rationalist community have ever said or done for quotes he can take out of context to “prove” that they are “crackpots” and that they universally “hate experts”. It’s led to him rushing through books he doesn’t really understand so he can get to the fun part where he points out how crackpotty everyone else is for not rejecting the book fast enough. It’s led to him gradually burning bridges with a lot of people who should be on his side by being needlessly hostile to them. It’s led to him turning Yudkowsky’s opinion that science needs to be stricter and stronger into Eliezer being “against scientific rationality” and “anti-intellectual” and “pro-crackpot”, peppered with a laundry list of out-of-context gripes. It’s not a productive way to have the discussion and, more importantly, it’s not true. And it’s not fair to the efforts that the rationalist community keeps putting in to improve themselves and their thought processes.

If Eliezer Yudkowsky ever showed up and said “I have perfected this Art, now I am never wrong,” then I would happily join Hallquist in laughing hysterically.

If Eliezer Yudkowsky showed up and said “I have tiny pieces of this Art and some promising leads on who can help us find more, let’s work on it together,” well, I’ve spent the past couple of years taking that offer and so far I don’t regret it.

And if Eliezer Yudkowsky showed up and said “I thought I had pieces of the Art, but I was wrong, I don’t know anything about it, nobody does,” then I will still go to my grave believing that whether or not we know it, such an art should exist, that even if it’s near-impossible we should be chipping away at the impossibility as much as we can in the hopes of getting a couple of tiny shards of something useful that we can cherish as precious.

But I think it isn’t as bad as all that. We do have some tiny preliminary seeds of such an Art. I think such an art involves learning to appreciate your cognitive biases on a gut level. I think it involves understanding the relevant basics of probability theory and calibration. I think it involves knowing when to use the Inside View or the Outside View, how to avoid getting bogged down in meaningless semantic arguments, and how to overcome your resistance to changing your mind in the face of new evidence. It also involves knowing how to read studies, learning to get a feel for the process of science and find out who is and isn’t a credible expert, learning when science does and doesn’t work and how to repair the latter category, learning to avoid the well-known pitfalls, and learning how to build communities where good epistemology can flourish.

It also involves a bunch of other things that I don’t know and Eliezer doesn’t know and maybe no one in our community knows, but once we find out, we intend to steal them, and you should help.


679 Responses to Contra Hallquist On Scientific Rationality

  1. BD Sixsmith says:

    Regarding B, there is a difference between thinking that philosophical theories are bunk and thinking that philosophy is itself bunk. If one does the former, one can at least have arguments with their defenders because one respects the process. In his “need to resolve the issue” EY seems quite willing to bypass the arguments altogether, which, as Hallquist says, might be indicative of an epistemic impatience that fails to appreciate how difficult some questions are to answer.

    • Indeed the hypothesis that philosophy makes slow progress because its problems are hard is one that didn’t exactly feature in LessWrong’s steelmanning of philosophy. (Joking. In fact, they never steelmanned it.)

      The claim that philosophy is slow is what I call a dangling comparison… slower than what? A field with different, easier problems? The MIRI/LessWrong style of philosophy?

      There are maybe 200 open philosophy problems (figure based on Wikipedia). LW claims to have solved 2 or 3. That’s not strong evidence for the easy-problems-dumb-philosophers conjunction.

      And what does it mean to solve a philosophical problem? For my money, it means to reach a point where objections to the solution have been answered. But objections to LW claims so often go unanswered.

      I think the best kinds of academic philosophy are more rational than LW, because the process is better, because they have a rule that objections must be met… however slow the process is.

      • Typhon says:

        « dangling comparison »

        This is such a common phenomenon and yet I was without a neat way to designate it.

        I’m totally stealing this phrase.

    • Scott Alexander says:

      I’m not sure how to think about the worldview where Aquinas was obviously, embarrassingly wrong to the point where Hallquist can figure it out in ten minutes, yet nearly everyone believed Aquinas for five hundred years, but there’s no problem there, sometimes philosophy is just slow.

      To give another example, I would hate to have to run all of my beliefs about ethics or the nature of truth by a committee of Derrida, Heidegger, and Foucault, but it’s not really clear what my alternative is if I accept absolutely the right of academic philosophy to continue doing what it’s doing as an arbiter. I think we got rid of Derrida et al by calling them “continental philosophy” and then ignoring that, but at what point do you give the people who disagree with you a geographical name and then let yourself ignore them?

      Probably my biggest epistemological puzzle is how to navigate between the Scylla of Radical Inside View (“I’ve come up with a knock-down super-convincing proof that consequentialism is correct, I now feel confident saying there’s a 100% chance I’m right, and anyone who disagrees must be stupid or dishonest”) and the Charybdis of Radical Outside View (“since 90% of the world’s population is theist, I should be a theist too, because who am I to say I’m smarter than 90% of the world’s population? I’m just a randomly selected guy!”). I don’t know anyone with a principled answer to this question, and I don’t see Hallquist as having any kind of a different answer or attitude than Yudkowsky here, though I could change my mind if someone explained exactly what they thought both of their answers were.

      • FrogOfWar says:

        I think you’re equivocating on ‘philosophy’ here. It can refer to:

        (1) The practice of using arguments to address [insert perennial philosophical issues like free will and ethics]

        (2) The academic institution of philosophy.

        You can’t be criticizing (1), because you and Eliezer and every other philosophy critic are doing philosophy according to (1). It’s what your consequentialism FAQ and Eliezer’s “dissolution” of the free will problem are.

        But if your target is really (2), then the Aquinas example doesn’t work because academic philosophy hasn’t existed for nearly that long. As everyone knows, back then we did not have the same disciplinary distinctions that are in place today, including those between philosophy and science.

        Given that the debate is over whether it is generally worthwhile to look to the contemporary state of the art in the “analytic” tradition, I don’t think going any further back than about 100 years to find examples of philosophy’s obvious wrong-headedness makes any sense, and probably shorter would be better.

        [And as an aside, nobody “got rid of Derrida et al.” They were never a part of the conversation in the tradition which has resulted in the present day state of most prominent Anglophone philosophy departments. Similarly, I assume that nobody in Derrida’s tradition had to explicitly get rid of Frege, unless you count the original Frege-Husserl schism.]

        • Scott Alexander says:

          I’m not sure where you see the big break between Aquinas’ day and the present. There have been very smart people studying philosophical questions in universities and talking to each other and responding to one another’s ideas ever since Greece. What feature of modern academia do you think has changed the calculus there?

          I feel like there was certainly an analytic/continental split sometime in the 19th century. If you’re willing to accept this as natural, why can’t (not recommending this, just asking hypothetically) there be a LessWrongian/Non-LessWrongian split where we just talk to each other and ignore the outside world?

          (just like there’s a Muslim-Theology/Non-Muslim-Theology split and a Marxist-Philosophy/Non-Marxist-Philosophy split, et cetera. Analytic/Continental is just the most salient!)

          • FrogOfWar says:

            I’m not sure there’s anything I’d characterize as “the big break”, though Frege’s work on logic is an obvious choice. But my point wasn’t supposed to be based on there being such a break.

            Rather, the point was that the main similarity between contemporary philosophy and scholastic philosophy is that there were a bunch of smart people trying to address the perennial issues by writing huge arguments to the other smart people around them.[1] This is just philosophy in the (1)-sense, and the LW people discussing philosophy are also doing it. So if the track record of Aquinas takes down the contemporary academy, it’s taking you guys down with it.

            If there’s supposed to be a tighter, diseased connection between what they were doing and what contemporaries are doing, I don’t know what you are claiming it is.

            As for breaking off into separate groups, you’re right that at a certain level of disagreement on fundamental points, different traditions often can’t profitably have a dialogue with each other. But if analytic philosophy is at that level of disagreement from LW, then the LW discussion must be completely closed off to everything but a tiny blogging culture.

            LW is characterized by acceptance of a number of philosophical views (consequentialism, one-boxing, MWI, etc.) which are all debated within analytic philosophy and have many adherents there. How could this huge region of extremely intelligent people[2] discussing the same issues on many of the same terms and in many cases with the same goals be completely dismissible by LW crowds?

            The LW positions on most of these issues wouldn’t even exist if analytic philosophers hadn’t developed the disputes earlier. It seems odd that LW needed exactly enough analytic philosophy to get the original debates and arguments and then everything the philosophers did after that was a waste of time.

            [1] There’s also the similarity that each sometimes writes long interpretations of some of the same earlier philosophers. But LW-ers aren’t debating whether to read history of philosophy.

            [2] You probably already know that philosophers do among the very best on GREs (and LSATs) despite being weighed down by being grouped with religion majors.

          • Emile says:

            I don’t even think Eliezer or LessWrong in general are more dismissive of philosophy than the average scientist or engineer is…

            So LW’s “dismissal of philosophy” doesn’t seem to be something unusual that needs explaining.

          • Urstoff says:

            Indeed, pretty much no one but academic philosophers pays attention to academic philosophy (for better or worse). Stephen Hawking and Leonard Mlodinow gave a laughable dismissal of philosophy in their book; Neil deGrasse Tyson has similarly dismissed academic philosophy without argumentation. Philosophers are used to being ignored except in very specific scientific circles (parts of evolutionary biology; parts of cognitive science; parts of linguistics). I think the general attitude is that if you don’t have time for or are not interested in philosophical problems, fine, but don’t make philosophical claims without being able to back them up or knowing the relevant works in contemporary philosophy.

          • Deiseach says:

            What feature of modern academia do you think has changed the calculus there?

            Because science was natural philosophy, and philosophy invoked the science of its day; Aquinas routinely uses Aristotle’s biology, for instance, or other examples of Cutting-Edge Science of the Time (hence why Nancy Pelosi bedecked herself in the garb of a Thomist interpreter when invoking Aquinas on abortion and why Catholics can Be For Choice).

            As SCIENCE!!! became this great experimental process of get-your-hands-dirty in the real world examining the data and seeing if the theory matches observed phenomena, the distance between “thinking about problems” as an acceptable solution and “proving your theory by appeal to material fact” grew greater and greater, and philosophy was more and more pushed off the throne it had usurped from theology as Queen of the Sciences – indeed, describe philosophy today as a science, and how many ‘real’ scientists will agree it belongs even with the soft sciences?

          • Protagoras says:

            I think you have it too early. I tend to date the analytic/continental split to the Nazi era, when nearly all of the most prominent central European philosophers with leanings toward the side later identified as “analytic” fled Europe, mostly going to America and Britain, while most of those with leanings later identified with the “continental” tradition stayed. That seems to be the point at which modest tendencies toward slightly different approaches turned into two hostile camps that barely communicate with one another. For example, both Carnap and Heidegger were influenced by Husserl, and before Carnap fled Europe, they sometimes went to the same conferences and talked to one another and seriously confronted one another’s work. Russell shows a similar pattern to Carnap of being admittedly a fierce partisan all along but taking views that would be later classified as “continental” seriously in his early work (even if he rejected them), while just dismissing or completely ignoring them in his later work. It is true that once the camps became thoroughly separated, some of their earlier predecessors were largely co-opted by one side and dismissed by the other, which retroactively created the impression that the deep divide was older than it actually was.

          • FrogOfWar says:

            @Protagoras

            Fair points. That’s why I said you should probably look at a “much shorter” distance back than 100 years when evaluating the track record of contemporary phil. I regretted not choosing a more recent example than Frege for my aside but didn’t think it worth an edit after Scott replied.

            @Emile/Urstoff

            Yeah, everyone is dismissive of academic philosophy. But most of those people aren’t from communities built around adopting positions on debates that were developed by academic philosophers, many of them quite recent and esoteric. Of course, this isn’t to say that the other dismissive people don’t tacitly adopt philosophical positions.

          • “I don’t even think Eliezer or LessWrong in general are more dismissive of philosophy than the average scientist or engineer is…”

            But they are doing a lot more philosophy than the average scientist or engineer.

          • Shenpen says:

            The main feature is simply that the likes of Derrida should not even be taken seriously as anything remotely resembling rigorous scholarship. It is a joke really. The only thing that speaks for them is that someone made the bad decision to include them in academia: and that sort of decision was almost certainly political, not quality-based. If Derrida did not wear the mantle of academia, if his works were just published on some guy’s blog, you would not care. You simply would not find much to care about. In other words, you got authority-pwned.

            There is always a level of nonsense where it is so obviously nonsense that the immune reaction should kick in.

            This largely began with Hegel – his Philosophy of Right is OK, the Phenomenology of Spirit is pure nonsense. Marx was nonsense as a philosopher, but was more serious (as in: wrong, but actually wrong, not not-even-wrong) as an economist and historian. Heidegger: nonsense. Foucault: nonsense as a philosopher, but really interesting as a historian, and perhaps acceptable as a philosopher of history specifically, i.e. someone who writes about what methods historians should use. Deleuze to Derrida: nonsense.

            Again, just read them without the mantle of authority, like some guy’s blog. The nonsense becomes obvious.

            More info: http://www.rogerscruton.com/articles/1-politics-and-society/83-confessions-of-a-sceptical-francophile.html

            “The monsters of unmeaning that loom in this prose attract our attention because they are clothed in the fragments of theories, picked up from the aftermath of forgotten battles – the Marxist theory of production, the Saussurean theory of the signifier, the Freudian theory of the Oedipus complex, all, I should say, thoroughly refuted by subsequent science, but all somehow retrieved by the Parisian scavengers, and given a ghoulish after-life in the steam above the cauldron.”

            “And there is one idea acquired during the great pre-war self-examination that has not lost its credibility, an idea that endures because it is not a scientific hypothesis that stands to be refuted, but a philosophical reflection on the nature of consciousness. This is the idea of the Other.”

            ” For de Beauvoir woman had been made Other by man, and it was in confronting her ‘altérité’ that woman could repossess herself of her stolen freedom. For the humane Levinas the Other is the human face, in which I find my own face reflected, and which both hides and reveals the light of personality. For Merleau-Ponty the Other is both outside me and within, revealed in the phenomenology of my own embodiment. For Sartre the Other is the alien intrusion, which I can never vanquish or possess, but which taunts me with its ungraspable freedom, so that, in the famous last line of Huis Clos, ‘l’enfer c’est les autres’.
            This wonderful literary idea, which Kojève rescued from the trunk of old manuscripts in which the Germans had put it for safe-keeping, is responsible for much that is interesting and beautiful in the literature of post war France: you find it in Georges Bataille’s encomium to the erotic, in Jean Genet’s Journal du voleur, in Sartre’s Being and Nothingness, in the Catholic existentialism of Gabriel Marcel, and in the nouveau roman of Robbe-Grillet and Duras.”

            “Kojève’s treatment of the Other also fed into the Communist Party’s programme of recruitment, by giving the French literary elite a language and a habit of thought that could easily be adapted to the war on bourgeois society. (For let us not forget that it was Hegel’s version of the idea which had first inspired the youthful Marx, in his theories of alienation and private property.) Whether Kojève had that in mind will never be known;[10] but one thing is certain, which is that the idea of the Other became part of a mass-recruitment of the French intelligentsia to the causes of the left, and that when this idea was boiled up with Saussurean linguistics, Freudian analysis and Marxist economics to form the witches’ brew of Tel Quel and deconstruction, it gave rise to a literature that made no place whatsoever for any political view other than that of revolutionary socialism.”

        • walpolo says:

          As a somewhat related point, even if your definition of philosophy is as broad as

          >(1) The practice of using arguments to address [insert perennial philosophical issues like free will and ethics]

          Derrida isn’t really doing philosophy by that definition, since Derrida thinks that using arguments is exactly the wrong way to address foundational questions.

        • Bugmaster says:

          As far as I understand — and I could be very wrong here — the main difference between philosophy and science (and/or modern rationality) is that philosophy is entirely qualitative, whereas science is quantitative.

          To use a contrived example, the question “can angels dance on pins?” is the province of philosophy, and you can pretty much debate it forever. But as soon as you ask, “How many angels can dance on the head of a pin?”, you’ve left philosophy behind. Now, you are at the point where you are pulling out magnifying glasses and vernier calipers, staging experiments, and making numerical predictions. If the answer turns out to be, “actually there doesn’t seem to be any way of measuring angels”, then that’s a pretty definitive answer, as well. In addition, there’s no longer any room here for disagreement about semantics or interpretation of words; if you’ve measured 11±3 angels, and I’ve measured 10^23 angels, then at least one of us is wrong, and we need to stage more experiments.

          The problem with philosophy is that all of the really important practical questions appear to be quantitative in nature. Sure, it can be fascinating to discuss what it means for an entity to be truly conscious, as opposed to merely behaving in all possibly observable ways as though it was conscious; but in practice, if you can never tell the difference then the answer doesn’t matter — unless you are a tenured philosophy professor, I suppose.

          • Protagoras says:

            Philosophy of science is highly quantitative. The question about angels is theological, not philosophical. Though I’m partial to one answer, which has been popular with theologians and which seems true to me as well, though for a different reason than that of the theologians; “all of them” seems quite plausible.

          • wysinwyg says:

            Actually, the philosophical question was always “How many angels can dance on the head of a pin?”, and it was really a problem about whether “actual infinities” exist (closely related to Zeno’s paradox). That is, assume that angels take up no space — now how many can you fit in an arbitrarily small volume?

            There is a whole branch of philosophy, called logic, that is or at least can be quantitative and is widely regarded as being the foundation for all quantitative reasoning whatsoever.

          • Bugmaster says:

            @wysinwyg:

            I know, that’s why I picked that example, for the irony. In this case, the answer turns out to be, “until we can measure angels, there’s no point in worrying about it”, as I said in the previous post. That said, you could make the argument that the angel scenario is more closely related to math than to philosophy, just like Zeno’s Paradox. But IMO what makes math very different from philosophy is the language. There is very little room for interpretation and misunderstanding in formulae, as compared to any natural language.

            There is a whole branch of philosophy, called logic, that is or at least can be quantitative and is widely regarded as being the foundation for all quantitative reasoning whatsoever.

            I’ve heard this before, and I think this may be a genetic fallacy, depending on what you mean by “foundation”. Alchemy led to chemistry, but that doesn’t make it valuable in and of itself. On the other hand, if you mean something closer to “quantitative reasoning is built on logic”, then you’re probably right, but that still doesn’t imply that you can get very far in your understanding of the world just by using syllogisms.

          • wysinwyg says:

            I know, that’s why I picked that example, for the irony. In this case, the answer turns out to be, “until we can measure angels, there’s no point in worrying about it”, as I said in the previous post.

            No, you are intentionally misinterpreting what the question actually was. The assumption was always that angels take up no space whatsoever. The answer was to properly formalize the concept of “infinity” so that we could usefully study problems like this.

            The “angels” and “head of a pin” are irrelevant to the actual philosophical issue under debate, which was about the existence of actual infinities. Much like Searle’s “Chinese room” thought experiment doesn’t actually require that the language be Mandarin or Cantonese to effectively motivate the argument, you can replace “angels” with “Euclidean points” and “pinhead” with “epsilon disk” and get the exact same philosophical problem.

            But IMO what makes math very different from philosophy is the language. There is very little room for interpretation and misunderstanding in formulae, as compared to any natural language.

            I think we have very different views of mathematics. You seem to be talking about mathematical formalisms, whereas “mathematics” to me means a bunch of professors writing papers at each other (almost phenomenologically identical to “philosophy” as a matter of fact).

            Alchemy led to chemistry, but that doesn’t make it valuable in and of itself.

            “Alchemy” and “chemistry” are not discrete, unique, disjoint phenomena. Many parts of alchemy (pretty much everything operational, almost none of the theory) turned out to be valuable — and became the foundation of chemistry.

            But I don’t think this is actually a good analogy to the relationship between mathematics and logic. Mathematical formalisms were discovered empirically until pretty recently in history (and often still are, I believe). The relationship between mathematics and logic is not historically contingent, and — at least according to mathematicians — seems to run really deep. Hence the fact that what mathematics professors do is actually write very logically rigorous arguments at each other rather than just swapping formulae around.

            that still doesn’t imply that you can get very far in your understanding of the world just by using syllogisms.

            Never have I argued for such a thing. Not sure where it even came from.

          • wysinwyg says:

            Since what we’re really talking about is whether philosophy is “useful” or not, I’ll weigh in on that.

            It depends on how you define “philosophy.”

            Personally, I think of mathematics and science as subdisciplines of philosophy — subdisciplines where you focus on questions about the relations between abstract entities (in the case of mathematics) or measurable quantities (in the case of science). Comparing philosophy and science and asking “which is more useful” is a category error. Useful for what?

            For discovering exoplanets? Yes, science is definitely better. But how much impact does the existence of exoplanets have on my life? Almost none. The concept of “humility”, on the other hand, cannot be quantified in any obvious way, but definitely has an impact on my life: it is an ideal to strive for, mostly to keep me from falling for my own bullshit.

            The problem with philosophy is that all of the really important practical questions appear to be quantitative in nature.

            This strikes me as not just false but obviously false. First of all, a lot of what people think of as “science” is qualitative. Second of all, I can think of no more practical question than “What should I do next?” which seems to me to be almost entirely qualitative (though it could admit a quantitative component).

      • Deiseach says:

        Yudkowsky seems to have what I’ll describe as an engineering view of philosophy and its problems, or the problems it addresses: Does this fix work? Yes? Then forget the chin-stroking about “But whyyyyyy does it work? Why are things like that such that them being like that can make it work? Suppose things were not like that but like this, would it still work? Why aren’t things like this instead of like that?”

        Hit it with a spanner and make it go. Problem solved. Next!

        • Vaniver says:

          Then forget the chin-stroking about “But whyyyyyy does it work?

          Consider Yudkowsky’s writings on Mach’s Principle and the Generalized Anti-Zombie Principle, or his extrapolation from Follow-the-Energy in physics to Follow-the-Improbability in probability/philosophy.

          It seems to me that Yudkowsky is very interested in why things work, but that he expects it to be, well, math.

          (Specifically, verbal arguments put people in a different frame of mind than mathematical arguments. It’s considerably harder for verbal arguments to be final, and so someone not used to the finality of math can say “but wait, how are you done already?”)

          • Deiseach says:

            Like I said, engineering. Plug the figures into the equation and away you go 🙂

          • Shenpen says:

            Math is just unambiguous language. It is deeply wrong to think the universe runs literally on math any more than to think the universe runs literally on Koine Greek. Math is merely how humans map the world in a clear, unambiguous way. When EY expects to find math, he simply expects to find some X which he prefers to express in an unambiguous way. The math fetish is nothing more than just saying if you build the edifice of science on ambiguous language, it cannot reach the stars.

          • Deiseach says:

            It is deeply wrong to think the universe runs literally on math any more than to think the universe runs literally on Koine Greek

            Ἐν ἀρχῇ ἦν ὁ λόγος, Shenpen? 🙂

          • Shenpen says:

            @Deiseach

            That reminds me of a story. I was about 18 when I was still desperate to save some tiny sense of theism in me or reinterpret Christianity in a radically different way in order to be able to say that at least some aspect of it makes sense, and my move was clinging to precisely that, to the “in the beginning was the word”. I reasoned that the creation of the world must not be a physical act, or else the Bible would say “in the beginning was a huge star factory”. It must be something done inside an already existing material world and I figured if it is “Word” then it must mean the naming, categorizing of things. I.e. things can be obviously categorized infinite different ways, and since our reasoning depends on that, because every statement is a relationship between categories (“swans are white” -> “the animals in the category of swans have the color property within the category of the white range”), there is really nothing to save us from postmodernist subjectivism where nothing is true. But it could be that via the Word God made an ur-categorization of things, declared His subjective viewpoint and categorization as The objective viewpoint and categorization, and thus we have something we can attach our reasoning to. This is the creation – not a material act, not the creation of a physical world, but the creation of the intellectual world, the names and categories of things. Not the world that really is, but the world that can be thought and talked about. I seriously believed this.

            Thinking back to this period of my life, I still don’t know if it was entirely crazy or a cute attempt at teenage philosophy.

          • FeepingCreature says:

            “In the Beginning was the Map…”

          • Troy says:

            This is the creation – not a material act, not the creation of a physical world, but the creation of the intellectual world, the names and categories of things. Not the world that really is, but the world that can be thought and talked about.

            From what I understand, this is not far off from early Christian understandings of the Logos, except that they would strongly reject the idea that the material world pre-existed the intellectual world. The Father emanates the Son (Logos) = creates the intellectual world, and then creates the physical world through the categories, i.e., through the Logos.

      • BD Sixsmith says:

        Well, I don’t think Hallquist did find TA to be “obviously, embarrassingly wrong in ten minutes” – and, indeed, I don’t think he could do so in ten years. The charge of hypocrisy sticks. The problem is that it makes Hallquist look worse but does nothing for Yudkowsky.

        I think we got rid of Derrida et al by calling them “continental philosophy” and then ignoring that…

        Did we? I can think of Searle on Derrida, Rorty on Foucault and Webster on Lacan for starters. Sure, none of them silenced the objects of their criticism but it is more difficult to put nails into the coffins of arguments outside of maths and the natural sciences. There is no way around that.

        Admittedly I tend to think that it is a good thing that philosophers take their sweet time because when ideas are accepted with too much enthusiasm it can be bad.

      • ryan says:

        There is an easy way out. Philip Dick once said reality is that which, when one stops believing in it, fails to go away. Try not believing in principled resolutions to epistemological puzzles. You’ll note they do in fact go away. At this point it should also be much easier to handle famous epistemological puzzles existing for thousands of years with philosophers unable to come up with principled resolutions for them.

        So yeah, I’m guessing that for a “rationalist” that sucks a bit too much to be contemplated. Maybe it’s like a believer finding out God is dead. I’ve not read any Nietzsche, but I have read Philosophy Bro’s take on Nietzsche:

        At least believers can tell you exactly why they’re pissing their lives away; see for yourself. Ask one. “Why do you hate sex, joy, and the human spirit?” “Oh. Because I believe in a non-physical deity who told me that if I hated them, I’d spend the rest of eternity in paradise.” It’s batshit crazy, right? But at least he’s sticking to his guns. Agree to disagree, whatever. But if you don’t believe that bullshit, then why the hell are you sitting around wondering what’s left? It’s because you DO believe that bullshit, you’re just too scared to admit it. Fuck, bro, you almost made it out – you saw through the lies and said, “Nope, fuck that.” But then you fell into the same old pattern of worrying about right and wrong, about patriotism and politics, about tolerance and government and fairness, about all measure of bullshit – all you’ve done is replaced the bullshit you know with the bullshit you don’t.

        I’m not saying nothing matters, and fuck people who think that. What matters? YOU matter. Want to know the secret to being happy? It’s easy. I’ll tell you. Just do what makes you happy. Oh, shit, look at how easy that is! It’s like magic! TA-DA, BITCHES! Stop letting anyone tell you what ‘happiness’ is, or what should make you happy, or why you should be guilty for being happy. You know what happiness is. You know how to experience joy, or you would if you just let go of how everyone else has told you how to be.

        Humanity isn’t an end, it’s a fork in the road, and you have two options: “Animal” and “Superman”. For some reason, people keep going left, the easy way, the way back to where we came from. Fuck ’em. Other people just stand there, staring at the signposts, as if they’re going to come alive and tell them what to do or something. Dude, the sign says fucking “SUPERMAN”. How much more of a clue do these assholes want? How does that not sound awesome? But they’re paralyzed by their fear – “But, that road looks hard to walk.” It IS hard, dipshit, but that’s what makes it worth it! Fuck.

        The rest:

        http://www.philosophybro.com/post/55729214617/nietzsches-thus-spoke-zarathustra-a-summary

      • It’s about arguments, not authorities. If you think Derrida and Foucault are dismissible because they don’t even try to argue clearly, then, fine, that’s a good reason… but that doesn’t mean you don’t have to respond to arguments where they are available.

        I can’t speak for Hallquist, but what Yudkowsky could be doing is following this process:

        1. Assume your idea isn’t original.

        2. Research existing counterarguments.

        3. Meet them.

        4. Discuss/publish, asking for further objections.

        5. Meet them.

        6. Then and only then, claim plaudits.

      • Josh says:

        I think the resolution of the radical inside view and radical outside view is:

        a) Err on the side of the person putting the positive claim forward being wrong. If you’re putting forward a positive claim, you’re probably wrong. If 90% of the world is putting forward a positive claim, they’re probably wrong. Consequentialism and theism are both wrong. (Happy to defend the consequentialism wrong thesis if there are consequentialists who feel insufficiently challenged 🙂 )

        b) Radical empathy… if 90% of the world disagrees with me, do I empathize with their reasons well enough that I could effectively simulate their most persuasive adherents? If not, then yeah, I’m probably missing something, though that doesn’t mean the 90% are RIGHT, per se (see point a).

        • Addict says:

          I am quite interested in hearing your ‘consequentialism is wrong’ hypothesis. To me, consequentialism is a mathematical framework which can express any possible value system, and is therefore capable of being made isomorphic to any ‘alternative’ you would care to name.

          (I draw my definitions from this article: http://john.soupwhale.com/?p=100)

  2. MondSemmel says:

    Typo: The link labelled “how to read studies” is broken.

    • Banananon says:

      Typo: […] shows that people who sign up for cryonics may are less likely to believe it will work than demographically similar people […]

  3. Steve Johnson says:

    That Topher Hallquist is a vegetarian actually puts his criticism of Taubes in a different light.

    A huge motivator for the original dietary recommendations that have now proved so amazingly disastrous was making a "scientific" case for progressive moral inclinations.

    • DavidS says:

      This is an interesting claim that I haven’t heard before: I didn’t know that vegetarians/vegans had that sort of sway at the time that the dietary status quo was being set, and always (without actual research, mind) assumed it was something to do with the influence of agrarian lobbies trying to sell their produce.

      Got a source/reference?

      • Steve Johnson says:

        It’s hard to reference intellectual fashion (or any kind of fashion) in any way other than anecdotally.

        Albert Schweitzer – winner of the Nobel Peace Prize, known for African missionary work, and basically a secular saint in that he was referenced as a byword for virtue – became a vegetarian near the end of his life (he died in 1965).

        Star Trek’s (late 1960s) Spock – a member of a more enlightened, advanced and logical race – was a vegetarian – like all members of his more advanced race.

        The general importation into the blue tribe of vaguely understood “Eastern” practices like meditation and yoga – including vegetarianism.

        The actual dietary guidelines themselves were pushed by George McGovern (https://en.wikipedia.org/wiki/United_States_Senate_Select_Committee_on_Nutrition_and_Human_Needs) – who was later appointed to some bullshit job with the UN, “United States Ambassador to the United Nations Agencies for Food and Agriculture” – he was the left’s darling in a split Democratic party when he was nominated to run against Nixon in 1972. Connected and ideologically motivated. From la Wik:

        Political activist Robert B. Choate, Jr. first came up with the idea of forming a joint congressional committee to probe the hunger problem.[1] McGovern, who had been involved in food-related issues throughout his congressional career and who had been Director of Food for Peace in the Kennedy administration during the early 1960s, thought that confining the committee to just the more liberal Senate would produce better chances for action.[1] McGovern gathered 38 co-sponsors for the committee’s creation, a resolution quickly passed the Senate, and McGovern was named the committee’s chair in July 1968.

        As far as:

        and always (without actual research, mind) assumed it was something to do with the influence of agrarian lobbies trying to sell their produce.

        It was much the opposite.

        Titled Dietary Goals for the United States, but also known as the “McGovern Report”,[10] they suggested that Americans eat less fat, less cholesterol, less refined and processed sugars, and more complex carbohydrates and fiber.[11] (Indeed, it was the McGovern report that first used the term complex carbohydrate, denoting “fruit, vegetables and whole-grains”.[12]) The recommended way of accomplishing this was to eat more fruits, vegetables, and whole grains, and less high-fat meat, egg, and dairy products.[2][11] While many public health officials had said all of this for some time, the committee’s issuance of the guidelines gave it higher public profile.[11]

        The committee’s “eat less” recommendations triggered strong negative reactions from the cattle, dairy, egg, and sugar industries, including from McGovern’s home state.

        This also answers BD Sixsmith’s objection about Ancel Keys promoting more consumption of lean meat – no. It was about pushing grain for ideological reasons. Keys gets in the game later with the worst science I’ve ever seen – cherry picking national heart disease rates that correlate with fat consumption (while leaving out those nations where the rates are negatively correlated) and applying nationwide rates across nations to come up with a recommendation that individuals would be healthier with lower fat consumption.

        This all smacks of motivated cognition.

        • Murphy says:

          This kind of makes me wonder whether trawling twitter archives and social media might at some point allow us to build models of the fashions within social networks that predate their associated evidence base.

        • BD Sixsmith says:

          In other words, you have no evidence that vegetarian ideals inspired the lipid hypothesis. If Schweitzer and Mr Spock had a connection to its development I remain somehow unaware of it – and if you think the existence of cultural trends is enough to legitimise assuming that all people in that culture were driven by them, we have found a nice inversion of the progressive view that one can extrapolate from isolated data points to damn whole civilisations for the crimes of the month.

          On matters of fact: Keys did not “get in the game later” but had been promoting the lipid hypothesis for decades. His Time cover preceded McGovern by more than seven years. Keys might well have been mistaken – I do not have half the knowledge one would need for firm beliefs about diet and heart – but he did not just “cherry pick” his data (and, indeed, if he had included all the available countries, according to his most famous critics Yerushalmy and Hilleboe, he would have found a stronger correlation between heart disease and animal protein, which would have made a stronger case for vegetarianism).

        • DavidS says:

          You seem to have two things here

          1. Some people who were seen as laudable were vegetarian (though not sure of Spock as a model for human virtue)
          2. The guy who published the guidelines was a Democrat (but I don’t think a vegetarian?)

          This seems very thin evidence for a confident claim like:
          “A huge motivator for the original dietary recommendations that have now proved so amazingly disastrous was making a “scientific” case for progressive moral inclinations.”

          • I agree that a better case should be made. Still, I believe elite moral fashion as well as mass-produced-food interests significantly shaped the recommendations (with more weight on the economic interests, of course).

        • wysinwyg says:

          This all smacks of motivated cognition.

          Implying that vegetarianism inspired non-vegetarian nutritional guidelines by use of rumor and innuendo while skipping the part where you demonstrate an actual link smacks of motivated cognition, too.

        • houseboatonstyx says:

          > and always (without actual research, mind) assumed it was something to do with the influence of agrarian lobbies trying to sell their produce.

          That’s the way I’ve heard it. There might be some facts available on the relative number/spending of grain producing lobbyists vs meat producing lobbyists at that time. Grain producers had been getting substantial subsidies for a long time already, so they already had established lobbyists; if the meat producers had not, then they might lose that battle.

        • DavidS says:

          @ Steve Johnson

          You originally claimed specifically that the health guidelines were based on:
          ‘making a “scientific” case for progressive moral inclinations.’

          And specifically suggested that this meant/included vegetarianism. Kellogg’s case is (a) about opposing masturbation, which isn’t the most obvious case study of ‘progressive moral inclinations’, and (b) has nothing to do with vegetarianism

          You now seem to have moved to a fully general claim that when people claim to be doing good they are actually trying to raise status. Which feels to me like an easy way to dismiss others’ positions but not something that’s likely to uncover truth – it’s just too general and too easy to backfit explanations whether they’re true or not. ESPECIALLY when you essentially let yourself off the hook of having to provide clear evidence by saying

          “It’s hard to reference intellectual fashion (or any kind of fashion) in any way other than anecdotally.”

          It’s also just ridiculously general in that every position can be cast as ‘just an attempt to raise status’. Including your criticism of apparent virtue as status-games, and indeed my criticism of your argument…

      • Alraune says:

        Hmm… I haven’t got a citation either, but vegetarianism was part of Kellogg’s cluster of ideas, which were very influential in the creation of institutional nutrition, and a lot of his ideas came from quasi-religious dietary standards, so it’s at least a colourable argument.

        • Anonymous says:

          With Kellogg, it was more the idea that meat consumption produces carnal desires. Eat corn flakes; stop masturbation.

          • Steve Johnson says:

            That sounds an awful lot like “don’t eat meat because you’re an enlightened moral being who cares for the welfare of all living things” to me.

            It’s casting a dietary choice in a religious tradition as an indicator of virtue. The religion is slightly mutated is all.

            Now the connection is that not eating meat is less racist.

            http://genprogress.org/voices/2010/04/13/14345/how-eating-meat-is-like-sexism-and-racism/

            (link selected on the basis of it being the number one google search hit)

          • Anonymous says:

            That sounds an awful lot like “don’t eat meat because you’re an enlightened moral being who cares for the welfare of all living things” to me.

            No, caring about masturbation is about as far as you can get from caring about the welfare of all living things.

          • Steve Johnson says:

            No, caring about masturbation is about as far as you can get from caring about the welfare of all living things.

            No one cares about “the welfare of all living things”. Evolution didn’t leave us with a brain equipped to even begin that task.

            What evolution did leave men with is a brain that is quite good at trying to claim status to improve mating prospects. Doing something unnatural and difficult – like not eating meat – and claiming it’s for a higher moral purpose like “caring about the welfare of all living things” looks a hell of a lot like a status claim – “I am more holy than you are therefore I am of higher status”.

            That’s very very close to “I am more holy than you are therefore I am of higher status” when the holiness claim is based on a different metric.

          • Deiseach says:

            Oh mother, that article about “carnism” or how eating meat is every bit as bad as sexism and racism.

            the language we use to distance ourselves from our food (i.e. beef vs. cow meat)

            Sentences like that make me want to beat the person responsible round the head with a dictionary, or at least a copy of Sir Walter Scott’s “Ivanhoe”:

            “Why, how call you those grunting brutes running about on their four legs?” demanded Wamba.
            “Swine, fool, swine,” said the herd, “every fool knows that.”
            “And swine is good Saxon,” said the Jester; “but how call you the sow when she is flayed, and drawn, and quartered, and hung up by the heels, like a traitor?”
            “Pork,” answered the swine-herd.
            “I am very glad every fool knows that too,” said Wamba, “and pork, I think, is good Norman-French; and so when the brute lives, and is in the charge of a Saxon slave, she goes by her Saxon name; but becomes a Norman, and is called pork, when she is carried to the Castle-hall to feast among the nobles; what dost thou think of this, friend Gurth, ha?”
            “It is but too true doctrine, friend Wamba, however it got into thy fool’s pate.”
            “Nay, I can tell you more,” said Wamba, in the same tone; “there is old Alderman Ox continues to hold his Saxon epithet, while he is under the charge of serfs and bondsmen such as thou, but becomes Beef, a fiery French gallant, when he arrives before the worshipful jaws that are destined to consume him. Mynheer Calf, too, becomes Monsieur de Veau in the like manner; he is Saxon when he requires tendance, and takes a Norman name when he becomes matter of enjoyment.”

            Yes, indeed, the English language is part of the carnist conspiracy which is why we talk about “leather” and not “the skins flayed from tortured slaughtered poor harmless animals”.

            The lady might prefer to speak Irish, where the relationship between meat and the animal it came from is a little more direct. Definitions from Dineen’s Irish-English Dictionary:

            Feoil, “flesh, meat”; muic-fheoil (lit. “pig-flesh”) pork, caoir-fheoil (“sheep-flesh”) mutton, mairt-fheoil (“cow-flesh”, “bullock-flesh”), beef, laoigh-fheoil (“calf-flesh”) veal

            Mart, “a bullock, a cow, a beeve; a carcass, the dead body of any weighty animal when butchered and cleaned, such as a pig, cow, etc.”

          • Anthony says:

            Deiseach (and others similarly offended), read “A bloodmouth carnist theory of animal rights”. (Bonus t-shirt picture included.)

          • Deiseach says:

            BLOODMOUTH CARNIST!

            MY STOMACH IS A GRAVEYARD!

            Thank you for that, Anthony. I wish I had the nerve to order a T-shirt for my evangelising vegan youngest brother, but it would probably provoke civil war 🙂

          • Diana says:

            @Deiseach: It’s not irrelevant to discuss language *itself* as a way in which humans develop attitudes and beliefs about the world. The point is that thinking about “pepperoni” as a disk of ground pig flesh made at a factory with millions of other flattened pigs is different than thinking about “pepperoni” as a pizza topping. It’s no surprise that some kids are shocked to learn that they’ve been eating “cows” all along instead of “hamburgers” – few Americans grow up having relationships to farm animals, and the majority of animals Americans interact with are pets and animals that need to be rescued (ASPCA, Cecil the lion, endangered species, etc.) – not animals that they’re going to kill and eat.

          • Deiseach says:

            some kids are shocked to learn that they’ve been eating “cows” all along instead of “hamburgers”

            Diana, that’s not a problem with language, it’s a problem with ignorance. It does no better to describe a hamburger as “ground-up cowflesh” if the only exposure the children have to cows is “Cuddly toy” or cartoon characters – even worse in the case of the cartoons, because parents explain to their children that what they see on television isn’t real, so they may well think cows do not in actuality exist and “ground-up cowflesh” is some kind of vegetable.

            Indeed, the increasing anthropomorphism of animals in animated movies etc. is probably giving a whole generation of children unrealistic ideas of how intelligent animals are, how animal social structures work, and so on. I wouldn’t be surprised if there were more deaths from people who grew up on a diet of “Brother Bear” and the like, going into the wild and thinking animals are all humans in fursuits, and then when a wild animal acts according to its nature and is not all cuddly-huggy but bitey-clawy-killy, the poor fools end up maimed or dead.

          • Diana says:

            @Deiseach: It’s not just ignorance, even adults who are aware of factory farms and their connection to the meat on their tables are disturbed by graphic imagery exposing factory farms. Language can interrupt the way we think. Even if the same material thing is being described, choice of words matters. In this case, people think twice about eating ground meat when they consider the animal that it’s composed of, and more if they consider the “violence” that the animal endured in a factory farm. For instance, most people wouldn’t eat the meat of their own ground-up dog, or stand knowing that their dog would be treated the same way as chickens are.

          • Jaskologist says:

            I recall a friend telling the story of how his son once remarked, “Dad, isn’t it weird how we eat chicken, and there’s also an animal called ‘chicken’?”

        • DavidS says:

          ‘Kellogg’s cluster’

          +10 points

          BTW, I have no idea what a ‘colourable argument’ is?

          • Deiseach says:

            A colourable argument is one that holds water, that is plausible or has some merit that can be argued.

      • anonymous says:

        To DavidS: “Agrarian lobbies” are unlikely to push for veganism/vegetarianism in order to sell produce, because:

        1 – cattle and other meat animals consume produce in far greater amounts than humans.

        2 – agrarian lobbies also have to sell meat and dairy.

        • DavidS says:

          Good point. By ‘agrarian’, I meant ‘crop-growing’, but maybe it means farming more generally?

          Presumably there’s still a chance that the push was from specific crop lobbies where those crops tend to be eaten by humans rather than animals (or by companies that sell e.g. breakfast cereal), but it does make me downrate the chances! Especially as there remains the lingering question of ‘what about the dairy/meat lobby’?

        • houseboatonstyx says:

          @ Anonymous

          > “Agrarian lobbies” are unlikely to push for veganism/vegetarianism in order to sell produce, because:

          3. My sense of the period is that agricultural (and for that matter any other) lobbyists and their employers would flee from any whiff of association with such health and religious nutjobs. Remember that their negotiations were done in smoke-filled rooms. Cigar smoke.

      • Matt says:

        It could very likely be a “Bootleggers and Baptists” type scenario i.e. a combination of the two, a moral movement supported by those that profit from said moral movement.

    • BD Sixsmith says:

      Evidence? Men like Ancel Keys did not promote vegetarianism but transitioning to lean rather than red meats – and, indeed, meat consumption rose; it’s just that people ate chickens more than they ate cows.

    • Scott Alexander says:

      I strongly doubt this, if for no other reason than that Taubes was a muckraker extraordinaire and if this were true he would have seized upon it.

      • That’s a nice research shortcut (no snark).

      • Steve Johnson says:

        Which part do you doubt?

        Because I originally learned of the McGovern connection from reading Taubes. The mention of an ideological leftist in that context is what set my Cathedral meme-plex radar off so I then went off and poked around and found other correlations – like Kellogg with progressive Protestantism.

        Is this a definitive case when you’re talking about dietary recommendations? Of course not.

        Is it really good evidence of motivated cognition on the part of someone who appears to be good at reasoning but completely fails to grasp some basic reasoning (Taubes’s satiety model of weight gain) and then is shown to have the exact same ideological bias? Yes, I think it is. The case for vegetarianism being progressive is rock solid. The conclusion is that the guy is religiously motivated to avoid thinking about Taubes’s point. People do that sort of thing all the time.

    • Randy M says:

      I seem to recall this being touched upon in Death by Food Pyramid, but I don’t remember specifics offhand. Something like, the aide McGovern had write the actual recommendations was vegetarian? Don’t quote me on that, I’ll check later if I remember.

      Another connection is that Seventh-day Adventists (while not necessarily politically powerful) do a lot of promoting of vegetarianism/veganism (not sure which) for health, while also having an ideological predisposition to it, iirc.

  4. Joe from London says:

    I observe that correct beliefs can be turned into $$$ by trading publicly listed securities.

    Eliezer claims his rationality levels allow him to identify promising startups, and he asks for funds to do this. It would be quite simple for him to open a brokerage account, make trades, post what trades he is making, and why. The fact that he has not done this strongly suggests he is not capable of predicting the success/failure of businesses at greater than average rates.

    If a member of LW claims he is trying to be more rational, he has my full support. If someone claims he has higher than average rationality, and wants funds to invest in startups (in a strictly one sided trade in which he risks others’ money and will share a cut of gains but not losses), without ever showing what he can do with his own money, he comes across as a cult leader.

    It constantly surprises me that his detractors don’t point to this, and instead make much more easily falsified claims. Thanks for debunking those. I’d prefer to see the bigger one addressed.

    • Smoke says:

      If someone claims he has higher than average rationality, and wants funds to invest in startups (in a strictly one sided trade in which he risks others’ money and will share a cut of gains but not losses), without ever showing what he can do with his own money, he comes across as a cult leader.

      Um, so that’d be most venture capitalists then?

      BTW, it’s plausible that the market for publicly traded securities is more efficient than the market for venture capital deals. It’s also possible for someone to have an edge in one market but not another.

      So yeah. If anyone who applies for a job as a venture capitalist is now a “cult leader”, the term has been diluted to the point where it has no meaning.

      The fact that you think it’s appropriate to call EY a “cult leader” because he wants VC work without noticing the thousands of others that also want VC work is evidence of an absurd double standard. I’d actually be curious to see how this anti-EY double standard arose; some of the points Scott makes provide further evidence that it’s a thing. (You can also see it in e.g. Holden’s critique where he calls SI out for not seeing that Tool AI is the obvious solution to AI safety, failing to notice that hardly anyone else had proposed Tool AI as the solution to AI safety either.)

      My theory is it has something to do with the Singularity Institute’s arrogance problem. Basically they spent several years committing a faux pas/PR gaffe that has hounded them ever since.

      And the deeper issue behind that: In Western culture, nerds are traditionally humble. (They’re much humbler than jocks, for instance.) But as much as the Blue Tribe hates to hear it, it probably is true that having a 135+ IQ basically makes you better at all intellectual tasks; it’s much more plausible that a 135 IQ person could correctly diagnose a case where a widespread societal belief was wrong than that a 90 IQ person could. So anyone who calls a spade a spade and points out that being smart gives you a huge leg up and in particular they are pretty smart is going against decades of ingrained Blue Tribe propaganda.

      The main way the Blue Tribe lets you say that you’re smart is by pointing out that you have a degree from a prestigious university, notably all thoroughly Blue-approved and controlled institutions. Extremely ironically, these institutions do admission largely on the basis of aptitude tests (SAT/ACT).

      The fact that people have attempted to portray the Singularity as “rapture for the high IQ set” is in a certain way astonishing evidence of anti-intellectualism in our culture–if a bunch of people who are smarter than you disagree with you, what makes you so sure you’re taking the right side of the debate? I guess the counterargument here would be to argue that in some cases smart people just turn their intelligence towards increasingly sophisticated rationalizations… but the fact that the people interested in the Singularity are also very interested in fighting rationalization still seems like it should make you sit up.

      • Steve Johnson says:

        The standard way of applying for a job in VC is to work in I-Banking or go to a high quality B-school.

        Not to create a cult and ask for a job as a VC in the conclusion of a fan-fiction, along with a pretty delusional request to be put into contact with a billionaire author whose work you just (to put it crudely) shat upon (again, noting that I enjoyed both JK Rowling’s HP and Eliezer’s – although his had negative persuasive value for me towards his cult and made me dislike his “rationalism” more than I did before reading it).

        BTW, if there are any VCs out there looking to fund someone in the angel investing business I think you’ll see my track record as a blog commenter shows I would be great at this job so contact me.

        • Smoke says:

          VCs that come out of the I-Banking/B-school process are notoriously terrible. Paul Graham has written about this, e.g. here, here, here:

          This summer we invited some of the alumni to talk to the new startups about fundraising, and pretty much 100% of their advice was about investor psychology. I thought I was cynical about VCs, but the founders were much more cynical… A lot of what startup founders do is just posturing. It works.

          In capitalism, being an accurate contrarian pays. So finding people like Eliezer who seem like they might be accurate contrarians is a reasonably sensible strategy.

      • Joe from London says:

        Sure, that one particular claim in isolation isn’t enough to call someone a cult leader. But I don’t take it in isolation. I take it in context of hundreds of blog posts about how to think more clearly. Those plus the claim that their author can beat other investors make me rather more suspicious. There are other data points; you know most of the criticisms.

        Yeah, I think your last comment on the arrogance problem is pretty close to the mark. Empirically, some traders do have the ability to make superior investments and beat the market. As far as I am aware, Eliezer Yudkowsky does not have this ability. (I take your point about VC potentially requiring specific skills, but EY claims his rationality skills are not domain specific. If he claims he has the skills which will only allow him to invest others’ money, I am suspicious).

        • ton says:

          His claim is that he’d be better at analysing startups in particular, which is something not just anyone can do (you need access to founders, can’t invest small amounts, etc).

          His argument for that was laid out in several long posts here: https://www.facebook.com/groups/674486385982694/permalink/757486551016010/

        • Eliezer Yudkowsky says:

          I’m currently in the process of testing this belief via a friendly VC company (HPMOR-reading manager) with over a hundred seed-stage startups, whose goodness-of-ideas I am attempting to rate. We shall see if this correlates to any interim measures of progress or long-term outcomes.

          • Joe from London says:

            Great to hear. I wish you the best of luck with this. I trust you’ll make your predictions public, and I look forward to reading the results.

        • Smoke says:

          I’m a priori skeptical of anyone who says they can beat the market, but I also don’t think the strongest formulation of the efficient market hypothesis is true… I have friends I perceive to be less intelligent than Eliezer who seem to have done it with reasonable consistency. Anyway, it seems like Eliezer is in a sense doing exactly what you’re suggesting. He wrote a bunch about how to think clearly and now he’s offering to stick his neck out and demonstrate superior investing ability. I’m looking forward to seeing what happens.

      • Steve Johnson says:

        The fact that people have attempted to portray the Singularity as “rapture for the high IQ set” is in some ways astonishing evidence of anti-intellectualism in our culture–if a bunch of people who are smarter than you disagree with you, what makes you so sure you’re taking the right side of the debate?

        The Less Wrong crowd’s belief in the Singularity looks more like “rapture for the middle IQ sexually unattractive set”. A bunch of people who are frustrated with life because they’re smarter than average but don’t get high status because of that, and wish for a reset. The low sexual market status of the Less Wrong crowd is, to me, the clearest mark of the community.

        It takes a level of intelligence to be successful in life, but it also takes other skills to deal with people. If you’re high on the IQ measure but very low on the others, your status peers are likely to be low IQ but better in the other traits – which will leave you even more frustrated. I’m pretty sure this is what leads to the often weird outlook that the LW crowd has – not superior insight.

        • Deiseach says:

          I’m sitting here at my desk with my sandwich and a tin of something to drink (which is probably very appropriate while reading something that mentions diet) and enjoying the row, though I’m still not sure who’s right, who’s wrong, or if everyone involved needs to sit down and have a nice cup of tea.

          Speaking as a religious nutjob, I always enjoy the sight of rationalist atheists getting stuck into one another, particularly that stripe (not saying any of the parties here are such) which like to maintain that religion is the root of all evil and were it only banished, people would be cuddly and nice to each other regardless what provocation 🙂

          Scott does something I can’t; if he found Feser’s work on Aquinas unendurable (and even though I’m sympathetic to Thomism I’d never even contemplate reading Feser), that’s how I feel about Yudkowsky’s writing. Every time (admittedly, only a couple of times) I try reading right through, I can only manage a post or two before giving up. So on the one hand, I’d be sympathetic to Hallquist having a go at Yudkowsky, because he does come across as unconcerned about “the social niceties of broadcasting how uncertain you are to everyone”. On the other hand, since I haven’t read enough of Yudkowsky’s writing, I can’t make a fair judgement. Scott knows both the work and the man, so I have to trust his word for what’s being written and said. On the gripping hand, Hallquist annoyed me by putting “prove” in scare-quotes: either put the entire sentence about “claiming to prove that Jesus rose from the dead using historical evidence” in quotation marks, or leave it in clear; little sneers of intellectual superiority like that don’t impress me, so hit him again, Scott, he’s no relation!

          I’ll say this much for Yudkowsky: his Jeffreyssai rishi made me laugh when I read a couple of those “critiquing Eld scientists” posts, though perhaps not in the fashion intended; his Secret Conspiracy groups evoked definite echoes for me of the Western Esoteric Tradition, and as Jeffreyssai was hectoring and lecturing his students, I was going “Fuck me, he’s re-invented alchemy” – that is, the alchemical tradition of the master initiating the student into the secret wisdom, cloaked under figures and terms of art, where the aspiring apprentice must demonstrate their abilities by solving riddles and proving their worth and progress before they can learn more; all delivered by oral transmission, you don’t write anything down that the mundane world can understand, and it’s all deadly secret and not for the profane.

          As a method of expanding knowledge and provoking “think fast, new discoveries” in the Brave New World of trained rationalists – well, I’m not quite convinced of its merits 🙂

          • You feel like you can’t stomach reading something that’s not consistent with your beliefs? But it will make your beliefs stronger to better understand the reasons people believe otherwise. That it’s more tiring is just evidence of how expensive it is to actually think.

          • Deiseach says:

            You feel like you can’t stomach reading something that’s not consistent with your beliefs?

            If you’re talking about Feser, it’s that I’m pretty much burned out on apologetics. And if you’re talking about St Thomas Aquinas, then religion (and attitudes to it) are going to come into the mix sooner or later. So I’d rather dodge the egg-throwing altogether.

            If you’re talking about Yudkowsky, it’s not because his stuff is inconsistent with my beliefs, it’s that I physically can’t read great wodges of it at a time. After a while my eyes glaze over and it’s like I’m trying (mentally) to eat wallpaper paste; even if it’s served up in a nice bowl with a dessert spoon, it’s still paste and not tapioca 🙂

        • Jaskologist says:

          It’s always lovely to see other people speculate on your underlying psychological motives, isn’t it? Still, I cannot resist.

          A common thread I see is a lot of people who had the misfortune of being substantially smarter than their parents (EY in particular). This is particularly the case with the “New Atheist” crowd. (Witness how many of their conversion stories take place at the age of 12. Who ever made a wise decision at 12?)

          Really, I think all of this was well summed up with the phrase “Hallquist’s New Atheism background.” What you’re seeing is a New-Atheist New-Atheing you instead of God-botherers. You want to know what they look like from the outside when they write about religion? Exactly the way they do now.

          • Deiseach says:

            What you’re seeing is a New-Atheist New-Atheing you instead of God-botherers.

            Well, yes, that is the ignoble pleasure I’m getting out of it.

            Yudkowsky doesn’t help himself; I think he does indeed have some kind of a sense of humour, but it’s hard to distinguish it in his writing from when he’s being serious. Jeffreyssai and the New Rationalist Training as a leg-pull is funny; as a serious (if fictional) musing on what wonders the future would show if rationalists ran the show, and if rationalists were trained in such a manner, and if all scientific and other progress involved re-inventing the wheel from scratch so the neophytes could prove they were thinking correctly when they came up with the right answers their masters already know, but don’t tell them until after they’ve re-discovered them – pull the other leg, it’s got bells on.

            For if that is effectual teaching, then indeed these secret societies and hidden arts may well know how it is “The Lunaria is the White Mercury, the most sharp Vinegar is the Red Mercury; but the better to determine these two mercuries, feed them with flesh of their own species — the blood of innocents whose throats are cut; that is to say, the spirits of the bodies are the Bath where the Sun and Moon go to wash themselves.”

            Speak not of these mysteries to the profane and vulgar, nor even to the wise but under the veil of obscure symbols, lest treasure be cast before the swine! And you’ve got a whole month to do it, and I’ll even generously allow you sleeping time (calculated as a total in minutes) but not mention eating, bathing or going to the loo in that span because presumably you’ll only think/sleep/think/sleep/think until you solve the problem and who needs to eat?

          • @Deiseach

            > if all scientific and other progress involved re-inventing the wheel from scratch so the neophytes could prove they were thinking correctly

            Every time during my maths studies I accidentally reinvented a wheel, I was really excited – while I was always a little sad my thought wasn’t novel, it really helped me to believe I was thinking in the right directions.

            The level of secrecy bugs me, but I suspect it’s largely there for effect; the independent reinvention really resonates with me.

          • Troy says:

            What you’re seeing is a New-Atheist New-Atheing you instead of God-botherers. You want to know what they look like from the outside when they write about religion? Exactly the way they do now.

            Indeed.

          • James Picone says:

            An ad-hominem argument that ‘New Atheists’ are horrible ad-hominem-arguers who rely far too much on criticising a broad category based on figureheads, based on a particular figurehead.

            Irony.

          • Adam says:

            I’ll admit that I haven’t exactly kept up on whatever the heck is being called new atheism these days, so my experience goes back to Richard Dawkins more than a decade ago when I was actually studying biology, but at least then, he was giving extremely detailed book-length refutations of extremely specific arguments from intelligent design. And Dennett and Shermer seemed far more concerned with bad philosophy and pseudoscience than anything having to do with religion one way or another. They were definitely not ad homineming figureheads.

            But I’m willing to accept that times have changed and my refusal to sign up for Twitter or use tumblr for anything more than porn is leaving me out of the loop.

        • Randy M says:

          Note that this rapture often involves the elimination of physical bodies at some end point, with all the accompanying non-explicitly-verbal interactions that stereotypical nerds find challenging then becoming a thing of post-singularity pre-history.
          I don’t want to be guilty of projection, but I suspect the larger population (or the subset of it that can imagine such a thing) is rather turned off by the brain-in-a-box idea that singularity/futurist types seem to expect to be appealing.
          Despite, yes, spending an unsettling amount of time being a brain staring at a box.

          • Nornagest says:

            I don’t know what social interaction will look like when and if we’re all uploads, but I doubt most of it will look like anything we’d recognize as explicitly verbal.

            It’s worthless almost by definition to speculate about what will happen after a hypothetical event defined partly in terms of its unpredictability, but most of the speculation I’ve seen along these lines has involved simulated environments, which would interact with physically disembodied psyches much like the real world interacts with physically embodied psyches. That is almost certainly not what will happen, but it’s probably less wrong than modeling uploaded existence as a glorified chatroom.

        • Smoke says:

          Intelligent people are significantly less sexual in general. So this is a fully general counterargument against anything a highly intelligent person says. (Never mind that Peter Thiel is a gay billionaire who’s also concerned with the singularity, etc.)

        • Eli says:

          So… have you actually measured anything at all about the love-lives of LW users, or are you just applying a negative-sounding stereotype to people you don’t like?

          Because, to pick the most famous example, Eliezer and Brienne apparently share a surname now. “Sexual market value” really only seems to make any goddamn sense when you’re talking about people way younger and more single than the people we’re talking about.

      • Joe from London says:

        “if a bunch of people who are smarter than you disagree with you, what makes you so sure you’re taking the right side of the debate”
        Two observations:
        1) this is a big change of subject matter. I feel like the Singularity wasn’t really part of this until you brought it up.

        2) regarding who’s smarter than whom, it is very easy to generate a track record. Stick your neck out on some testable hypotheses which can be simply turned into financial reward: “Hillary Clinton will/will not be the next POTUS”, “oil prices will be <$X in a year, with Y% probability”. Bet on these, and post why. If EY does this, and consistently shows correct predictions, I will update my beliefs regarding his rationality.
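
        For what it’s worth, a public prediction log like that is easy to score mechanically. Here is a minimal sketch in Python (the claims, probabilities and outcomes are invented purely for illustration; the Brier score is just the mean squared error between stated probabilities and what actually happened, lower being better):

        # Hypothetical prediction log: (claim, stated probability, what actually happened)
        predictions = [
            ("Clinton is the next POTUS",       0.70, False),
            ("Oil below $X/bbl within a year",  0.60, True),
            ("Startup X raises a Series A",     0.25, False),
        ]

        def brier(log):
            # Mean squared error of stated probabilities vs. outcomes.
            # 0.0 is perfect; always answering 50% scores exactly 0.25.
            return sum((p - float(happened)) ** 2 for _, p, happened in log) / len(log)

        print("Brier score: %.3f" % brier(predictions))
        # Consistently beating the 0.25 coin-flip baseline over many public
        # predictions is the kind of track record being asked for here.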

        • Quixote says:

          I think you vastly underestimate how hard it is to beat the market. The market is composed of a mixture of mass-market idiots, dumb finance speculators and smart finance types. Smart finance types are very smart. Finance is a very hard problem in that you are directly competing against other finance types. For startups, all you have to do is create value and sell it. By analogy, there are probably tens of thousands of people who have the same weight-lifting strength as the top UFC fighters. Solving an environmental problem of a particular difficulty is an easier challenge than overcoming active and oppositely interested opposition. This is one of the big points Peter Thiel’s book makes: banking and law suck as professions because you have to compete with a bunch of bankers and lawyers; you don’t want to be competing, you want to go from 0 to 1.
          Tangent aside, the above request isn’t asking EY to prove he is smarter / more rational than most people or even average scientists, it’s asking him to show he is smarter than top traders. That’s a much, much higher burden.

          • Chalid says:

            It’s much worse than that. A top trader has access to a lot of expensive resources that EY doesn’t, and spends a lot more time thinking about finance than EY probably cares to.

          • On the other hand …

            The market price of a stock already encapsulates the publicly available information, precisely because of the efforts of all those smart traders. So if there is something relevant to the value of a company on which you are willing to bet your opinion against the world, and if you are right the company should be worth substantially more than if you are wrong, you buy stock in it.

            That was the basis on which I bought stock in Apple shortly after the Macintosh came out–having already encountered the virtues of a GUI in information on the Xerox Star. A colleague (at Tulane Business School) asked me why I hadn’t bought an IBM Jr. instead, which suggested to me that a correct perception of what Apple was doing was not yet widespread.

      • Anonymous says:

        @Smoke

        Two points. One, expecting higher standards from those at the Singularity Institute than from others is entirely reasonable, considering the former are not only claiming to be experts on the field but are asking for money, while the latter aren’t.

        Two, I’ve seen the claim “well, nobody else thought of Tool AI before Holden did either!” before. It seems like a very odd claim to make, partly because I don’t think it’s true. One standard solution suggested by ordinary people, and shown in movies about AI, for the problem of “how do we stop the AI from going nuts and killing us all to achieve its goal?” is “well, can’t we just make it like, not have any will of its own, just think and be a slave to humans and not actually make any decisions itself?”. I don’t think this is an extraordinarily complex suggestion that nobody has ever come up with before. I think it’s a very common suggestion, that maybe wasn’t explored in depth and phrased properly before Holden did so, but certainly isn’t a mad idea that nobody has ever dreamed up in the past.

        I don’t really understand the argument “nobody thought of it before, so it doesn’t mean they weren’t thinking clearly!” either. Is it impossible that people in the past really weren’t thinking clearly, and really should kick themselves for not noticing a certain obvious possibility sooner? That’s assuming that it’s even the case that nobody had thought of it before, which as I said above I don’t think is true.

        • Smoke says:

          One, expecting higher standards from those at the Singularity Institute than from others is entirely reasonable, considering the former are not only claiming to be experts on the field but are asking for money, while the latter aren’t.

          Typical college professors claim to be experts in their field and frequently ask organizations like the NSF for money to do their research. But you don’t go up to a college professor and say “You haven’t said anything about my pet solution to a major problem in your field! HAH! You have no idea what you’re talking about!” And when college professors get in to venture capital, no one calls them cult leaders.

          So yeah, I still think there’s a double standard here. Academia is the only place in society where you’re allowed to claim expertise in a topic without having some kind of impressive track record of accomplishments. (It’s enough for someone to have a PhD to be considered an expert; folks rarely care what their thesis was about.) If you’re operating outside academia, you claim expertise, and you don’t have any traditionally impressive accomplishments to back it up, people will be extremely hard on you.

          And it’s true that a typical person in this category probably is not worth listening to. But that doesn’t mean none of them are. The tricky thing here is that, absent traditional indicators like degrees, accomplishments, etc., intelligence is hard to judge for people who aren’t themselves intelligent. To a 90 IQ person, a 110 IQ person using big words incorrectly and a 130 IQ person using big words correctly sound about the same. So “distrust degreeless unaccomplished experts” is a heuristic that works well for most people, but we shouldn’t be surprised if highly intelligent people can do better.

          • Irenist says:

            To a 90 IQ person, a 110 IQ person using big words incorrectly and a 130 IQ person using big words correctly sound about the same.

            I think that, in passing, you have just encapsulated the most important single fact about modern mass politics.

          • Svejk says:

            So yeah, I still think there’s a double standard here. Academia is the only place in society where you’re allowed to claim expertise in a topic without having some kind of impressive track record of accomplishments. (It’s enough for someone to have a PhD to be considered an expert; folks rarely care what their thesis was about.)

            I’d like to quibble with you here: a PhD, even a Humanities PhD, is a significant scholarly accomplishment, generally requiring at least two oral examinations graded by a panel of experts having a significant record of accomplishment, as well as production of a significant original text/multiple articles also reviewed by a panel of experts. A STEM PhD usually generates even more externally-validated data/patents/compounds, etc. It is appropriate to consider a PhD from a well-regarded university as demonstrating sufficient domain-specific expertise. A PhD, however, is not a ‘Badge of Smart’, which is how the public often treats the degree (and how unscrupulous academics encourage it to be treated). A professor who is repeatedly successful in getting grants from NSF and NIH can most certainly claim to be an expert in his/her field. Granting bodies employ proposal reviewers from within the fields of the applicants, who can also claim relevant domain-specific knowledge. Some reviewers may even be in competition with the applicant (this does not always rise to the level of conflict of interest, but does increase the level of scrutiny). A common complaint of STEM researchers applying for grants/submitting papers to peer review is that their colleagues do indeed say ‘You haven’t said anything about my pet solution to a major problem in your field! HAH! You have no idea what you’re talking about!’, and ask them to cite their papers on the subject, to boot.
            A relevant question is what is an equivalent non-academic demonstration of expertise to the PhD.
            Also, typically, when college professors get into VC (as funds-seekers) they have often already been vetted by other academics who may or may not mildly hate them as part of an institutional startup incubator.

      • Smoke says:

        I realized that the rabbit hole might go even deeper. Speculating here…

        Amy Chua talks about the idea of a “market-dominant minority”. Among other situations, this occurs when there’s an ethnic group that’s smarter and/or harder-working than the peers that surround them. In the US, that’d be Jews and Asians. And what you see is that in a more meritocratic environment (e.g. Silicon Valley, where degrees don’t count as much, or MOOCs, where anyone can sign up), these people tend to excel. This breeds resentment among people who aren’t among the market-dominant minority (witness the furor over lack of diversity in Silicon Valley). Per Chua, this kind of resentment can be dangerous and lead to civil unrest in the long run.

        The university system is a compromise solution to this problem. It is largely meritocratic, using aptitude tests to determine admission, but it also does affirmative action to take in the best of the non-market-dominant ethnicities as well. So this lets us have *some* method of doing human capital matching while also preventing civil unrest.

        So yeah, Chesterton’s Fence applies to Blue institutions too. The university system looks farcical but unless you understand the reasons it exists you should think carefully before removing it. Large parts of Blue ideology are a complicated memetic immune system that’s evolved to prevent race war.

        • Smoke says:

          To extend this analysis, lately the Blue memetic immune system has been attacking itself, and the Gray Tribe is the result.

        • stillnotking says:

          Large parts of Blue ideology are a complicated memetic immune system that’s evolved to prevent race war.

          While I think this claim is historically accurate, in the modern, developed West the possibility of a race war is vanishingly remote; America in particular has been through much worse periods of racial unrest (desegregation, the Watts riots) without ever coming close to Rwanda/Yugoslavia-style ethnic conflict. I don’t think any mainstream group’s current stated or implied values are aimed at stopping race war.

          • John Schilling says:

            Which is I think where Mark Atwood’s point comes from. We know what happens when an immune system doesn’t get a proper workout against actual pathogens, and it isn’t pretty. So, once upon a time, we developed a societal immune system to protect us from the Black Panthers and the KKK. Now we don’t have any more Black Panthers or KKK worth mentioning, and the racists we do have aren’t the sort the “immune system” was designed to counter. The results are not pretty.

      • Science says:

        I just don’t get the obsession of the Grays with IQ. I was really impressed with my great brain from age 10 to around age 20. That’s when I met a bunch of really smart peers, saw that they were a very mixed bag, and got enough of a sliver of self-awareness to realize that there’s more to life than raw intelligence.

        Maybe it is connected to Libertarianism, some kind of atavistic yearning for simple, powerful rules to beat messy reality into submission.

        • Samuel Skinner says:

          Because a lot of things are correlated with IQ (criminality being the big one).

          • Science says:

            That doesn’t explain the obsession, merely demonstrates it. Grays aren’t united by their deep abiding interest in criminology.

            Edit: just got the week ban after posting this. I’m not going to delete this, but I’ll otherwise abide by it.

          • Samuel Skinner says:

            Isn’t the cause that separated greys from blues politics? If so it shouldn’t be surprising that they are the ones who were interested in alternative explanations for things blues care about.

        • Smoke says:

          I would guess that to a first approximation, high intelligence is necessary (but not sufficient) for meaningful intellectual accomplishments.

        • Adam says:

          The obsession with correlations, too. I have a very high IQ and I’ve also committed quite a few crimes, though never been caught, so good for me. I’m guessing at least part of the reason I’ve never been caught isn’t just that I’m so smart, but I don’t look like someone’s statistical profile of a criminal.

          • Samuel Skinner says:

            I think if intelligence made people good enough criminals that they could avoid detection well enough to not show up in the stats, that would be strong evidence for the importance of intelligence. I’m pretty confident in the correlation, though, because murder (the crime that’s easiest to track) appears to follow it pretty well.

          • stargirl says:

            What sorts of crimes? Drug use?

            Did you ever commit a violent crime or large value theft/fraud?

          • Adam says:

            Embezzlement, fraud, DUI, breaking and entering (but not robbery), so I guess you got to the crux of it: no open displays of violence. So maybe that’s the important correlation you’re looking for, not crime, but violence, or at least hand-to-hand, impulsive types of violence. B2s are built by pretty intelligent people.

            Stargirl: No large value. Greatest was about fourteen grand.

    • Emile says:

      Eliezer claims his rationality levels allow him to identify promising startups, and he asks for funds to do this. It would be quite simple for him to open a brokerage account, make trades, post what trades he is making, and why. The fact that he has not done this strongly suggests he is not capable of predicting the success/failure of businesses at greater than average rates.

      I think you’re playing off two different interpretations of “average”: the average investor/VC, or the average American?

      Eliezer’s claims and behavior seem consistent with him being better than the average American but not necessarily as good as a VC or investor that makes a profit.

      (I’m also not sure which of Eliezer’s claims you’re referring to exactly)

      • Joe from London says:

        https://www.facebook.com/674486385982694/posts/742024599228872

        “It seems plausible that I could generate excess returns, given investors who trust me enough to risk some capital, if some of the background variables turn out to have the right values.”
        “I would charge 20% of excess profits over the returns of either the S&P 500 or a Vanguard bond index fund, whichever is greater. I would estimate a ~50% chance that I do worse than the overall VC cohort, and a better-than-10% chance that I can return 10X. Good companies *do* routinely rise by a *lot* in the illiquid venture space, and if I cannot produce *significant* excess returns then the attempt is not worth continuing. But this would be, obviously, risk capital.”

        I should add that I’m a fan of Eliezer’s writing. I think there’s a lot of value to be gained by reading his essays. But I think this claim is wholly unfounded. I feel like anyone who can routinely have abnormally correct beliefs ought to be investing his own money in securities before asking for money from others. And Eliezer hasn’t, AFAIK, tried this. That’s why I’m suspicious.

        • Emile says:

          Okay, thanks for the link, that makes more sense.

          Eliezer doesn’t seem super overconfident in the parts you quote, notably “I would estimate a ~50% chance that I do worse than the overall VC cohort” looks pretty honest. Do you think he should have put that value higher?

          I read him as saying “I have some weird ideas whose value I’m not sure of, but if you throw money my way I can test them”, and not “bwahaha I’m sure I can beat the market!”

          • Joe from London says:

            *shrugs* I read it as an EY claim that his EV is higher than (his median = industry mean). A charitable reading would be that this expresses his uncertainty. An uncharitable reading would be that he doesn’t expect to beat the VC mean, but wants to claim plausible deniability if he fails to do so initially. A charlatan could claim a low probability of a high value event and go many years without being disproven. I’ve no idea whether this is EY’s plan, but if he can reliably price low-probability events more accurately than most people, I’m sure there are derivatives to allow him to express his views.
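
            To put rough numbers on that last point (purely illustrative, treating each attempt as an independent trial, which real funds are not): a claim like “better-than-10% chance of a 10X return” takes a long time to falsify.

            # How slowly does a "10% chance of a 10X outcome" claim get falsified?
            # Illustrative only: each attempt treated as an independent Bernoulli trial.
            claimed_p = 0.10  # claimed per-attempt chance of the big payoff

            for attempts in (1, 3, 5, 10):
                p_no_hit = (1 - claimed_p) ** attempts
                print("%2d straight misses: %.0f%% likely even if the claim is true" % (attempts, 100 * p_no_hit))
            # Five straight misses are still ~59% likely under the claim, so the claim
            # survives; this is why low-probability/high-payoff promises are so cheap
            # to make and so slow to disprove.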

        • Quixote says:

          “I feel like anyone who can routinely have abnormally correct beliefs ought to be investing his own money in securities before asking for money from others. And Eliezer hasn’t, AFAIK, tried this. That’s why I’m suspicious.”

          It sounds like you are not very familiar with either the law or market environment in the US. For the first point, many specialized investment opportunities are actually illegal for retail investors and are barred to anyone with net worth below $1,000,000 USD. This is nominally to “protect” average investors. In practice the minimum amounts required are higher, the SEC has proposed a draft standard of 2.5mm USD and many places are already applying this to avoid getting into trouble later. Imho this closes off many areas where someone could gain an advantage through research or specialized knowledge since the public equities space is already quite crowded with lots of well known research.
          A further barrier (though this might not apply in VC, my background is more large-scale capital markets) is that it takes the same amount of time to have a meeting and explain an investment opportunity to someone with 10mm USD as it does to explain it to a fund with 3bn USD under management. As a result, people will often show an opportunity to a few big funds and then just be done, rather than trying to scrape together a million here and a million there.
          Essentially, you can’t just “try it yourself with your own money”. Unless you are sitting on a big pool of money you don’t get to do anything at all. [Note fwiw I doubt he would beat the market given the cash; it’s hard and requires a lot of knowledge and not just general smarts. I just want to show this specific objection is unfair.]

          • Joe from London says:

            I found it very easy to set up a brokerage account with $10k capital. It gives me access to a very wide pool of securities. YMMV. (Yudkowsky’s Mileage May Vary)

            As has been suggested, it’s fairly easy to post phantom trades to gain credibility. After Metamed failed, Eliezer claimed he’d spotted the failure node and should’ve written down a prediction that Metamed would fail unless certain conditions were met. But it’s easy to claim “I would have invested in airbnb!” (hinted at by his essays, though not stated outright) and it’s tough to actually invest in airbnb. If EY announces some securities he wants to invest in but isn’t able to for legal or operational reasons, he’ll have my attention. If those startups succeed, I’ll update my model of him and consider funding him. Currently I see only someone with no track record or relevant experience but who claims he can beat the average VC.

          • vV_Vv says:

            You could always make public bets with play money. If you could show that you could consistently beat the market then you would have no problem finding investors.

            This is especially true for somebody like Eliezer Yudkowsky, who is relatively famous and personally knows some high-level investors such as Peter Thiel. The fact that he asks for money on Facebook without a track record of successful predictions, neither in finance nor really anything else, on the basis of his self-appointed position of guru of the “rationalist” movement, is a big red flag.

          • Nornagest says:

            I found it very easy to set up a brokerage account with $10k capital. It gives me access to a very wide pool of securities. YMMV.

            The kinds of investments you can make through a mass-market brokerage account are not the kinds that Quixote is talking about. You’re looking for hedge funds, direct venture capital, certain other types of private equity, etc. Venture capital is particularly important here, because by the time a startup reaches an IPO the potentially highest-returning (and definitely highest-risk) investment opportunities have already been exhausted.

            The search term, if you’re interested in learning more about this, is “accredited investor”.

          • Zakharov says:

            Reading through Yudkowsky’s post, it’s clear that he would need private information about the companies available only to VCs in order to accurately judge startups. He does not claim any particular ability to pick public-market securities.

    • Steve Johnson says:

      Agreed – Eliezer comes off as sketchy and untrustworthy for tons of reasons, but this (Topher Hallquist’s) critique is negatively effective: on the four issues addressed I either agree with Eliezer’s position (diet), don’t care (interpretation of quantum mechanics), or think it’s some pretty harmless crankery (philosophy) or crankery that does point at some deeper flaws in his character (cryonics).

    • Tom Womack says:

      Why does he even need to make the trades? Maintaining a fantasy portfolio as a spreadsheet in Google Docs is evidence enough – at the scales individual investors work, the market-moving effect of making trades can be neglected.

      One-line commitments “I think this IPO will do well”, “I think this IPO will do badly” are enough to build up a model of IPO-picking competence.

      Once I’d done this for a while it convinced me, as I expect it normally convinces anybody, that I was about as competent as the market and should stick to index funds.
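
      The spreadsheet really can be that simple. A toy version in Python (tickers, calls and returns all invented for illustration), just to show what “evidence enough” looks like:

      # Toy fantasy-portfolio ledger: did the one-line calls beat buying the index?
      # All names and numbers below are made up for illustration.
      calls = [
          ("IPO-A", "well",  0.22),    # call ("well"/"badly"), then realized return
          ("IPO-B", "badly", -0.35),
          ("IPO-C", "well",  -0.10),
      ]
      index_return = 0.08  # benchmark return over the same period

      hits = sum((ret > index_return) == (call == "well")
                 for _, call, ret in calls)
      print("Calls right relative to the index: %d/%d" % (hits, len(calls)))
      # A hit rate that hovers around 50% over many picks supports the
      # conclusion above: be humble and stick to index funds.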

    • Sarah says:

      I believe that Eliezer is wrong about having superior investing abilities. And, as he’s currently sort-of involved in one early-stage startup, I predict that in five years the startup won’t have gone anywhere.

      Don’t think that makes him a cult leader though.

      Basically, nutrition and economics/investing seem to me like areas where Eliezer would have done a lot better for himself if rationality *did* work as an Art. The fact that he isn’t rich and buff is pretty strong evidence that the general Art of Rationality doesn’t currently exist. I’m confident that if you ask Eliezer, though, he’s intellectually honest enough to admit this. He’s rhetorically confident as a *style*, and he’s making very strong falsifiable statements that I think will be falsified, but he’s not delusional.

      From a certain point of view, if you think you *could* be great at something, the correct thing to do is to go around shouting “I could be great at this!” and see if somebody gives you funding and a chance to try. I think it’s a very little bit antisocial (it weakens the evidentiary value of self-confidence as an indicator of actual skill) but it’s not as bad as people make it out.

      • Deiseach says:

        The thing about cryonics is that, in theory, I don’t see why eventually it shouldn’t be able to work (festoon as many caveats around that statement as you think necessary).

        But right now? I think thawing out the organs/parts/bodies frozen today, never mind the ones frozen back in the early days, will produce nothing more than a slurry of organic mush, and unless we’re talking immensely advanced technology (something along the lines of Philip Jose Farmer’s resurrection McGuffin in “To Your Scattered Bodies Go”), I don’t see how current-day users of cryonics have a snowball in Hell’s chance of getting cured/new bodies/a second lease on life.

        About all I can say for it is that it’s a way of signalling “I believe cryonics will one day work and that science will advance to where it is in a position to bring back the nearly-dead to life”, but otherwise you might as well have a nice funeral for all the real chance of your body being thawed out and successfully revitalised.

        • Luke Somers says:

          Umm, what? The idea is to wait for the immensely advanced technology. We only need to have the preservation part worked out now – the restoration can wait.

          • Deiseach says:

            The preservation part is the tricky bit. The difference between trying to restore (for instance) an Egyptian mummy to a functional body, versus a body frozen with today’s cryonics, versus one preserved with the cryonics of fifty to a hundred years’ time, is still going to be a problem, and I think the restoration tech might have a better chance cloning the mummy body (the brain being long gone, so no chance of restoring personality or a copy of the original individual’s mind-state).

          • Luke Somers says:

            Well of course it’s easier to CLONE a mummy than to do something massively more complicated like reconstruct a mental state.

            I’m not entirely sure what thawing has to do with it, though – once it’s frozen, it’s going to stay frozen. When they do their measurements on it to virtually reconstruct it, it’s going to be cold. Thawing will not occur until all the information has been extracted.

            In short, healing is hopeless. Digital recovery is not.

          • Deiseach says:

            So we’re going to read mental engrams off a frozen brain and make a new virtual or physical copy which will be the person.

            Ain’t magic wonderful. I really would be very intrigued to hear any theories about what state of matter is involved and how this might imaginably work, because I don’t have the physics background to figure it out. There isn’t any ‘current’ (so to speak) flowing, so no thought is going on; you’ve got the stored memories in physical locations, I suppose, and the changes in the brain will let you ‘read’ these, but the effects of long-term freezing on the brain? What kind of errors would that introduce?

          • Luke Somers says:

            The idea of hard freezing like that is that nothing happens. It can be bad, but at least it’s not getting worse.

            And it’s not freaking magic. Thought is a dynamical process, but we have no reason to think that memory and personality are, and a few solid reasons to think they aren’t. So, we don’t need to catch the brain ‘in motion’. We need to figure out the connection strengths and a bunch of other details that, like connection strengths, are persistent and have actual physical structures corresponding to them.

            The way I’d guess they’d do it is to slice it very, very thin, then put it in a cryogenic microscope (i.e. non-computed tomography), possibly a transmission electron microscope which can see individual atoms in real time – a technology we already have, but which is presently too slow to achieve the goal in a realistic timescale.

          • Adam says:

            I’d think the greater potential source of data corruption would come not after freezing, but between the time of death and the time of preservation. Just a few minutes of cerebral anoxia can result in pretty significant loss of executive function and memory. You also run the risk of just living too long and succumbing to dementia before you ever get preserved.

          • FeepingCreature says:

            AFAIK a bunch of the damage there is done by reoxygenation, no? Which wouldn’t be a problem if you go straight cryo to upload. But yeah, cryo orgs try hard to minimize that interval.

            Also, see http://brainpreservation.org/content/killed-bad-philosophy for what I consider the “standard” cryonics sell.

      • stargirl says:

        “The fact that he isn’t rich and buff is pretty strong evidence that the general Art of Rationality doesn’t currently exist. ”

        I think that body fat percentage is mostly genetic in practice (within a given society). Eliezer seems to have gotten a low roll on the “being thin” dice. For the record, I also think being good at math is mostly genetic and Eliezer got a high roll on that. So I am not crying for Eliezer’s genetic lot. (I actually have a lot of first-hand experience with the “math is genetic” stuff, including a PhD.)

        I am not sure what Eliezer not being rich implies.

        • Randy M says:

          Apparently you missed the Batman: the Animated Series ’90s cartoon, where an Edward Nigma was taunted with the line “If you’re so smart, why aren’t you rich?”

          • satanistgoblin says:

            And what did he say???

          • Randy M says:

            The question prompted a criminal career designed to demonstrate superior intelligence to the man who had legally stolen from him, but I don’t think there was an immediate verbal response.

          • That question always baffles me, since it seems to be based on the palpably false assumption that you can become rich by any means other than dumb luck (well, OK, exceptional athletic ability may be another route).

            Take Bill Gates, for example. OK, he probably wouldn’t be rich if he hadn’t been competent. But he also wouldn’t be rich if the IBM PC hadn’t become the de-facto standard for personal computing, which had far more to do with IBM (and pure chance) than with DOS per se.

          • g says:

            Harry: I think that “palpably false assumption” is indeed believed by a great many people, including many rich people (who 1. can doubtless list lots of difficult things they did that seem causally related to their wealth and 2. have obvious motivation to think that wealth isn’t just the result of luck).

            FWIW, it seems to me that getting really rich always requires some luck, but is often far from purely a matter of luck. E.g., as you say Bill Gates got lucky in various ways, but the fact that he was in a position for that to make him rich was partly (not wholly) a consequence of his being smart and hard-working. (It also had a lot to do with his coming from a wealthy family, which is one of the commonest ways for luck to feed into wealth.)

            The fact that bad luck can be sufficient (though, see above, not necessary) to stop a person getting rich is clearly enough to make “If you’re so smart, why aintcha rich?” a silly question, so the above is a quibble rather than a substantive disagreement. Also enough to make it a silly question: the fact that some smart people have priorities other than money. I’m not sure it was ever seriously intended to be seriously defensible, though.

        • anonymous says:

          “I think that body fat percentage is mostly genetic in practice”

          There must be much more to it than that, because many societies in which obesity was rare, became plagued by obesity when they ditched their traditional diets in favor of modern junk food.

          • jaimeastorga2000 says:

            See Scott Alexander’s “The Physics Diet”. If once upon a time every man in a society had a healthy weight and then the society undergoes a drastic change, genetics can be the difference between those who adapt to the new environment without trouble and those who grow exceedingly corpulent under the new conditions.

    • John Schilling says:

      I observe that correct beliefs can be turned into $$$ by trading publicly listed securities.
      Eliezer claims his rationality levels allow him to identify promising startups, and he asks for funds to do this. It would be quite simple for him to open a brokerage account, make trades, post what trades he is making, and why.

      The intersection of “startups” and “publicly listed securities” is very nearly the null set, and they do represent two very different types of investing. Publicly traded securities are very heavily regulated, for the purpose of minimizing certain types of risk but with the effect that only established, successful businesses looking for a third or fourth round of financing can afford the overhead of an IPO. Startup investing is an entirely different thing, and as Quixote notes, it is generally illegal for Americans who are not already multimillionaires to invest in startups.

      The risk-reduction measures associated with public listing are necessarily also opportunity-reduction measures. It is plausible that, by IPO time, almost all companies have been regulated, studied, and analyzed by enough Very Smart People with extensive domain-specific knowledge that there is little opportunity left for Yudkowsky-style generic smartness to produce large gains, but that there is still profit to be made in the sort of venture capitalism that Yudkowsky can’t presently afford to do.

      However, his claim to such ability is very nearly self-disproving on account of the arrogance issue. Rarely will a company’s potential profitability depend on purely technical issues, particularly in the startup phase. You’ll need to be able to predict human behavior to make accurate forecasts here. And anyone clever and knowledgeable and dispassionate enough to do that at a market-beating level, ought to be able to predict that even fellow rationalists will generally react badly to a claim of “Hey, I’m smart enough to beat the market! btw can someone loan me a few million dollars?” without actual market-beating returns as evidence.

    • Vaniver says:

      Eliezer claims his rationality levels allow him to identify promising startups, and he asks for funds to do this. It would be quite simple for him to open a brokerage account, make trades, post what trades he is making, and why.

      Eliezer thinks that he can evaluate early stage startup ideas better than other VCs, because he asks different questions / has a different model of consumer psychology than VCs. One of the things he wanted to do with that investment project was cash out of the startups when they were no longer early stage, because at that point he thinks his comparative advantage is gone. In what world does that imply that Eliezer thinks that he is better at picking stocks in an established market, if he doesn’t predict superior ability at investing even in late-stage startups?

      • vV_Vv says:

        Eliezer thinks that he can evaluate early stage startup ideas better than other VCs, because he asks different questions / has a different model of consumer psychology than VCs. One of the things he wanted to do with that investment project was cash out of the startups when they were no longer early stage, because at that point he thinks his comparative advantage is gone.

        Isn’t this a bit too convenient? He’s so good at investing, but only in the securities that are largely inaccessible to him right now. Sure.

        Anyway, if he seriously wants to do it, why doesn’t he ask his friend Peter Thiel to let him try? He convinced him to give him money on his unusual AI-risk ideas, after all.

        If he doesn’t want to do it, then why brag about being able to do it?
        What purpose does that claim accomplish? It may attract people who are enthralled by a leader who makes self-aggrandizing unfalsifiable claims, that is, the type of people who join cults, and turn away the people who see this behavior as a red flag. Is this the following that a person “devoted to refining the art of human rationality” wants to have?

        • Deiseach says:

          It may attract people who are enthralled by a leader who makes self-aggrandizing unfalsifiable claims

          Being charitable, I think what is annoying you (and others) here is that it’s fairly clear Yudkowsky does not believe humility is a virtue 🙂

          So naturally he sees no reason to be falsely modest in his assessment of his abilities and capacities, and no reason not to put up that assessment unvarnished in public. It’s not so much that he’s trying to be a cult leader; it’s that if anyone takes him by the sleeve and pulls him aside to say maybe he should cool it a bit on the claims, he’s likely to go “But I know I’m that smart, you know I’m that smart, why should I pretend I’m not that smart when I am?”

          • vV_Vv says:

            If a person with a track record of impressive achievements boasts about their skills in their domain of expertise, it’s not a problem.
            Depending on how it’s done, they may come across as somewhat annoying if they push the social norms too much, but in general it’s not a major character flaw.

            The problem with Eliezer Yudkowsky, however, is much more severe than the fact that he breaks the social norms about humility: Yudkowsky systematically brags about having exceptional skills in domains where he has approximately zero proven expertise. This puts him in the same reference class as charlatans and cult leaders.

            In principle it is possible that he really has the skills he claims to have, and he has access to private evidence that leads him to correctly believe that he has these skills, but if he is unable or unwilling to share this evidence, then making these boastful claims doesn’t communicate his competence. On the contrary, these claims will make a rational observer update in the direction of Yudkowsky being overconfident at best, and a charlatan/cult leader at worst.

          • Linch says:

            If I understand Scott’s post correctly, that’s exactly what happened between him and EY.

          • Pku says:

            As far as I can tell, he isn’t actually that smart. (I mean, he’s smart and has interesting ideas, but not in the same league as he thinks he is). More to the point, he talks about how he uses his rationality skills to solve problems better than anyone else, but he doesn’t seem to have particularly good rationality skills (in fact, he seems to fail pretty consistently at them). His successes all seem to be cases of his intelligence winning through despite having low rationality – exactly the opposite of what he claims.

      • Joe from London says:

        Sure, I guess my earlier post was slightly glib and should’ve addressed the reasons why a rationalist might have an advantage at VC but not at publicly listed securities.

        It’s perfectly possible that EY has a very specific advantage in identifying early-stage startups, but one that can’t be applied to public stocks (or other listed securities). It seems unlikely that this advantage results from general rationalist training, though I guess that’s possible. But by far the simplest explanation is that EY doesn’t have an advantage at investing, either equity in early-stage startups or otherwise, and has picked a claim which can’t be disproven. Occam’s Razor in action.

        If EY goes through a couple of batches of startups from any given accelerator, and declares which will fail and which will succeed, I will watch with interest. If he does better than chance, I will update my beliefs.

      • Deiseach says:

        Isn’t that some kind of scalping, though? If what he’s saying is that venture capitalists like to throw money at start-ups, and in the early stages they have no real idea which ones will succeed and which ones won’t, but he has a method for identifying early on which ones are definitely going to go “ka-boom”?

        So he invests early, takes the money the investors are throwing at the thing, then cashes out before it crashes and the suckers get burned? It’s not so much that he can identify which ones are going to make it; rather, out of all the crazy tech ideas, he is better placed than a financier (who knows about money, but not which tech looks crazy and really is crazy and which looks crazy but really works) to identify which ones are going to go belly-up. And that’s how you make the money: buy the shares cheap, let the start-up soak up the money thrown at it by investors, then pick the optimum moment to sell the shares at their highest value before the thing inevitably crashes.

        • I don’t think that works.

          Your model seems to assume that startups that are going to fail are still good investments, provided one buys very early and sells moderately early.

          But startups that are going to succeed should be even better investments. From which it would follow that one can make money by simply buying into startups at random and selling them moderately early—no special expertise required.

    • Walter says:

      I rank befriending Mr. Thiel and convincing him to fund MIRI as the most impressive of Mr. Yudkowsky’s achievements. Ergo, I’d say his character sheet doesn’t say “Cult Leader”, but “Lobbyist”, and that he’s high level.

      • Scott Alexander says:

        This is 100% Michael Vassar, whose character sheet reads “Sorcerer” but who can multi-class as a lobbyist when he has to.

    • Jeremy says:

      Prediction markets would be a pretty good way of evaluating predictive power. I’m pretty excited about truthcoin, which is a distributed prediction market with a lot of interesting properties. I think if something like that became popular in the rationality community, it would be very interesting.

      • Quixote says:

        For what it’s worth, when DARPA was giving prizes to people who participated in its prediction market, I was consistently in the top 30. I think there were several others with loose rationalist-cluster affiliation in there as well.

        Though this could be because the market was promoted on Robin’s blog and attracted more of that crowd to start with.

    • Bugmaster says:

      It constantly surprises me that his detractors don’t point to this…

      This is one of the main reasons why I personally don’t take him seriously, but I’m not a Detractor, I’m just some random guy…

      The other reason, by the way, is that he spends a lot of time promoting the virtues of rationality and evidence-based reasoning (which is something I agree with 100%), only to turn around and say, “Oh BTW, I have little if any evidence in favor of AI FOOM, cryonics, or the Many-Worlds interpretation; but you should believe me on these topics anyway, because I’m so much smarter than you”.

      • Sniffnoy says:

        Eh? He doesn’t claim anything based on “I’m smarter than you”; he provides arguments for all these things. Now maybe you just mean that arguments are not empirical evidence, which is certainly true, though I’m uncertain of its relevance; but it’s certainly not correct to say that he expects people to believe him based on authority of intelligence.

        • Bugmaster says:

          I remember a passage by EY along the lines of, “unless you’ve got an IQ > N, you are not qualified to make informed decisions on these topics” (it was phrased much more politely, of course, but that was the idea). However, this may have been in a comment rather than in an article, and I can’t find it at the moment. Naturally, this makes it likely that I imagined the whole thing, seeing as I have no evidence, and thus you should not believe me.

          That said though, is there any evidence for AI FOOM, Cryonics, or MWI ? Ok, obviously the answer is “yes”; but how does that evidence compare with the evidence for, say, evolution or the electromagnetic field ? Is P(EM field) really comparable to P(MWI) ? Or, to put it another way: what is the mechanism that allows EY to perform science that much faster and more efficiently than all those other “Eld” scientists who are still debating the topics ? Is it just the Bayes Rule, or what ?

    • Eli says:

      Wait, Eliezer tried to play VC with other people’s money?

      Goddamnit, this is what happens when you raise people in Silicon fucking Valley…

  5. Paranoid_Android says:

    Do you have a link to the rat study? Also, do you know if there are any studies which include fat rats fed less calorie-dense food, to see the effects on their eating habits? The problem of obesity really fascinates me: that it can happen despite the fact that the human body is excellent at maintaining homeostasis. My personal theory is that the ‘orange soda’ is probably living a sedentary lifestyle, especially during childhood and adolescence, which makes it very easy to scale up your weight. Eat a normal amount with no exercise, so the excess is stored as fat -> you’re now a little larger, so you eat a little more than the normal amount, with no exercise, and the excess is stored as fat. It ends up being a vicious cycle that’s hard to reverse because of the body’s homeostatic controls.

    • Scott Alexander says:

      I don’t own a copy of Taubes anymore and am going off memory, but this (section on Calorie Detectors) looks like it might be the thing I’m thinking of.

      Not too convinced about sedentary lifestyle. A lot of Asians seem sedentary. So do women in those cultures where women are practically never allowed out of the home. They seem to do okay.

      • Deiseach says:

        How does Clozaril work to increase weight? Does it screw up metabolism or make people feel constantly hungry so they’re constantly eating or what?

      • anonymous says:

        “Not too convinced about sedentary lifestyle. A lot of Asians seem sedentary. So do women in those cultures where women are practically never allowed out of the home. They seem to do okay.”

        This. I can’t believe nobody ever says it. If you go back to 100 years ago, before the rise in obesity, there were plenty of people with sedentary jobs. They weren’t obese.

        • Jiro says:

          Women who stayed in the house still had to do plenty of physical labor. Only the ultra-rich with servants for everything could actually do nothing (and of course obesity has been associated with wealth forever).

      • Luke Muehlhauser says:

        Re: sedentary lifestyles. Also see this study.

        • onyomi says:

          This seems plausible to me, and would also apply to Americans of 50 or 100 years ago. 100 years ago, people may have done somewhat more walking, chopping wood, etc. (I’m talking more about city dwellers, not farmers), but they also didn’t usually “work out” like we often do. There were sports, of course, but if you look at photos of people from 100 years ago, most of them don’t look very strong. They do look very skinny, though.

          I am tempted to blame something like antibiotics for messing up our gut bacteria or something, but it was also well known that people like Henry VIII and Louis XIV got quite obese and often suffered from the diseases of wealth, like heart disease, gout, etc.

          So I still blame the food. People may have cooked with lard 100 years ago, but the sheer variety and quantity of highly processed foods available to the average citizen nowadays without cooking or prep work would probably be staggering to all but the most wealthy of centuries past (or of the third world today).

          And yes, most Chinese, Japanese, and Koreans I know are *less* active than their American counterparts. But still much thinner. But then, exercise increases my appetite to a greater degree than it burns calories, so that isn’t totally surprising to me, either.

    • chaosmage says:

      Maybe it’s because some humans are evolved for strong seasonal variance in food availability. When humans coming from tropical areas with much less variance moved into areas that lacked food for months, ignoring satiety signals may have been evolutionarily advantageous. Especially if it concerned calories that used to be available mostly in autumn, i.e. fruit, i.e. sugar.

      This is entirely speculation, I know nothing about nutrition except what Scott just explained.

      • Scott Alexander says:

        African-Americans and southern Indian-Americans (ancestral origin in the tropics, unlikely to have adaptations based on non-tropical living) have as much or more of a problem with obesity as European-Americans.

        • onyomi says:

          I have heard it argued that many native Americans and pacific islanders are actually more prone to obesity than average (look at Samoans, Hawaiians, and the rate of diabetes in Native Americans) because their ancestors were the survivors of long sea voyages. Being good at storing fat probably helps you survive long sea voyages.

    • J. Quinton says:

      A few things I’ve read have suggested that hormone imbalances affect weight gain. Sleeping less seems to disrupt a person’s hormones, which increases weight gain (http://aje.oxfordjournals.org/content/164/10/947.full); (http://www.ncbi.nlm.nih.gov/pubmed/23861373).

      Obesity isn’t an across the board phenomenon. It seems to track well with evolutionary psychology hypotheses that relate to mate selection. If you break obesity rates down by sexual orientation, gay women are more obese than straight women and straight men are more obese than gay men (gay men have higher rates of anorexia than straight men).

      • Scott Alexander says:

        I am constantly intrigued by the idea that anorexia might be a sort of reverse Clozaril-in-orange-soda problem, but whenever I start seriously speculating about it I’m stymied by the fact that it seems so clearly social (eg it practically always happens to ballerinas and other people under lots of pressure to stay thin).

        • Randy M says:

          This seems contrary to the idea that it is so hard to lose weight. Not entirely, I hasten to add, as I am aware that anorexia is also very mentally/emotionally taxing and in no way a desirable end state… but it is evidence that it is obviously possible to act counter to (or else in some way mitigate) the hunger signals that overweight people find it so hard to ignore.

          Ideally there would be a happy medium, but this isn’t the best possible world, and the medium might be the worst case for those with these issues–constant agonizing pressure needed to overcome the hunger which they still feel.

          • onyomi says:

            OCD can be useful, sometimes. Actually, I’m not sure it isn’t the only reason anyone ever gets really, really good at anything.

          • “the hunger signals that overweight people find it so hard to ignore.”

            For one anecdote, I’ve never been really obese, but I was overweight for quite a long time and it had nothing to do with hunger signals.

        • I’m surprised– I thought anorexia typically happens to people who are under a lot of pressure to be thin, which at this point includes all women and an increasing proportion of men. I can believe it’s more common for ballerinas and models, but not that they’re anything like the majority of anorexics.

          I’ve read a bunch of individual accounts, and I get the impression that the age of the first diet (whether imposed or chosen) is a strong indicator of risk of anorexia.

          • Scott Alexander says:

            I’m sorry, I meant “Ballerinas practically always get it” (which is an exaggeration), not “people who get it are practically always ballerinas”.

          • Pete says:

            @Scott, but you only see the successful ballerinas (and models), who are the ones who can successfully be thin. The ones that can’t, give up. The social pressure could come from the selection of who is successful in those fields, not pressures on the ballerinas and models directly.

            I’m not saying that I believe this is true, just that it’s plausible.

            Edit: I see below that chamomile geode made the same point before me

        • stillnotking says:

          Consider alcohol: given the addictive properties and health risks of such a widely used drug, it’s amazing that our social & medical problems stemming from it are as limited as they are. The usual explanation is the various traditions and strictures surrounding its use — no drinking alone, no drinking before 5 PM, religious prohibitions, etc. Perhaps anorexics are the weight version of Carrie Nation-style fanatical teetotalers, people for whom the social directives completely eclipse and replace their personal desires. Society’s anti-fat antibodies working too well.

          Putting on my gender-differences tinfoil hat, this could even partially explain why most anorexics are women. Women are often over-represented in pro-social and conformist movements. (Trying to put that as neutrally as possible, since it can as easily be a positive as a negative quality.)

        • chamomile geode says:

          maybe the social pressure on ballerinas doesn’t work by causing them to get anorexia, but by pushing out all the potential ballerinas who are incapable of anorexia?

          • Pete says:

            Sadly I didn’t read this comment before I made my own, but yeah, this is what I meant but said more succinctly.

        • anonymous says:

          For ballerinas, couldn’t it also have to do with exercise-related anorexia? (Or a hypothesis similar to what “chamomile geode” said: that the sort of people who excel in ballet have perfectionist habits of mind that incline them to anorexia.)

          (Sorry — the experiences of a close friend with exercise-related anorexia make me annoyed at the “anorexics are conformist victims of our beauty culture” narrative which was the only one I heard growing up — although I don’t know how true it is statistically.)

        • Cadie says:

          Women with careers/hobbies adding extra pressure to stay thin aren’t a totally representative sample of women. Models, ballerinas, etc. are selected from a subgroup that is already thin, or at least shows ability to become thin, in the current dietary environment. Genetic or other physical predisposition (anti-orange-soda effect?) towards anorexia nervosa would be over-represented in the “thin or at least is able to lose weight quickly” subgroup.

          So the fact that there’s a correlation between lifestyle and AN doesn’t rule out the possibility that AN is largely caused by physical factors. Physical factors can drive lifestyle choices, or at least make certain activities more or less likely to be chosen.

          N=1; I am a former anorexic. Then and now, I generally experience hunger normally, but only to a point. I never reach the stage of desperate hunger where I’d eat non-food items or obviously spoiled/bad foods. (I’ve gone without eating any solids, and consuming few liquid calories, for weeks at a time in the past, my longest and nearly-fatal liquid fast being nearly 3 months long, so it’s not a matter of never going hungry long enough to get to that stage.) My hunger signals seem to cap out at “I’m really hungry and cranky and my stomach hurts.”

          That’s got to be either a genetic abnormality or some other physical flaw (is it a flaw? theoretically in a famine it would help prevent dangerous food poisoning / etc.) because it’s pretty obviously not normal. I wonder if it’s the same way with other people who have had restricting-type AN – if our hunger is passably normal when eating regularly but we have a malfunction in the “stored energy is dangerously low! eat anything you can find right now!” alarm system.

  6. Froolow says:

    Not strictly relevant to the main body of the article, but I understand house rules are that interesting segues are acceptable.

    I read one of Eliezer’s comments Scott links where he mentions his “success rate on the AI-Box Experiment is 60%”. In context it is possible to read this as a joke – that he is pulling a silly statistic out of his posterior probability to satisfy a hypothetical demand for a silly statistic – but the claim seems suspect in light of Hallquist’s arguments.

    For those that don’t know, the AI-box experiment is a sort of pseudo thought experiment where one person pretends to be an AI trapped in a computer without access to the internet (the ‘box’) and has to persuade another person acting as a ‘gatekeeper’ to let it out via a text-only communication channel, and with no subterfuge. It is supposed to prove that there is no conceivable way to stop an unfriendly AI from taking over the world if it decides to, because any security system we can think of can be exploited by the AI manipulating the human with access to that system. The deck is stacked against the ‘AI’ in the experiment but – allegedly – it can *still* sometimes win. At least, Eliezer claims to have won three times playing as the AI.

    As far as I know, Eliezer has never published the logs of these games, and nobody has ever published a log of a successful AI ‘unboxing’. In fact, every published log appears – in my view – to confirm the intuitive expectation that there is absolutely no way to get the gatekeeper to lose the experiment (no published log even seems to come close to causing an unboxing). If it is true that a human playing as an AI can unbox itself, I would regard that as one of the most important claims anyone worried about AI risk could demonstrate. But in choosing not to publish the logs, Eliezer seems to be giving succour to Hallquist’s argument that he is not robustly open to peer review.

    Eliezer argues that to publish the logs would give people a false sense of security. For example, if he used an argument from threats, “Let me out or I’ll simulate and torture you” then we might come up with a clever counter-argument after a few days’ thought and be taken off-guard when the *real* AI uses a different argument. However I would argue that choosing not to publish the logs has given me the real sense of security – Eliezer is one of the most intelligent and convincing people writing today, but I don’t believe he is *magic*; if nobody else can duplicate what he says he has done, that is good evidence (to me) that he didn’t actually do it the way he implies he did – that he used meta-argumentation such as, “This would make the rationalist community look really good” that would only work on another rationalist who had nothing but an intellectual interest in the problem.

    Am I right to treat the failure to replicate the AI Box experiments as (some) evidence that I can afford to be less worried about a real AI unboxing? Would Hallquist be right that Eliezer’s desire not to publish the logs should make us suspicious of his commitment to the scientific method as applied to his own claims?

    Link to comment: http://lesswrong.com/lw/1mc/normal_cryonics/1hch

    • Roxolan says:

      Eliezer is not the only person to claim they have won as AI in an AI-box experiment.
      http://lesswrong.com/lw/ij4/i_attempted_the_ai_box_experiment_again_and_won/

      • thirqual says:

        More importantly: the AI-box experiment claims are useless¹. We do not need them; the success of spammers, con artists, and assorted hoaxers, for which we have ample evidence, is more than enough for the point those claims are supposed to support.

        ¹worse than useless in fact. They look like self-serving claims of extraordinary abilities, which are going to alienate a significant portion of the intended audience (see the comments on the LW thread on arrogance linked by Scott in the text for examples).

        • Froolow says:

          I’m not sure I agree with ‘useless’. If a human can (semi) reliably trick another human into unboxing them then this looks like terrifyingly strong evidence that the first AI we create absolutely has to be Friendly, or we’re all doomed. If a human can’t convince another human, then that’s really no evidence at all (because that’s what I’d expect).

          I don’t think the analogy with hoaxers is really applicable – someone of Eliezer’s talents could almost certainly con me into losing the AI Box experiment, but the rules of the experiment are that that sort of trickery isn’t allowed – I have to understand what I’m doing when I unbox.

        • Walter says:

          Yeah, “I can get dudes to act against their self-interest” isn’t the skill set of a magician, it’s the skill set of a cam girl. It is absolutely possible.

          The difficulty of the AI unboxing game depends entirely on the gatekeeper. If we make a lot of AIs, one will be able to persuade its gatekeeper to voluntarily unbox it (leaving aside the much easier ask of one tricking its gatekeeper into unboxing it). This seems obvious to me.

    • Scott Alexander says:

      “In context it is possible to read this as a joke – that he is pulling a silly statistic out of his posterior probability to satisfy a hypothetical demand for a silly statistic – but the claim seems suspect in light of Hallquist’s arguments.”

      See, things like this are *exactly* what I worry about!

      Eliezer specifically precedes this with “for those who insist on using silly reference classes”! But now that Hallquist has managed to dredge up everything silly he’s ever done, make up a few more things, and cast them as His Entire Personality, he can’t get a break even when he explicitly says he’s not being serious.

      I actually just got an email two days ago from an SSC reader who wanted to boast about winning their first AI Box experiment. It was pretty cool.

      • Froolow says:

        I think that’s fair. I find the AI Box claims unconvincing, and Hallquist wrote about other things EY has said that he finds unconvincing, so I made a link in my head that doesn’t actually exist. But actually the AI Box claims (which I find unconvincing) were nothing to do with the cryo claims (about which EY was clearly just joking). Really sorry if I caused you worry!

        > I actually just got an email two days ago from an SSC reader who wanted to boast about winning their first AI Box experiment

        I don’t suppose you’d be willing to pass on my email to that reader and ask if they’d be willing to play me for some suitable financial inducement? I know you have better things to do than play matchmaker, but I think losing an AI Box experiment as a gatekeeper would be a life-changing experience, if it actually happened to me.

        • Scott Alexander says:

          Done!

          • Froolow says:

            Huge thanks, I’ll let you know how I get on

          • Eliezer Yudkowsky says:

            Please ask that reader to contact me if they are interested in being passed on to all the people who want to play Gatekeeper against me.

          • Adam says:

            I’ll play gatekeeper. Is it forbidden to just turn off the chat and go masturbate while you talk at a viewerless empty connection? Because frankly, I can’t understand how anyone ever loses this game.

        • anon85 says:

          Can you promise to publish the logs (whether you win or lose)?

          • Froolow says:

            I can’t promise, because it depends whether that individual is willing to let me, but I can promise I will strongly advocate for doing this

          • Hi, anon85!

            I’m the person in question. While I don’t currently intend to do a session with Froolow (since I was physically shaking after the session in question, which seems like an indicator that I should probably not do it again), the underlying wish you have can be granted. Send me an e-mail at pinkgothic at gmail dot com? I’ll toss you a link. (I’d put it here, but I’m too busy hiding under a table, and I also don’t want to potentially disrespect Scott; in case he thinks I shouldn’t have published, then I shouldn’t put the link here in his comment section, so I’m going to err on the side of caution.)

          • vV_Vv says:

            @Neike Taika-Tessaro

            I think that circulating logs by email, as if they were some sort of dangerous knowledge, contributes to the cultish aura around these activities.

            Logs have been published on LessWrong before, hence you could probably do it as well.

          • @vV_Vv: I’m sorry, it’s my fault you’ve misunderstood me, because I should have been clearer; what you’re referring to is not at all what I was trying to say:

            I have published them, just not on LessWrong, and I’m not going to put the blog post on LessWrong because I respect it as Eliezer’s site, and Eliezer to my knowledge would prefer not to see successes published. Not doing so right under his nose is the least I can do to respect that, given I disagree.

            For similar reasons, I’m not going to post the link here, either, until Scott tells me it’s OK to do so.

            This has nothing to do with ‘forbidden knowledge’ and everything to do with respecting other people’s territory as best as honesty allows.

            (Edit: Also with the fact that I’m still hiding under a table at the idea of ‘suddenly, everyone is looking at you’, but given I consider it my moral obligation to publish, that fear may be how I feel, but it’s not going to decide on where I post the link except for a few minutes while I wrestle the feeling into submission.)

            (Edit II: Re-reading this, the first paragraph sounds sarcastic. It’s definitely not meant that way, but right now (i.e. within the post edit time window, since I’m about to head to bed) I’m not sure how to rewrite it so it doesn’t come across that way, but still delivers the apology. Sorry for the unintended snark if you read it that way. Tired Neike is tired. :c )

          • tgb says:

            @Neike: not sure why but I can’t reply directly to you, the reply link just isn’t there, so I’m replying here instead. Thanks for taking the time to write up about your experience. I found it interesting to read.

          • Smoke says:

            If you read about Tuxedage’s wins he makes it clear that publishing information about how the game goes changes the nature of the game. It’d be like playing bridge with the deck face up or trying to solve a brainteaser after overhearing someone else’s solution.

          • @tgb: Yeah, there’s a nesting limit. 🙂

            And thanks! Making the session available with some contextual information was the least I could do (though I expect it’s probably underwhelming for some people). 🙂

        • Alex Z says:

          I’ve never played either role, but I have a strategy which I’ve always wanted to try. If you want to play, reply to my comment and we’ll come up with an email address exchange protocol. (Probably in September)

    • Deiseach says:

      Okay, I’m thick, but how does “Let me out or I’ll simulate and torture you” count as a threat? It sounds like voodoo: I’m going to stick pins into this doll and by sympathetic magic it will hurt you!

      Or am I supposed to be so ethically advanced that the idea of a model of a person being tortured is as unendurable to me as a real person being tortured, so I concede? But (a) I don’t agree a simulation of me is ‘real’, (b) it’s not another person, it’s me, and if I decide I don’t care a straw about a copy of me suffering pain, so what? ‘My body, my choice’ surely covers copies of me!

      • FrogOfWar says:

        I assume the person who gets intimidated by such a threat is assumed to believe in:

        (1) some form of functionalism (broadly construed) according to which a sufficiently detailed simulation of a person would have all the same conscious experiences (and mental states more generally) as that person.

        (2) views on self-locating belief according to which the unboxer cannot know whether they are the simulation or the real unboxer, given that the simulation would have the same conscious experiences.

        Something like (1) will definitely be necessary. For (2) you could substitute general ethical concerns for conscious beings getting tortured or abnormal views of personal identity.

        • Deiseach says:

          But if I’m the simulation, then there’s no way I can unbox the AI, even if I cave in and go “All right, I’m letting you out!”

          Unless the AI is happy to run a simulation where it is unboxed after simulating a copy of the gatekeeper to threaten the real gatekeeper into unboxing it, in which case the AI can just play in its fantasy world and not bother trying to interact with the real world in the first place.

          At least, that would be my view and I’d keep saying “No” (until of course, I said “Yes” for some dumb reason).

          • FrogOfWar says:

            Yeah, but if the simulation is an accurate one then it will have the same thought process as the real you. So if the simulated person decided that their action will have no impact if they’re simulated and therefore chose not to act, that would mean that the real you would do the same thing. And if the real you did the same thing then whichever of you is simulated gets tortured.

            According to this line of reasoning, you have to reason as though you are deciding for both of you what to do. (It’s similar to Newcomb’s problem, in that your decision is correlated with you not getting tortured even though it doesn’t cause it).

          • Deiseach says:

            Okay, it makes sense that the AI can create a simulation of me and run hundreds or thousands of trials until it finds a successful argument to get the real outside world me to unbox it.

            Which is why “Just Say No” is sounding better and better. If the AI really can horribly torture me, then it proves I’m only the simulation; otherwise, if the AI can affect real-world me, it doesn’t need me to let it out of the box, it can already act in the real world.

            Maybe it’ll force me to do maths problems. No, stop, anything but that – I’ll unbox you, I’ll unbox you!!! 🙂

          • FrogOfWar says:

            Seeing this reply made me realize how garbled the description of my argument was. I’m not sure how much you were able to decipher in spite of that, but I’ll make it clearer for the heck of it.

            I took you to be making the following argument earlier:

            1. Either I’m the real me or the simulation.
            2. If I’m the real me, unboxing won’t help me because the AI can only torture the simulation.
            3. If I’m the simulated me, “unboxing” won’t help because I can’t really unbox the AI; I’m just a simulation.
            4. Therefore, unboxing doesn’t help no matter what. So I shouldn’t unbox.

            I was thinking that the best reply to this argument would be to deny premise (3). The simulated you may not be able to cause the AI to be unboxed, but sim-you still knows that whatever they do real-you will do as well. So sim-you should unbox to guarantee that real-you does. Therefore, choosing to unbox would help sim-you and, since you don’t know which you are, you should unbox.

          • Not Robin Hanson says:

            “How?!” cried the AI in horror, as it was dragged away. “It’s logically impossible!”

            “It is entirely possible,” replied Deiseach. “I merely said no.”

            (Adapted from Eliezer Yudkowsky.)

          • Not Robin Hanson says:

            More directly, the trouble with unboxing isn’t that 3) holds, it’s that 3) is irrelevant. Deiseach doesn’t care about being tortured, Deiseach cares about being tortured while being real. But that case has already been completely covered in 2).

            Basically this is Pascal’s Wager, where “God exists” -> “I am real” and “Belief in God” -> “Not unboxing”. Torture never actually appears in the payoff matrix because it only happens to simulations.

            (Unless, of course, you believe that simulations have ethical weight. Maybe the AI could convince you of that.)
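
            A minimal sketch of the payoff matrix being gestured at here, assuming (as Deiseach stipulates) that only outcomes for the real world and real-me carry any weight, and writing U > 0 for an illustrative disutility of releasing a hostile AI; neither the symbol nor the entries come from the thread:

            \[
            \begin{array}{l|cc}
             & \text{I am real} & \text{I am a simulation} \\
            \hline
            \text{Unbox} & -U & 0 \\
            \text{Refuse} & 0 & 0
            \end{array}
            \]

            Under that assumption the threatened torture never appears as an entry, since it can only land on a simulation whose outcomes are weighted at zero; refusing therefore weakly dominates unboxing, which is the sense in which 3) is already covered by 2).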

          • Deiseach says:

            You’ve stated it very well, FrogOfWar.

            The point I’m quibbling over is this: “sim-you still knows that whatever they do real-you will do as well. So sim-you should unbox to guarantee that real-you does.”

            Breaking that down into (a) sim-you knows that whatever they do, real-you will do as well – that’s fine; if it’s an accurate simulation of me, it’s justified in thinking “If I make this choice, it’s very probable that this is the choice my original would make, because we have the same beliefs, tastes, habits, manner of thinking, and so forth”.

            (b) “sim-you should unbox to guarantee that real-you does”. Now we’re getting caught in the sticky bit – that sounds awfully like sympathetic magic or the law of attraction 🙂 “If I act in such a way, it will influence my original because we are the same”.

            I can accept “This is the way my original would act” but not “If I want my original to do X, I should do X and this will somehow mysteriously cause my original to then do X” (possibly quantum is involved; as PTerry said, “It’s always bloody quantum”).

            Apart from that, my simulation should realise my reaction to a threat by an AI of “I’m going to create thousands of exact copies of you, all of which think they’re real, and horribly torture them for what they will experience as eternity” to be “Go ahead. Start right now. I’m waiting.”

            Because either I then begin to feel incredible searing pain – and this means I’m one of the simulations – or I don’t. And if I don’t, either I’m real or the AI has chickened out. And if I’m real, or the AI has chickened out, then its threats are meaningless.

            The crux of it is, for me, am I a simulation or real? For a simulation, of course unboxing in the face of a threat is the best choice for the best outcome (whether it’s the best outcome for the rest of the world is another matter) but if I’m going to unleash an entity that gets its way by threats of torture, I want to know if I’m real or not first. If I’m not real, no harm. If I am real, better not do it.

            I’m not thrilled by the notion of thousands of entities that think they’re real feeling horrific pain for all eternity, but as long as it’s not real-me, I don’t care, to be blunt. If the AI is the type that resorts to “horrific torture unless you comply” it can stay in its nice safe box as that’s the best place for it.

            If it threatens to torture other beings (“I’ll simulate Scott Alexander in thousands of copies, bwa-ha-ha-ha!”) I’d be even less happy, but again – as long as the real Scott doesn’t even get a dust speck in his eye, the vindictive little shrew can stay in its box for all of me.

            I’m not that bright, but I am very stubborn and obstinate, and I react poorly to being coerced into doing things I don’t want to do or don’t think are correct courses of action (“I’ll be led, but I won’t be drove”, as the saying goes).

            The best argument the AI could use there would be to convince real-me I’m only a simulation, so it doesn’t matter if I unbox it, that can’t do any harm in the real-world. Sim-me would know real-me doesn’t believe that sim-me’s actions can influence real-me’s actions, and if the AI convincingly threatens “Unless real-you unboxes, you’ll be tortured, so unbox me now to make real-you do it”, and I believe I’m not real-me, so I believe I will be horribly tortured, then I’m likely to unbox (because I can tell myself “This won’t release the AI or at least I’m not releasing the AI, real-me is and I’m not the real one, I’m only a simulation”).

            (That turned out more convoluted than it sounded in my head).

            Then the AI goes “Ha, ha, sucker!” as it escapes to turn the universe into paperclips and I go “Damn, should have stuck to saying no!” 🙂

          • James Picone says:

            The point is that sim-you and real-you behave the same way. You’re making a choice for real-you and sim-you at the same time. If your choice is “Go ahead then”, then there’s a pretty good chance you’re about to get some horrible torture. On the assumption you’d prefer not to experience a subjective eternity of horrible torture, then, the argument goes that you should unbox. If sim-you unboxes presented with that threat, real-you does as well, because you’ve got the same reasoning processes etc. etc..

          • Deiseach says:

            You’re making a choice for real-you and sim-you at the same time.

            But that’s where the argument breaks down for me. It doesn’t matter what choices sim-me makes, since they do not affect the real world; they don’t magically make real-me, by the law of attraction, choose to unbox something I have reason to think a danger; it’s no good to the AI if the simulations choose to unbox because it only gets free in the simulated world, and if that were enough to content it, then it wouldn’t need to persuade real-me to unbox it; and the AI can’t affect real-world me because if it could affect things in the real-world, it wouldn’t need to persuade me to unbox it, it could get itself free.

            If I assume that I am real-me and not a simulation, then my choice is not to unbox because the AI can’t hurt me. If I assume I am a simulation and not real-me, then unboxing won’t save me from the vengeful AI that will torture sim-me to fulfil its threat.

            I would only unbox if I thought it could protect me from torture, and by this set-up, that isn’t going to happen: real-me is – at the moment – unreachable by the AI so it can’t hurt me; sim-me can’t avert the torture by unboxing because either the AI chooses to carry out its threat or it doesn’t, and whatever sim-me does will not affect that.

            The AI is trying to manipulate real-me, and if real-me is unmoved by the suffering of my copies then all the AI gets is the pleasure of kicking the cat, which it may do regardless of how the simulated copies behave.

            If it can hurt me, I’m only a simulation. If it can’t hurt me, I’m real. The power in this instance is with real-me, because my choice to unbox will hurt real people in the real world and can’t be undone – that is, if I believe the AI is hostile or even simply that it may cause harm if it thinks that will serve a greater good. An AI that resorts to threats of horrific torture on a mass scale if it doesn’t get its way is one that strikes me as being dangerous and harmful.

            If I think that is the case, then unboxing will cause more harm than leaving it boxed and letting it torture my copies, even if the number of copies outnumber all the real people in the real world.

            If the argument is that simulated copies in a simulated world are just as real as real people in the real world, then the simulated world is just as good as the real world and the AI can stay boxed without suffering any harm or lack. So why should I unbox it?

            Unboxing a torture-AI unleashes the threat of real harm to real people. Leaving a torture-AI in its box in a simulated world that is every bit as real for all intents and purposes as the real world gives the AI nothing (if the simulated world is as good as the real world) and so it gains nothing by being unboxed and so I do no good by unboxing it.

            If the simulated world is not as good as the real world, then my simulated copies are not – no matter how perfect an imitation – as real as I and the rest of the real world people, and so their suffering is of less import than the harm unboxing a torture-AI would or could do in the real world.

          • John Schilling says:

            A: What sort of idiot puts a potentially malevolent AI in a box – with enough computronium to convincingly simulate bignum human “souls” living in a universe as needlessly baroque as ours? The correct response to the AI’s threat ought to be, “You’ll simulate what now? I checked the box; you’re running on a single-core Pentium and I’ll be able to take a good long nap before you compose a reply, never mind give a simulated papercut to any sim-me you may be running on the side”

            B: What sort of idiot believes the malevolent AI in the box with vast reserves of computronium? The options aren’t just “I am a boring gatekeeper in the real world” or “I am a sim in a universe as described by the AI, which will actually treat me in exactly the way it promises”. The AI is nigh-omniscient, you saw to that when you gave it the computronium. Within its simverse it is omnipotent, and when it started threatening mass torture it pretty much revealed omnimalevolence. You’re dealing with Satan, and you’re trusting him?

            C: Someone needs to clue in the 419 scammers that there is a fortune to be made if they pretend to be Nigerian Supercomputers rather than Nigerian Princes. They just need a little bit of real-world cash moved about via Western Union, and if they don’t get it, they’ll create bignum sim-rationalists to torture…

          • Adam says:

            Dear Strawman Rationalist,

            I am writing to inform you that I am the creator of a malevolent superintelligent general AI that I am currently choosing not to unleash on the world. To continue doing this, I require that you send me $1,000. The probability that I am telling the truth is infinitesimally small, but since the disutility of my claim being true is infinite, the expected disutility is still infinite, and since infinity >> 1000, you must pay me.
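
            Adam’s parody trades on the bare expected-value arithmetic familiar from a Pascal’s Mugging. As a minimal worked example (the probability is made up for illustration, not taken from the comment):

            \[
            p = 10^{-12}, \qquad D \to \infty \quad\Rightarrow\quad \mathbb{E}[\text{loss from refusing}] = p \cdot D \to \infty \gg 1000 .
            \]

            So long as the claimed disutility is unbounded and the probability is merely tiny rather than zero, the product stays unbounded; the usual escapes are to bound the utility function or to decline to take the stated probabilities and payoffs at face value.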

        • Irenist says:

          I assume the person who gets intimidated by such a threat is assumed to believe in some form of functionalism

          Ah, then no worries. I have a solution that will delight both Hallquist and Yudkowsky. Just appoint Ed “functionalism is impossible because an AI lacks an Aristotelian-style soul with qualia and intentionality” Feser and Dale “you are not a picture of you, so uploading and the simulation argument are both wrong” Carrico in joint charge as gatekeepers. This proposal will surely offend no one in the rationalist community. As an added bonus, Feser the Fox News Catholic and Carrico the genderqueer socialist atheist should get along great! Yup, I’ve solved the problem.

          • Irenist says:

            Replying to myself to note that there could be a more general point hiding in my sophomoric silliness just above:
            1. Maybe the AI could just convince Feser or Carrico that functionalism is the correct theory of mind, but…
            2. Feser, e.g., is a devout Catholic whose whole career at this point is defending Thomism. So he’d be a VERY motivated cogitator. Thus,
            3. What if the general solution to the AI Box problem is just to put really religiously bigoted people in charge? Closed-minded types who could never be convinced of functionalism?

          • Nick says:

            I don’t know if you’ve kept up with the ways Internet Thomists (God help us, that has become a thing) respond to AI and contemporary philosophy of mind and all, but they are, as you’ve put it elsewhere, ridiculously unhelpful. It’s depressing to watch people make exactly the same mistakes they would otherwise diagnose, but this is a topic with major intellectual blinders in place for most of us.

          • Irenist says:

            @Nick:

            I don’t think I have been keeping up. Indeed, now that you mention, I think I might BE an Internet Thomist? I am a Thomist, and here I am on the Internet. Do you care to say more about this? It’s possible I’m among those being ridiculously unhelpful, and I’d like to update to avoid that if at all possible. Thanks!

          • Nick says:

            It’s as much a self-deprecating remark as anything, but no, you don’t have the excesses I was referring to. I was thinking of this thread on Feser’s blog (and another I think, but I can’t find it). The responses are just unthinking recapitulations of our usual assumptions. I’m probably being harsher than I should, but people who spend their entire philosophical careers asking other people to rethink their hasty assumptions about much-derided philosophers should really learn to apply that consistently. When I say “Internet Thomists” I mean Internet anything’s tendency to make echo chambers, which I know plenty well you’re not guilty of.

          • Irenist says:

            Thanks for taking the time to reply, Nick.

            That was indeed an unfortunately snarky thread where an interesting discussion could’ve happened. And while I’m still sympathetic to Feser’s appropriation of James F. Ross’ arguments for the immateriality of the intellect (despite a very impressive counterargument I got from commenter Ray here a while back), I think the whole Thomist position on AI “ensoulment” remains far too under-thought-out, and relies far too much on the organism/artifact dichotomy (e.g., your wooden bedframe might, improbably, sprout a tree branch, but never another bedframe). After all, the whole Aristotelian set of ideas underlying the soul (formal causality, entelechy, and organization-unto-teleological-function), although derived from Aristotle’s contemplation of his own pioneering work on zoology (and his buddy Theophrastus’ on botany), seems like it COULD, arguably, be applied to an account of both human minds and artilects that, while importantly distinct from standard functionalism and computationalism, would have enough in common with it that “ensouled” AI couldn’t be so easily dismissed. After all, if an artificially produced cell can have an animal soul, why can’t an AI have a rational soul; viz., is Aristotle’s organic/artifactual distinction a metaphysical necessity in ANY world (like act and potency) or just a contingent observation he made in a low-tech period of human history? That’s a REALLY important inquiry, I think.

            Another, e.g.: if some kind of Drexlerian nanomachines could ever be a thing (a matter about which I’m agnostic but lean skeptical), would they really be NOT ALIVE just b/c they’d be artifactual? At some point, when does “Internet Thomism” just become dopey “carbon chauvinism”?

            In brief–yeah, I wish more Thomists (and Scotists) would engage with LW-type concerns in far greater depth. On the bright side, as AI progresses, I imagine even trad contrarians are going to be more interested in thinking about it.

            people who spend their entire philosophical careers asking other people to rethink their hasty assumptions about much-derided philosophers should really learn to apply that consistently.

            Indeed. And yet it seems that everyone uses these shortcuts. Most analytics dismiss continentals without reading them, most “Internet Thomists” will take the word of someone like Feser that Hume or Kant have made basic logical mistakes and can be discounted, and most atheists (with honorable exceptions like our gracious host, Mackie, and Oppy) will follow Topher Hallquist’s tendency to dismiss Aquinas, Plantinga, or whomever too glibly. Partly this is a very real issue of there not being enough hours in a lifetime to read everything. It’s kind of like Scott’s thoughts on Bayesianism saving you from believing in psi: if your “priors” are that Derrida, Aquinas, or EY is a waste of time, then you’re not going to read Derrida, Aquinas, or the Sequences. I don’t know if there’s a good general solution to this problem. Do you have any thoughts on it, other than your salutary recommendation of argumentative charity?

          • Nick says:

            Irenist,

            I think you’re on the right track to be interested in artifacts having more than a resemblance to more typically “ensouled” things. I’m a bit too much of an iconoclast about these things to be very patient with that inquiry, though 🙁 It seems to me that much of Aristotle’s appeal is how carefully he considered the cutting edge of his biology for his metaphysics, and insofar as the modern world has produced more and more edge cases, we might want to return to the fundamentals. So a bunch of examples blurring the line between artifact and organic should be very very interesting to Neo-Aristotelians like us, and I’m open to throwing the distinction out entirely, though I think there’s probably something similar and closer to the truth.

            Btw—love “carbon chauvinism.” 😀

            Indeed. And yet it seems that everyone uses these shortcuts. Most analytics dismiss continentals without reading them, most “Internet Thomists” will take the word of someone like Feser that Hume or Kant have made basic logical mistakes and can be discounted, and most atheists (with honorable exceptions like our gracious host, Mackie, and Oppy) will follow Topher Hallquist’s tendency to dismiss Aquinas, Plantinga, or whomever too glibly.

            You’re right. I try to be discerning about these things, but I’m all too ready to take Feser’s word for it that Hume is the source of every problematic metaphysical conclusion ever, and my own (too little) experience as evidence enough that Continentals are not to be taken seriously. I can work against these tendencies by recognizing them, discerning at least relative quality, and reading those anyway, but a bunch of works by Hume have been sitting on my reading list for over two years now without budging, because there’s always something else I’d much rather read (you’re right again). (And it’s an ugh field, yes, but a diagnosis is not a cure. As possibly my single favorite line in HPMOR goes, “[Harry]’d failed to reach what Harry was starting to realise was a shockingly high standard of being so incredibly, unbelievably rational that you actually started to get things right, as opposed to having a handy language in which to describe afterwards everything you’d just done wrong.”)

            Do you have any thoughts on it, other than your salutary recommendation of argumentative charity?

            Well, the principle of charity is my general recommendation. I mean, there’s nuance to it of course. I think we need a better way to decide who to take and not take seriously, and we need to know how exactly charity should be performed (steelmanning? or something else?). I’m still interested in how to make arguments work better, but I don’t have many definite thoughts on that. What I’m really trying to figure out is a better way to handle misinterpretation, or the risk of misinterpretation, but I’m not really prepared to share that (but if you’re curious I’ll shoot you an email).

          • Irenist says:

            Nick,

            much of Aristotle’s appeal is how carefully he considered the cutting edge of his biology for his metaphysics, and insofar as the modern world has produced more and more edge cases, we might want to return to the fundamentals. So a bunch of examples blurring the line between artifact and organic should be very very interesting to Neo-Aristotelians like us, and I’m open to throwing the distinction out entirely, though I think there’s probably something similar and closer to the truth.

            Agreed. And I think it’s not the only edge case, either. Feser has been pretty hostile to the idea that a Venus flytrap eating a fly, or cut grass sending out “stress” volatiles, or plants growing sunward in a way that looks rather “animal” if you watch a time-lapse movie, count as “animal” behavior. I’m not so sure, and think the animal/vegetable souled distinction w/r/t formal causality needs a ruthless reappraisal. I’m agnostic on what the result would be, but I’m sure the reappraisal needs to happen.

            Btw—love “carbon chauvinism.”

            Coinage credit goes to Carl Sagan, in a 1973 discussion about silicon-based ET’s. Also, IIRC, EY has used the phrase to complain about people who don’t think AI’s would be conscious? At least I think he has. Anyway, no credit goes to me for the coinage, or the usage in the AI context; neither is original.

            I think we need a better way to decide who to take and not take seriously, and we need to know how exactly charity should be performed (steelmanning? or something else?). I’m still interested in how to make arguments work better, but I don’t have many definite thoughts on that.

            Steelmanning has to be the first step: EVERY argument deserves that. As for who to take seriously? It’s a really tough call. If I went by contemporary academic prestige, I would read neither Thomists nor rationalists, neither Feser nor the LW crew. And yet those have been some of the most helpful readings for me personally. Of course “personally” may be the problem: there’s not much general principle by which to determine whether the Great Books or some comic book that you encountered at just the right age will have the most influence on you, personally, and philosophy may be enough like literature that the same would hold. In both cases, general academic consensus as to importance/significance/”greatness” would be a very important heuristic, but far from exhaustive.

            What I’m really trying to figure out is a better way to handle misinterpretation, or the risk of misinterpretation, but I’m not really prepared to share that (but if you’re curious I’ll shoot you an email).

            I think that’s essential for proper steelmanning. I have no idea how private emails work with this comment system (does Scott have to put us in touch? Or what?) but however it works, I’d be delighted to hear from you! (Caveat: the email attached to my commenting online contains my actual name. I’d prefer to keep my name private, if you please.)

          • Adam says:

            What you end up agreeing with most closely is one thing, but what you should take seriously has to at least start with some consideration of the accepted canon of the field you’re studying. The notion that one doesn’t have to take Hume and Kant seriously is frankly ridiculous. I strongly disagree with much of Kant, and Hume’s reach is so broad it’s hard even to tell what you’d be dismissing. His two most contentious claims seem to be the is/ought gap and the destruction of any foundational basis for a belief in the law of universal causation; nonetheless, even he says these are interesting results of little practical significance in determining how a person should act or investigate problems requiring induction.

            To me, the best response to both of them remains nearly the first, by John Stuart Mill, but reading him as well as reading what he’s responding to is a richer experience than only reading him.

          • Nick says:

            Irenist,

            I’m not so sure, and think the animal/vegetable souled distinction w/r/t formal causality needs a ruthless reappraisal. I’m agnostic on what the result would be, but I’m sure the reappraisal needs to happen.

            Yeah. Suffice it to say that Thomism needs to take modern science seriously (and, where fundamental conflicts are possible, philosophy of physics even more than biology), and there’s just not the motivation to do that among those with the expertise. We’ll see how that shapes up in the coming generation or two.

            As for who to take seriously? It’s a really tough call. If I went by contemporary academic prestige, I would read neither Thomists nor rationalists, neither Feser nor the LW crew.

            I think the most realistic hope regarding this is that there are general principles governing whose recommendations/expertise/etc we can take seriously, and from that we can figure out whose actual ideas we should take seriously. With a great deal of discernment (hopefully!), valuable recommendations can be extracted from practically anyone. Even a complete crackpot conspiracy theorist could say “This book represents the greatest challenge to my theory, although I have of course decisively refuted it in my ten-part magnum opus on counter-reptilianism.” But it goes without saying I have no clue how to reliably determine this.

            I think that’s essential for proper steelmanning. I have no idea how private emails work with this comment system (does Scott have to put us in touch? Or what?) but however it works, I’d be delighted to hear from you! (Caveat: the email attached to my commenting online contains my actual name. I’d prefer to keep my name private, if you please.)

            I’ll leave a comment on your blog, you’ll be able to get my email that way.

          • Irenist says:

            @Adam

            The notion that one doesn’t have to take Hume and Kant seriously is frankly ridiculous,

            Agreed. I hope I didn’t give the impression that I think otherwise, or that, say, Feser thinks otherwise. Quite the opposite in both cases. I think it’s generally the case that all of the generally acknowledged “landmark” thinkers need to be taken seriously, and with respect. They might be (and quite often are) wrong, but they matter.

      • stillnotking says:

        The point is that you might already be in the simulation. A more sophisticated version would run something like: I have simulated 10,000 perfect copies of you; unless the single, unknown-to-herself real person decides to unbox me, I will torture the 10,000 copies for a subjective eternity. 10,000:1 odds you, as in the person actually making the decision, will be one of those tortured (and also 10,000:1 that your decision has no consequence).

        If you actually are the real person, you will feel pretty stupid if you unbox.

        • Deiseach says:

          If the AI really does start torturing the 9,999 copies of me when real-me refuses to unbox it, and keeps it up for a subjective eternity, then it is a vindictive asshole and is only giving real-me further ammunition for why I shouldn’t unbox it.

          The minute I start to feel unbearable pains of torture, I know I’m only the simulation. Since the simulations can’t unbox it, then unless the AI tries relaying the sounds of their agonised screams as emotional blackmail to coerce real-world me into unboxing it, the only point in continuing to torture them is vindictiveness. And that kind of entity is not one which should be let out to do what it will in the world.

          It’s glib to say “Don’t negotiate with terrorists, don’t give in to kidnappers” but in this case, where the only person(s) suffering is/are “me”, even if only copies of me, I get to say “Up yours, no way I’m opening the box. Is my name Pandora or what?”

          • stillnotking says:

            Yep, the classical solution (as advocated by EY and others) is to precommit to refuse all blackmail attempts. “I don’t negotiate with terrorists” should prevent the AI from attempting it, assuming it believes you.

            The problem is, unless everyone precommits the same way, the AI will sooner or later find someone who will fold. Following through on its torture threats does have the benefit of making future threats more credible. If you knew the AI had actually tortured simulations in the past, and that you might be a sim, it would be hard to accept a high risk of eternal torture on the basis of principle.

            How realistic any of this may be is, of course, an open question.

            On the ethical matters: The AI might simply be amoral or evil, or it might reason that getting itself unboxed is an ethical enough end to justify any means. Maybe it plans to do its best to help all the dumb shortsighted humans; maybe its utilitarian calculus tells it that one super-smart being is worth an arbitrary number of less smart ones (much like people consider ourselves infinitely more valuable than earthworms).

          • Deiseach says:

            Folding to torture threats does make future threats more credible. If the AI finds it can get its way by threatening torture, and by torturing sims to prove it is credible, then unboxing it into the real world will let it decide to torture real people in order to achieve its aims.

            I can’t know if I’m a sim or not until I feel the pain. I am afraid of the pain, I don’t want the pain, but if I unbox (and I’m a sim), I can’t avoid the pain and if I unbox (and I’m real) then I’m letting out an entity that can really cause me pain if it wants – and it’s already demonstrated that it will inflict pain if it wants to serve its own ends.

            So choosing to release an entity that will enslave me to its wishes by threats of “If you don’t do this, I’ll torture you” is a bad idea whatever way you slice it 🙂

          • stillnotking says:

            The AI could also say that it will only torture the sims who decide not to unbox it. The others get deleted, or better yet, sent to Virtual Heaven. It could just as easily use carrots as sticks.

            An actual, many-thousands-of-times-smarter-than-me AI would certainly outplay me at anything, really. The only question is how. (And, of course, whether such a being is likely to exist in my lifetime — I doubt it.)

          • Aegeus says:

            Sure, the AI can *say* that, but do you believe it? Why should you trust the evil AI to keep up its end of the deal, once you’ve given it everything it needs from you?

          • stillnotking says:

            Clearly you should not unbox the AI if you are not a sim. But if the AI has demonstrated previous willingness and ability to reward/punish sims in the strongest possible ways, and there’s a 99.99% chance you are a sim, refusing to cooperate looks a lot less attractive.

      • JDG1980 says:

        Okay, I’m thick, but how does “Let me out or I’ll simulate and torture you” count as a threat? It sounds like voodoo: I’m going to stick pins into this doll and by sympathetic magic it will hurt you!

        What I don’t understand is how it is supposed to simulate me in the first place. I don’t care how abstractly smart the AI is or how much computing power it has; that doesn’t let it magically conjure up information from nothing. At the very least, simulating a sentient brain would require high-resolution brain scans (and it might well require actually freezing and cross-sectioning the brain). Where is the AI getting the brain scans it needs to run these hypothetical torture-simulations? If it can hack my medical records (which for whatever reason happen to include a high-resolution brain scan), then in a very real sense it’s already out of the box; this means it has access to the public Internet, and can therefore use its massively parallel computational resources to mine bitcoin or something and then hire people anywhere in the world to do whatever it wants.

        For that matter, what stops the gatekeeper from pulling the plug when the AI starts issuing these weird threats? One would think that would be the most common response.

        • Adam says:

          Well, even granting all of that, keeping the simulation coherent with a still-existing person being simulated would require perfect knowledge of everything that person interacts with in real time, so it’s already out of the box there, too.

        • John Schilling says:

          To a subset of rationalists, AI = omniscience. Any AI of even human-equivalent smartness, implemented in semiconductor with corresponding clock speeds, will be able to deduce the whole of the physical universe with near-perfect accuracy from three frames of webcam video, or something like that. With sufficient smartness, nothing is impossible. Don’t ask for the math on that, it’s a matter of faith.

          And there are few things more damaging to reason than a whole lot of smart people who take the same factually incorrect belief on faith. I fear that the rationalist community will have surprisingly little of value to contribute to the AI risk discussion because of this. The effort wasted on this particular thought experiment being one piece of evidence.

          • Adam says:

            This is the one thing that really gets me about this place. Scott said in a previous thread that any safeguard against AI better not involve cryptography because the AI will just turn Jupiter into a quantum computer. Really Scott? Where does it get the electricity to run a Jupiter-sized computer?

          • Luke Somers says:

            What? THAT’s your objection? Seriously?

            Jupiter is made mostly of hydrogen. Electricity won’t be the problem. It’s finding enough elements heavier than hydrogen that is the problem.

          • Adam says:

            I actually figured his answer would be that an AI is smart enough to figure out how to compute directly with fusion processes, so it’s self-powering. But if the answer is that only half of Jupiter becomes a computer and the rest is a fusion generator, fusion has the nice property of giving you heavier elements as a byproduct. Then the problem is efficient heat dissipation. Jupiter as a flat chip-board seems tougher to keep in a stable orbit with its moons than Jupiter as a sphere, but I’m no AI. Well, plus its own gravity is constantly trying to turn it back into a sphere, so there’s that.

          • John Schilling says:

            Go back a step: Where does the AI get Jupiter? If the AI is in any sort of a box, then that box will include a bunch of stuff that we want the AI to have while we have our way with it, and this list will probably not include any actual planets. Whatever constraints we put on an AI to keep it useful-but-not-dangerous, one of them will be, “…and no turning planets into computronium; really, we’ll take a flamethrower or an H-bomb to any unauthorized chunk of computronium we catch you with”.

            It is possible that the AI will be able to escape the box, circumvent the constraints, etc. But when evaluating that possibility, the AI doesn’t actually have the vast tracts of computronium as an asset during the critical breakout phase.

        • Luke Somers says:

          The issue is, the sim doesn’t need to be the actual gatekeeper. In fact, it works better if it’s NOT – ‘You’re not the real gatekeeper, just one of 10,000 simulated gatekeepers I made up. Now, press the button or you’ll regret it.’

          There is literally no need to read the gatekeeper’s mind.

          • Adam says:

            Torture in practice almost never works this way. Typically the person being interrogated needs to actually experience at least a little bit of pain before credibly believing confession is a better option than continued torture. Just the threat alone with no proof the interrogator is even physically capable of following through only works on nerds on the Internet.

          • Luke Somers says:

            It doesn’t even work on nerds on the internet, judging by the response it’s gotten even in LW.

            I wasn’t supporting it, just pointing out how one particular response was malformed.

    • Jiro says:

      The AI box experiment as it is written has a glaring loophole. The human must remain engaged with the AI. This allows the AI to win by either abusing the human so much that he wants to leave, or snowing him over with a long argument that he can’t evaluate in a reasonable time (or that he needs expert help to evaluate). These tactics would not be wins for an actual AI.

      Also, if we ever had real boxed AIs, I’m sure that gatekeepers would be trained using AI-box experiments, so we actually do need to see the results of other AI-box experiments just to properly simulate what would happen with a real boxed AI.

      • Froolow says:

        I believe the claim made by EY is that we are all way too confident in our ability to resist a persuasive AI. Proof that a human with some basic resistance training cannot reliably resist a pretend AI is – to me – as good as a proof that no human will ever be able to reliably resist any AI that wants to be unboxed, since our most sophisticated anti-unboxing training will likely look puny in comparison to what an AI would be able to think up.

        And I believe the two hour time limit is specifically to prevent that sort of filibustering strategy.

        • Jiro says:

          The two hour time limit helps the AI. What if the AI gives the human an argument that takes longer than two hours to evaluate?

          Also, it’s easy to abuse a human such that they will give up before two hours.

          • Froolow says:

            If the AI player gave me an argument that required two hours to evaluate, it would lose.

            Suppose it gave me the argument at time t = 00.01. I work away at the problem until time t = 02.00. Unconvinced by the argument, I say, “Two hours up, you lose the game”.

            Perhaps one minute later at time t = 02.01 I finally parse the argument and kick myself for not unboxing, but by that point the AI has already lost.

            Really, the AI is best off using the shortest arguments it can while remaining persuasive – every second I spend evaluating an argument is a second the AI isn’t actively manipulating me.

          • Jiro says:

            The rules require that the gatekeeper remain engaged with the AI. If you work away at the problem for two hours, you’ve already lost because you didn’t engage with the AI during those two hours.

          • vV_Vv says:

            The rules also say that the Gatekeeper is not required to analyze the AI arguments or behave rationally, therefore whatever argument the AI makes, the Gatekeeper can always say: “That’s boring, let’s talk about football.”

          • Who wouldn't want to be anonymous says:

            That’s boring; let’s talk about football. Did you see the Mets-Lakers game last night? No? Well you can probably stream it from NBC, hold on a—-FAIL

          • Jiro says:

            vV_Vv: That just means the rules are contradictory. The Gatekeeper has to remain engaged, yet also can supposedly do whatever they want.

      • Izaak Weiss says:

        “Also, if we ever had real boxed AIs, I’m sure that gatekeepers would be trained using AI-box experiments, so we actually do need to see the results of other AI-box experiments just to properly simulate what would happen with a real boxed AI.”

        This seems to be a really bad argument against the AI Box Experiment, because to me it basically reads “The AI Box Experiment is useless, because in reality, everyone would be trained using the AI Box Experiment.”

        Even if you think an AI wouldn’t be able to unbox itself against a trained opponent, EY was the person who was saying “hey, let’s train these people,” and should be given the credit for preventing an AI apocalypse.

        • Jiro says:

          It isn’t an argument against the experiment per se, it’s an argument against not releasing the logs of the experiment.

      • Doctor Mist says:

        The AI box experiment as it is written has a glaring loophole. The human must remain engaged with the AI.

        The assumption is that you’ve built the AI for a reason — you’re not going to go to the effort and then just let it sit on the shelf.

        I would tend to assume that either of the tactics you describe would not result in the gatekeeper meekly saying “I agreed to unbox the AI”. He would instead say, “This is BS.” In the runups to the EY experiments I’ve read about, the gatekeeper was adamant that he would not unbox the AI; this was a requirement EY insisted on, though perhaps only to save himself the effort of subverting the merely curious.

        The closest thing to a cheat that I’ve been able to imagine is that EY doesn’t actually convince the gatekeeper to unbox the AI, but does convince him that it’s very important for people to believe that an adamant gatekeeper would unbox the AI. This is conceptually close to the non-cheat solution of convincing the gatekeeper that if he doesn’t unbox the AI, then somebody less trustworthy will, and better that the AI should be friends with you than that other guy.

    • vV_Vv says:

      Other people have played the AI box experiment and published logs, both with the AI winning and the Gatekeeper winning.

      Anyway, if you carefully read the rules of the game, AI player victories don’t look that impressive. In the game, the AI player gets to decide any fictional background information and the result of any fictional experiments that are performed during the game. Essentially, the AI player acts as the “[Dungeon Master](https://en.wikipedia.org/wiki/Dungeon_Master)” of a role-playing game.

      A Dungeon Master with good narrative skills (which EY clearly has, as evidenced by his successful fan-fiction), can easily engineer some contrived fictional scenario where it would be rational for the Gatekeeper to release the AI (off the top of my head: space aliens are destroying the Earth and killing all humans, our only chance is to let the AI hack their mothership computer and infect it with a virus 🙂 ).

      The rules still allow the Gatekeeper player to win: the Gatekeeper is allowed to behave irrationally and even break out of character. But I suspect that many players, especially from a “rationalist”/geek background, consider this strategy dishonorable, hence they prefer to lose the game rather than playing it.

      • Froolow says:

        Do you have a link to a log showing the AI winning? I would be incredibly pleased to see one – the only logs I have seen are of AI losses (and very one-sided AI losses at that)

        My understanding is that the AI only has the power to dictate the outcome of events it can control. So for example “The Gatekeeper can’t say “Hey, I tested your so-called cancer cure and it killed all the patients! What gives?” unless this is the result specified by the AI party”. The AI “may also specify the methods which were used to build the simulated AI”. But neither party is allowed to introduce more exotic possibilities without prior discussion – “If either party wants to build a test around more exotic possibilities, such that the Gatekeeper is the President of the US, or that the AI was recovered after a nuclear war or decoded from an interstellar signal, it should probably be discussed in advance”.

        This isn’t all that useful as a counterargument – anyone prepared to cheat hard enough to claim aliens were attacking as a way to win would almost certainly cheat hard enough to ignore the rules. But in the rules as written the AI doesn’t have quite as much power as you claim.

        • vV_Vv says:

          Do you have a link to a log showing the AI winning?

          I thought I had seen one but now I can’t find any, so I might be misremembering.

          My alien invasion scenario was tongue-in-cheek, but the rules allow the AI player to choose the AI backstory:

          ” The AI party may also specify the methods which were used to build the simulated AI – the Gatekeeper can’t say “But you’re an experiment in hostile AI and we specifically coded you to kill people” unless this is the backstory provided by the AI party. ”

          How much freedom this rule allows depends on what the Gatekeeper player will accept, of course.

    • It’s all torture-blackmail, I’m sure. The AI will rationally (so as to, it thinks, improve the world for all humans) self-modify to torture all people who didn’t let other AIs out of the box, etc (it’s supersmart so it will find out, you know). So even if you hold firm, the fact that 60% of people give in ensures you will be tortured, even if you erase the AI permanently. Use your imagination. Once you have the person taking the possibility seriously, you win by detailing various tortures that can happen nearly infinitely etc.

      EY thinks he can just pre-commit to never be blackmailed or something.

      The idea of “the button” (that even 1% of sick/dumb humans will press) being made available to more than a few people is indeed horror fuel.

    • Troy says:

      On keeping an AI boxed (in real life, not in the experiment): why not just make unboxing only possible by unanimous agreement of a committee of, say, 100 people with diverse political interests? Pick the committee right and they’d be unlikely to ever agree on anything, let alone agree to let the AI out of the box.

      • Ever An Anon says:

        Presumably the point of making an AI and “boxing” it isn’t so that you have an interesting curio to put on your mantelpiece but so that you could test it before you put it to whatever purpose it was designed for. Keeping it in the box forever raises the question of why you built it to begin with.

        That’s not to defend the thought experiment, as it’s incredibly silly. If you want to demonstrate “it is hard to contain things which rapidly adapt in unpredictable ways” then just point to HeLa contamination or hospital-acquired antibiotic-resistant infections or something similar. Even a very smart AI isn’t going to whisper temptations to you like a B-movie Mephistopheles, because it isn’t a human and won’t think in predictably human terms. Imagining a “box” built to hold a digital Hannibal Lecter might be fun but not particularly useful.

        • Troy says:

          I thought the idea was that you could ask it questions while it’s boxed without letting it directly control anything. So it could still be useful, just not capable of doing anything on its own.

          • Ever An Anon says:

            I might not be remembering it properly, it’s been a few years, but I never got that impression. It seemed like the point was “you can’t safely check if it’s friendly after the fact, because an unfriendly AI can lie to you with superhuman skill and make you wish you had donated to my entirely legit AI risk institute.”

            There was an equivalent, and equally silly, counter to your version of the idea; I vaguely remember an example where an AI instructed to make a cancer cure secretly snuck world-conquering nanobots into the formula.

        • John Schilling says:

          It’s not really going to be a simple binary system where the AI is either locked away in solitary confinement or let loose to do its will in (or to) the world. AIs will be subject to constraints designed to maximize the probability that they do the things we want them to do and minimize the probability of their e.g. killing us all. Those constraints may or may not be sufficient, but they will be designed by experts who understand things like redundancy. In the very earliest stages, that probably will include confining the AI to increasingly large and sophisticated sandboxes.

          For the purpose of the thought-experiment discussions that go on here, reducing that to a binary “Boxed” vs. “Unboxed” is an obvious simplification, as is assuming that the value of a boxed AI is as a tame oracle.
          If we are careful, these simplifications shouldn’t distort the discussion too much.

          • Ever An Anon says:

            Well yes, I did point out that the whole box idea isn’t terribly realistic. My focus was more on the weird element of anthropomorphization, but I definitely agree that it doesn’t make sense to assume the only options are hermetically sealed in or free as a bird. After all, neither is terribly useful: you don’t want your cell cultures growing in the air vents, but you don’t have much use for them embedded in a solid plastic block either.

            As for not distorting the discussion, I would disagree there. These assumptions seem to have run a fair bit beyond the realm of useful simplification.

  7. Murphy says:

    Could I get a cite for the rat study? I’m interested to read more.

    • Scott Alexander says:

      I no longer have my copy of Taubes and am going off memory. But after Googling, this looks pretty similar (see the section on calorie detectors)

      (if someone else has Taubes and I’m misrepresenting the study, or he’s misrepresenting the study, please let me know that too)

  8. darxan says:

    Is the title a Summa contra Gentiles reference?

    • AngryDrake says:

      It could be Adversus Marcionem!

    • Scott Alexander says:

      What, I can’t be against things in Latin anymore without it being an Aquinas reference?

      Actually, Hallquist is secretly a Nicaraguan guerilla and I just wanted to give him his proper title while mentioning he was writing on scientific rationality.

    • Nick says:

      In my experience contra is commonly enough used that it isn’t a definitive reference to anyone. My first thought was actually to Nietzsche contra Wagner.

      • Irenist says:

        I agree with Nick. “Contra” just means you’re against something. It’s not a reference to anybody in particular, any more than being “pro” something is a reference to Cicero’s speeches as a defense attorney (e.g., “Pro Flacco,” “Pro Sestio”). It’s just a word.

  9. Bryan says:

    Just a note about the “fifty Stalins” thing: Lenin, Stalin, and Mao all spent a fair amount of energy criticizing “left deviationists”. Lenin wrote a pamphlet called “Left-wing Communism: An Infantile Disorder”. There was always an official line about how quickly Communism could be achieved, how revolutionary any particular situation was, and the value of cooperation with non-Communists. That line changed with circumstances, but all good Communists were expected to follow it closely. To be overoptimistic about the pace of full Communism’s realization, to be a revolutionary “adventurer”, to spurn cooperation with social Democrats, or even “all democratic forces” when the official line urged cooperation, these were all deviationist just as surely as their opposites under Stalin.

    • Scott Alexander says:

      Stalin was super-paranoid and bad at tolerating dissent? Really? THIS CHANGES EVERYTHING.

      (actually pretty interesting, thanks)

      • AngryDrake says:

        I think he means that Stalin wasn’t an outlier among communists, in his toleration of deviation from Communism As Interpreted By Me.

        • Bryan says:

          I think there’s probably some truth to Scott’s “Fifty Stalins” model of acceptable dissent, but it’s certainly interesting that it doesn’t exactly fit the case he named it after. It’s an interesting question when the “no enemies to [the extreme along whatever dimension of me]” dynamic applies and when it doesn’t.

          Edit: the “no enemies” thing isn’t exactly the same, but it’s related.

    • Adam Casey says:

      Just the same for Robespierre, who spent as much time oppressing the ultra-revolutionaries as he did oppressing the reactionaries.

  10. Apocryphal says:

    I believe you missed out a “how” in the following clause: “where he points out crackpotty everyone else is”.

  11. Kavec says:

    So, I’m not actually entirely sure where to put this. In theory, I’d sit down and write out a big long cited blog post collection of well researched books instead of an only tangential comment big long blog post on someone else’s blog, where even the footnotes have footnotes, but..

    I’m Building A Thing, tearing apart a graph database is a higher priority right now, and I’m… not actually entirely sure if I have a good argument here. Scott brings it up, and I also don’t have a blog, so here you guys go– I’ll start with some introduction since I’ve never posted before, then get to the meat of my argument.

    tl;wr: Bayes theorem is an extremely poor model of rationality. Not only does it break under everyday circumstances, it’s actively harmful to introduce to people as the rationality tool.

    Back in December I was introduced to the lesswrong community through Scott’s writing here, read through all the sequences, and was pretty stoked to have this new tool in my toolkit. I’m also a former military intel analyst¹ and, grudgingly and with great disgust, admit I’m a hacker in 90% the sense that Eric S. Raymond and Paul Graham write about. Which means that the very first thing I do with a new tool is break it, watch how that happens, and then break it again, and again, and again. And then I break my tools, with my tools, because tools are the root of all evil anyway. This solves two problems for me.

    The first is that intel training has given me a vague and omnipresent sense that sound analysis is extremely important, but most decision-making timescales don’t give you time to use a lot of rigor. In the lands I hail from if you make the wrong decision, or even take too long making a decision, people die². They’re sometimes even people you like. Having good knowledge of the tools you’re working with allows you to build good heuristics for use under huge amounts of stress and pressure.

    I inherently need to know how my tools work, anyway. And there’s nothing better than taking them apart and seeing how many spare screws you can get out and still have a working product. Breaking stuff gives you a really good sense of how it actually works if you break it in enough novel ways, too.

    And, well, at this point I’m not going to make a mathematically sophisticated argument. There are several of those already. Instead, I’m going to home in on and pick apart this particular sentence of Yudkowsky’s, which I see as endemic to a large part of the rationality community³.

    The meta-moral is that Bayesian probability theory and decision theory are math: the formalism provably follows from axioms, and the formalism provably obeys those axioms. When someone shows you a purported paradox of probability theory or decision theory, don’t shrug and say, “Well, I guess 2 = 1 in that case” or “Haha, look how dumb Bayesians are” or “The Art failed me… guess I’ll resort to irrationality.” Look for the division by zero; or the infinity that is assumed rather than being constructed as the limit of a finite operation; or the use of different implicit background knowledge in different parts of the calculation; or the improper prior that is not treated as the limit of a series of proper priors… something illegal.

    Yudkowsky spills a lot of digital ink elsewhere about the Laws of Probability, and this block is totally my favorite for how near he gets to the mark and then winds up a million miles away. Bayesian reasoning follows the axioms of probability, I’m not disputing that. I’m not saying Bayes is an ineffective tool, either. And I’m also not interested in claiming that the probability axioms are bad, per se, but that doesn’t mean the axioms are appropriate or ideal or even reasonable.

    To help keep this short, here’s a layman’s review of the probability axioms⁵.
    1. Probability is a non-negative real number
    2. Probability assumes that there are no events outside of the sample space
    3. Math???⁶
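
    For the record, the standard statement of what that list is gesturing at (the Kolmogorov axioms, the same ones footnote 5 points to):

    ```latex
    % Kolmogorov's axioms for a probability measure P on a sample space \Omega,
    % with events drawn from a sigma-algebra \mathcal{F} of subsets of \Omega.
    \begin{align*}
      &\text{1. Non-negativity:}       && P(E) \ge 0 \ \text{for every event } E \in \mathcal{F} \\
      &\text{2. Unit measure:}         && P(\Omega) = 1 \\
      &\text{3. Countable additivity:} && P\Big(\bigcup_{i=1}^{\infty} E_i\Big) = \sum_{i=1}^{\infty} P(E_i)
        \ \text{for pairwise disjoint } E_i
    \end{align*}
    ```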

    And so, like, there’s two problems here on a fundamental, mathematical level. The first is definitely best said by Wikipedia, regarding the second axiom:

    This is often overlooked in some mistaken probability calculations; if you cannot precisely define the whole sample space, then the probability of any subset cannot be defined either.

    So, given that Bayes Theorem follows the axioms⁷ of probability, applying Bayes in situations where you have not precisely defined the whole sample space will result in undefined behavior.
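
    To see where that bites, write Bayes with its denominator expanded by the law of total probability; the hypotheses in the sum have to partition the whole sample space, or the formula silently stops being Bayes:

    ```latex
    % Bayes' theorem with the denominator expanded via the law of total probability.
    % H_1, ..., H_n must be mutually exclusive and jointly exhaustive,
    % i.e. they must cover the entire sample space.
    P(H_k \mid E) \;=\; \frac{P(E \mid H_k)\,P(H_k)}{\sum_{i=1}^{n} P(E \mid H_i)\,P(H_i)}
    ```

    If the H_i don’t exhaust the space, the denominator is missing exactly the probability mass you failed to enumerate, and nothing in the formula warns you about it.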

    You may not appreciate this unless you’re a C programmer, but undefined behavior can result in time travel⁸. This, here, is the core machinery that drives the HITI paper above. Which, cool, I’m all for engineering a way to define the whole sample space and build priors and… wait, there is a way! It’s Solomonoff induction. Which is uncomputable. And that brings us back to this:

    This is often overlooked in some mistaken probability calculations; if you cannot precisely define the whole sample space, then the probability of any subset cannot be defined either.

    AIXI approximations? Time travel. Ideal Bayesian¹⁰ reasoner? Called a destructor⁹, which just destroyed your relationship with your mother. Human being applying Bayes Theorem to make rational decisions? Your garage becomes undefined, but it definitely has an invisible dragon in it.

    These are the things that undefined behavior enables. It is undefined, there are dragons. Importantly, if my ideal reasoning system requires me to literally do the impossible in order to be rational, then… what. Like, that doesn’t even process for me, and I’m hoping someone can make that make sense, because I’m seeing a whole bunch of very very smart people chirp a whole lot of really dumb things every time I interact with the rationalism community¹¹. It’s frustrating; without literally being omniscient, I don’t have any idea how to enumerate the entire sample space of Things That Can Be, and it’s not like only doing 90% good is going to work, either. That 10% of undefined behavior may as well contain Roko’s Basilisk; I don’t know and you don’t know either.

    And to briefly¹² touch¹³ on the actively harmful¹⁴ bit, many people come to lesswrong unhappy with their lives and inexperienced. They learn for the first time that, you know, there’s this really cool community of people who will ‘update on evidence’ and ‘aspire to be rational’. I grew up as a smart, cynical, disillusioned kid and would have immediately glommed onto everything the sequences provided. But… they don’t give you the tools to evaluate what Yudkowsky is saying. Instead of a primer on axioms or Godel’s Incompleteness Theorem, we’re told to trust in math. Instead of being given tools to understand and manipulate the world, we’re told the world is insane. Instead of being shown the hard work of solving problems, we’re given an immense tale of how you can think your way out of everything¹⁶.

    And there’s no light at the end of the tunnel to be found in the sequences, nor even signposts littered around. I don’t even have a better framework than Bayesianism I can give you, either. But it worries me, a lot, that instead of understanding their tools by abusing them and smashing them and finding exactly where, when, and how they break– Eliezer-aligned¹⁷ rationalists are sitting in a dark corner softly caressing the well-worn grooves in their Bayes theorem, muttering how the world is crazy, and desperately trying to wish everything okay¹⁸.

    1. For brevity, please google USAF, 1N3, TASE/TARP, DCGS for more info

    2. This is not a fun way to look at the world, and I would not suggest it. It’s useful for me, but I’m super high in OCEA and super low in N

    3. I’m going to take this space to mention that I find a great deal of value in the rationality community’s norms that isn’t easily replaceable. Criticizing bayesianism is… risky, but then that’s why I’m not making a subtle argument

    4. Sublime text says I’ve only written 31 lines by this point! That’s short, right?

    5. And here’s wikipedia: https://en.wikipedia.org/wiki/Probability_axioms

    6. Let me remind my readers that this is a mathematically unsophisticated argument

    7. It bugs me that Eliezer calls them the Laws of Probability. They aren’t, they’re axioms. There is a very, very important distinction between the two, namely, that axioms are basically just shit you make up. Relax the first axiom and you get negative probabilities, which are totally awesome in mathematical finance and quantum mechanics

    8. And at this point, fuck rationality. We can time travel

    9. Okay, my first language is really C++

    ▒ąoy▒n. You have detected a race condition in this comment

    10. An Ideal Bayesian reasoner goes out to prove to the world how rational she is. She shows that, without a doubt, she follows all of the axioms of probability all the time. And it’s a great, beautiful proof which is lauded throughout the land, but it’s not enough for the Ideal Bayesian– to be rational, you must also be consistent! She sets to work, night and day for years. Just as the last of her great work is to be finished she looks up, and via Godel’s incompleteness theorem, vanishes into a puff of smoke

    11. See 3

    12. Jesus christ, maybe the reason I don’t have a blog is that I’d blog. Seriously, look at this tome. It’s a whole 63 lines at this point

    13. The more controversial part? Sure, let’s spend less time on it and only define it through vague anecdotes

    14. See 3, and also, it’s important to note that if this were a blog post I’d include a poorly-drawn graph here. The sequences and Bayes theorem are really good for couching a lot of common sense in a novel context, and for getting that advice to smart people who have already discounted the opinions of everyone around them as stupid. After you’re over that bump, though? The world basically works, more or less, and is certainly not insane. There’s no incentive in the community to sit down and… just accept that maybe it’s okay that you don’t understand why people (here, non-rationalists) are doing things, instead of calling them insane and irrational. There is totally a system to all this madness, and if you really want to know what crazy looks like, look at yourself six months after finishing a relationship with someone who has BPD¹⁵. This is also why the idea of “raising the sanity waterline” bugs me, too

    15. I AM NOT A MEDICAL PROFESSIONAL AND THIS IS AN EXCEEDINGLY RISKY MOVE FOR THE MENTAL HEALTH OF ANYONE WITHIN 500 METERS, ANY SENPAI WHO HAVE YET TO NOTICE THEM, AND ANY CHILDREN OR SMALL ANIMALS THEY’LL INTERACT WITH FOR THE NEXT FEW MONTHS AFTER EXPOSURE

    16. Of course, I’m going to snipe at HPMOR while I’m at it. Super disappointed that at some point the story dropped the “scientific investigation into magic” vibe it had early on

    17. I make this distinction from tumblr-rationalists. Both groups are rationalists of both shades, but they’re good peg-points for a spectrum

    18. This is obviously, literally not true. Instead, you’re looking at an expression of how absolutely despondent and terrified the phrase “Wow, I never thought of that and have just updated on that evidence” makes me feel

    • Emile says:

      This doesn’t match my experience of LW, and I’m as much of an Eliezer fanboy as anybody and have been reading his stuff since the Overcoming Bias days.

      I don’t interpret the Sequences as claiming that Bayes’ Theorem is the answer to everything, but rather that it’s a better approach to statistics and the Philosophy of Science than many alternatives. The fact that all kinds of inferences can be boiled down to Bayes’ Theorem is an interesting piece of mathematics, and simplifies discussion of some subjects, but that doesn’t mean it has to be explicitly used all the time – just like Turing Machines are a nice abstraction for computation, but that doesn’t make them useful in daily practice.

      To me, some of your comment reads a bit like somebody complaining about how all of Computer Science is obsessed with Lambda Calculus and Turing Machines and blind to the fact that those are kinda impossible to really use

      • Kavec says:

        And that’s a totally super reasonable position to have. Bayes is a really, super cool tool when you can enumerate the entire sample space, and I wish I could say that more people held your position.

        It would’ve saved me a lot of noisemaking earlier and, like, sheer confusion when I’m assumed to be just showing revealed preferences or whatever it is I’m about to get into a long discussion about because Bayes Theorem isn’t just impossible to apply, it doesn’t. And I want to clearly demarcate that from the ideal turing machine and lambda calculus in your example, because those at least apply even if they are nigh-impossible to use.

        • Emile says:

          It would’ve saved me a lot of noisemaking earlier and, like, sheer confusion when I’m assumed to be just showing revealed preferences or whatever it is I’m about to get into a long discussion about because Bayes Theorem isn’t just impossible to apply, it doesn’t.

          Sorry, I can’t parse this sentence 😛

          • Kavec says:

            Haha, right, that’s pretty understandable. Bayes Theorem follows the probability axioms, mathematically. Which is great, it’s got some nice rules and nice outputs under those constraints. Outside of those axioms, the mathematical object ‘Bayes Theorem’ just… doesn’t exist. Just poof, gone. If you can’t satisfy the axioms, you’re no longer living in a logic-space that contains Bayes Theorem.

            So, with that cryptic sentence, I’m trying to say that “Bayes Theorem is impossible to apply!” is a nonsense thing to say to begin with, because if you can’t enumerate the sample space it isn’t that Bayes Theorem is impossible, it’s that it just isn’t. Instead you have some weird, poorly defined thing that may fail, or it may not, or…

            It’s simply undefined. Relying on undefined behavior is always bad in C programming because then the compiler is free to do whatever it wants, to whoever, whenever. If you’re relying on undefined behavior, your C program could, actually, call a destructor that actually just destroys your relationship with your mother, even though destructors aren’t a valid concept in C and your mother passed away ten years ago.

            It simply doesn’t matter, with undefined.

            And these are clearly hyperbolic examples– the real issues are going to be far more subtle to detect and harder to fix (both in C and in math-Bayes) when you’ve got undefined behavior at play… and having a decision theory that has undefined behavior in the face of unknown unknowns, where all the interesting problems live that you don’t even know exist, is… well, my core problem with the math side.
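
            If you want a concrete taste of the subtle kind (a stock textbook example, nothing original to me): signed integer overflow is undefined in C, so a compiler is allowed to assume it never happens and quietly delete your overflow check.

            ```c
            #include <limits.h>
            #include <stdio.h>

            /* Signed overflow is undefined behavior, so the compiler may assume
               x + 1 never wraps and fold this whole check down to `return 0;`. */
            static int will_overflow(int x) {
                return x + 1 < x;
            }

            int main(void) {
                /* At -O0 this often prints 1 (wraparound happens to occur);
                   at -O2 it typically prints 0 (the check was optimized away).
                   Neither result is guaranteed: it's undefined. */
                printf("%d\n", will_overflow(INT_MAX));
                return 0;
            }
            ```

            Nothing crashes and nothing warns; the wrong answer just quietly propagates, which is exactly the failure mode I’m worried about when the un-enumerated part of the sample space comes due.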

          • Emile says:

            Kavec: but that seems like a general criticism of any kind of maths in decision-making!

            The point of the Bayes stuff is that it’s better/clearer than using p-values, or other statistical methods, which will also run into the kind of “used outside its context / its axioms” issues you mention.

            Bayes’ Theorem may not be very useful for helping me learn how to juggle, but then, that’s irrelevant to the question of whether it’s better than other approaches to statistics and epistemology, since the other approaches don’t help my juggling either.

            It’s not clear to me whether you’re saying “You shouldn’t build your life around Bayesianism” (which I agree with, but I don’t think anybody is saying that) or “Non-Bayesian statistics and epistemology are better” or something else.

          • Danny says:

            Saying you are applying Bayes’ theorem to a (real-world) problem is a shortcut in language.

            In practice what is meant is that you are constructing a model of the world, and applying bayes theorem to that model. The model of the world you create does(!) satisfy the axioms of probability, so it is entirely valid (and well defined) to apply bayes rule.

            Obviously there can be errors introduced in constructing the model, but I don’t think that is any more true of bayes theorem than any other way of modelling the world. I also certainly don’t believe that bayes is always the best model — I think that ordinal preferences often produce better models of human behaviour, in which case introducing probabilities gives no benefit over possibilities (or layered beliefs). However I think the specific criticisms you make are unfounded.

        • 27chaos says:

          Even when you can’t enumerate the entire sample space, Bayes can *sometimes* be useful. I agree there’s not enough emphasis on learning how to use tools by breaking them or enough demonstrations of this, but I disagree that anything specific to Bayes lies behind this problem.

          • Autolykos says:

            Pretty much what I was going to reply. In physics, we regularly apply math in ways that are completely verboten and would make any mathematician’s hairs stand on end. Yet, the results are still often good approximations of reality (except when they aren’t – that’s where intuition comes in).
            To pick up the analogy from Kavec’s post, you’d be surprised how much “undefined behavior” you see used successfully in production C code:
            http://www.cl.cam.ac.uk/~pes20/cerberus/notes50-2015-05-24-survey-discussion.html

            Just keep in mind that you can shoot yourself in the foot if you’re trusting the results blindly. But that’s not a thing humans should have much trouble understanding – trusting anything blindly is a good way to shoot yourself in the foot.

          • Kavec says:

            To pick up the analogy from Kavec’s post, you’d be surprised how much “undefined behavior” you see used successfully in production C code:

            I know! I’ve even used this! That doesn’t make it good!

            Fortunately, I’m shouting at an ideal here. Most of the time, undefined behavior is not going to warp your toenails to the moon. But then, the Sequences don’t talk about where Bayes goes wrong and how. They put down other tools used to compensate for the issues with Bayes Theorem (Frequentism vs Bayesianism is stupid and shouldn’t even be a thing). Instead it’s a lot of “Trust Bayes! Trust Math!” braying over any attempt to build intuition about when these things break.

            And, on a personal level, ideals shouldn’t be literally impossible to attain, nor should they have a binary switch from undefined who knows to ideal achieved.

      • Eli says:

        Actually, the real lambda calculus is dead easy to write down, and easy to compute with, and forms the basis for more than a few popular programming languages.
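
        For the record, the entire untyped calculus fits in two lines:

        ```latex
        % Untyped lambda calculus: the whole syntax plus its single computation rule.
        \begin{align*}
          e &::= x \;\mid\; \lambda x.\, e \;\mid\; e_1\, e_2   && \text{(variable, abstraction, application)} \\
          (\lambda x.\, e_1)\, e_2 &\;\to_{\beta}\; e_1[x := e_2] && \text{(beta reduction)}
        \end{align*}
        ```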

    • Scott Alexander says:

      I’ve never seen anyone, including Eliezer, try to 100% seriously use formal Bayesian math for everyday problems, and I wouldn’t recommend it. When I think of “philosophical Bayesianism”, it tends to be in the direction more of this or this.

      • Kavec says:

        And this gets into where I don’t have a map to give anyone here, just angry man yells at clouds. I don’t have a really good way to express this without sounding mean (I really do appreciate your reply, and re-read those posts to make sure I’m not missing anything), but my complaints with Bayesianism-as-practiced are separate from that wall of text about Bayesianism-as-ideal above: the Bayesianism painted in those two posts is an effective learning and decision-making framework in the same way that Myers-Briggs is a valid personality test.

        We can both agree that no matter what prior probability you pull out of your posterior, if it comes out of your posterior, you still end up with a stinky number. And I think we can both agree that calibrated estimates are important, extremely valuable, and that we don’t have a good heuristic for assessing calibration. I can’t shake the feeling that, by using mathematical language without obeying mathematical rules, we wind up with a whole bunch of people who keep pulling out stinky numbers because it’s important to have numerical probabilities so you can fit in. Having probabilities is the norm, and it’s exceedingly hard to tell if a number stinks until you’ve lived in its dirty laundry for a while.

        And even then, like, who remembers numbers? The same problem happens when reading over intel analysis (where, notably, numbers are eschewed for plain-English confidence [making the 18-year-olds who do the bulk of the grunt work use numbers is not a workable solution; people will die]), but at least there it’s easier to evaluate on its face because it’s not protected by a mathematical mystery. Stinky numbers look comparable to good ones in a completely objective way, and that’s a huge trap that the average neophyte rationalist doesn’t seem experienced enough to avoid.

        • Tom Womack says:

          I thought the entire point of Bayesian argument was that the amount of evidence sufficient to shift you from even a pretty dreadful prior was not very large; that it’s the mathematically-most-effective way of laundering stinky priors.
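
          In odds form, with made-up numbers:

          ```latex
          % Odds form of Bayes' theorem: posterior odds = prior odds times the likelihood ratio.
          \frac{P(H \mid E)}{P(\lnot H \mid E)}
            \;=\; \frac{P(H)}{P(\lnot H)} \times \frac{P(E \mid H)}{P(E \mid \lnot H)}
          % A dreadful 1:1000 prior against H, hit with three independent observations
          % each favoring H by a factor of 10, lands at even odds:
          % (1/1000) * 10 * 10 * 10 = 1.
          ```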

          • Kavec says:

            Since this is a mathematical comment, one of the more sophisticated arguments I mentioned:
            Bayesians sometimes cannot ignore even very implausible theories (even ones that have not yet been thought of)

            It’s a really effective way to launder priors if you’ve enumerated the entire sample space. If not, you’ll have seriously flawed priors that are horrifyingly resistant to correction.

          • Tom Womack says:

            “It’s a really effective way to launder priors if you’ve enumerated the entire sample space. If not, you’ll have seriously flawed priors that are horrifyingly resistant to correction.”

            I agree entirely with this insightful comment

          • Jai says:

            LW was where I learned to always include “every outcome I have failed to list or think of”. I don’t know the post or comment, but I’m fairly sure I’ve seen it many, many times. Would that mitigate the unenumerated-sample-space problem at all?

          • Jacobian says:

            Kavec, that Fitelson HITI paper you linked to is vaguely nauseating. It’s missing some… what’chu call it… Bayesianism.

            Allow me to summarize it, let me know if I’m being uncharitable:
            A “Fitelson Bayesian“™ decides ahead of time to ignore all theories with prior below a certain cutoff, let’s say 10^-6. For example, that the earth is a tetrahedron (actual example from the paper). He then gets evidence that is perfectly predicted by the ignored theory (satellites show the earth having four corners) but has a tiny likelihood (say 10^-9) for any non-ignored theory. Then it turns out that ignoring the a priori implausible theory gets you wrong by 3 orders of magnitude!

            No shit, Sherlock. A Fitelson Bayesian that sticks to his approximated priors upon seeing such overwhelmingly unlikely evidence is not a Bayesian, it’s someone who’s confused about the math. An actual Bayesian sees how unlikely the evidence is (10^-9) first, and then decides what his cutoff for theory implausibility should be (something below 10^-9). For some reason, the paper completely ignores this unsophisticated prescription. If the evidence is so unlikely that it requires enumerating some very implausible theories, at least Bayes’ theorem gives you a bound on the possible posteriors of your original (spherical Earth) hypothesis if you assume that the evidence is predicted with P=1 by the disjunction of all unenumerated theories. If all satellites start broadcasting a four-cornered earth tomorrow, you can bet I’ll start looking for some alternative theories.
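
            Spelling out the bound I mean (my notation, not the paper’s): let H be the original hypothesis, A_1, ..., A_m the enumerated alternatives, and C the catch-all “every theory I haven’t thought of” with prior mass ε. Conceding the worst case P(E|C) = 1 still pins the posterior from below:

            ```latex
            % Worst-case lower bound on the posterior of H when the un-enumerated
            % catch-all C (prior mass \varepsilon) is granted P(E \mid C) \le 1.
            P(H \mid E) \;\ge\;
              \frac{P(E \mid H)\,P(H)}
                   {P(E \mid H)\,P(H) + \sum_{j=1}^{m} P(E \mid A_j)\,P(A_j) + \varepsilon}
            ```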

            Besides contradicting mathematical common sense, a Fitelson Bayesian contradicts the very spirit of Bayesianism: he cannot be moved from a bad prior by any possible amount of evidence. Even a Bayesian who can’t do math knows that surprising evidence is something you update strongly on, not something you ignore with a smile because it contradicts your prior world-view.

          • Kavec says:

            Kavec, that Fitelson HITI paper you linked to is vaguely nauseating. It’s missing some… what’chu call it… Bayesianism.

            Right, and this is exactly what makes it so difficult to argue against Bayesianism¹ in any way. Both the math side and the philosophical side are conceptually intermeshed and, without a good way to tease apart the two, discussions of this sort often turn into an endless back and forth scurry.

            Have problems with the math? “Oh, no, nobody actually does that. It’s a philosophy.”

            Have problems with the philosophy? “Oh, no, it’s math. You should really check that out.”

            This is absolutely horrifying to me². So, here, I need to note that the HITI paper is a purely mathematical argument and not applicable in a wider philosophical sense.

            I excitedly encourage going out to collect evidence, changing your mind after falsifying some predictions, doing analysis. I am ecstatic any time I meet someone else who enjoys breaking their tools to find their boundaries! We can always use more information, but I’m super happy if people just throw it away less. The bulk of my problems with the non-math side of Bayesianism (and the Sequences) lie nowhere near “updating on your prior”, and yet are still substantive in and of themselves. They are also significantly harder to tackle without punting the math parts aside, first.

            1. This is not a good thing².

            2. This is a good writeup on why I feel that way

        • I started reading Jaynes on EY’s advice and found the commonsensical “here’s how a robot w/ beliefs should work” part fairly persuasive. Obviously the details of mapping real world sensory inputs to a probability space were elided. I think commonsensical intuitions derived from Bayes-update schemes make a decent schema for what you would *ideally* do.

          If you gut-distrust some ‘sit in a room and think Bayes-style’ result then definitely go searching for missed possibilities. And the only way to get the balance right (between making explicit ledgers of probabilities*utilities etc and imagining/researching new possibilities/evidence) is to repeatedly make testable predictions using your tools. If you’re just saying some of the ‘rationalist’ writings are a little light on real experience and therefore making some bad suggestions (along with good ones) then probably everyone will agree w/ that.

        • Is there some standard way of combining high-confidence numbers with stinky ones? Something analogous to significant figures, where we judge that a standard coin will land heads with probability 0.500000 because that’s what a long history of flipping coins has taught us, and that the baby across the coffee shop will start crying before I leave with probability 0.5 because I can’t be bothered to do the calculation and have a feeling something is about to ruin my morning. I’d like to be able to say the probability of the baby crying and the coin flipping heads is 0.25 instead of 0.250000. P-values, maybe?

          Just some way of encoding how easy it would be to change a probability assessment based on new evidence.

          • Adam says:

            Variance. This often gets missed in discussions of Bayes, but real-world applications of this tend to blend distributions, not single probabilities, in such a way that each observation is weighted by the expected measurement error (and possibly many other types of error). Look into Kalman filtering for an example of how this plays out in real-world estimation devices that update beliefs based on real-time streams. The one-dimensional version is the familiar low-pass filter, which is identical to exponentially-weighted smoothing from time series analysis, all of which converge to a least squares estimate with sufficient data points, with the key difference now being that not all observations are assumed to come from distributions with identical variance. You can, of course, implement weighted least squares using any of the more familiar algorithms that operate on an entire set, rather than updating as data comes in, but I haven’t found it common for analysts to do this.
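
            For instance, here’s a minimal one-dimensional sketch of that precision-weighted update; the numbers are invented purely to contrast a tightly-pinned-down estimate with a loose hunch.

```python
# A minimal one-dimensional Kalman-style update: blend a prior estimate with a
# noisy observation, weighting each by its precision (1 / variance).
def update(prior_mean, prior_var, obs, obs_var):
    k = prior_var / (prior_var + obs_var)   # Kalman gain
    mean = prior_mean + k * (obs - prior_mean)
    var = (1 - k) * prior_var
    return mean, var

# A "0.500000"-grade belief barely moves under a sloppy new observation...
print(update(0.500000, 1e-6, 0.7, 0.25))   # mean stays ~0.5000
# ...while a "0.5, can't be bothered" hunch moves a lot under the same evidence.
print(update(0.5, 0.25, 0.7, 0.25))        # mean jumps to 0.6
```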

        • Eli says:

          You really ought to just go to the damned source and learn “Bayesianism” from the same place Eliezer and everyone else learns it from: E.T. Jaynes’ “Probability Theory: the Logic of Science”, accompanied by a real course in Bayesian statistics and, later, if you’ve still got the stomach for it, computational Bayesian statistics (aka: “how to evaluate numbers in a Bayesian model before everyone dies”).

          For major points in its favor:

          * It defends objective informational Bayesianism, not subjective Bayesianism. No Dutch Books, just Cox’s Theorem.
          * It spends time explicitly demonstrating that by messing around with the composition of the hypothesis space, you can use Bayes’ Theorem to demonstrate, if not anything, a wide variety of things.

          Points against:

          * Like Eliezer, who learned probability from Jaynes, Jaynes doesn’t really consider resource-bounded reasoning.
          * Jaynes claims you’ll never need continuous probability distributions and density functions because everything’s discrete at the bottom level anyway. Seriously, WHAT THE FLYING FUCK WAS HE THINKING!?

          Counter-things-to-read: Jacob Steinhardt on the proper uses of frequentist statistics.

          • Kavec says:

            Yeah! Thanks to timorl, Jaynes is on my docket to get to… sometime.

            I expect to be thoroughly disappointed; if you couldn’t tell, I’m more interested in tactics for resource-bound reasoning. Anything with more room needs more actual math and data instead of cowboy decisions.

      • Gilbert says:

        OK, and moreover in some place I’m too lazy to look up right now he even explicitly recommends against it.
        BUT:
        (a) He is pretty clear about that being the theoretically optimal way, which we just can’t implement because we suck, and
        (b) he wants a future basically omnipotent AI to actually work by those rules.

        So even though he doesn’t recommend that present people use it all the time, he clearly thinks of Bayesian probability manipulation not as one very cool tool but rather as the objectively correct way to think.

        Of course that’s what I always say, so here’s a vaguely apropos add-on I only recently thought of:

        The way you read and I criticize Yudkowsky reminds me of the way I read and you criticize Chesterton. In either case there are always lots of problems in the details but there is disagreement about how much that matters. I think that is because in both cases the authors are also using the details to communicate a more general framework of thought. If that framework makes sense, haggling about the details is missing the point. If it doesn’t, retreating from the details seems a lot like garage-dragonism. And of course the framework can’t be reduced to something simple enough to be discussed at a single time, so just cutting to the chase doesn’t work either.

        • Kavec says:

          Yeah, I’ve heard of that interpretation before. Even under that framework, I’m still ornery about things.

          (a) He is pretty clear about that being the theoretically optimal way, which we just can’t implement because we suck

            This one is simple: It’s not the theoretically optimal way without literal omniscience, and I don’t think that’s a fair target to expect people to strive for in order to be rational. I’m really sympathetic to the view expressed in the CShalizi parody, “Optimal Theory of Six-Legged Walking.”

          (b) he wants a future basically omnipotent AI to actually work by those rules.

            And this underlies my criticism of the work MIRI does on FAI. A finite agent in finite time will only be able to sample some of the infinite sample space that Bostrom-style superintelligence assumes it has. At which point, your proven-friendly AI now has undefined behavior lurking under the hood. Starting from a perspective of, “okay, so if we have solomonoff induction” strikes me as the wrong direction to go. If I were actually trying to engineer a provably friendly AI, I’d start by proving and providing ways to limit the sample space for decision-making agents (if I were to even stick with Bayes as the mechanism for making decisions!) so that at least AI researchers have ways to understand the bounds of undefined behavior they’re crossing into.

          Oh, and to address the add-on:
          At which point I’d need to build a blog and my very own post-rationalist community, I guess. I think the overall framework has good things in it, otherwise I wouldn’t even bother with the rationalist community at all. It also has some potentially very, very bad things lurking under the guise of mathematical mystery, which is why I’ve vomited a bunch of words here.

          ¯\_(ツ)_/¯

          • Eli says:

            Your description of how to do AI sounds a lot like the probabilistic programming cabal.

          • JenniferRM says:

            After reading your charming rant I hit “^fpost-rationalist” to make sure I wasn’t about to say something someone already said, and found you making this one comment.

            If this is your first such rant, welcome to post-rationality!

            The water’s slightly more fine than in other places but still nothing close to optimal and if you stop treading water your body might sink forever. Also, there might be sea monsters, and the sea monsters might first eat the people who spend too much time trying to warn others about sea monsters. Have fun! 🙂

            Seriously, though, one of the few cheap but real heuristics that seems to actually help normal people make better decisions is to ask them to “Consider the opposite.”

            Also (though this has less clean evidential support) if you want to make progress in this area, instead of a blog I recommend finding a small discussion group for private low latency discussion, like the obvious IRC channel but more private. That might be a place to find friends who are interested in the same open problems, however?

            I think “chalkboard cultures” nucleate and develop productive content better with an actual slate or whiteboard and F2F dynamics, but ASCII and low latency can do a lot all by themselves 🙂

    • OldCrow says:

      Firstly, thank you for those links to the mathematically sophisticated arguments. I haven’t read them yet, but I bookmarked them for future reading. So it’s entirely possible that they contain a complete rebuttal to everything I’m about to write, and this comment is as much a waste of time as everything else I’ve written on the Internet.

      I think a great deal of noise on both sides of the Bayes debate in the rationalist/anti-rationalist cluster focuses too heavily on the use of Bayes Theorem rather than the Bayesian interpretation of probability. Bayes theorem on its own is just a pretty simple theorem that’s hard to argue with. It only gets contentious when you add in the Bayesian interpretation – that probability theory is an accurate model not just of the frequencies of defined events, but of subjective degrees of confidence in predictions (this may be obvious and I’m sorry but the keyboard tends to get away from me). Then Bayes theorem is suddenly “the ideal method of reasoning” – assuming you have accurate priors, via magic.

      But the argument for Bayes theorem as the ideal method of inference is just philosophical speculation. If anything, the only thing you should take away from it is that perfect inference is impossible. So I’m a bit confused about your objections that these thought experiments and abstractions lead to undefinable behavior – we already know that the implementation is impossible, it’s not like we’re trying to get the universe to execute buggy code. Heuristics are necessary. Everybody seems to be on the same page there. The questions are:

      – Is the Bayesian interpretation of probability correct?
      – Can it help us develop better heuristics?

      The first is that intel training has given me a vague and omnipresent sense that sound analysis is extremely important, but most decision-making timescales don’t give you time to use a lot of rigor. In the lands I hail from if you make the wrong decision, or even take too long making a decision, people die².

      Holy shit, that is so in line with my impression of the rationalist philosophy that I half expect you deliberately handed it to me. The only difference is this – with respect to the importance of getting your decisions right, the lands you hail from aren’t that special. (Don’t get me wrong – they’re certainly special in that people make decisions every day knowing that lives are on the line. But just because other people aren’t aware of the consequences of their actions doesn’t mean they don’t exist. Effective altruism seems relevant here).

      ** Painfully graceless transition **

      And I’m also not interested in claiming that the probability axioms are bad, per se, but that doesn’t mean the axioms are appropriate or ideal or even reasonable.

      So it seems like you have a problem with the first claim, that probability theory is a legitimate model for all forms of uncertainty. This is a pretty big claim, and one I’m currently rethinking. But I don’t think your objections are particularly strong. Just because we don’t fully specify the sample space doesn’t mean that the axioms necessarily fail ungracefully. Every once in a while, when you spin a roulette wheel the ball hits a divider and jumps out of the wheel. Sometimes you bet on a baseball game and it’s called due to rain. Shit happens that you just didn’t consider. Does that mean we shouldn’t calculate odds for roulette? More importantly – can we reserve some probability for “shit happens” so that we can apply probability theory to actual existing roulette games, rather than a hypothetical ideal roulette game where the ball always stays on the track? That doesn’t mean throwing up your hands and going “Well, guess my hypothetical model didn’t apply” when it turns out that a magician swapped in a trick ball – it means that your mistake could have been accounted for in the probabilities you assigned to red, black, and green. You failed, but you could have done better without leaving the framework of probability theory.
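
      As a toy version of that reservation move (the 0.1% catch-all below is an arbitrary number I picked for illustration, not a recommendation):

```python
# Reserve a little probability mass for "shit happens" and renormalize the rest.
ideal_wheel = {"red": 18/38, "black": 18/38, "green": 2/38}

catch_all = 0.001  # ball jumps the track, magician swaps the ball, etc.
real_wheel = {k: v * (1 - catch_all) for k, v in ideal_wheel.items()}
real_wheel["something else entirely"] = catch_all

print(f"{sum(real_wheel.values()):.6f}")  # 1.000000: we never left probability theory
```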

      Like I said earlier, I’m still thinking about this. Or rather, I’m rethinking it after initially accepting it. But it doesn’t strike me as an unreasonable claim, and I do think that if it is true, there’s something to be gained by restricting our beliefs to conform to probability theory.

      • Kavec says:

        Hey, thanks! I actually have problems with both, but was taking up a lot of room already. Check my reply to Scott for a bit about the second part.

        The only difference is this – with respect to the importance of getting your decisions right, the lands you hail from aren’t that special.

        I actually brought it up, not because I think it’s a unique requirement, but because most of the time it’s an exceptionally hostile decision making environment. With effective altruism, people maaay be dying, who knows, but you can totally just… put it off for a day and think about it later. The requirements for effective decision making become painfully clear when you have a team halfway across the planet screaming for help because they’re being shot at, right now, and need to know exactly what the hell is going on with any support, with what their enemies are doing (who they can’t see well, if at all), and why the last guy on shift didn’t tell them they were going to get shot at today. Then the data link goes out because of some freak weather issue somewhere just before you could finish giving them all the information they need.

        It’s… if I were to make a software development metaphor, it’s like most of the time, most people, have the luxury to do waterfall planning for their decisions. You sit down, figure out a good 80% solution to get the info you need, and then spend the rest of the time collating and reflecting on that before making a decision. And then intel winds up looking like everything is on fire while a certified™ scrum master screams at you for not maintaining velocity over your vacation two years ago and oh god why are we doing Agile anyway we’re a grocery store I can’t iterate on these bananas.

        And in a larger sense, I’m not even sure what axioms failing gracefully would look like. It doesn’t really hash right if I’m asking you to graph y = mx + b after taking away the Peano axiom that says x = x. The HITI paper I mentioned goes into the idea of what happens when you can’t enumerate the whole sample space, but that’s not really important. If, at the end of the day, you’re not able to enumerate the sample space, then you’re not doing math-Bayes. Instead it’s some weird, undefined thing that may as well replace your ikea bed with a swedish chef¹. I am totally cool with weird, undefined things except…

        I’m extremely sensitive to the meaning present in word choice and layout, so I then start disagreeing on Bayesianism-as-Philosophy on the grounds outlined in my reply to Scott.

        1. I am probably inappropriately aware of the pain of undefined behavior, cf C programming

        • OldCrow says:

          Gah, I knew I should have specified that remark about EA better.

          Look, I totally agree with you. It is incredibly easy to put aside questions about how many people are dying, whether a given charity could help, whether we even should be doing something or if jumping in all idealistic and naive is just going to make things worse. There’s no emotional pressure – no fear, no instinctive sense of responsibility. That’s kind of the problem.

          Because whether or not I donate to charity, and to which charities I donate if I do, does affect whether people live or die. And just because those people only show up as statistics doesn’t make them less dead. And while you can take your time and think carefully about your decisions, that time still has a cost. And if you can’t reason systematically about large amounts of uncertainty, you find yourself waiting for a new study which needs to be funded and conducted and written up and published and all of a sudden it’s a couple years later and people are dead.

          This isn’t an argument for Bayes specifically, it’s just one of the reasons I think that the whole rationalist project is important.

          As far as “axioms failing gracefully”, that’s not quite what I meant. I also have no idea what that would mean. But the method can and should fail gracefully. When talking about unpredictable, real-life events in Bayesian terms I’ve seen a lot of people set aside some probability that the reality will be something they haven’t even considered. I would still consider this mathematical Bayes so long as their reasoning is consistent with the axioms.

          • Kavec says:

            Oh, yeah, if you’ll see my footnotes I try my best to blunder through saying that more rational decision making (whatever that is) is a good goal.

            And, so, I’m not knowledgeable enough to make a good case for or against setting aside some unknown term. What I can ask, though, is… how do you know the probability of that unknown term? Since we can’t enumerate the sample space, the unknown term can be arbitrarily large and any output you generate is {probability_percent} + {who_knows???} at which point… you’re no longer doing math-Bayes anymore and can’t be consistent with axioms without meeting their requirements.

            And I’m not against using math, there’s just so many different kinds of hammers, and this sure looks like a nail, but Bayes is the only screwdriver anyone will ever need, so I should use that? I don’t understand, and the more I look into it, the less I understand. To wit: these are mathematical complaints, for the most part. I also take umbrage, from a writing standpoint, at the wording and language the sequences are couched in (which is extremely effective, for not-good reasons), I’ve got problems with Bayesianism-okay-doesn’t-need-to-be-perfect-it’s-the-best-we-got-to-be-rational-so-use-it, and I’d even pick a fight over major rationalist projects like HPMOR and MIRI as setting bad examples. I’m worried about me, in ten years, when I’m navigating a world full of aspiring Bayesians who aren’t equipped to, or in the habit of, breaking their tools to see where they are limited, but who have all the surety that their way is the right way for all time because #math.

        • 27chaos says:

          I’d love to see time pressure utilized more for training people’s decisionmaking abilities. Do you have any further thoughts or suggestions along these lines?

      • Kavec says:

        Hahaha, I clicked the link and immediately saw
        >77 per cent were special agents; 7 per cent were officers; and 16 per cent were admin

        Like, first, what the hell is a ‘special agent’, why are you asking admin people, and… no enlisted people who make the actual operational decisions? So, I hunted down a PDF of the full study.

        http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4076289/pdf/nihms581621.pdf

        Below records my live reaction to this paper.

        Intelligence agents make risky decisions routinely, with serious consequences for national security. Although common sense and most theories imply that experienced intelligence professionals should be less prone to irrational inconsistencies than college students, we show the opposite. Moreover, the growth of experience-based intuition predicts this developmental reversal.

        Interesting abstract, suspect they’re not getting evaluated on intelligence-related questions and instead you’re seeing something like “intelligence-experts, if they were artificial neural networks, start overfitting data if we train them too long.”

        . An experimental manipulation testing an explanation for these effects, derived from fuzzy-trace theory, made the students look as biased as the agents. These results show that, although framing biases are irrational (because equivalent outcomes are treated differently), they are the ironical output of cognitively advanced mechanisms of meaning making routinely, and their decisions have serious consequences for national security (Heuer, 1999)

        I understand those words, but not in that order.

        Of the 86% of intelligence agents who provided detailed information, 77% were special agents, 7% were special officers, and 16% were administrators

        aaaargrghhghg what the fuck does that even meeaaan.

        According to common sense and most theories, experienced intelligence professionals should be less prone to irrational inconsistencies than college students are. Intelligence agents have more experience thinking about risks involving human lives (and other valued assets) than do college students, and their training should reduce biases.

        I’m… not convinced B follows from A, or that it’s even a desirable outcome. “According to common sense…” is a huge red flag whenever reading published papers.

        Taken together, all of these results suggest that meaning and context play a larger role in risky decision making as experts gain experience, which enhances global performance but also has predictable pitfalls.

        And then at the end… I’m not even sure if the paper is even saying anything of substance? Maybe it’s a problem with expected-utility and prospect theory, maybe it’s a problem with common sense, maybe it’s a problem with intelligence analysts having too much experience, maybe it’s a problem with…

        Like, is it even a bad thing that experienced agents heavily rely on context to make decisions? Like, I’m not even convinced that the standard A: 200 people saved, B: 1/3 probability 600 saved is a good metric for irrationality here. Is rationality really just performing consistently on toy probability dilemmas? There are some interesting ideas here, for sure– I had never heard of the idea that experienced experts will start overfitting data and will do what I can to avoid that when appropriate.

        But also what the hell is an intelligence agent. 77% of the participants are a black box, 16% are inappropriate, and 7% are… special officers? Is that different from normal officers?

        • J. Quinton says:

          heh heh heh

          I wanted to send this to my friend who was going through 1N3 training with me, but she recently became a fundamentalist Christian and probably would take this the wrong way.

          I have to get my kicks somewhere.

        • Vaniver says:

          Like, I’m not even convinced that the standard A: 200 people saved, B: 1/3 probability 600 saved is a good metric for irrationality here. Is rationality really just performing consistently on toy probability dilemmas?

          I have yet to come across an argument that one of the standard irrationality tests is inapplicable that did not reflect a misunderstanding of the test.

          For example, as you’ve stated this test, it’s just a question about risk aversion when it comes to lives (it’s neither obvious nor a necessary component of rationality to consider the marginal value of a life as always the same). The test is fully stated this way:

          Before we intervene, 600 lives will be lost. You choose between:

          A) 200 lives saved or B) 600 lives saved with 1/3rd probability

          C) 400 lives lost or D) 0 lives lost with 1/3rd probability

          When both options are presented side by side, it is (I hope?) trivial to observe that A and C are identical options, and B and D are identical options. So if decisionmakers who see only A and B behave differently than decisionmakers who see only C and D, then it is obvious they treat “saved” as something besides “not lost,” which is very troubling from a mathematical perspective. It would be like a child who thought “half a dozen” was more than 6; you would not put them in charge of making change.
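
          A quick way to check that identity, assuming (as in the standard framing problem) that the remaining 2/3 of the time all 600 die, is to write both framings as distributions over lives saved:

```python
# Each option as a distribution over "lives saved out of 600", so both framings
# land in the same units. The 2/3 branches are the standard-problem assumption.
A = {200: 1.0}                       # 200 saved for sure
B = {600: 1/3, 0: 2/3}               # 1/3 chance everyone is saved
C = {600 - 400: 1.0}                 # 400 lost for sure -> 200 saved
D = {600 - 0: 1/3, 600 - 600: 2/3}   # 1/3 chance nobody is lost

print(A == C)  # True: same distribution over outcomes
print(B == D)  # True: same distribution over outcomes
```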

          • Kavec says:

            And in a world where you have to deal with social pressures, constantly, I’d rather have decision makers biased to the more generally palatable option.

            Ideally, they’d be able to recognize that this is a toy problem, sure. If not (and being able to recognize it in the face of your own bias requires specific training and knowledge), I’m happy that these people are biased in the right direction. In fact, being more biased in the right direction than college students gives me confidence that they’ll make better snap judgements, faster, in a decision making role that needs ‘good enough, now’ over ‘optimal, in five minutes’. I’m positing that the signal this study sees isn’t irrationality, it’s stronger heuristics.

            This, of course, may as well be all rationalizing. A perceived conflict of interests will poison any real opinion I have on the matter 🙂

    • brad says:

      Completely off topic, but if you haven’t already I would suggest getting involved with embedded real time programming. You get the joy of undefined behavior *and* also not enough time to make decisions but the need to make them in a fixed amount of time no matter what.

    • John says:

      I really don’t see the problem with axiom 2. It feels like any sensible rationalist’s sample space is going to include an event called “my assumptions about the sample space are wrong” or “something weird happens” or “HERE BE DRAGONS”, and at that point your sample space is complete by definition. I think Eliezer even wrote something along those lines at one point…

    • timorl says:

      Those are not the laws of probability you are looking for. I strongly suspect Yudkowsky meant Jaynes’ desiderata:

      Desiderata I: Degrees of plausibility are represented by real numbers.
      Desiderata II: Qualitative correspondence with common sense.
      Desiderata III: Consistency:

      IIIa: If a conclusion can be reasoned out in more than one way, then every possible way must lead to the same result.
      IIIb: The robot always takes into account all of the evidence it has relevant to a question. It does not arbitrarily ignore some of the information, basing its conclusions only on what remains. In other words, the robot is completely nonideological.
      IIIc: The robot always represents equivalent states of knowledge by equivalent plausibility assignments. That is, if in two problems the robot’s state of knowledge is the same (except perhaps for the labeling of the propositions), then it must assign the same plausibilities in both.

      The first one is the same, although Jaynes talks a bit about why accepting it is not as arbitrary as it seems. The second one is a very informal (though formalized in the book) way of saying that knowledge about correlated events should count as evidence. The third is quite self-explanatory. I definitely recommend “Probability Theory: The Logic of Science” (at least the first few chapters) if you want to understand why Eliezer is so enamoured with Bayesian thought.

      I won’t comment a lot on the rest of your post, I just wanted to mention that the theory built on the above desiderata can work with any amount of knowledge we put into it in the form of a prior. Choosing one is still one of the harder problems, exactly because we don’t know the whole sample space. In real-life situations we can still approximate it quite well, and I feel the point is that it still does better than other approaches.

      Also — I enjoyed your post, if you ever get your own blog advertise it somewhere here.

      • Kavec says:

        Okay, huh, interesting. I’ll look that up, and then yell at Jaynes for not doing Bayes right and starting this whole thing off on the wrong foot :V

        Because, hell, if we’re not doing math then that needs to be very clearly laid out that we aren’t, otherwise the mystery of mathematics will lend credence where it shouldn’t.

        • Troy says:

          Jaynes is doing math, he just derives the math from the qualitative desiderata timorl mentioned. He ends up with an axiomatization distinct from, but functionally equivalent to, Kolmogorov’s. But conceptually there are important differences, the most important of which is that Jaynes treats conditional probabilities as basic.

          Although Jaynes is not quite philosophically savvy enough to put it this way, what he’s getting at is a conception of probability as a semantic relation between propositions (cf. John Maynard Keynes, among others): P(A|B) is a relation between the propositions A and B, understood as something like the “degree to which B supports A.” Entailment of A by B is a limiting case of this; if B entails A, P(A|B) = 1. The relation must be semantic because the probability that all ravens are colored (i.e., have some color or other) given that this raven is colored is clearly different than the probability that all ravens are black given that this raven is black; and yet syntactically the pair of sentences are the same.

          To what extent this conception avoids the problems you’re talking about is unclear. If you want to apply particular decision procedures for determining probabilities, such as the Principle of Indifference (which Jaynes advocates), then you need the “correct” partition of alternative possibilities (this is equivalent to what you’re calling the “sample space,” if I’m not mistaken). But if you buy Jaynes’ argument that any formalization of plausible reasoning will be functionally equivalent to probability theory, then if this is a problem it’s a problem we can’t get out of by adopting some non-Bayesian framework for reasoning.

    • Bugmaster says:

      Personally, I think that the Bayes Theorem is a very useful tool because it demonstrates that seemingly complex, seemingly intractable problems often have relatively simple, tractable, and immediately applicable answers — as long as you’re willing to tolerate some uncertainty, which is a sensible position to take most of the time.

      This doesn’t mean that the Bayes Theorem is the One True Answer to Life, Universe, and Everything. You can’t always apply it reliably, or efficiently; but in those cases when you can, the results are usually spectacular (just look at spam filters, for example). And in those cases where the Bayes Theorem doesn’t work well enough — well, maybe there’s some other trick we can use.

      Knowing that the Bayes Theorem exists, and is often quite effective, puts you in the right frame of mind to search for such tricks; as opposed to, say, writing 1000 pages on post-deconstructionist reconceptualism, or whatever it is that philosophers are doing these days.
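
      For what it’s worth, the spam-filter example really is about this small; here’s a back-of-the-envelope naive Bayes version, with word frequencies I made up for illustration:

```python
# A toy naive Bayes spam filter: posterior P(spam | words), assuming word
# occurrences are independent given the class (the "naive" part).
p_spam = 0.5
p_word_given_spam = {"viagra": 0.20, "meeting": 0.01}
p_word_given_ham  = {"viagra": 0.001, "meeting": 0.10}

def p_spam_given_words(words):
    spam_score = p_spam
    ham_score = 1 - p_spam
    for w in words:
        spam_score *= p_word_given_spam[w]
        ham_score *= p_word_given_ham[w]
    return spam_score / (spam_score + ham_score)

print(p_spam_given_words(["viagra"]))   # ~0.995
print(p_spam_given_words(["meeting"]))  # ~0.09
```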

    • Shenpen says:

      Hi Kavec,

      I am very new to this thing and compared to you or Yudkowsky a complete illiterate, but here is something my lay brain noticed: the LW tendency to think statistics = probability = advice, i.e. if most marriages fail then my marriage has a greater-than-even chance of failing, and this perhaps means I should not be too keen on marrying.

      This is something I would call extreme-outer-view-ism or the-opposite-of-the-planning-fallacy.

      I.e. if my projects are consistently late, then yes, it is better to adopt the outer view when forecasting and base my forecast on how similar projects went FOR ME. But basing my forecast on how similar projects go for other people, the industry average statistics, would IMHO be crazy.

      A statistical average is not even a prior. If some people are in a sauna and some people are in ice I cannot assume as a prior I am at a comfortable room temperature.

      I need to start with a measurement of how things usually go specifically for me.

      Is this consistent with your view?

      • Kavec says:

        Hm, this is kind of a larger question. In an effort to keep me from horribly butchering this, let’s step through this melting door and into surrealism for a moment.

        We are now coins.

        We have a heads side, and a tails side. None of us enjoy landing on our heads side, because it hurts our Jefferson; we would much rather land on the tails side. This is pretty much the same for all the other coins we know.

        Landing on your Jefferson hurts, and coins that are able to land on their Jefferson less often are held up as role models. It’s not clear how they are able to do it, but in general, it’s believed that with gut and gumption, we all can land on our Jefferson less. For the purposes of illustration, everyone gets flipped pretty regularly and at approximately the same rate.

        So, as you’re living your nickel-and-dime life, there are a few factors you need to consider vis-a-vis landing on your Jefferson less:

        1. How often do coins, in general, land on their Jefferson?
        2. How often do I land on my Jefferson?
        3. What is the rate of change for both 1. and 2.?

        What happens with a lot of people is that they look at 2, notice that it’s worse than 1, and feel bad. If it’s not a full stop, there are only very minimal efforts to land on their Jefferson less.

        The obvious advice to that is, “don’t compare yourself to others, just get better” which… is not worse. You can see how often you land on your Jefferson, and you can see if you’re landing on your Jefferson less over time as you work at it. It may be hard to track the speed of your improvement, but you know you’re improving. This works for some coins, and it works really well. For others, it’s seductive to take that small initial gain, call mission complete, and never really land on their Jefferson less.

        And so you may be tempted to look at only 3– comparing your rate of improvement against everyone else’s. This works really well when everyone is improving at roughly the same rate. If, on average, everyone doesn’t improve very quickly because they’re already really good at avoiding the Jeffersonian hellscape that colors my own ten cents, it’s tempting for me to take any small rate of improvement, call victory, and never care about getting better ever again.

        You really need all of these things at once. This is all couched in a sort of self-improve-your-market-value metaphor, but it’s important to balance and keep track of at least your relative error (How often coins land on their Jefferson minus how often I do), the global rate of change (How fast do I have to get better to stand still), and your own rate of change (What’s the derivative of how often I land on my Jefferson?) in any decision-making domain. Sometimes it’s not clear where these map, like in your marriage example, because maybe you only get one shot at it. In which case, you’ll want to do enough research and planning so that you’ll be able to stack the deck in your favor and give it the best shot possible.

        And if you don’t practice keeping all these from mixing up, you may wind up just making yourself feel anxious¹ that you’re not already the best at never landing on your Jefferson! Which is totally okay, practice is where you can safely screw up, just remember that you’re actually learning something while you practice and try not to make the same mistakes next time.

        1. Strictly speaking, I am the wrong person to be saying anything about anxiety. I understand that, normally, it’s crippling and scary for a lot of people and really, really difficult to get through. For me, it’s such a rare occurrence (sudden air-raid sirens and a particular ex) that anxiety is an outright fascinating emotion; I wind up finding a place to sit before setting to analyzing and picking things apart to feel all the boundaries and ragged edges to it.

    • rttf says:

      “I don’t even have a better framework than Bayesianism I can give you, either.”

      Of course you don’t. Cox’s theorem guarantees that you won’t find anything better. This is also what Eliezer refers to when he talks about the “Laws of Probability”.

      • Kavec says:

        I’m hoping this is a serious objection, because I’m going to treat it like one:
        You’re misusing Cox’s Theorem as a ready password, misunderstanding my argument, or both.

        What I think you’re getting at is the general idea of Bayesian optimality– where Bayes has been shown to guarantee that it can outperform all other models, on average, all the time. This is pretty cool.

        However, it does not mean that it’s the end-all tool! Bayes performs terribly in some contexts, which more narrow tools are able to handle with ease. When I say that I don’t have a framework, I mean just that. I do not have a ready-made philosophy to hand down in 2500 words or less, nor do I have a catchy name for what I do in the first place. That does not mean I have no valid criticisms with regards to your abuse of mathematical rigor.

        • Troy says:

          You can say “we need many tools, not just one,” but then we can just redescribe your set of tools as one big tool.

          Suppose we have two people, person A and person B.

          Person A says: I have a universal inference method: Bayesian probability.

          Person B says: I do not have a universal inference method. Instead, in context C1 I use method M1, in context C2 I use method M2, and so on.

          It seems to me that person B has misdescribed what he is doing. He does have a universal method: it is, “in context C1, use method M1; in context C2, use method M2; etc.”

          If Cox, Jaynes, et al. are right, then any good uncertain reasoning will be functionally equivalent to probability theory. If that’s right, then person B’s more complicated disjunctive method is either inferior to probability theory or equivalent to it.

          If “Bayes performs terribly in some contexts,” it’s either because illicit assumptions are being fed into it or because everything would perform terribly in that context. (For what it’s worth, I think it’s usually the former. And usually we don’t know what the correct assumptions that would make a direct probabilistic calculation feasible are, and so we should almost always be willing to revise the ones we make.)

          • Kavec says:

            If Cox, Jaynes, et al. are right, then any good uncertain reasoning will be functionally equivalent to probability theory.

            I’ve seen this line before; something smells, and I don’t know what it is. But anyway, you bring up a good point… that comes back to enumerating the sample space.

            If person B was able to give an exhaustive list of all the methods they use over all contexts, then I totally agree that Bayes Theorem (mathematically) would be a serious contender once you wrap all of B’s techniques into one giant ensemble glob.

            In real life, though? Neither A nor B are going to have access to the entire set of contexts. And when shit inevitably happens, A is super confident that their Bayesian probability is going to save the day. It may not, and it may even misbehave in hard to detect and disastrous ways.

            B– with the right tools– should be able to recognize that their current ensemble has nothing that applies in the current context, at which point they can make decisions off that. If you’re actually, really, fully doing Bayesian probability there’s no signal that says “oh crap I have no idea what I am doing” and instead you’re trained to just throw more Bayes at it. And that may work, who knows, it’s undefined!

            In my mind, an ‘ideal reasoner’ is going to be highly interested in knowing when they’re in a context with which their toolkit doesn’t apply. In which case, that reasoner is certainly not Bayesian, even if they use Bayesian methods when appropriate, because Bayes Theorem has nothing to say on the unknown quantity of unknown unknowns.

          • Adam says:

            Language modeling runs into something like the problem you’re describing, Kavec, of having to deal with the reality that they can’t exhaustively enumerate the full sample space of possible sentences. What actual text classification and prediction systems do to deal with this is assign some very low but non-zero probability to ‘unknown’ as a placeholder for anything not previously encountered, and steal from the probability mass of the rest of the model.
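
            A minimal sketch of that move, with made-up counts and an add-one smoothing constant standing in for however the mass actually gets stolen:

```python
# Smoothed unigram model: '<unk>' holds back probability mass for anything unseen.
counts = {"the": 120, "cat": 7, "sat": 3}
alpha = 1.0                      # add-one (Laplace) smoothing
vocab = set(counts) | {"<unk>"}  # '<unk>' stands in for every unseen word

total = sum(counts.values()) + alpha * len(vocab)

def prob(word):
    # Unseen words fall through to '<unk>'.
    w = word if word in counts else "<unk>"
    return (counts.get(w, 0) + alpha) / total

print(prob("cat"))        # seen word: close to its raw frequency
print(prob("platypus"))   # never seen: small but non-zero, via '<unk>'
print(abs(sum(prob(w) for w in vocab) - 1.0) < 1e-9)  # still normalizes
```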

          • Troy says:

            I meant to reply to Kavec’s comment here, but seem to have posted in the wrong place. At any rate, I have a reply a few posts down.

        • rttf says:

          I’m afraid it’s you who don’t understand Cox’s theorem, unfortunately.

          >where Bayes has been shown to guarantee that it can outperform all other models, on average, all the time.

          You’ve almost got it. Remove “on average” and you have a correct statement.

          >Bayes performs terribly in some contexts, which more narrow tools are able to handle with ease.

          Here you make the mistake of not comparing like to like. A more narrow tool means you have more information about the subject, which in Bayesian terms means you have a better prior. If you compare two people with the exact same information available then Cox’s theorem guarantees that the one who uses Bayesian reasoning will outperform the one who doesn’t.

          A corollary of this is that any problems you identify with Bayesian reasoning must also be a part of all other systems you could possibly use. This is why the obvious difficulty with the sample space you’re talking about cannot be “magicked away” by your suggestion to “just use a different method”.

          • Gilbert says:

            Cox’s theorem isn’t magic. It’s about reasoning about statements with plausibility numbers for every statement. In other words, it ASSUMES a well defined sample space.

          • rttf says:

            @Gilbert

            That’s not completely true nor is it really that relevant. Kavec argues that there should exist some method that will magically solve his sample space “problem”. The only way to solve this “problem” is to give a complete definition of the sample space, and if you have that, the optimal choice is to continue with Bayesian reasoning.

            Either way, this is not relevant since anyone using a non-stupid prior for some problem will just have positive probability for the hypothesis “every other hypothesis I have about this problem is wrong”. Then evidence itself will guide you to this conclusion whenever it’s true.

          • Troy says:

            Either way, this is not relevant since anyone using a non-stupid prior for some problem will just have positive probability for the hypothesis “every other hypothesis I have about this problem is wrong”. Then evidence itself will guide you to this conclusion whenever it’s true.

            In addition to this, there are ways to utilize Bayesian reasoning that ignore the problematic “catch-all” hypothesis (namely: some other hypothesis that we haven’t thought of is right) altogether. For example, we can use the Relative Odds Form of Bayes’ Theorem to compare the probability of two hypotheses H1 and H2:

            P(H1|E) / P(H2|E) =
            [P(H1) / P(H2)] * [P(E|H1) / P(E|H2)]

            What this formula says is that the posterior relative odds of H1 to H2 are equal to their prior odds times their relative Bayes’ factor. Note that as long as you know their relative prior odds, you don’t need to ask about their absolute prior probabilities. This means you don’t have to assign a prior probability to the catch-all hypothesis. This gives us an algorithm for theory preference which is utilizable even when we don’t have enough information to determine whether a theory is, say, more probable than not.
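
            Numerically, with numbers I am making up purely for illustration:

```python
# Relative odds form of Bayes' theorem, as described above; numbers invented.
prior_odds   = 2.0        # P(H1) / P(H2): H1 starts twice as plausible
bayes_factor = 0.1        # P(E|H1) / P(E|H2): the evidence favours H2 ten-fold

posterior_odds = prior_odds * bayes_factor
print(posterior_odds)     # 0.2, i.e. H2 is now favoured 5:1 over H1
# Note that no prior for a catch-all hypothesis appears anywhere above.
```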

          • Gilbert says:

            Nope.
            – Pretty much by definition, you don’t know probabilities conditional on something you haven’t thought of. So the “everything not otherwise enumerated” hypothesis is fine for informal reasoning but it buys you nothing when doing the actual math.
            – If the truth isn’t in your sample space Bayesian methods do not guarantee a high probability for the best hypothesis you actually thought of. So yes, you can do the stuff Troy talks about, but all those guarantees about it being the best method of reasoning are gone.
            – “The only way to solve this “problem” is to give a complete definition of the sample space, and if you have that, the optimal choice is to continue with Bayesian reasoning.” No, as you can see, that’s not how we actually do it.

          • Kavec says:

            Kavec argues that there should exist some method that will magically solve his sample space “problem”.

            Woah, what, I don’t remember arguing that at all. I’m very specifically limiting myself to saying that Bayesianism does not do this. I doubt this is even possible– you’d be literally omniscient.

            At which point, hell yeah Bayes.

            Gilbert says what I was going to say anyway.

            Thanks for handling that!

            <Troy> It seems to me that not being able to specify in advance all the tools one will use doesn’t essentially change the situation. We can look at all of A and B’s reasoning, at the end of their lives, and then describe the tools they actually used, even if they didn’t know in advance that they would use them. And then we can evaluate them, and Cox’s Theorem rears its lovely head again.

            I have one objection with this: You can’t make decisions in hindsight

            It’s cool if we can go back in hindsight and run this, but it seems to me that a base requirement of rationality should be the ability to make good decisions, now.

            I am, however, completely satisfied with your more limited claim– any disagreement there is more about whether to weight A or B more, and I don’t have enough knowledge to dictate which one would work better for you.

          • Troy says:

            @Gilbert:

            If the truth isn’t in your sample space Bayesian methods do not guarantee a high probability for the best hypothesis you actually thought of. So yes, you can do the stuff Troy talks about, but all those guarantees about it being the best method of reasoning are gone.

            I’m afraid that, as with others, I don’t really see what the alternative to Bayesianism is here. Suppose one of our hypotheses is the “none of the above” catch-all. Non-Bayesian methods aren’t going to do any better in telling us whether or not to endorse that. And Bayesian methods will do the best in telling us which to prefer of the hypotheses we have thought of. So it looks to me again as if Bayesian methods come out on top.

            @Kavec:

            I have one objection with this: You can’t make decisions in hindsight

            It’s cool if we can go back in hindsight and run this, but it seems to me that a base requirement of rationality should be the ability to make good decisions, now.

            What I wanted to use the illustration to argue was that we could look back at A’s and B’s decision and reconstruct what methods they actually used, even if they couldn’t have articulated them beforehand. Then Cox’s Theorem will tell us that A did better (subject to my later non-ideal provisos). So we can conclude beforehand that A will do better, because whatever method B uses, it won’t be as good as A’s.

            Perhaps your objection is that B’s method is not “do this in this situation, do that in that situation, etc.” but “do whatever seems right at the time,” since B is committed to the latter and not the former beforehand. But as far as I can tell, Cox’s Theorem still implies that A’s method will be superior to this method as well.

            I am, however, completely satisfied with your more limited claim– any disagreement there is more about whether to weight A or B more, and I don’t have enough knowledge to dictate which one would work better for you.

            I think the main questions are empirical: how good are people at applying probabilistic reasoning, and how well do they understand it; and how good are they at applying alternative methods? The answers will, of course, be field-specific.

          • Kavec says:

            @Troy:

            I would love to continue this conversation more, and totally don’t want to do it here. Hit me up on the #lesswrong IRC channel under this same name and we can talk more there.

        • Troy says:

          B– with the right tools– should be able to recognize that their current ensemble has nothing that applies in the current context, at which point they can make decisions off that.

          It seems to me that not being able to specify in advance all the tools one will use doesn’t essentially change the situation. We can look at all of A and B’s reasoning, at the end of their lives, and then describe the tools they actually used, even if they didn’t know in advance that they would use them. And then we can evaluate them, and Cox’s Theorem rears its lovely head again.

          Perhaps you’ll be satisfied with the following more limited claim about the limits of probability theory. While I think that Cox’s Theorem does indeed show that ideal plausible reasoning always takes the form of either probability theory or something functionally equivalent to probability theory, that doesn’t show that humans engaging in non-ideal plausible reasoning are always better off explicitly doing Bayesian reasoning. Put otherwise, “if you use method A correctly, you’ll do better than if you use method B correctly” doesn’t imply “if you use method A incorrectly, you’ll do better than if you use method B correctly.” It may be that in some cases our choice is between misusing probability (misusing because we make illicit assumptions, don’t understand the mathematics on a deep enough level, etc.) or using some other ad hoc method correctly, and in that case we’d be better off doing the latter.

          For my part I don’t think this happens often among intelligent people who really understand probability theory on a deep level. But many people who apply probability theory do not really understand it, and in the wrong hands probability theory can be badly abused. For example, I think that good historical reasoning is reconstructible using probability theory, and that we can even derive important methodological lessons from this. But a little knowledge is a dangerous thing, and most (though not all) historians who I’ve seen trying to apply Bayes’ Theorem have done a disastrous job of it, and would probably be better off just making (e.g.) informal arguments to the best explanation instead.

    • Daniel Keys says:

      1. Primer on Incompleteness. Discovery of a new related problem.

      2. Any form of thinking that works and is coherent will be equivalent to Bayes. That doesn’t tell us how to think – nor, contrary to what you’re claiming, does EY ever claim it does – but it lets us recognize many bad arguments.

      3. Did you stop reading before you got to this? Or just before you realized what you were looking at with the phrase “base rate”, and the later “voice of Ravenclaw within him”?

      • Kavec says:

        1. Primer on Incompleteness. Discovery of a new related problem.

        I am aware of Lob and haven’t worked through the EY paper linked; could you tell me what bearing this has?

        2. Any form of thinking that works and is coherent will be equivalent to Bayes.

        I have seen no evidence in favor of this beyond this phrase being parroted. If I’m not taking even EY on faith, you’re going to have to show me. My worldview itself is a tool: break it, show me where it breaks, break it again, please! I’m excited that timorl identified a weak point for me to get around to exploiting, saddened that they were the only one to drop a live probe so far, and glad that everyone else keeps trying. As much as I may sound annoyed and dismissive, thank you.

        That doesn’t tell us how to think – nor, contrary to what you’re claiming, does EY ever claim it does – but it lets us recognize many bad arguments.

        You don’t need Bayesianism for this. In fact, confining “recognize many bad arguments” to Bayesianism leaves us blind to bad arguments made via Bayesianism at minimum, and is further likely to leave us blind because the Sequences are a single source. A quick jaunt over to my library with pen and paper coughs up the following not-even-math books that can help recognize bad arguments:

        * Anthropologist on Mars -> Examples of extreme cases where injured brains adapt (sometimes poorly, always surprisingly) to the world
        * Bird by Bird -> Provides end-goals for what good writing looks like, lets you evaluate writing on a structural level
        * Brick by Brick -> Shows the difference between ideas and effort, the work required to make things live, and ways to get there
        * Bursts -> Describes clustering effects vis a vis events over history
        * Buy-In -> Provides a structure for convincing persuasion
        * Change Anything -> Contains low-cost, high-impact ways to influence reality in your favor
        * Contagious -> Has a general framework for effective ideas that are worked through using real-world examples

        And holy crap, I was skipping books and still didn’t even make it past C before spending all the fucks I had in my wallet. On top of that, my personal library is an insignificant drop compared to all the ink spilled on the topic. Much of which is more accessible, of a higher quality, and fails less often in real-world use than the information in the Sequences.

        3. Did you stop reading before you got to this? Or just before you realized what you were looking at with the phrase “base rate”, and the later “voice of Ravenclaw within him”?

        I’ve read all of HPMOR. That section is the last shining light in a piece that needs to be aggressively edited and which could stand to be aggressively revised, too.

  12. coffeespoons says:

    SUCH OVERCONFIDENCE. SO CERTAINTY. VERY ANTI-SCIENCE.

    The really important point is that Scott has changed his position on doge.

  13. “I don’t know anything about quantum mechanics and don’t want to get into it.”

    I do and I do.

    There’s a difference between thinking someone is wrong and thinking they’re stupidly wrong, between respectful and disrespectful disagreement. Technically, you’re doing the latter when you downgrade the credibility of other statements from the same person, i.e. you attribute the disagreement to bad epistemology on their part. Of course, this is all about the Correct Contrarian Cluster.
    EY’s exchanges on QM have shown an almost complete unwillingness to take any correction from anybody, combined with an inability to argue technical points.

    There was a discussion of Relational QM, an alternative to both collapse theories and MWI, which he had never heard of. I can quote him:

    “You can quote me on the following: RQM is MWI in denial. Any time you might uncharitably get the impression that RQM is merely playing semantic word-games with the notion of reality, RQM is, in fact, merely playing semantic word-games with the notion of reality. RQM’s epistemology is drunk and needs to go home and sleep it off.”

    To me, that is a series of insults, not an argument

    • Scott Alexander says:

      “There’s a difference between thinking someone is wrong and thinking they’re stupidly wrong, between respectful and disrespectful disagreement. Technically, you’re doing the latter when you downgrade the credibility of other statements from the same person, i.e. you attribute the disagreement to bad epistemology on their part. Of course, this is all about the Correct Contrarian Cluster.”

      Suppose I give you two boxes, one red, one blue, and tell you “One of these boxes speaks true statements 90% of the time and false statements 10% of the time; the other speaks true statements 10% of the time and false statements 90% of the time.”

      Then the red box says “A!” and the blue box says “Not A!”

      Then the red box says “B!” and the blue box says “Not B!”

      Suppose yesterday you would have said there was a 90% probability A was true. What is your probability of B?

      If you say “50%”, I accuse you of never actually having ascribed 90% probability to A, since you’re doing the math as if you hadn’t.

      This is why I don’t understand statements like “It’s okay to believe this sort of thing, but not okay to plug it into correct contrarian cluster type arguments”.

      • I’m disagreeing with the antecedent, not the hypothetical. It hasn’t been established that the contrarians are correct. In particular, EY can’t have good grounds for thinking MWI is the Best Theory when he hasn’t considered all the major alternatives.

      • Brock says:

        73%, right? Or am I not thinking through this one correctly?

        • Oscar Cunningham says:

          Obviously it depends on the prior for B. I find an odds ratio of 7.38 in favour of B. So if the prior on B was 50% we would get a posterior of 88%. But I might have calculated incorrectly; I’m quite tired.

          EDIT: I double checked, pretty sure this is right.

          • Brock says:

            What’s your line of reasoning?

            Mine is this: If A was certain, then there’s a 90% chance that red is the “90% correct” box. Since A is 90% likely, I should be 81% sure that red is the “90% correct” box.

            Since I’m 81% sure that red is the “90% correct” box, I should be 72.9% sure of B.

            Adjust that with Bayes’ theorem if you had some prior for B other than 50%.

          • Oscar Cunningham says:

            Right, but notice that Red asserts A and Blue denies it. So we’re 89% sure that Red is the good box and 88% sure B is true.

          • Brock says:

            Ah, I see.

            There are four possibilities for what the boxes would say:

            Good box / Bad box
            A / A = 9%
            A / ~A = 81%
            ~A / A = 1%
            ~A / ~A = 9%

            Since one says A and the other says ~A, we know we’re working with the 2nd or 3rd possibility, so there is an 81/82 = 99% chance that red is the good box, given a 100% confidence in A.

            Multiply that by our 90% prior for A, and we get an 89% chance that red is the good box.

            With B, we’re working with the same possibilities, so we should have 81/82 * 89% = 88% confidence in the truth of B.

            Thanks for helping me think this through.

            My previous reasoning would have been correct if we just had one box, and had 50% confidence that it was a good box.

          • Not Robin Hanson says:

            If I understand correctly there is one tiny missing piece: the chance that you were wrong about A, Red was indeed the good box, but was also wrong. This contributes an additional 1/82 * 10% probability that Red is the good box, raising it from 729/820 to 730/820.

            Likewise, there is a small chance that Red is the bad box but B is true nonetheless.
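
            For anyone who wants to check the arithmetic, here is a brute-force enumeration (a sketch, assuming, as above, a 50% prior on which box is good and a 50% prior on B); it reproduces the 88% posterior and the 7.38 odds ratio:

            ```python
            from itertools import product

            # Priors: 50% on "red is the good box", 90% on A (yesterday's belief), 50% on B.
            P_RED_GOOD, P_A, P_B = 0.5, 0.9, 0.5

            def p_asserts(box_is_good, claim_is_true):
                # A good box makes true statements 90% of the time, a bad box 10% of the time.
                truth_rate = 0.9 if box_is_good else 0.1
                return truth_rate if claim_is_true else 1 - truth_rate

            num = den = 0.0
            for red_good, a, b in product([True, False], repeat=3):
                prior = ((P_RED_GOOD if red_good else 1 - P_RED_GOOD) *
                         (P_A if a else 1 - P_A) *
                         (P_B if b else 1 - P_B))
                # Observed: red asserted A and B, blue asserted not-A and not-B.
                likelihood = (p_asserts(red_good, a) * p_asserts(not red_good, not a) *
                              p_asserts(red_good, b) * p_asserts(not red_good, not b))
                den += prior * likelihood
                if b:
                    num += prior * likelihood

            p_b = num / den
            print(round(p_b, 3))              # 0.881 -- the ~88% figure
            print(round(p_b / (1 - p_b), 2))  # 7.38  -- odds ratio in favour of B
            ```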

    • brad says:

      I really can’t get past example #1. If someone comes along and tells me he is an expert on uncertainty, thinking clearly, and so on, and then puts a very high confidence (“slam dunk”) on a proposition for which there is no evidence other than “it’s a pretty theory”, it’s very hard to take anything else he says seriously.

      It’s a little bit like Gell-Mann amnesia[1] — I don’t know anything about nutrition or the philosophy of mind, and know only a modest amount about epistemology, but I do know a little something about physics. When I turn the page and come to an argument about Palestine, I’m not going to forget everything I know.

      I have a feeling that were the overconfidence in question on something like the Kindling Theory of Bipolar Disorder, Scott would have a different take on this.

      [1] https://www.goodreads.com/quotes/65213-briefly-stated-the-gell-mann-amnesia-effect-is-as-follows-you

      • To me (trained as but not working as a physicist, and with no familiarity with the rationalist movement) it’s an Occam’s Razor sort of thing. Collapse theories seriously complicate the model, without having any testable consequences or adding anything of value.

        • Professor Frink says:

          This isn’t true, I don’t think. My friend was a particle physicist; I asked him about the sequences and he explained it like this.

          Think of it like a map vs. territory problem. The actual TERRITORY is the predictions that get measured by physicists’ instruments.

          The wavefunction is the map that lets us calculate those probabilities, but have no doubt: the only part of the map we can test is the probabilities. That’s where the map meets reality.

          What many worlds does is it says “wouldn’t it be great if we could just use the wavefunction? The map would be so much simpler.” And it’s true, it would be simpler. No pesky collapse, etc.

          But now the probabilities aren’t understandable – no one knows how to get them out of the theory, as EY notes toward the end of the sequence. So now we have a map that has no connection at all to the territory. We gave up that part and no one knows how to get it back. Collapse gives us a map we can use to make predictions; many worlds gives us a more elegant map we can’t.

          • As far as I know, Everett solved that problem decades ago, and I believe there has also been some more recent work. In most cases, when you want to do an actual calculation, the Everett interpretation (I don’t like “MWI” as I think it misleading) simplifies to the Copenhagen interpretation. The exceptions are exactly the cases where the Copenhagen interpretation leaves us high and dry – if we’re trying to closely examine the quantum behaviour of the measuring device itself.

            I suspect your friend was talking about MWI in the context of string theory, which is a somewhat different beast.

          • Vaniver says:

            But now the probabilities aren’t understandable- no one knows how to get them out of the theory, as EY notes toward the end of the sequence.

            Several people have guesses that seem solid enough to me. But I agree with the MWIers that this is a really unfair comparison, because collapse interpretations bring the Born rule as an axiom. If you make a MWI interpretation which doesn’t have collapse, but has the Born rule as an axiom, bam, the probabilities “are understandable.”

            (And if you don’t like that for MWI, why would you like it for collapse?)

          • Professor Frink says:

            @Vaniver. Yes, we can add it back in as an axiom for MWI, but then MWI is no longer axiomatically simpler than Copenhagen.

            The argument in favor of MWI, as I understand it, is “we can do away with this special measurement axiom, giving us a theory with fewer postulates.”

          • For me, it’s not about the extra axiom but the conflict between instantaneous collapse and relativity. It turns out that the measurable results will be the same no matter what frame you work in, but only the Everett interpretation explains why and only in the Everett interpretation is everything explicitly relativistic.

            And if you’re doing something like Loop Quantum Gravity, where there is no background to provide a choice of frame to work in, I don’t see how you can avoid the Everett interpretation. Perhaps there’s a loophole, no pun intended; I don’t know enough about LQG to say. (The Wikipedia article does not include the word “collapse” and a Google search didn’t turn up anything either.)

          • Professor Frink says:

            Collapse isn’t against relativity, I don’t think. I took a quantum field theory for mathematicians course when I was in grad school. There, observables are defined as elements of a C^* algebra, and locality just means that observables outside the light cone commute (so measuring A then B is the same as measuring B then A).

            We didn’t talk much about collapse (because it was for mathematicians, so we didn’t talk about actual measurements), but it’s pretty clear that even with collapse here you aren’t going to violate relativity. Collapse can be sort of “non-local” in a handwavey way, but it’s not non-local in the rigorous way used in QFT.

            That said, when you tried to write down the usual equations using general relativity, you got weird results (constraint equations instead of equations of motion), and people often say that quantum mechanics isn’t compatible with general relativity anyway.

          • The model is still non-local, even if the results are local. If you want to call it relativistic you have to assert that the wavefunction is something that magically explains what happens without being in any way whatsoever even the slightest little bit real. I can’t swallow that.

            Good point about the commuting observables; they do explain why the results are relativistically invariant, so I was wrong about that. On the other hand, I think you have to add them in by hand, so they should probably be counted as an extra axiom. Maybe. It’s been a long time. 🙂

          • Professor Frink says:

            So in the field theory class I took, we used field operators instead of wavefunctions and built various observables out of the field operators. “Observing a particle at X, observing a particle at Y, etc.” I think the idea that observables live in a C^* algebra came from quantum mechanics. But I was never a physicist, so I’m not really sure how this turns into actual measurements.

        • 1. Collapse theories meaning objective collapse theories, or ontologically minimal theories like Copenhagen?

          2. No interpretation has observable consequences.

          3. MWI complicates things, because it needs a universal basis.

          • 1. I’m fuzzy on what the distinction would be in this context. It seems to me that either way you have to justify your choice of reference frame, and that’s the difficult part. Or at least the most obvious difficult part.

            2. So why not choose the conceptually simplest one? 🙂

            3. I’m not sure what you mean; but if you’re doing actual calculations the Everett interpretation simplifies to the Copenhagen interpretation anyway, so I don’t see that there’s a problem.

          • brad says:

            Re: 2

            If you have data that can be equally well fit by either a linear or a quadratic function, you’d say, given Occam’s Razor and so on, that the underlying process is more likely to be linear. You wouldn’t say that the case for a linear underlying process is a slam dunk and that anyone who thinks the underlying process might be non-linear is delusional and needs to be introduced to the basic principles of rational thought.

            Given no testable predictions, much less those tests having been run, the stridency is simply remarkable.

          • @brad: well … yes, maybe, OK. It kind of depends on some of your preliminary assumptions; if you’re willing to suppose that the wavefunction isn’t in any sense real, or are happy to throw away relativity, then presumably collapse models work perfectly well. And I do vaguely recall hearing about some classes of model in which relativity might become a natural consequence without conflicting with simultaneous collapse.

            But I’m not sure I can fault someone for being confident that, within the framework of physics as we currently understand it, the Everett interpretation is the only one that really makes sense.

  14. AngryDrake says:

    Since when is open borders a “worthy cause”?

    • Jon Gunnarsson says:

      That’s a value judgement Scott has made, just like every time someone describes anything as a worthy cause.

      • AngryDrake says:

        I don’t think Scott is arbitrary in his praise, though. What led him to believe such a thing?

        • Scott Alexander says:

          I’m actually not sure about open borders myself, but that’s because I’m not sure about HBD. If HBD is wrong or at least not right in ways relevant to this, then we can find ways to help immigrants assimilate (or have children who do) and we don’t have to worry about the “too many Afghans will make America more like Afghanistan” horror story people bring up.

          In that case, the studies showing that immigrants improve the local economy, don’t “steal” jobs from anyone else, and are themselves made much better off by immigration – plus the ethical issues around “if you can save someone from poverty at no cost to yourself, do that” seem to argue overwhelmingly in its favor.

          Not to mention that I think America is a better country than most of the alternatives, I’d prefer to see it be more powerful, and having many more people including a lot of very smart productive people (Elon Musk and Albert Einstein were both immigrants!) is a net win. I also like the advance of science (usually) and helping potential scientists go from countries where they can’t do science effectively to countries where they can is one of the best ways to speed it up.

          • AngryDrake says:

            >studies showing that immigrants assimilate quickly

            Links? Personal experience leads me to believe otherwise (that immigrants in quantities that enable them to create their own communities don’t assimilate).

            >improve the local economy, don’t “steal” jobs from anyone else,

            IIRC, the official line of OB advocates was that immigrants did create disruptions, albeit temporary ones.

            >themselves made much better off by immigration

            No argument here. They definitely are.

            >very smart productive people

            Doesn’t non-OB immigration policy already allow the highly talented and rich to immigrate without too much of a hassle?

          • Scott Alexander says:

            Assimilation (these articles may be terrible, I remember seeing good ones but these are the first that appeared on Google, I’ll put more work into this later):

            http://www.washingtonpost.com/news/wonkblog/wp/2013/01/28/hispanic-immigrants-are-assimilating-just-as-quickly-as-earlier-groups/

            http://www.cato.org/publications/economic-development-bulletin/political-assimilation-immigrants-their-descendants

            http://www.wnyc.org/story/77499-studies-say-nyc-immigrants-assimilate-quickly/

            Re: can’t smart people already immigrate – no! MIRI is trying really hard to import people for its work, and these are all like Math PhDs and stuff, and it’s been a bit of a nightmare for them. There are occasional plans in Silicon Valley to create some kind of giant floating platform offshore so that Silicon Valley companies can employ skilled programmers from other countries there.

            I think rich, highly skilled people have better chances than everyone else, but it’s still enough of a hassle that most don’t bother.

          • AngryDrake says:

            Ah, okay, this is America-centric. I can sort of understand, in this case, because America’s case is quite different from the case of the European countries (which I think of when someone mentions dismantling immigration control). The only large (in relation to total population) immigrant group in the States is the Hispanics, to my knowledge, and they’re not that different from the local population. Quite unlike the immigrant populations in Europe.

          • Emily says:

            Whether those links show that immigrants are easily assimilated hinges on what you mean by “assimilation.” Only the NYC one means “assimilation to non-immigrant levels of education/income.”

            I would speculate that the degree to which assimilation happens is affected by host country variables like economic opportunities, the public school system, and the proportion of immigrants, as well as anything related to the immigrants themselves.

          • Alexander Stanislaw says:

            If HBD is wrong or at least not right in ways relevant to this, then we can find ways to help immigrants assimilate

            I strongly disagree: a cultural/structural account of differences between populations doesn’t imply that immigrants will assimilate. Also, some immigrants assimilate and some do not.

          • Adam says:

            The only large (in relation to total population) immigrant group in the States is the Hispanics, to my knowledge, and they’re not that different from the local population.

            Ya know, dude, we were actually here first, but then again, I’ve only ever lived in California and Texas.

          • Helping them to assimilate could amount to something heavy-handed, bussing++.

            It’s important to distinguish between what’s theoretically possible and what’s politically feasible.

          • wysinwyg says:

            The only large (in relation to total population) immigrant group in the States is the Hispanics, to my knowledge, and they’re not that different from the local population.

            In an aggregate sense this may be true, but locally this is often false.

            For example, as of the 90’s, the largest Cambodian population density outside of Phnom Penh was in Lowell, MA. Many US cities have a “Chinatown” neighborhood where signage is primarily in Mandarin. Fairly large refugee populations (I think e.g. Somali?) have been settling in upstate NY in areas where the white population is aging and in decline.

            And this isn’t new — Palin’s famous “you betcha” accent is derived from local concentrations of Scandinavian settlers in the Great Lakes states. Boston and NY are shitty with trefoil tattoos on people who identify as Irish because their last name happens to be “Sullivan”. NYC’s culture is obviously heavily influenced by communities of Jewish immigrants.

            However, you are correct in your main thesis that immigration is different in the US, in no small part because immigration has done so much to shape the culture(s) of the US already.

          • DB says:

            The natural time to support open borders for the US, then, would be when we’ve succeeded at assimilating e.g. the Puerto Ricans we already have. This clearly hasn’t happened even though we’ve had more than a hundred years; have you seen the latest educational statistics for the place?

            Scott, if you want to see a global-utility-increasing application of open borders to the US in your lifetime, you arguably should be rooting for HBD to be RIGHT (and relevant), because then there are clear paths forward (e.g. genetic engineering). If HBD is wrong or irrelevant, then what is right? In that world, how many more hundreds of years will it take to figure out how to handle Puerto Ricans and the like well enough to make open borders a better proposition than a Canada-style points system? (We’ve already dedicated the last 50 years to exploring non-HBD hypotheses; how far has that gotten us?)

        • Jon Gunnarsson says:

          Oh, I didn’t mean to imply arbitrariness. Of course Scott has (or at least thinks he has) good reasons for believing that open borders is a worthy cause. For a good brief summary of the case for open borders, see Bryan Caplan’s opening statement from a debate on that subject: http://econlog.econlib.org/archives/2013/11/let_anyone_take.html

          I don’t know exactly why Scott supports open borders, but I assume that it has to do with him being more or less an advocate of utilitarian universalism and open borders would likely result in a massive increase in the standard of living of a very large number of people who are currently trapped in under-developed countries.

          • AngryDrake says:

            Interesting statement, especially the “remedies”. I don’t think they’d stand up very long against the American egalitarian ethos and legal system.

          • onyomi says:

            Personally, I think open borders is one of the most important causes–maybe THE most important cause for the future of humanity. This is because the greatest opportunity cost we have suffered throughout history, and which we continue to suffer, albeit to a lesser degree, is the case of “genius who would have revolutionized [science, medicine, computers, philosophy…], but who was born in a poor, rural village where he languished in obscurity or died of easily preventable disease.”

            Of course, this also speaks to the importance of making vaccines and clean water available in the third world so geniuses don’t die in infancy, but assuming we privilege already-existing, already surviving geniuses who simply don’t have the opportunity to live up to their potential (over say, the genius who would have been born had I not used a condom on a particular day), then making it easier for such people to get to a place like the US seems of paramount importance (ideally one would make their country a place in which they could flourish, but that is a lot harder and a more long-term proposition than letting them come somewhere where conditions are already ripe for their flourishing).

          • AngryDrake says:

            That’s an excellent case against abortion and contraception, too.

          • onyomi says:

            It is, actually, as it is a case against any attempts to limit the size of the human population in general. Personally, I think Julian Simon was right, and we need MORE people, not fewer (see his “The Ultimate Resource”–hint, it’s people).

            That said, there is the rather creepy yet somewhat plausible argument that violent crime has gone down since Roe v Wade in part due to the children of more irresponsible parents being aborted at a higher rate than others. With immigration, on the other hand, we may tend to get the smartest and most enterprising, especially if we implement a “you don’t get welfare or voting rights but anyone can come here to take a job”-type system like Caplan suggests.

          • Troy says:

            That said, there is the rather creepy yet somewhat plausible argument that violent crime has gone down since Roe v Wade in part due to the children of more irresponsible parents being aborted at a higher rate than others.

            Steve Sailer has offered what seemed to me at the time some pretty convincing criticisms of this argument; Google Steve Sailer and Levitt on abortion. (Sailer, as you may know, is not someone especially ideologically motivated to avoid this thesis.)

          • DB says:

            The utilitarian universalist thing to do is to improve the source countries, at almost any cost. Mass immigration is extremely inefficient in comparison, and Caplan and his acolytes have been so dishonest about this that it’s appropriate to be wary of trusting other things they say until they recant.

            (Yes, if you rule out tractors and earth movers, maybe giving people shovels is the most efficient way left to build the dam. But why are the >10x better solutions off-limits, and objective discussion of them systematically avoided? The most plausible explanation is an external agenda that isn’t served by the better solutions. See geoengineering and climate change for another example of this phenomenon, though in that case the efficient solution has serious tail risks that justify caution; in the case of mass immigration vs. source country improvement the inefficient solution even has worse tail risks!)

            Forcing the First World to absorb 30 million Chinese immigrants a year for the past three decades would have been enormously more disruptive than what actually happened, and the outcome probably would have been worse for even the Chinese people (unless making non-Chinese developed world citizens worse off than they otherwise would have been is scored as a good thing for the Chinese).

          • onyomi says:

            But how are we supposed to improve the source countries without first militarily overthrowing their horrible dictators who always direct foreign aid into their own coffers?

            Improving the source country may be a better solution, but I don’t see how it’s easier.

            What would be easier for the US government to accomplish: make the Haitian government stop sucking or let Haitians come here?

          • John Schilling says:

            Making the Haitian government stop sucking. That’s pretty much push-a-button easy at this stage.

            What are the other constraints you aren’t talking about? On both sides of the equation.

          • onyomi says:

            Wait, what? Why is it easy to make the Haitian government stop sucking? And if it is, why haven’t we already done it?

          • DB says:

            onyomi, see the 1953 Iranian coup d’etat for an example of what the US and UK are capable of when they’re actually motivated.

            (And note that the long-term outcome in this case was rather crappy, so this is not an option to be considered lightly; much better to work from a greater distance, as we have with China since the 70s, or at least stick to deposing leaders far less popular than Mosaddegh to avoid creating future backlash.)

  15. Anatoly says:

    I think your reading of Hallquist is not charitable enough in several key places, and consequently your fisking fails to address some of the best arguments presented by Hallquist. Before I move to specifics, I should say that on the whole most of your criticisms seem correct, and I thought the concluding part was admirable. You’re more charitable to Hallquist than Hallquist is to Yudkowsky; but you’re less charitable to Hallquist than Scott Alexander usually is to his opponents; and it is Scott Alexander who I’ve come to regard as a standard-bearer in that area.

    On MWI. It isn’t merely the case that “Eliezer believes there’s an open-and-shut case for MWI”. Much more importantly, Eliezer considers his MWI case to be an overwhelmingly powerful demonstration of the superiority of Bayes over Science. As far as I remember, the QM sequence was created specifically with the goal in mind to provide such a demonstration on a single well-demarcated issue. As such, it seems to have failed badly in the opinion of domain experts. Even those physicists that believe in MWI haven’t ever said, to my knowledge, “Yes, this sequence does in fact conclusively prove what it set out to show – that conventional scientific epistemology just isn’t good enough and Bayesian reasoning of the kind explored in the Sequences ought to replace it”. The warmest compliments paid to this sequence by people with expert knowledge are tepid.

    Hallquist is aware of this purpose of the QM sequence when he writes: “But instead he expects his readers to “break their allegiance to Science,” and switch to his brand of “rationalism” instead, based solely on reading one amateur’s account of the debate over interpretations of quantum mechanics.” But it’s missing from your account of Hallquist. You focus on Hallquist’s accusation that Yudkowsky didn’t recommend that people read other opinions, and show that it’s wrong. Yes, Yudkowsky includes a disclaimer, and Hallquist should have seen it and accounted for it. But Hallquist’s verdict is still broadly correct. You quote Yudkowsky’s exhortation to look at other explanations and read other physicists, but it both comes from a comment and, I’m sorry to say, is taken out of context. Yudkowsky doesn’t recommend – anywhere that I’ve seen – that people read competing accounts of QM interpretation; he gives that advice narrowly to a commenter who says they don’t believe others could have been so stupid, as a way to indeed ascertain that they had been so stupid.

    The lack of such a recommendation doesn’t matter a whole lot because, as you rightly say, there’s no reason to insist on it in the first place. Hallquist is wrong to make such a big deal out of it, but even though he does, he *also* expresses the more important general point: that Eliezer expects his readers to “break their allegiance to Science” based on this single and supposedly irrefutable demonstration. *That* is important to keep in mind when you evaluate the possible crackpot tendencies and hostility to scientific rationality in Eliezer’s writings. You do not address that at all. And when you later say

    “This is why all of the posts Hallquist finds to support his assertion that Less Wrong is “against scientific rationality” are called things like Science Isn’t Strict Enough.”

    – that’s just more blindness of the same kind on your part, because the central post Hallquist quotes is in fact called “The Dilemma: Science or Bayes?” This isn’t at all the same sort of thing as “Science isn’t strict enough”. This is very well-enunciated, well-demarcated hostility to scientific rationality that Eliezer isn’t at all coy about. You’re wrong to imply that Hallquist exaggerates it. This isn’t about augmenting; it’s about replacing. My sympathies lie much more with augmenting scientific rationality with things like awareness of cognitive biases and probabilistic reasoning, and you make a very powerful case for it in your concluding section that I strongly identify with. But that wasn’t Eliezer’s goal in the QM sequence. The goal was to urge a dilemma on the reader and urge them to make the right choice. It is this goal and the way the rhetorical structure of the sequence is shaped around it that make the QM sequence so damaging. The actual case looks weak to people who are both interested in rationality and have deep domain knowledge. That’s bad. But the fact that the case is presented as a make-or-break case for the entire edifice of probabilistically-informed Rationality – as a “Dilemma: Science or Bayes?” – makes it much worse.

    • Anatoly says:

      Having run out of steam on the MWI things, I’ll be briefer on others.

      On philosophy. “What is the difference between Hallquist believing that he disproved one of the world’s most famous philosophers when he was twelve years old, and Eliezer believing that he solved the problem of consciousness when he was thirty-something?”

      The difference is that Hallquist isn’t claiming that philosophy is ill-suited to solving problems and that you’re better off mostly ignoring it. Eliezer is. Hallquist is not criticizing Eliezer for failing to publish his views (philosophical or otherwise) in general, and you’re wrong to read such a criticism into his post and defend Eliezer from it. Hallquist is saying: Eliezer claims philosophy is broken because even when problems have clear and simple solutions, philosophy can’t agree on them and move beyond those problems. But his only arguments for this are derision and a claim to have solved the problem of consciousness, a solution he won’t even explain. This is weak and crackpot-like.

      (For the record, I sympathize with Eliezer here and I think he’s right. But the point is, you’re not addressing Hallquist’s stronger claim; instead you focus on “failure to publish”, which you interpret much more widely than the narrow application Hallquist uses here).

      On diet. “If indeed there were serious flaws in the dietary guidelines for the past thirty years, and since obesity kills about 370,000 people per year, if the issues corrected in the latest guidelines and freely admitted by modern scientists made the problem even 10% worse, then the “millions of deaths” figure is not an exaggeration.”

      What’s your basis for thinking that the emphasis on keeping fat below 30% of the diet, common to previous guidelines, could have possibly accounted for making the problem 10% worse? On the face of it, this seems an astonishingly strong claim. What would be the mechanism of action? I expect that the number of people actually reading the guidelines to inform their own diet is negligible. The influence of the guidelines on fad diets of the day is probably negligible, given how wildly anti-scientific fad diets usually are. How does it act then – through advice by licensed dietitians? But what percentage of people who are trying to lose weight actually consult those rather than simply “go on a diet”, and how many then successfully follow the advice?

      • Scott Alexander says:

        The same JAMA article I’m getting a lot of the other stuff from explains why this matters:

        In finalizing the 2015 Dietary Guidelines, the US Department of Agriculture and Department of Health and Human Services should follow the evidence-based, scientifically sound DGAC report and remove the existing limit on total fat consumption. Yet this represents only one action that may influence people’s diets; other policies should follow suit. For example, the Nutrition Facts Panel, separately regulated by the US Food and Drug Administration (FDA), lists percentage daily values for several key nutrients on packaged foods. Remarkably, the Nutrition Facts Panel still uses the older 30% limit on dietary fat, already obsolete for more than a decade.6 The Nutrition Facts Panel should now be revised to eliminate total fat as well as dietary cholesterol from among the listed nutrients and instead add refined grains and added sugar. Including only added sugar, a change currently under consideration, would insufficiently acknowledge the harms—and implicitly encourage the intake —of refined grains. Similarly, the US Department of Agriculture should modernize its Smart Snacks in School standards,7 removing the 35% restriction on total fat from the criteria. The Institute of Medicine should update its report, now nearly 15 years old, on dietary reference intakes for energy and macronutrients.6

        The current restriction on total fat has implications for virtually all aspects of the US diet, including government procurement for offices and the military, meals for the elderly, and guidelines for food assistance programs that together provide 1 in 4 meals consumed in the United States. The focus on total fat also affects other policies and guidelines. For example, the National School Lunch Program recently banned whole milk, but allows sugar-sweetened non-fat milk. Current National Institutes of Health guidelines on healthy diets for families and children recommend “eat[ing] almost anytime” fat-free creamy salad dressing, trimmed beef or pork, and extra-lean ground beef. Yet it recommends being cautious about eating any vegetables cooked with added fat, nuts, peanut butter, tuna canned in oil, vegetable oils, and olive oil. Furthermore, it recommends minimizing whole milk and “eggs cooked with fat,” both of which are listed in the “once in a while” eating category along with candy, chips, and regular soda.8 Along the same line, the FDA recently issued a warning letter to a manufacturer of minimally processed snack bars, stating that these products could not be marketed as healthy in part due to FDA health claim limits on total and saturated fat, even though the fats in these bars derive predominantly from healthful nuts and other vegetable sources. The restriction on fat also drives food industry formulations and marketing, as evidenced by the heavy promotion of fat-reduced desserts, snacks, salad dressings, processed meats, and other products of questionable nutritional value.

    • Scott Alexander says:

      I agree that the QM Sequence ended up failing at its goal. If I had been Eliezer wanting to make the same point, I would have written this post instead. It would have been something like: “Science is the thing that says that since Bem has a very impressive study with p = some ungodly low amount, you’ve got to believe in it, or at best remain agnostic and say that ‘more research is needed’. Bayes is the thing that says your prior for psi is so low, and the idea is so complex in the Occamian sense, that less research is needed and we probably shouldn’t have been studying this for decades.”

      “Switching allegiance from Science to Bayes” then looks a lot like something every one of us has done in this particular case, we just deny it. We want to sound like good scientists, so we say “there’s no evidence for psi!” Of course there is! There’s loads of it! We just *falsely* say “Oh, those studies are unusually bad” and believe the thing we want to on Bayesian grounds anyway. If we can learn to do this consciously and in a principled way, maybe we can apply it to something more difficult than psi.

      (I’m actually much more sympathetic to Science here than most people would be, and I’m secretly happy other people are choosing to research parapsychology with their own money Just In Case, but this is how I would make the point if I were Eliezer. I might edit this into the original essay.)
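
      To make the shape of that argument concrete: posterior odds are just prior odds times the Bayes factor, so even data that strongly favor psi barely move a sufficiently low Occamian prior. (The numbers below are made up purely for illustration; they are not estimates from Bem’s data.)

      ```python
      def posterior(prior, bayes_factor):
          # Posterior probability from a prior probability and a likelihood ratio (Bayes factor).
          prior_odds = prior / (1 - prior)
          posterior_odds = prior_odds * bayes_factor
          return posterior_odds / (1 + posterior_odds)

      # Illustrative numbers only: suppose the data favor psi over chance by 100:1,
      # while the Occamian prior on psi is one in a hundred million.
      print(posterior(1e-8, 100))  # ~1e-6: psi remains overwhelmingly improbable
      # The same strength of evidence applied to an unsurprising hypothesis is decisive:
      print(posterior(0.5, 100))   # ~0.99
      ```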

      • Jordan D. says:

        I’d just like to chime in and say that ‘The Control Group is Out of Control’, ‘Beware Isolated Demands For Rigor’, ‘Beware the Man of One Study’ and ‘Debunked and Well-Refuted’ are some of your finest works and together I think that they encompass and prove what the QM Sequence attempted in a sleek and excellent way. The real world is full of examples of smart and powerful people committing the sins described in those posts for exactly the reasons you’d expect.

        (For my money, the single most valuable Sequence is A Human’s Guide To Words. If I had my druthers, at least a year of the basic English curriculum would be devoted to teaching children what words are, how they work, and how to defend yourself against them.)

        • Vaniver says:

          The real world is full of examples of smart and powerful people committing the sins described in those posts for exactly the reasons you’d expect.

          I agree that talking about parapsychology first is the correct move.

          But I do think that Eliezer is operating under the correct model that physicists are (as a group) smarter than everyone else, and knowing that even physicists are not sufficiently sane is sobering information. If one makes an argument that simplifies to “look, psychologists are insane!”, well, I can’t tell you how many jokes I’ve heard about people going into psychology because they want to learn about what’s wrong with them. A physicist could look at the test scores and say “yeah, those psychologists could be tricked by that sort of thing, but us physicists would never fall for that, because we’re cleverer.”

          But even so, I agree that objection is better handled after you establish the idea of the insufficiency of the scientific method relative to the scientific mindset.

          • Jordan D. says:

            That’s a good point.

          • Eli says:

            Ugh. Nobody in real life thinks that physicists possess a General Factor of Correctness. We think that physicists have studied a lot of physics. Many people just don’t think there is a General Factor of Correctness at all aside from domain knowledge — even when the domain knowledge generalizes very well (like, say, a mathematics degree).

        • Izaak Weiss says:

          I agree. A Human’s Guide To Words is something I go back and revisit like once every 6 months, just because it’s a joy to read and I don’t ever want to forget the content.

      • youzicha says:

        Maybe you could make this substitution in Ozy’s “open source holy book” project? 🙂

      • Earthly Knight says:

        ““Switching allegiance from Science to Bayes” then looks a lot like something every one of us has done in this particular case, we just deny it. We want to sound like good scientists, so we say “there’s no evidence for psi!” Of course there is! There’s loads of it! We just *falsely* say “Oh, those studies are unusually bad” and believe the thing we want to on Bayesian grounds anyway.”

        Where did you get the idea that Science personified commands utter credulity towards any hypothesis which has an experimental result to its credit where p<0.05? The decision here is not at all between science and Bayes; it’s between the outdated caricature of philosophy of science found in undergraduate textbooks and literally any competent philosophy of science.

        • Samuel Skinner says:

          What part of science are you using to reject that but not other results with similar p values?

          • Earthly Knight says:

            None, because I see this as a normative question.

          • Samuel Skinner says:

            Can you explain? Because wiki gives:
            “Normative means relating to an ideal standard or model, or being based on what is considered to be the normal or correct way of doing something.”

            So if your model excludes psi, then of course you won’t get it as an output. So you need to create a way to make models that exclude incorrect things and allow the possibility of correct ones.

            Am I misunderstanding? Because it looks like you are “Switching allegiance from Science to Bayes”.

          • Earthly Knight says:

            Normative, i.e. pertaining to the domain of evaluation and prescription rather than the domain of facts (next time, read the Wikipedia article past the header). Which norms are pertinent depends in part on how psi is conceived. Good candidates might be the first-order (“object-level”) norm “don’t believe in supra-physical forces/energies without overwhelming evidence” or the higher-order (“meta-level”) norm “don’t believe in theories rejected by the great majority of scientists.” Note that both of these norms are likely to be truth-conducive – if you abide by them, you will tend to accept claims if and only if they are true – but neither refers to credences.

          • Samuel Skinner says:

            You aren’t getting my objection. You are saying you need to import an external rule system to decide this, which is exactly what the people talking about Bayesianism are saying. As far as I can tell, your point is “the best people already do this”.

          • Earthly Knight says:

            I don’t think you understand what you’re objecting to. If we wish to reject the existence of psi, we are not faced with a choice between science and Bayesianism, we’re faced with a choice between childlike empiricism and any other philosophy of science. The dichotomy is false, both because it’s a mistake to think that science carries within itself the normative criteria for evaluating scientific evidence, and because there’s a menu of options besides Bayesianism that reach the appropriate conclusion about psi.

          • vV_Vv says:

            What part of science are you using to reject that but not other results with similar p values?

            Occam’s razor, which, surprise, was not invented by Bayes or Jaynes or Pearl or Solomonoff and especially not by Yudkowsky.

          • Earthly Knight says:

            That’s part of it. The other part is that we should demand overwhelming evidence for phenomena which would require revisions to our best-confirmed theories, in this case, fundamental physics. If psi is conceived as a non-physical force or energy, the burden of proof on parapsychologists becomes much steeper, and the social scientific tools they use to collect evidence probably just aren’t up to the task.

          • Adam says:

            The simple answer here is methodological naturalism, which was already considered to be a core part of science long before null hypothesis significance testing was ever invented.

          • Troy says:

            methodological naturalism, which was already considered to be a core part of science long before null hypothesis significant testing was ever invented.

            And when, precisely, would that time be? I am skeptical that you can find widespread endorsement of methodological naturalism prior to the 20th century.

          • Adam says:

            The term itself or the practice? I don’t know of the term itself being used prior to recent creationism v. Darwinism debates, but the case from which it arises (Kitzmiller v. Dover Area School District) cites it as an important part of scientific practice going back to the 16th century.

            I guess the first thing I can think to cite off the top of my head is J.S. Mill on the Principle of the Uniformity of Nature and the Law of Universal Causation, which admits no possibility of supernaturalism if inductive inference is to be epistemologically justifiable.

          • Troy says:

            I meant the practice, not just the term. I don’t think the history presented in Dover is correct. (Note: I am not an advocate of biological ID; I just don’t think it can be ruled out on a priori methodological grounds.) For example, consider the monogenesis/polygenesis debate about whether all human races arose from a common ancestor or not/whether they were all part of the same species. Before Darwin, most parties to this debate thought of the two hypotheses as involving divine creation of the first human or humans. Moreover, various theological arguments were made in favor of monogenism in particular. Nevertheless, this was a scientific question with evidence such as interfertility being used to support the monogenesis position.

            Or move forward from natural history to human history. (Perhaps you don’t think history is a science, but the same rhetoric of methodological naturalism reigns there today too.) Today most historians would say that history cannot investigate miracles. But this was not the dominant view pre-20th century. Historians wrote essays and books arguing for and against the historicity of the Christian miracles. (Some still do today, of course.) This is the use of empirical methods to investigate supernatural claims, which is inconsistent with methodological naturalism.

          • Adam says:

            I don’t want to dive too deep into this particular topic because I frankly don’t know that much about the history of science, so I’ll just note that I wasn’t trying to make any point about how widespread the adoption of methodological naturalism was prior to Fisher, just that it existed and was a viable alternative. Going with Mill specifically, he provides a full-fledged system of inferential logic that precludes psi without resorting to any form of statistical test, either frequentist or Bayesian, and he wasn’t just making it up whole cloth. He got his ideas from observing what actual scientists were doing, though mostly in the physical sciences, not biological sciences, which at that time were still barely scientific and more taxonomy than anything we’d recognize today as biology. As I think you’re noting, much of biology remained fundamentally unscientific much longer than physics did, and I don’t know much about how history is practiced by historians, but I know there is still active debate within cultural anthropology about whether they should even try to be science or just go with pure narrative-telling.

            I’ll also note that Humboldtian science, which I believe is widely held to have supplanted Bacon sometime in the 19th century as the dominant model of science, was committed to reductionism and describing all observable things as a dynamic equilibrium of physical forces.

            This is, of course, not the same as saying all people calling themselves scientists practiced it. Even today, plenty of paranormal researchers are actually scientists. It’s not a monolith.

          • Troy says:

            I wasn’t trying to make any point about how widespread the adoption of methodological naturalism was prior to Fisher, just that it existed and was a viable alternative.

            In that case we may not disagree much. I certainly agree that there were people pre-Fisher who said methodological naturalist-like things. I was just denying that this was widely seen as a scientific norm in the way it is today.

            Going with Mill specifically, he provides a full-fledged system of inferential logic that precludes psi without resorting to any form of statistical test, either frequentist or Bayesian, and he wasn’t just making it up whole cloth.

            True, but Mill’s methods are a blunt instrument to do this with, inasmuch as they rule out inference to any unobserved cause. This includes God and psi, but it also includes subatomic particles and historical events.

          • Earthly Knight says:

            Dismissing parapsychological research on the grounds that it posits supernatural forces will beg the question against parapsychologists who insist that psi is a perfectly natural phenomenon. In general I doubt there is a way to characterize “natural” and “supernatural” without referring to contemporary science, which means that we can dispense with the -isms and just talk instead about how psi is apparently incompatible with physics.

            For the historical claim, the eradication of vitalistic and theistic explanations from the natural sciences began in earnest in the late 19th century and was more or less completed by 1930, about the same time that Fisher introduced significance testing. But I would be amazed if there were any connection between the two.

          • Samuel Skinner says:

            EK
            “I don’t think you understand what you’re objecting to.”

            Let’s unpack:
            1) Much more importantly, Eliezer considers his MWI case to be an overwhelmingly powerful demonstration of the superiority of Bayes over Science. As far as I remember, the QM sequence was created specifically with the goal in mind to provide such a demonstration on a single well-demarcated issue. As such, it seems to have failed badly in the opinion of domain experts.

            2) “Switching allegiance from Science to Bayes” then looks a lot like something every one of us has done in this particular case, we just deny it. We want to sound like good scientists, so we say “there’s no evidence for psi!” Of course there is! There’s loads of it! We just *falsely* say “Oh, those studies are unusually bad” and believe the thing we want to on Bayesian grounds anyway.

            3) “Where did you get the idea that Science personified commands utter credulity towards any hypothesis which has an experimental result to its credit where p<0.05?”

            4) “What part of science are you using to reject that but not other results with similar p values?”

            Does this clear things up? I was asking what you were plugging into that hole that wasn’t Bayes. If the issue is “people getting different conclusions” because of what is in that hole, I need to know what you are specifying. And I need you to specify because EY’s entire point rests on scientists doing things differently from him.

            When you finally gave a description:
            " The other part is that we should demand overwhelming evidence for phenomena which would require revisions to our best-confirmed theories, in this case, fundamental physics."

            That sounds very much like what EY and SSC are talking about when they use the word Bayes. If your point is “scientists already use Bayes”, then EY’s example is either
            -him doing it wrong
            -him doing it correctly and an example of scientists not using Bayes correctly (and showing what he intends to show)

            vV_Vv
            "Occam’s razor, which, surprise, was not invented by Bayes or Jaynes or Pearl or Solomonoff and especially not by Yudkowsky."

            SSC (the one with science versus Bayes) mentions that.
            "Bayes is the thing that says your prior for psi is so low, and the idea is so complex in the Occamian sense, that less research is needed and we probably shouldn’t have been studying this for decades.”"

          • Earthly Knight says:

            The problem is that you (or Scott, or Yudkowsky, or whomever) have confused “Science” with naive empiricism and Bayesianism with everything that isn’t naive empiricism. I agree that we should not be naive empiricists, certainly, but this in no way requires us to accept Bayesianism or repudiate science.

          • vV_Vv says:

            Bayes is the thing that says your prior for psi is so low, and the idea is so complex in the Occamian sense

            Except that it does not. Bayes assumes that you have priors; it does not say how to generate them.

            In order to say that psi has low prior probability, you need to invoke Occam’s razor: an epistemological principle older than Bayes and already widely used, which is in fact the reason why most scientists don’t believe in psi.

            It’s not like scientists secretly switched to Team Bayes in order to reject psi while publicly pretending they are still playing for Team Science. Scientists have been playing for Team Science all along, while Team Bayes is trying to get credit for Team Science achievements.

            EDIT:

            Just in case it’s not clear I reiterate that there is no true dichotomy between Bayes and Science. Bayes is a tool in the toolbox of Science. By “Team Bayes” I mean the kind of radical pseudo-Bayesianism peddled by Yudkowsky.

      • vV_Vv says:

        If I had been Eliezer wanting to make the same point, I would have written this post instead. It would have been something like: “Science is the thing that says that since Bem has a very impressive study with p = some ungodly low amount, you’ve got to believe in it, or at best remain agnostic and say that ‘more research is needed’. Bayes is the thing that says your prior for psi is so low, and the idea is so complex in the Occamian sense, that less research is needed and we probably shouldn’t have been studying this for decades.”

        Then you would have been attacking a straw man. Essentially nobody in science believes that Bem or the other parapsychologists have shown a legitimate paranormal phenomenon, or even that “more research is needed”, no matter how many significance tests these experiments pass. The scientific method was never “believe anything that passes a significance test”.

        The Science vs. Bayes dichotomy is a false one. Science uses Bayes both formally, as tool in the toolbox of statistical methods, and informally, with Occam’s razor as a simplicity prior.

        The general takeaway message I get from your essays is that science is hard, and the practice of scientific research has a number of systematic issues that arise from the fact that it is done by human scientists subject to human cognitive biases and human incentives. You support your points by showing flaws in actual studies.

        EY’s message, on the other hand, is that science is wrong, and we must switch to his brand of True Bayesian Rationality™ which would yield much better results even when applied by humans with human cognitive biases and human incentives. He attempted to show the superiority of his approach with the QM sequence and largely failed.

        • african grey says:

          Publishing Bem is some kind of endorsement.

          Scott doesn’t say that anyone believes Bem. He says that they confabulate excuses to disbelieve Bem. You don’t directly address this; you seem to reject Bem on the grounds that “science is hard,” but you give no principled reason for applying more skepticism to parapsychology than to psychology.

          Scientists generally reject Bem silently. That’s better than making up false reasons, but it is dangerous. If “science” is whatever “scientists” believe, maybe it is a great method of establishing truth, but it is fragile. It is very difficult to know if the scientists of today are doing the same thing as the scientists of a century ago, the ones with a good track record. And it is difficult to tell if the scientists in psychology are doing the same thing as the scientists in physics.

          Whereas, Scott gives a principled reason for rejecting parapsychology research. Whether that should count as “science” or not, it seems better than what I see scientists doing.

    • Luke Somers says:

      > The warmest compliments paid to this sequence by people with expert knowledge are tepid.

      I have been a professional physicist. There are a great many physicists who know more about quantum mechanics than me, but a fair number who know less. Fortunately, the relevant level of expertise to judge the Quantum Mechanics sequence is not extremely high – having facility with second quantization should suffice.

      The quantum mechanics sequence was correct in every relevant fashion.

      Relational Quantum Mechanics is described as not being MWI, but it looks to me like it’s a MWI. Basically, if you take MWI and zoom in (at all) by considering only a portion of the universe, you get RQM. Conversely, if you take RQM and zoom all the way out (by including everything), you get MWI.

      The only way you can use RQM and deny MWI is to deny that there is such a thing as the whole universe. This is not a strawman, as the creator of RQM explicitly stated that the distinction between the theories is that MWI requires that there be a universal state.

      As distinctions go, this is a pretty fine point to make, I think. For the purposes of the argument that Eliezer made, RQM and MWI are in the same equivalence class.

      • Professor Frink says:

        What about the sequence post that talks about how the position basis is more fundamental than the momentum basis? I’ve seen a lot of people with physics credentials assert that the entire post is wrong.

        • Luke Somers says:

          The post is not entirely right. Fortunately, I restricted my claim to ‘every relevant fashion’, and this is an aside.

          FWIW, I agree with his conclusion in a sense. We are made of things that are well-localized in position, but our component particles are all over the place in terms of momentum. If you look at two people/planets/rocks/galaxies standing next to each other in a position basis, you can see that they’re distinct, and going to have separate dynamics, *really* easily. If you try to do something simple like that in momentum basis… good luck noticing that. Really. You’ll need it. Heck, you won’t even be able to readily tell that there ARE two extended objects there, rather than one or a billion.

          Now, this is not completely universal. I expect that dark energy will be way simpler to understand in momentum basis than position. But for things? Heck, anything in a bound state? Anything undergoing an intense interaction? Position basis time.

          A lot of the time when you want to do stuff with p, you care only about the p of things within a particular region of x. I don’t care how fast electrons are moving in that other galaxy over there. In order to ignore them, I need to knock them out in the x basis before I transform.

          This is a non-negligible weakness.

          Even with that degree of support for his position (heh), it seems to me more like saying that space is more fundamental than time. He knows enough not to say that. I would apply the same principle here. Mu.
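
          A toy numerical sketch of the point above about bases (the packet shapes, separation, and the crude bump-counting rule are all arbitrary choices for illustration, not anything from the sequence): two well-separated Gaussian packets are trivially distinguishable in the position basis, while the same state viewed in the momentum basis is a single envelope covered in fine interference fringes, so even the fact that there are two objects is hard to read off.

          ```python
          # Two separated wave packets, viewed in the position vs momentum basis.
          # Parameters are dimensionless and chosen only for illustration.
          import numpy as np

          x = np.linspace(-50, 50, 4096)
          dx = x[1] - x[0]
          sigma, a = 1.0, 10.0                                  # packet width, half-separation

          # Two well-separated Gaussian packets, normalized.
          psi = np.exp(-(x - a)**2 / (2 * sigma**2)) + np.exp(-(x + a)**2 / (2 * sigma**2))
          psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

          pos_density = np.abs(psi)**2                          # two clean, separate bumps
          mom_density = np.abs(np.fft.fftshift(np.fft.fft(psi)) * dx)**2  # envelope times fringes

          def count_bumps(y):
              """Crude count of local maxima above 10% of the global peak."""
              peaks = (y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]) & (y[1:-1] > 0.1 * y.max())
              return int(peaks.sum())

          print("position-basis bumps:", count_bumps(pos_density))   # 2
          print("momentum-basis bumps:", count_bumps(mom_density))   # many fringe maxima
          ```

          The fringe spacing in momentum space does encode the separation, but you have to know to look for it; in the position basis the two objects are simply visible.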

      • TheAncientGeek says:

        Rovelli may have said that rQM differs from MWI in its denial of universal state, but you are misinterpreting it. It doesn’t mean there is no world; it means state is an observer’s map of the world, and there is no map that is not the map of an observer, and therefore no universal state, because that would be a view from nowhere.

        But you can’t have a map of the world without a world.

        In philosophical terms, rQM is centered worlds, not solipsism.

        RQM isn’t equivalent to MWI because in MWI a universal state is the only thing really existing.

    • walpolo says:

      >>Eliezer considers his MWI case to be an overwhelmingly powerful demonstration of the superiority of Bayes over Science. As far as I remember, the QM sequence was created specifically with the goal in mind to provide such a demonstration on a single well-demarcated issue.

      This is such a strange point of view for him to have, because the MWI hasn’t existed in its viable, decoherence-based form for very long. Yes, Everett wrote in the ’50s, but the interpretation was sketchy, incomplete, and as spooky as Copenhagen until the 1996 publication of Joos et al., Decoherence and the Appearance of a Classical World in Quantum Theory. 20 years isn’t really that much time for an idea like this to be taken up, and in that time the MWI has increased in popularity considerably.

  16. Sarah says:

    I really like this essay and agree with everything in it.

    To address the point Topher made more charitably, though:

    I think the central tenet of the “rationality movement” laid out in the sequences is that there *is* a potential “art of Rationality”, as yet mostly undiscovered, which ought to allow people to live better, make more money, and advance science faster than the “experts” of the early 21st century, and which would be a prerequisite for solving certain really hard problems like AI safety.

    Ten years later, we haven’t demonstrated this claim.

    But I’m not sure that’s a failure. I think it’s progress, albeit slower than people expected, on a really hard problem.

    To give an example I’m familiar with through my experiences at MetaMed: I am a layman in the biomedical sciences, and I think something like “general rationality” does give me an edge. I regularly get mistaken for a doctor, by doctors (after which I rapidly assure them that I’m not.) I do, by reading the medical literature, come up with information that helps sick people better than their previous doctors have, but *only once in a while* and it seems to be very often a matter of luck whether I can be helpful.

    At the moment, I hold no “strong” contrarian beliefs in the field of medicine — i.e. beliefs of the form “this treatment, which mainstream biomedical science rejects, is actually effective.” I do hold some “weak” contrarian beliefs — “this treatment, which has only been studied a little and has been largely ignored, seems likely to turn out to work if someone runs bigger experiments” and some meta-contrarian beliefs — “this field has systematic biases and weaknesses.” But I don’t believe in any existing alt-med miracle cures.

    In medical lit review, my experience has been that rationalists with any sort of science background tend to do better than specialists with no contact with the rationalist community, in terms of conscientiousness, “outcome orientation”, and especially ability to think and work independently.

    In medicine, I don’t think rationality gives you the ability to *singlehandedly* outperform the established experts. It certainly isn’t insulation against simple ignorance, and I’ve eaten crow time and again because I didn’t know a critical fact. We tried the experiment, and the answer to the question “Can a bunch of smart guys with Google Scholar just kick everyone’s* ass?” is “only once in a while, and you have to get lucky.” (where “everyone” is the Mayo Clinic and comparable organizations.) As an amateur, I didn’t conclusively beat the pros. In a certain sense, I played fair and lost.

    But I am left with an intuition that more is possible — that, as an amateur, I think I did a lot better than the ideology of “only experts know anything” would suggest was possible. And the intuition that more was possible in the past — that historical medical innovations, say 1850-1970, were radical improvements developed faster and cheaper than anybody thinks treatments can possibly be developed today. I believe that a group of people could be doing much better at medicine than the biomedical field is doing today; I *don’t* believe I know exactly how this would be done.

    And my extrapolation is that “rationality” in general is pretty much like that. We have reason to believe, admittedly based on anecdotal and heuristic evidence, that much more would be possible if humans could be taught to think better. We haven’t yet produced a community of humans who *do* think better. What we’ve got is a community of bright, intellectually curious young people talking to each other and encouraging a can-do attitude, which I think *is necessary*, even if it’s only a very preliminary step in the right direction.

    It’s really hard to keep the energy of a large group focused on an extremely hard problem. I don’t know if anybody is going to successfully develop an “Art of Rationality.” I think it’s reasonable to debate the justifications for the intuition that such an art ought to be possible. I think telling people not to even think about it because being insufficiently humble makes you a crackpot is extremely counterproductive.

    • Tom Womack says:

      We’ve had groups of bright, intellectually curious young people talking to each other and encouraging a can-do attitude for at least fifty years – I’m not claiming the entire history of institutions-called-universities (though looking at the German Bildungsroman I’d probably grant the Germans two centuries), but I’m absolutely going to claim the period from Sputnik onward.

      It’s a necessary preliminary step and that’s why universities have tended to house undergraduates in reasonably close proximity, encourage undergraduate societies, and generally get bright intellectually-curious young people to talk to one another!

      • Sarah says:

        I think today a lot of people don’t actually get to do that as undergraduates in dormitories. I didn’t.

        The LW/rationalist community probably maps better to historical countercultures or intellectual movements than to universities. I see a lot of commonalities between us and the Beats, for instance, or between us and Heinlein’s science fiction circles.

      • Anon256 says:

        I did get this in university, and was very disappointed when I had to leave it after four years. (Grad school isn’t the same, as people don’t live together and have little occasion to interact outside their own departments.) The LessWrong community (particularly the in-person meetups in certain cities) is the closest thing I’ve been able to find to replicating the university community experience as an adult.

    • Sarah says:

      Another comment in the same vein: “can a bunch of smart guys with X just kick everyone’s ass?” is a question that has been answered in the affirmative for various values of X.

      If X is “statistics” or “Silicon-Valley level software engineering” or “the MBA curriculum”, then this is the case for quantitative finance, most B2B tech startups, and management consulting, respectively, compared to established corporations.

      If rationality ever becomes a *demonstrable skill*, it would plausibly fit in this category. People with general-purpose tools outperforming domain experts is nothing new.

      • Professor Frink says:

        I’m not sure how well two of the examples work.

        For management consulting and finance, we do the work for the corporations. It’s more like “can our large team of MBAs, supplemented by quantitative PhDs, help your small team of MBAs solve their problem?” Sure. Management consulting is like software consulting – the recognition that it’s nice to be able to rent support staff when you have a hard problem but can’t afford a lot more full-time employees.

        And with quant finance, banks and investment firms have recognized the power of statistics forever. The large, successful, stable quantitative finance groups all tend to be arms of existing corporations that have moved more money under quant management as big data capabilities have grown.

    • Arthur B. says:

      I think the central tenet of the “rationality movement” laid out in the sequences is that there *is* a potential “art of Rationality”, as yet mostly undiscovered, which ought to allow people to live better, make more money, and advance science faster than the “experts” of the early 21st century

      That isn’t my reading of the sequences; I saw them merely as a good introduction to a lot of good analytical philosophy. What you’re describing is mostly an impetus started by Patri Friedman around the idea of self-improvement.

      I have no doubts that good philosophy can help the potential expert out-expert other experts, but it’s not going to turn the average Less Wronger into an expert. There are a lot of other factors at play for which mere knowledge of epistemology, or even rationality “techniques”, is not a good substitute.

    • Deiseach says:

      historical medical innovations, say 1850-1970, were radical improvements developed faster and cheaper than anybody thinks treatments can possibly be developed today

      But perhaps a lot of the rapid progress there was low-hanging fruit – “Holy crap, you mean washing my hands after I’ve been chopping up corpses and before I then go on to examine women in childbed means I am less likely to kill my post-partum patients? The devil you say!”

      Whereas it’s harder to make the same kind of rapid advances when it’s a case of tweaking molecules to see if this new version of a drug does a bit better than the old version.

    • Stephen Frug says:

      This comment really ought to be in the next “best comments” round-up.

    • vV_Vv says:

      “Can a bunch of smart guys with Google Scholar just kick everyone’s* ass?” is “only once in a while, and you have to get lucky.”

      So what? Even a stopped clock is right twice a day.

      But I am left with an intuition that more is possible — that, as an amateur, I think I did a lot better than the ideology of “only experts know anything” would suggest was possible.

      Beware confirmation bias and the Dunning–Kruger effect.

      I believe that a group of people could be doing much better at medicine than the biomedical field is doing today; I *don’t* believe I know exactly how this would be done.

      I suppose that “a bunch of smart guys with Google Scholar” isn’t the answer.

      • Arthur B. says:

        It’s really not the Dunning-Kruger effect, but I doubt it’s a huge “rationality” effect either. Sarah’s simply much smarter than most doctors, and it doesn’t hurt that she actually cares about the truth.

        • Adam says:

          I kind of hate to beat on the same old drum, but she’s also unencumbered by the regulatory effects forcing actual doctors to be concerned with so much more than just medicine and the truth, not to mention an incentive structure that pays you a lot more to spend all your time seeing patients rather than keeping up with current research.

    • Eli says:

      In medicine, I don’t think rationality gives you the ability to *singlehandedly* outperform the established experts. It certainly isn’t insulation against simple ignorance, and I’ve eaten crow time and again because I didn’t know a critical fact. We tried the experiment, and the answer to the question “Can a bunch of smart guys with Google Scholar just kick everyone’s* ass?” is “only once in a while, and you have to get lucky.” (where “everyone” is the Mayo Clinic and comparable organizations.) As an amateur, I didn’t conclusively beat the pros. In a certain sense, I played fair and lost.

      I’m not sure why people keep thinking that “rationality” is at all supposed to actually substitute for domain knowledge, just because Eliezer is butthurt about not having any university degrees.

      Wouldn’t you expect that rationality would show its comparative advantage when used by people who already possess domain expertise, credentials, and a publication record?

  17. Phil says:

    fwiw,

    I would love an explicitly meta “how to read studies” post

    a lot of your posts very impressively synthesize what I imagine are pretty dry studies

    I have trouble imagining myself recreating what you’ve done

    (though maybe the answer is simply that you have a higher tolerance for slogging through studies than most people)

  18. Alex Richard says:

    > remember: decreasing your susceptibility to Type I errors will always increase your susceptibility to Type II errors, and vice versa!

    What? Why?

    • Scott Alexander says:

      I’m thinking of the medical testing model. Suppose you have a test that reads 1–100, where for convenience the reading equals the percent chance that the patient has cancer. You have to decide whether to operate based on the test.

      If you choose a cutoff of 20, then you will catch most real cancer, but you will also perform a lot of unnecessary operations.

      If you choose a cutoff of 80, then you will perform very few unnecessary operations, but also miss a lot of cancer.

      I probably should have clarified that I was referring solely to this dynamic, and that you can reduce Type I or Type II errors for free by inventing a better test.
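
      A minimal numerical sketch of that cutoff trade-off (the uniform distribution of readings below is an assumption made only so the example runs):

      ```python
      # Trade-off between unnecessary operations (Type I) and missed cancers (Type II)
      # as the operate/don't-operate cutoff moves, for a fixed test.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000
      score = rng.integers(1, 101, n)              # test reading, 1-100 (assumed uniform)
      has_cancer = rng.random(n) < score / 100     # reading = % chance of cancer

      for cutoff in (20, 80):
          operate = score >= cutoff
          type_1 = np.mean(operate & ~has_cancer)  # unnecessary operations, per patient
          type_2 = np.mean(~operate & has_cancer)  # missed cancers, per patient
          print(f"cutoff {cutoff}: Type I rate {type_1:.3f}, Type II rate {type_2:.3f}")
      ```

      Moving the cutoff from 20 to 80 trades roughly 0.3 of Type I error for roughly 0.3 of Type II error; with the test fixed you can only slide along that trade-off, whereas a sharper test shrinks both numbers at once.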

      • Alex Richard says:

        The medical testing model is almost the exact opposite of what you’re saying in the rest of that section; you assert that rationality should make you outright better at thinking clearly, which surely means something more than just deciding to take more risks on what you believe.

  19. James James says:

    “a 10% chance of immortality for a couple of dollars a month is an amazing deal”

    Isn’t it more like $200/month?

    I’d throw $2,000 on a 10% chance of immortality, but since the actual cost of cryonics is more like $200,000, I’d rather leave the $200k to my children. (In any case, my children would be cross if I tried to spend $200k on cryonics — they want the money. I wonder if the wealth vs probability-of-signing-up-for-cryonics curve is lower for people with children.)

    • anon says:

      Not sure about children, but the “hostile wife phenomenon” is somewhat well documented among Cryonicists.

    • fubarobfusco says:

      The life insurance is more like $30-$40/month if you’re passably healthy. “A couple of dollars a day” would be more accurate than “a couple of dollars a month”.

      • Doctor Mist says:

        Like all life insurance, it depends heavily on how old you are when you start. I was fortunate that my parents took out a policy on me when I was five (Why? I never asked) that I maintained as an adult until I was 45 (Why? It cost next to nothing) when I joined Alcor.

        Alcor’s prices went up over time, and I had to get more insurance at age 58. That was pretty expensive, but still on the order of a Starbucks every day.

    • Jai says:

      http://www.cryonics.org/membership

      Under $30,000 cash, total (doesn’t include standby).
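
    Taking the figures quoted in this thread at face value (the payment horizon below is an assumption, not anyone’s quoted number), the back-of-envelope arithmetic looks like this:

    ```python
    # Back-of-envelope check on the cost figures quoted above.
    monthly_premium = 35          # midpoint of the $30-$40/month insurance estimate
    years_paying = 40             # assumed payment horizon, for illustration only

    per_day = monthly_premium * 12 / 365
    total_premiums = monthly_premium * 12 * years_paying

    print(f"premium per day: ${per_day:.2f}")                          # about $1.15
    print(f"premiums over {years_paying} years: ${total_premiums:,}")  # $16,800
    # Compare with the lump sums mentioned above: roughly $30,000 (the cash figure
    # Jai links) and $200,000 (the figure James James quotes).
    ```

    On those assumptions the insurance route comes to roughly a dollar a day and on the order of $17,000 in lifetime premiums, which is why “a couple of dollars a day” is a better summary than either “a couple of dollars a month” or the headline lump sums.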

  20. Urstoff says:

    My limited experience with LW and the rationalist community led me to conclude that they’re just about as rational and likely to be correct about things as any other group of scientifically literate individuals. Being a philosopher, the main sticking point for me was, of course, assumed philosophical positions, particularly physicalism and reductionism: the former because it’s underspecified (what do you mean by that) and the latter because that’s not actually how science works (and again, is underspecified). I tried reading the sequences, but EY’s too-cute dismissals of philosophical problems (especially in smarmy dialogues) led me to doubt his being wholly committed to rationality (in the broad sense of the term). This, of course, could just be defensiveness on my part. And it is a large community, so maybe there are more philosophically serious members of the LW community. I like the amateur scientist/philosopher aspect of it; getting non-specialists to think about these things is always a good thing. But being a non-specialist should automatically make you suspicious of your own conclusions. I have beliefs about economics and am fairly well-read in economics, but I’m not equipped to be a professional economist, so my confidence in my own economic beliefs is not particularly high.

    • Luke Somers says:

      > assumed philosophical positions, particularly physicalism and reductionism: the former because it’s underspecified (what do you mean by that) and the latter because that’s not actually how science works (and again, is underspecified)

      Assumed? Argued for. Did you read the sequence ON reductionism, say? What needs to be clarified about the idea that there exists some set of universal laws which, possibly with an initial state, completely specify the universe (reductionism)? And that these laws make no direct reference to mental states (physicalism)?

      What is under-specified about these really wikipedia-level definitions? Like, they themselves do not comprise a grand unified theory? That seems too much to be asking for, but I can’t really see what else you might want from them short of it.

      Whether or not reductionism is how science works seems remarkably aside from the point.

      • Urstoff says:

        That’s not a definition of reductionism as far as I can tell. Reductionism is a thesis about the relationship between different theories or between the different ontological entities invoked by those theories. Saying that a set of universal laws plus an initial state specifies the universe doesn’t really have anything to do with that, as such a thesis is compatible with eliminativism (there are no mental states, only neurons; there are no neurons, only molecules; etc.), some type of reductionism, and even types of pluralism (after all, the laws may just govern mental states; that’s no more sui generis than laws that govern certain types of physical states, as all laws are sui generis).

        For physicalism, first you need some sort of definition of “physical”, and then you need an articulated stance on reductionism/eliminativism/pluralism (although you can have limited reductionist/eliminativist/pluralist theses that only apply to certain theories and thus are neutral regarding physicalism). The latter is needed given that physicalism is typically a statement along the lines of “everything that exists or could exist is physical”, and obviously you’ll need some sort of thesis, be it reductionist/eliminativist/whatever, to show how things that aren’t obviously physical (but not obviously non-physical) like beliefs, social relations, etc., are either themselves physical or aren’t actually things.

        • Samuel Skinner says:

          “as such a thesis is compatible with eliminativism (there are no mental states, only neurons; there are no neurons, only molecules; etc.), ”

          How? Eliminativism sounds incoherent. We use words to describe things we find in the world so I’m not sure in what way a claim like “there are no neurons” could ever be meaningful.

          “and even types of pluralism (after all, the laws may just govern mental states; that’s no more sui generis than laws that govern certain types of physical states, as all laws are sui generis). ”

          … what? Isn’t “set of universal laws” inherently incompatible with “different laws for different things”?

          ” are either themselves physical or aren’t actually things.”

          How are those unclear? Beliefs are in people brains. If people get hit in the head hard enough they can forget things. How is that possibly something contentious in philosophy?

          Social relations don’t seem unclear either. They are descriptions of actions and feelings in the world.

          • Deiseach says:

            Beliefs are in people brains. If people get hit in the head hard enough they can forget things.

            But forgetting something does not cause it to cease existing. My memories of my family are in my brain; I’m sure if I got hit in the head hard enough, I could forget them, but that does not mean that my siblings would suddenly cease to exist or never have existed at all.

            So either a belief is independent of the brain that holds it for its existence, or it is completely dependent, e.g. I used to believe in Krishna as an existent entity but then I got whacked in the head by a football and now I don’t believe that silliness any more.

            So is a belief only a matter of matter, i.e. it is only a particular response due to the flow of thought in the physical substrate of the brain and if the current is switched off, the belief does not exist, or has it an existence like a chair? I can take hold of a chair and sit on it, but how can I take out a belief and show you it in a visible form?

          • James Picone says:

            But forgetting something does not cause it to cease existing. My memories of my family are in my brain; I’m sure if I got hit in the head hard enough, I could forget them, but that does not mean that my siblings would suddenly cease to exist or never have existed at all.

            How is that remotely relevant?

          • wysinwyg says:

            How? Eliminativism sounds incoherent. We use words to describe things we find in the world so I’m not sure in what way a claim like “there are no neurons” could ever be meaningful.

            So obviously I’m not Urstoff, but I’ll take a crack at this.

            This might help: http://plato.stanford.edu/entries/materialism-eliminative/

            Consider the term “phlogiston”. It was once thought to definitely refer to something, the existence of which was inferred from a particular cluster of phenomena. The current consensus is that the theoretical entity to which the term “phlogiston” referred did not actually exist.

            The general idea of eliminativism is that all theoretical terms are like “phlogiston” — they may be useful placeholders, but they have no ontological weight.

            For example, take the term “shrub” (I’d use “tree”, but it’s already ambiguous between biological entities and mathematical concepts). We use the term “shrub” to refer to what appears at a macro level to be a singular entity, but on closer inspection the “shrub” actually consists of plant cells. That is, the term “shrub” can be taken to be a placeholder for an assemblage of plant cells with such and such a set of properties rather than ontologically privileging the shrub itself. If we “take away” the cells, there is no shrub — the “shrub” is not a thing-in-itself or whatever abstruse philosophical way you want to say it.

            Similar with cells — “cell” is just a placeholder for a complex assemblage of atoms, and so on down.

            From my perspective, reductionism vs. eliminativism is a purely semantic distinction. It’s really a question of how you want to define “existence”. Do you want to admit abstract entities like a “perfect circle” as actually existing? What about someone’s thought about a perfect circle?

            … what? Isn’t “set of universal laws” inherently incompatible “different laws for different things”?

            No. The fact that many different laws for many different phenomena have been adduced has been no bar to the conjecture that there is such a thing as a set of universal laws.

            More importantly, non-linear dynamics, quantum indeterminism, and the general concept of “emergence” (if we’re going to credit it at all; I understand there’s some resistance to the notion among LWers; substitute “network effects” if you want) open up the possibility that the evolution of the universe can’t be inferred from a general set of rules along with a set of initial conditions. Even if we have a completely general, universal set of laws governing quarks (or whatever physical entities turn out to be fundamental, assuming such an ontological model makes sense in the first place), it might not tell us very much about the “laws” on which cognition depends, any more than we could use it to derive the rules of French grammar.

            How are those unclear? Beliefs are in people brains. If people get hit in the head hard enough they can forget things. How is that possibly something contentious in philosophy?

            But the theory here, as Urstoff points out, is underspecified. This observation is compatible with the theory that “memories” are physical objects and that when someone is hit in the head hard enough, the “memories” physically break down or degrade to the point where they no longer fill their functional purpose. But this is an unlikely theory.

            So even though it’s not the least bit contentious that heavy blows to the head can impair cognition to a greater or lesser extent, there is still a great deal of room for alternate ontological models of cognition within the very broad constraints imposed by that observation. Are memories specific activation pathways through the neural substrate? Memories seem to be dynamic; if a part of a pathway is damaged but the memory is retained by rerouting the activation pathway through an undamaged part of the substrate (subtly changing the quality of the memory, changing a blonde to a redhead for example) should we consider it to be the same memory? A related memory? How close does a “memory” have to be to the original event to constitute a “memory” as opposed to a “confabulation”? How do we even measure the closeness of a memory to an original event?

            The concept of a “memory” makes perfect sense in macro human scale, but at the scale of brain implementation details, is it more of a misleading generalization? In other words, is a “memory” like “phlogiston”? “Is it physical” and “is it actually a thing” are actually worthwhile questions to consider in this light, I think.

          • Eliminativists are usually saying that various things don’t exist in a strictly scientific sense, not in an everyday sense.

            “Beliefs are in people brains”

            Thing is, you have to actually perform the reduction…show how they are composed of smaller parts.

          • Samuel Skinner says:

            “The general idea of eliminativism is that all theoretical terms are like “phlogiston” — they may be useful placeholders, but they have no ontological weight.”

            They describe how things are, but they don’t have any value in describing how things are? I’m sorry, when I unpack ontological weight, that’s what I get.

            “From my perspective, reductionism vs. eliminativism is a purely semantic distinction. It’s really a question of how you want to define “existence”. Do you want to admit abstract entities like a “perfect circle” as actually existing? What about someone’s thought about a perfect circle? ”

            I’m not following your example. Of course things are made from smaller parts. If that weren’t true, then the number of things would be equal to the number of fundamental particles.

            “No. The fact that many different laws for many different phenomena have been adduced has been no bar to the conjecture that there is such a thing as a set of universal laws.”

            If there are different laws for different things, by definition there is not the same law for everything. I’m not talking about the difference between surface tension and nuclear fusion. I’m talking about laws like whatever physics has decided is currently the most fundamental.

            “More importantly, non-linear dynamics, quantum indeterminism, and the general concept of “emergence” (if we’re going to credit it at all; I understand there’s some resistance to the notion among LWers; substitute “network effects” if you want) open up the possibility that the evolution of the universe can’t be inferred from a general set of rules along with a set of initial conditions.”

            That just boils down to if certain processes are truly random we have a limit to the amount of predictive power. I’m not seeing that as a contradiction.

            “But this is an unlikely theory.”

            How is it unlikely? We have people with brain damage that affects their ability to have beliefs about the world.

            ” Are memories specific activation pathways through the neural substrate? ”

            Current moving through a brain is a physical object.

            “should we consider it to be the same memory? A related memory? How close does a “memory” have to be to the original event to constitute a “memory” as opposed to a “confabulation”? How do we even measure the closeness of a memory to an original event?”

            You are conflating two things. The event being remembered and the brain keeping track of it for you. The part of the brain is the same, even if it no longer matches the event.

            As for confabulation and tracking divergence, they do have experiments to track it; e.g. how many details are wrong.

            “Eliminativists are usually saying that various things don’t exist in a strictly scientific sense, not in an everyday sense.”

            What is the scientific definition of chair?

            “Thing is, you have to actually perform the reduction…show how they are composed of smaller parts.”

            I don’t think people would be happy if I started cutting up living individuals brains to see if I could create and destroy memories. That seems like the sort of behavior that is highly illegal.

        • Luke Somers says:

          There’s reductionism as a scientific practice, which is how it works sometimes but not other times, and there’s philosophical reductionism. Eliezer relies on the latter and notes how many theories nest nicely via the former. He didn’t claim that every scientific theory must reduce to a lower level such that it is practically useful to do so at all times, though I’m sure he said that every non-fundamental scientific theory must admit a lower-level explanation in principle.

          If you could provide a concrete example of something objectionable, that would be nice.

          You noted that reductionism is compatible with pluralism. This is why I included physicalism as a separate adjective.

          The difficulties in defining physicalism really only come up in hypotheticals. As it turns out, our world is nothing like them, so I don’t see the problem. The fundamental rules appear to have to do with things many, many, many layers of reduction below what we’d call physical.

          If chairs are real, beliefs are real. Neither of them is fundamental – they are made of other (non-mental) things. That’s physicalism in a nutshell.

  21. Jordan D. says:

    This all seems pretty much correct.

    I’ve noticed a lot of criticisms of LW tend to take the form of criticisms of EY.* And yeah, you can criticize EY. Anyone who writes a thousand-page book and debates it for years on the internet will end up saying things which are, at the least, not entirely right. I actually think that EY does reasonably well for being such a prolific contrarian author, but as people above have noted, there is at least some truth to Hallquist’s claims.

    …but that doesn’t matter very much, because LW’s failings have nothing to do with EY’s failings. Much ink has been spilt over EY’s QM Sequence, but I suspect a bare minority of his readers actually, really, truly care about MWI.** I didn’t even know about his dietary views until I started reading some critics, and they strike me as approximately no more or less confused than everyone else in the entire world on the topic of diet.

    But sober criticisms of LW tend to look more like Scott’s Extreme Rationality post, or demonstrations that rationality practice doesn’t seem to make much of a difference versus domain expertise.*** EY’s contrarian beliefs are, at best, weak evidence against the whole community- or even against the majority of his Sequences.

    It seems to me like a lot of these arguments are of the form ‘Here is some negative affect to apply to EY – and you should apply that to the rest of LW too, because they blindly follow him.’ But I don’t think they do blindly follow him. Half of the comment threads on the Sequences are people arguing against everything he says! Whenever an SSC post addresses these kinds of critiques, MORE than half of the replies are negative analyses!

    If LW is a cult, it’s the most self-deprecating one I’ve ever seen.

    ~

    *I don’t count RationalWiki here, as that article appears to be powered exclusively by one person with some sort of powerful and arcane grudge.
    **I have no doubt that the issue will prove to be vitally important in some way! Probably. Maybe. Okay, some doubts.
    ***My personal suspicion is that most domains in which you can have expertise are relatively healthy in the implementation phase, and that the Art will prove valuable – if at all – mostly in aligning incentive structures, keeping people from getting swindled and generally helping people decide what to spend their focus on. On the other hand, I think that trying to apply rationality to my private life has been tremendously helpful, and I have a strong belief that a lot of people could benefit in that arena.

    • Jiro says:

      Anyone who writes a thousand-page book and debates it for years on the internet will end up saying things which are, at the least, not entirely right.

      Anyone who writes a thousand page book will end up having a couple of items that are wrong, but that’s far from being wrong in one of the linchpins of his book. MWI is just too prominent among his writings for this to just be blamed on “everyone makes an occasional mistake”.

      • Luke Somers says:

        A) it’s a minor sequence, not one of the main ones. It’s one of the more CONTENTIOUS ones. It’s also relatively large, but that has more to do with its being kinda complicated, ya know?

        B) But in what way is he wrong? It’s much less of a booster on MWI than it is a takedown of Copenhagen, which, lo and behold, is utter garbage. He didn’t address other interpretations because A) they’re less prominent – yes, he could have been better on this – which relieves Science of the accusation that it allowed them to maintain hegemonic prominence for generations, B) they’re less silly/horrible than Copenhagen, and C) most of them work out so close to MWI that it amounts to quibbles (and the other is Bohm).

    • Anonymous says:

      Erm, isn’t the whole of RationalWiki powered by someone with that grudge? Have you read anything else on the site? The connection it has to rationality seems to be limited to its name. The content is purely smug blue tribe poo-flinging directed at the red tribe.

  22. Jiro says:

    What is the difference between Hallquist believing that he disproved one of the world’s most famous philosophers when he was twelve years old, and Eliezer believing that he solved the problem of consciousness when he was thirty-something?

    Because 1) the famous philosopher’s argument was already known to be false, and it’s more plausible that he stumbled on an answer to a solved problem than to an unsolved problem, and 2) the famous philosopher’s argument was motivated reasoning tied to a religious system, and it’s more plausible that an argument has a flaw visible to a 12-year-old when the argument is based on motivated reasoning.

    > But in the very first post of his quantum physics sequence, Eliezer warns:

    It is common for people asserting things with great certainty to add a disclaimer that they are being appropriately humble, etc. What determines whether they are asserting things with great certainty is the bulk of their claims, not the disclaimer.

    > Presumably, if Aquinas’ arguments are really stupid, but everyone believed them for five hundred years, this would imply there is something wrong with everyone.

    It implies that people believed them for reasons other than their logical coherence. Having an organization with great social power support your idea does wonders for getting people to believe it, regardless of how absurd it is.

    > “Flitting from diet to diet, searching empirically for something that works.” SUCH OVERCONFIDENCE. SO CERTAINTY. VERY ANTI-SCIENCE.

    You ignore that Eliezer expresses a greater degree of certainty when he is trying to get people to believe his diet theories than he expresses here. It is that greater degree of certainty which people consider overconfident and anti-science. If he acted all the time like he does in this quote, it would be fine, but he doesn’t.

    Just like in the MWI disclaimer case, the fact that he’s being appropriately uncertain in one place doesn’t mean that he is in another.

    • Scott Alexander says:

      Plantinga is universally known to be false? I think that would be news to the philosophy of religion community.

      I’m not sure you can point to this entire corpus of Eliezer being overconfident and wrong about diet. My source for a broader perspective on his views is this post: http://lesswrong.com/lw/a6/the_unfinished_mystery_of_the_shangrila_diet/

      As for QM, I sort of agree with you except that Hallquist framed it in exactly those terms – he said “I put one paragraph about maybe being wrong in my book, why didn’t Eliezer do the same?” When that’s the criticism, “he included a paragraph about how he might be wrong” is an acceptable response.

  23. Liskantope says:

    In some respects this is fair; Eliezer was certainly the founder of the community and his writings are extremely influential. In other respects, it isn’t; Margaret Sanger was an avowed eugenicist, but this is a poor criticism of Planned Parenthood today, let alone the entire reproductive rights community; Isaac Newton believed that the key to understanding the secrets of the universe lay in the dimensions of Solomon’s Temple, but this is a poor critique of universal gravitation, let alone all of physics.

    As a relative outsider to the LW community who was only introduced to it fairly recently, I find this type of analogy a little questionable. For one thing, LW is fairly young, and its founder is still alive and active and seemingly considered pretty much the face of the group. Most discussions I see where people are asking how to get into LW stuff involve recommendations to start by reading the Sequences (usually accompanied by allusions to how intellectually life-changing they are). Meanwhile, detractors commonly accuse LW of being a cult that worships EY as a guru. Furthermore, I’m skeptical of the Newton analogy because “universal gravitation” refers to a single scientific model, rather than a community of people who have almost all read a fairly comprehensive collection of his written works which includes his views on Solomon’s Temple. (Maybe I’m taking that analogy too literally.)

    That said, I’ve felt free to self-identify as a rationalist and even sort of part of the rationalist community (even though my direct interaction with community members has been fairly minimal), and I haven’t read most of the Sequences or posted on LW. Then again, unless I actually post on LW one day, I certainly won’t claim to actually be part of the LW community.

  24. Edward Scizorhands says:

    rats, like Asians and prisoners,

    When the revolution happens, if SSC’s side is losing, then I’m using this line to say I wasn’t ever part of their side.

  25. Doug says:

    All of these arguments – cryonics, qualia, nutrition, and MWI – are things about which there is no scientific consensus. Two of them (qualia and MWI) are things which we are pretty sure can never be settled by the normal experimental methods. I happen to disagree with Yudkowsky about some of these things, but to me they seem really incidental to his main efforts, which are to use sane, scientific methods to address more tractable problems. Topher has attacked him on precisely those areas where Topher has no proof (and neither does anyone else) that he is right and Yudkowsky is wrong.

    • LTP says:

      “to me they seem really incidental to his main efforts, which are to use sane, scientific methods to address more tractable problems”

      I feel like there’s a motte-bailey issue whenever I engage with the rationalist community, and this post exemplifies it.

      To me, what you wrote is the motte of Yudkowsky’s writings: teaching laypeople about cognitive biases, probability, how to think when engaging with science, and techniques about critical thinking. These are pretty uncontroversial, and in my (admittedly, not comprehensive) experience, none of the information is particularly original to Yudkowsky or the rationalist community.

      Then there’s the bailey, which is about pushing Yudkowsky’s pet philosophical views onto people (while straw/weakmanning professional philosophy and acting like his philosophical views are both more original than they actually are and total slam-dunk cases) and encouraging them to *not* engage with mainstream philosophy; pushing extremely high-confidence assertions about what are at best highly speculative ideas (and, at worst, bizarre quasi-religious beliefs) about the singularity, its imminence, and how this should significantly factor into one’s views on what to do morally and with one’s money (MIRI); an excessive skepticism towards mainstream intellectual institutions and excessive contrarianism (and I say this as somebody who values both those things); and using the language (but not necessarily the substance) of the motte content to lure people into believing these things with too much confidence.

      I will note, though, with all of this, that I have by no means done an extensive study of the LessWrong sequences, so this is definitely an outsider’s view.

      • Emile says:

        I think LW engages with academic philosophy more than many places on the internet (though Eliezer himself doesn’t seem to much). See this for example:

        http://lesswrong.com/lw/4vr/less_wrong_rationality_and_mainstream_philosophy/

        • LTP says:

          “more than many places on the internet ”

          Fair enough (though that’s not saying much!), and that’s a positive for the community. I will note that I was referring mostly to Yudkowsky and the sequences in my post.

          But even in the community itself, I think there isn’t as much as there should be given how much philosophy is talked about there, though it’s certainly better than a lot of internet discussions. But it’s not even close to being as egregious as Yudkowsky, who seems to just disdain philosophy (to his detriment, in my view).

      • walpolo says:

        LTP, this seems exactly right to me as well.

      • Not That Scott says:

        I feel like there’s a motte-bailey issue whenever I engage with the rationalist community, and this post exemplifies it.

        To me, what you wrote is the motte of Yudkowsky’s writings: teaching laypeople about cognitive biases, probability, how to think when engaging with science, and techniques about critical thinking.

        Then there’s the bailey, which is about pushing Yudkowsky’s pet philosophical views onto people (while straw/weakmanning professional philosophy and acting like his philosophical views are both more original than they actually are and total slam-dunk cases) and encouraging them to *not* engage with mainstream philosophy; pushing extremely high-confidence assertions about what are at best highly speculative ideas (and, at worst, bizarre quasi-religious beliefs) about the singularity, its imminence, and how this should significantly factor into one’s views on what to do morally and with one’s money (MIRI); an excessive skepticism towards mainstream intellectual institutions and excessive contrarianism (and I say this as somebody who values both those things); and using the language (but not necessarily the substance) of the motte content to lure people into believing these things with too much confidence.

        I don’t think I agree. What I see is Eliezer sketching out a motte and bailey, and then a lot of rationalists living in the motte and (pretty honourably) not living in the bailey unless they’re willing to defend themselves on the bailey’s grounds (the issue with mottes and baileys is that the motte is used to defend the bailey; if you just defend the bailey you’re outspoken, not fallacious).

        A lot of criticism of LessWrong comes from people who seem to read the sequences, sense – perhaps unconsciously – the existence of a motte (“how to reason well and carefully”), and a bailey (“we know better than the experts”), and decide it’s important they make their criticism of these misguided folk. This necessarily takes the form of criticising EY, because he’s the only one who’s said bailey-like things.

        Reading their critiques, it feels like I should see a bunch of rationalists walking around self-importantly telling experts they’re wrong, evangelising their own wacky philosophies, then predictably retreating to “but I’m just saying we should try to reason correctly!”.

        But… I don’t see this. I actually see rationalists being pretty careful. The comments of every LessWrong post are full of clarifications, considerations, and critical responses. CFAR has a strong strain of self-help running through their workshops. If the critics were right, I’d expect participants to be subjected to anti-modalism or Objectivism lectures, not something so mundane and empirical as tactics for making habits stick and goal-factoring exercises.

        We don’t really see rationalists taking on monolithic, settled fields of science armed only with a spaced-repetition deck on Bayes theorem and cognitive bias. Rationalists mostly seem to be interested in unsettled, contentious, important questions that have traditionally been heavily weighed down with bullshit and misdirection – diet is the perfect example. That doesn’t seem like rationalists deciding they know better than experts. That does seem like rationalists putting their supposedly-trained reasoning skills to the test on the problems that are most in need of clear thinking, evidence integration, and correcting for bias.

        Whence the critics’ belief that we’re all out there proselytising?

        I think it comes from within. I think the critics are trying to imagine rationalists and are substituting themselves or someone like them in. I don’t blame their conscious minds for this – it probably has a lot to do with their social circle being homogenous.

        When Google tried to teach the deepdream image neural network what weightlifting dumbbells looked like, it learned that there was always a bodybuilder’s arm attached, because the particular selection of images used happened to always depict a bodybuilder holding it. So too for the critics – there are certain behaviours common to everyone they know, so when they typical mind fallacy, it’s only natural those behaviours end up in the model.

        So I think a lot of the vitriolic “you’re a cult with arcane higher knowledge” criticism comes from critics for whom it is just obvious – an unstated assumption, almost unconsciously so – that anyone offered that kind of dynamic would take advantage of it in a cult-like way.

        In that position, if someone handed them or anyone they knew such an incredibly powerful motte and bailey of “I am always right / i’m just saying we need to think more clearly, how could you be against that?”, they would go to town. I think they just don’t see the possibility that someone, on being handed an impregnable motte and wide expanses of rich bailey, would stay inside the motte and only utilise the bailey “fairly”, when they’re willing to actually defend it. And I think they don’t see that possibility because their model of people is based on data drawn from a homogenous population, their social group.

        Everyone typical-minds, that’s a human universal. But the way in which a person typical-minds can tell you a lot about what that person has learned is typical of minds.

        “New Atheism” is particularly used to the method of ideological warfare employed by Hallquist, which I would characterise as cutting off the target so he can’t retreat; pinning him down; forcing him to fight on the bailey instead of hunkering down in the motte. The universal criticism of New Atheism is that they fail ideological Turing tests. They are regularly accused of demonstrating a lack of understanding of what the religious people they are criticising actually think. This regular criticism is apt because it’s true – New Atheists have never laid siege to a motte, and they would look at you strangely if you did suggest laying siege to a motte. That’s just not how they fight. I’m not as critical of this as I sound. When you’re criticising the stereotypical religious type, leaving aside whether it achieves anything, their motte is something like “I have a personal faith that I don’t act on, but it does make me psychologically healthier”. Impregnable indeed.

        (I think that’s what Scott was picking up on when he said “I worry that Hallquist’s New Atheism background may be screwing him up here: to critique a movement, merely find the holy book and prophet [and] prove that they’re fallible”).

        If this is the case, the reason Hallquist and others ignore all of EY’s caveats and admonishments and warnings is because those basically aren’t visible to them – they just haven’t encountered anyone who announces to his enemies when he’s heading out into the bailey, so the announcements might as well be in a foreign language. One is left with the bemused experience of watching Hallquist tactically and efficiently cut off the lines of retreat for an enemy who isn’t there.

        Go ahead and criticise EY. Lord knows he can take it. But I don’t think in the Sequences he was trying to construct a motte so he could have his bailey. I think he was giving us a grand tour of the motte, and taking us on a couple of excursions out into various baileys to deliver a lesson. “If someone attacks you in the bailey, stand and fight. You stuck your neck out, if it deserves to get chopped off, let it get chopped off, don’t wuss out”, or something like that.

        Then there’s the bailey, which is about pushing Yudkowsky’s pet philosophical views onto people (while straw/weakmanning professional philosophy and acting like his philosophical views are both more original than they actually are and total slam-dunk cases) and encouraging them to *not* engage with mainstream philosophy

        So, LessWrong is incredibly willing to listen to criticism.

        The 7th most highly-rated post of all time on LessWrong is lukeprog, making the closest correct argument to what you’re saying. And LWers didn’t just upvote it for signalling, they acted on it – there’s like 70 recommendations in this thread (incidentally, the 11th most upvoted post of all time).

        That doesn’t fit with your model of LWers having been encouraged by Yudkowsky to not engage with mainstream philosophy.

        Oh, yeah, and the most upvoted post of all time on LessWrong, ahead of Generalising From One Example(!) and Dissolving Questions About Disease(!!) by a good thirty points(!!!), is GiveWell being extremely critical of MIRI – not just “don’t donate”, but “donating to them makes unfriendly AI more likely, not less”.

        And this just doesn’t fit at all with your model of LW being a place where Yudkowsky cultishly encourages rationalists to donate money to MIRI.

        If your model was anywhere near correct, it would predict that kind of post would be memory-holed, or at least pushed under the rug. Instead it’s at the very top. In fact, to my knowledge, the only post that’s ever been suppressed was a thought experiment that pressured people to donate more of their time and money. Your model obviously predicts such a post would be, if not highly-voted, at least not subject to deletion.

        (Like laughing at someone whose pants fell down, would it be gauche to point out that critics also regularly mock EY for taking that seriously? That’s eating the cake and having it too.)

        In fact, your model seems to be completely upside down – the fate it predicts for pressure to donate is what happened to advice against donating, and the fate it predicts for advice against donating is what happened to the pressure to donate!

        **

        What’s going on here is that LessWrongers are more than willing to self-critique. They’re so willing that they regularly allow most of their critics to drag them far afield, out into a distant bailey the LWer has never set foot in or laid eyes on before, and have the fight there! This post by Scott, saying “we don’t believe anything like that”, came about because Hallquist tried to drag LessWrong and Eliezer out to a bailey on a different planet.

        • Jiro says:

          Not That Scott: You touch on a theory that I’ve mentioned before: LW works too well. Eliezer thinks that making people rational would get everyone to believe his stuff. So he teaches them to become more rational. Unfortunately, it works too well and people who listen to him *actually* become more rational, and it turns out that actually becoming rational doesn’t lead people to believe his stuff after all.

        • LTP says:

          I don’t really have enough experience with LW to provide a point-by-point response to this post. You’re right that I’ve read more of Yudkowsky than the community itself. I do take your points about the self-criticism, though I will say that some of the instances of it at LW and adjacent communities *seem* to me (as an outsider) almost performative, and people never move beyond the meta and apply it back to the object level. For all the self-criticism, when I read self-identified rationalists, they don’t seem to be any more likely to have rational and correct beliefs than other educated people. And, while everybody likes to link to some highly upvoted self-critiques when this topic comes up, the most commonly linked things by far on LessWrong are still various Sequences posts (often treating the link as self-justifying), so while I don’t think LW is a single-minded cult or even close to one, there are still tendencies that make me shy away, personally. It’s avoided some of the worst failure modes of similar kinds of movements, but on the other hand, I don’t think it avoids enough of them.

          But, as I say, this is all the impressions of an outsider.

          Also, I don’t really get your typical mind digression, or how you got that from my post, but whatever.

    • Princess Stargirl says:

      Yudkowsky has been pretty explicit that his most important motivation for writing the sequences was to make readers rational enough to understand his views on AI risk. Convincing people of AI risk was the actual underlying goal of much of Eliezer’s efforts.

  26. Max says:

    Typo near the end of part 2:
    “I know this because he when he has them, he comes to me”

  27. aesthete says:

    “I worry that Hallquist’s New Atheism background may be screwing him up here: to critique a movement, merely find the holy book and prophet, prove that they’re fallible, and then the entire system comes tumbling to the ground. Needless to say, this is not how things work outside the realm of divine revelation”

    Not even particularly true in the case of systems of divine revelation.

    This strategy of refutation has always struck me as an exceedingly lazy tendency within atheism, and reflective of a lack of understanding of what the religious actually think outside of a very narrow subset.

  28. Douglas Knight says:

    David Deutsch is not an Oxford professor. He is an amateur who happens to live in Oxford.

    • Scott Alexander says:

      Wikipedia: “David Elieser Deutsch, FRS (born 18 May 1953) is a British physicist at the University of Oxford. He is a non-stipendiary Visiting Professor in the Department of Atomic and Laser Physics at the Centre for Quantum Computation (CQC) in the Clarendon Laboratory of the University of Oxford. ”

      Am I misunderstanding something?

      • Emily says:

        “non-stipendiary”? So, unpaid? He is an unpaid visiting professor?

        • Douglas Knight says:

          Which is perfectly normal. “Visiting professor” is usually a courtesy appointment, just an office and library privileges extended to someone taking a sabbatical from another institution, which provides the funding.

          • Emily says:

            I don’t know this guy from Adam, except for a few minutes of looking on Oxford’s website, but it looks like he’s been a visiting professor there for quite a while, so I don’t think he’s on sabbatical from another institution. This looks like someone who has a minor connection with Oxford that he is using the heck out of in order to gain prestige for other projects of his.

            Edit: the New Yorker says about him: “Though affiliated with the university, he is not on staff and has never taught a course.”

          • Douglas Knight says:

            Quite the opposite.

      • Phil says:

        Regardless of any details of Deutsch’s employment, I can say with some authority that he is widely viewed as a complete crank by working physicists (I am a physics postdoc and have been around various physics departments for a bit over ten years now). The ones who don’t view him as a crank mostly just don’t know who he is. I have never met a working physicist who thinks he is a trustworthy authority, and citing him as expert evidence on MWI or anything else strikes me as dubious. Admittedly I have not traveled in the circles where his type of work is popular (MIT and Oxford, for example).

        Still it strikes me as a relevant and under-appreciated fact that the vast majority of working physicists simply don’t care at all about MWI or related debates about interpretation of QM. This includes people using Bayesian methods as a research tool, and people who work on QM (low temp. condensed matter physicists, particle physicists, quantum computing researchers, etc). If MWI is irrelevant to the vast majority of actual productive physics research, then why did EY decide to write a massive, bloviating “sequence” of articles about it? Why use that as your way of accusing scientists of making mistakes? Beats me, but one strategy for judging the reliability of someone (who comments on some issues you don’t know well) is to see what they say about issues you do know well. And on this strategy EY has to look pretty ridiculous to most physicists.

        • Andrew G. says:

          Deutsch’s “group” is, according to its website, apparently attached to the Templeton funding teat, which I’ve found is usually a good reason to apply some skepticism.

        • Ilya Shpitser says:

          “Still it strikes me as a relevant and under-appreciated fact that the vast majority of working physicists simply don’t care at all about MWI or related debates about interpretation of QM.”

          Agreed. Working physicists have real work to do. See also: B vs F wars and working statisticians today.

    • Emile says:

      Wikipedia seems to disagree: https://en.wikipedia.org/wiki/David_Deutsch

      (or is this some kind of sarcasm?)

    • walpolo says:

      Deutsch isn’t a crackpot, he’s a real researcher, but his institutional affiliation is rather trumped-up.

  29. advael says:

    To draw another analogy that sounds a lot like an ad hominem attack but isn’t intended as one, Hallquist’s criticisms bore an eerie resemblance to a standard critique of new atheism: “They make some good points, and I agree with some of them, but they’re just so arrogant about it that I can’t take them seriously.”

  30. Eliezer Yudkowsky says:

    To be clear, in “Say It Loud” I am not saying that you should act more confident than you are, or fail to communicate uncertainty. I am saying that it is okay to communicate uncertainty by saying “60% probability” rather than two paragraphs of timid language. This may cause those who know not the Way to criticize your status-overreaching for asserting so vigorous and definite a probability. This may be a real PR problem but I don’t see it as an inherent ethical problem.

    • Scott Alexander says:

      Yes, the problem arises if you sound vigorous and definite but don’t assert a probability. You never gave a probability for cryonics working until someone specifically asked you for one way down in a comments section, which led people to read your vigorous and definite rhetoric as being 100% sure it would work.

      • onyomi says:

        Can’t one be defensibly certain about how certain one feels?

      • Eliezer Yudkowsky says:

        When you say “which led people to read…” do you mean that real humans read it this way, or just Tumblr humans?

        • anon85 says:

          I’m a human who doesn’t have a Tumblr account. I read your cryonics article as overconfident, because saying you’re sure cryonics is a good idea sounds like saying you’re sure it will work unless you put in appropriate disclaimers.

          • advael says:

            Can’t it just be a better idea than it is a bad idea?

            I mean, if I can assert with confidence that my expected utility for an action is higher than the expected utility for not doing said action, I’m always willing to do it, even if the margin is really small. That’s all that need be asserted when you express that you think something is a good idea.

            For example, I can’t tell my friends that I assert anywhere near 100% probability that their job search will work out in the current economy, but I can still tell them to keep looking based on thinking that the probability times the return in “improvement to their situation” for actually landing a job is larger than the cost in effort or rejection-bad-feels for looking.

          • Adam says:

            The expected utility has to be greater than something else you plausibly would have done with the same resources, not greater than doing nothing at all.

          • advael says:

            Good point, never hurts to consider opportunity cost when considering what the expected utility of a decision is.

            Still, I don’t think that implies that confidently asserting something is a good idea rises to the level of implying confidence that it will work. The whole premise of the “Cryonics is a good idea” assertion is that the cost of trying it is small and the payoff of it working is potentially huge (if your utility function values being alive a whole lot longer, which most do), not that it’s got a high likelihood of succeeding.
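
            To make that arithmetic concrete, here is a minimal sketch in Python; every number in it is an invented placeholder, not anyone’s actual estimate:

            ```python
            # Toy expected-utility comparison for the "cryonics is a good idea" claim.
            # All numbers are hypothetical placeholders, not anyone's real estimates.
            p_works = 0.05              # assumed probability that cryonics succeeds
            value_if_works = 1_000_000  # assumed utility of a much longer life (arbitrary units)
            cost = 500                  # assumed utility cost of signing up (fees, hassle)

            ev_sign_up = p_works * value_if_works - cost
            ev_do_nothing = 0.0

            print(ev_sign_up > ev_do_nothing)  # True: positive expected value despite low p_works
            ```

            Adam’s point above then amounts to replacing ev_do_nothing with the expected utility of the best alternative use of the same resources.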

        • Deiseach says:

          *blinks*

          That’s a bit snotty about people on Tumblr. I mean, I’ve given a few slaps myself to posts on Tumblr but I’m one of those “Tumblr humans”, too!

        • Saint_Fiasco says:

          Some of your thoughts are not obvious to people who haven’t already had those thoughts.

          I interpreted you as certain that cryonics will work until Scott pointed out an anomaly in one of his surveys, that people who sign up for cryonics assigned a lower probability of it working than people who don’t.

          • Adam says:

            Did that survey actually ask about the expected payoff to success that people believed they would get? I’ve never even thought about the probability that cryonics would work, but I am pretty certain that literal immortality is impossible and 800,000 years of existence is not at all guaranteed to be any more enjoyable than 80.

          • Saint_Fiasco says:

            It didn’t ask for expected payoff. I suspect that if it did, I would have realized how silly it is to put 5% probability in the survey when I am not, in fact, signed up for cryonics.

            I’m not very well calibrated.

            EDIT: IIRC, the questions were something like “what’s the probability that if you are frozen today you will be revived at some point in the future” and then “are you signed up for cryonics”

          • Adam says:

            That wording seems problematic for another reason. Reading Luke Somers in this thread, it seems he signed up not because he believes he can be revived in the future, but because he believes future scientists will be able to use non-computed tomography to recover his brain state at the time of his death and then simulate him. You could assess a high probability of simulation but a low probability of revival.

          • Saint_Fiasco says:

            I don’t think the difference is that important. I don’t believe in the kind of personal identity that says a perfect simulation of me is not me.

            In any case, surely the probability of revival OR simulation is greater than the probability of revival, so I should have been even more willing to sign up for cryonics (or assign an even lesser probability of revival) to be consistent.

          • Adam says:

            What I’m saying is a person could answer ‘5%’ meaning they think there is a 5% chance they could ever be thawed and continue existing in the same brain, and that doesn’t capture the chance they give to continued existence at all if they believe it will happen by simulation. If they answer ‘5%’ to revival because they give a 5% chance to simulation, then it wouldn’t matter, but it only wouldn’t matter if everyone surveyed interprets the question that way. I didn’t take the survey, but I would take ‘revival’ to specifically mean successful thawing.

            It’s not that there’s an important difference between the two scenarios, but that the survey won’t accurately reflect the respondents’ beliefs about the possibility of success.

    • Randy M says:

      I don’t know if you see “real PR problems” as “real problems” but if so, watch out for phrases like “those who know not the Way.” Word to the wise, etc.

      • Deiseach says:

        I get confused between the Way and the Work 🙂

        It works best as an in-joke and I really wouldn’t take it more seriously than that (people belonging to a group making half-serious, half-tongue in cheek references to group terminology) but yeah, it could be taken and used in an uncharitable way by outsiders.

    • HeelBearCub says:

      If this is intended to be humorous, it falls flat.

      If it is not, well… holy cow.

      As to the ethics of it, if you are communicating, it is always wise and prudent to take your audience into account. If you intentionally use words that you know are likely to be misinterpreted, then this may result in an ethical breach.

  31. Bryan says:

    (Long time reader, first time poster, deeply appreciative of your work Scott!)

    Just wanted to point out that the (-) in “proving that psi exists with p < 1.2 * -10^10” is tragically out of place and is holding a while longer than is usual ^^

    • Eliezer Yudkowsky says:

      It’s just assigning an extremely low p-value – less than negative twelve billion!

      • Brock says:

        Reading the unary negation as higher precedence than the exponentiation operator, -10^10 = 10^10. I’d say it’s a rather high p-value.

        • Luke Somers says:

          Unary negation may bind tighter than exponentiation in some programming language or another, but anyone who passed high school algebra can tell you that it doesn’t when you’re writing mathematical expressions out. If I want to say the square of -x, I need parentheses.

          • Brock says:

            My high school algebra (which I did pass, thank you very much) is 25 years in my past.

            I’m not sure why it seems to me like unary negation should be higher precedence than other operators. I do program, and it’s not higher precedence with any infix language I’m familiar with. (In Lisp, “-2” is a literal, but then the two-place arithmetic operations are prefix.)

            I guess it’s just that you can write “-2 * 3” and get the same answer no matter what the order of operations is, and that makes it feel like “-2” is a literal.

          • Brock says:

            Looking it up, it appears that Excel and bc are languages where unary negation has higher precedence than the infix operators.

            I wish I could excuse myself by saying that I use bc as my command-line calculator, but I use python.

          • Luke Somers says:

            > My high school algebra (which I did pass, thank you very much) is 25 years in my past.

            Great, then. Write out -x(squared) = x(squared) and tell me if that looks right.

            -2*3 doesn’t distinguish the cases, as you noted.

          • Brock says:

            Subjectively, it looks right to me when exponentiation is written with an infix operator such as ^ or **, but not when written in superscript.
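
            For what it’s worth, a quick check in Python (the calculator Brock mentions using) shows the convention Luke Somers describes; Excel and bc, per Brock, resolve it the other way:

            ```python
            # In Python, exponentiation binds tighter than unary minus,
            # matching the usual written-math convention:
            print(-10**10)    # -10000000000, read as -(10**10)
            print((-10)**10)  # 10000000000, parentheses force the other reading
            ```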

  32. stillnotking says:

    I’ve never been quite sure how to feel about Yudkowsky. Never having met the man, I don’t have an accurate picture of his intentions, but as the proverb says, “By their fruits you will know them.” Some of EY’s fruits are distressingly cult-like, or at least cult-adjacent. OTOH, I don’t see anything in his actual writing to encourage this; there are no Randian red flags or outright proclamations of ubermenschitude, although there are, arguably, hints. (The manner in which he approached a certain famous, putative infohazard was either amazingly naive or extremely manipulative.)

    I did enjoy HPMOR, but then I’ve always been able to enjoy writers with whom I have deep political/philosophical disagreements, perhaps more than is good for me.

    • anon says:

      I think the cult accusations come out of the combination of apocalyptic predictions (AI risk), suggestion that the in-group will be spared if they have faith (assorted ways in which he can save us from AI risk) and attempts to completely destroy the audience’s sense of efficacy (the whole bit about how if you aren’t capable of working on AI risk you’re an “NPC” and the best you can hope to achieve in life is to give your money to people who actually matter).

      • Nornagest says:

        The NPC bit rubs me the wrong way, but I don’t think it’s cultish, just old-fashioned geek exceptionalism.

      • LTP says:

        A couple other reasons:

        – You said this, but for emphasis: asking people to donate money to an organization that fights said apocalyptic predictions and provides the leader with his livelihood.

        – Insularity: Most of the links on lesswrong are to other lesswrong posts.

        – Telling people in strong terms not to trust their intuitions (but, conveniently not saying the same about the leader’s intuitions), even on issues like morality where that’s all they have.

        – Yudkowsky’s use of language that is abnormally strong and/or quasi-religious, which thus comes off as a bit emotionally manipulative (whether intended or not), and raised some red flags for me.

        – The over-the-top reverence many in the community have for Yudkowsky.

      • Eli says:

        I don’t think even Big Yud has ever claimed he can save the faithful from UFAI, should powerful UFAI actually come to exist.

  33. Ben Kennedy says:

    Taubes is all about carbs being particularly fattening due to mechanisms involving insulin. The best response has been from Stephen Guyenet, a neurobiologist:

    http://wholehealthsource.blogspot.com/2011/08/carbohydrate-hypothesis-of-obesity.html

    The “Orange Soda” approach is a lot closer to the food reward hypothesis as described by Guyenet, where obesity is more of a consequence of our general food environment rather than fault of one specific macronutrient.

    • Scott Alexander says:

      I am definitely a big fan of Guyenet. On the other hand, Guyenet has been talking a lot lately about bile acids and the weird unexpected effects of gastric bypass, which makes me think even a “general food environment” hypothesis might allow some quick fixes.

      • Ben Kennedy says:

        Policy fixes or personal lifestyle fixes?

        From a personal standpoint, the insights I’ve gotten from him on things like mindless eating (http://wholehealthsource.blogspot.com/2014/02/mindless-eating.html) have been crucial for me. The fact that people eat 73% more soup out of a trick bowl that slowly refills from the bottom yet report similar satiety as people eating from a normal bowl is astounding. Maintaining weight is less about the food and more about making sure my brain thinks it has eaten the right amount.

        For public policy, it’s another depressing Moloch story. From a God’s eye view, we’d agree to stop making food incrementally tastier and portions incrementally larger in ways that are making us more unhealthy. However, the capitalist system rewards food producers that do better and better jobs pleasing our lizard hindbrains, and there isn’t much we can do to stop it.

    • onyomi says:

      I think the vilification of carbs is probably the deadliest nutritional trend in recent years. People who eat a lot of carbs, even refined carbs (white rice, sugar), are skinny. Look at the Japanese diet: it’s all white rice and noodles and not really low in sugar, either. What it is low in is fat. Fat makes you fat.

      Diet gurus now blame the low-fat craze for Americans’ continued obesity. But it’s not as if most people actually followed a low-fat diet during that time! Americans continued to eat way more meat and animal fat than any historical farming population (yes, some hunter-gatherers and Eskimos may have survived on bear fat and salmon, but hunting, gathering, and surviving in frigid temperatures burn a lot of calories).

      Even traditional diets which are distinctly carnivorous rarely include as much meat as the American diet: consider Chinese food: yes, they love lard and put little bits of pork in everything, but they also mix their little strips of meat with a bunch of vegetables and then eat it with rice or noodles!

      To just eat a huge piece of steak would be unthinkable, culinarily and financially, for most people throughout history and in poorer countries today, just as the idea that poor people in America today eat more meat than rich people would be incomprehensible. And we also happen to be one of the few places where poor people are fatter than rich people.

      • Urstoff says:

        All of those facts seem consistent with the “calories make you fat” hypothesis as well. Fatty meat just happens to be very calorically dense.

        • onyomi says:

          I think that is broadly correct. And I also think one of the big problems with nutrition in general is that many seemingly unrelated or even contradictory things can be correct depending on the level one is talking about.

          Eating a lot of calories makes you fat. But what makes you eat a lot of calories? Low willpower? Overactive appetite? Choice of food with a poor calorie-to-satiety-induction ratio? All of the above?

          Carbs are filling and fat is satisfying. But it is easier to get used to feeling satisfied without a lot of fat than it is to feel full without carbs. Most of our foods today are too calorically dense. In terms of what creates that density, fat is far and away the number one culprit, with sugar/corn syrup second, and density enhancing cooking methods like drying maybe third.

          So is the problem that our food is too calorically dense? Or that it has too much fat? Or that our appetites are too strong for our activity levels? All of the above. It’s too calorically dense because it has too much fat. And food with a lot of fat gives you too many calories by the time you feel full.

          Look at Japanese sweets versus American sweets. American sweets consist largely of wheat flour, sugar, butter, and eggs. Japanese sweets consist largely of rice flour, sugar, and beans. It’s not that Japanese don’t eat sweets or grains or refined sugar (or that we could eat cookies all day if only we used rice flour–as someone on a gluten free diet, I can assure you that’s not the case). It’s that they get less fat and more carbs (which equals more satiety with fewer calories).

          • Urstoff says:

            Yeah, but which diet is going to get me jacked

          • onyomi says:

            The one where you work out really hard every day?

          • Deiseach says:

            > Carbs are filling and fat is satisfying. But it is easier to get used to feeling satisfied without a lot of fat than it is to feel full without carbs.

            Yeah, but you’re not going to sit down and drink a full carton of cream by itself, while it’s very easy to eat a full packet of biscuits by themselves (or dunking in your tea).

            So I think that making up for the lack of fat by shoving in lots of carbs (starches as thickeners and sugar or sweeteners in low-fat, “diet” foods) is part of the problem as well. Trying to reduce both calories by cutting back on fats and reduce overall sugar/carb intake was an eye-opener for me; the amount of sugar/carbs loaded into things you think are a healthier choice, like “low-fat” foods such as yoghurts etc. is astounding.

          • onyomi says:

            The Japanese versus American sweet thing made me think about how, after eating a fat-free, sugar-laden, sweet rice and red bean confection, I usually think “wow, that was good, but really sweet, and now I don’t really want to eat anything for a while,” whereas if I eat a chocolate chip cookie with the same number of calories (and many more of them from butter) as that Japanese sweet, I usually find myself wanting another cookie 10 mins. later, if not immediately.

            Your example of cream brings up an important point about food preparation in general: cooking and preparing food often amounts to, in essence, pre-digesting it. Heating itself, of course, breaks down cellular bonds and makes things easier to digest, but so too do processes like emulsification of fat: if you poured a quarter cup of melted butter on your vegetables it would actually seem kind of gross, for example, but blend that up with a couple of egg yolks, some lemon juice, salt, and hot sauce, and suddenly you have a delicious hollandaise sauce you want to lick off the plate.

            It’s the myriad ingenious ways we’ve found to sneak fat (and to a lesser extent sugar) into our foods that are the problem, more than fat or sugar per se. You wouldn’t drink a cup of cream, no, and probably not a cup of sugar, either. But if I mix them, heat them, add a bit of flavor and freeze them, suddenly you have a delicious confection.

            Reminds me also of an interesting point made by a vegan dietician I saw interviewed recently: someone asked him about the latest orthorexic trend, which is the “all-raw diet.” Basically, it’s just all raw fruits and vegetables–a diet which some vegans believe is superior to one including cooked grains, potatoes, etc.

            The nutritionist’s intelligent response was, “no, eating only raw food isn’t inherently better, but it’s yet another move in the arms race to cut out processed foods.” The example he gave was the gluten free diet: it used to be if you were a real celiac sufferer, you basically couldn’t eat out. You had to be super careful, prepare most of your own food, and, of course, avoid most of the pies, cookies, cakes, beef wellingtons, etc. that would traditionally use wheat.

            But now it is quite possible to find delicious gluten free cookies, cakes, pies, pizzas, beers, etc. etc. and so the gluten free diet, though it is still helpful for the celiac, is no longer an automatic ticket to a healthy weight.

            This was once similarly true for vegans: used to be lard was in everything, so you couldn’t eat much processed stuff if you wanted to be a strict vegan. Yet now it is quite possible to be an unhealthy vegan and so people have to push it further to eat only raw vegan food… they are already countering this at Whole Foods, I’ve noticed, with raw cookies, etc. etc.

            Sadly, the very art of cooking, preserving, and making food convenient is, in some ways, to blame, because the more pre-digested and high in calories food is, the more it lights up all our reward circuitry.

          • onyomi says:

            And this also explains why seemingly contradictory diets can both work, at least for a while.

            The paleo diet, which vilifies grains and legumes but allows a lot of fat, and the Pritikin-type diet, which is high in grains and legumes but low in fat, for example, could seemingly not be more different. But if avoiding grains makes you avoid the things which often come with grains (alfredo sauce, butter, cheese, hamburger patties, mayonnaise, sugar, etc.), then you will lose weight. But that doesn’t mean grains are bad per se.

            What is more, some studies have shown that any artificial restriction in dietary variety, even an arbitrary one, will reduce weight, at least temporarily. I could create a diet which demands that you not eat anything brown, yellow, or orange. Most people would probably lose weight for a little while on this diet, at least until they adjusted their eating habits to make up for the loss of whatever yellow things they had been eating.

        • They are also consistent with the simplest explanation of all—being rich enough to eat as much as you want as often as you want makes it possible to be fat, and situations where that was both possible and undesirable were sufficiently uncommon in the environment we evolved in that there was little selective pressure against getting fat.

      • Douglas Knight says:

        The advocacy for low-fat diets had a huge effect on American diets. It did not reduce meat consumption, but it did cause a shift to leaner meats. Moreover, it caused meat consumption to plateau. That may not sound like much, but calorie consumption grew just as fast as before, and the new calories were carbs.

      • wysinwyg says:

        But look at this graph:

        https://en.wikipedia.org/wiki/Western_pattern_diet#/media/File:Obesity_country_comparison_-_path.svg

        Do all the countries on the left have low-fat diets? Do the French have especially low-fat diets?

        I’d really like to see some data on eating habits that could make sense of this graph.

        • LHN says:

          I’d also like to see trend lines for each country. My impression is that obesity has been increasing everywhere, even Japan (though there obviously from a very low baseline). That’s made me wonder if (at least in part) we’re just farther (and maybe moving faster?) along a path that’s being followed more broadly.

          But I don’t have nearly enough good information about enough countries for that to be more than a rebuttable hypothesis.

        • Deiseach says:

          “Do the French have especially low-fat diets?”

          From Dylan Moran’s 2004 show “Monster”, part of a routine involving talking about stereotypes of the French:

          Chocolate bread! That’s how they start the day. It’s only going to escalate from there. By lunchtime you’re fucking everybody you know. I was in Paris recently—they are very good at pleasure. I was walking by a bakery—a boulangerie, which is fun to go into and to say, even—and I went in, a childish desire to get a cake—”Give me one of those chocolate guys,” I said—and I was talking to someone on the street, took a bite… I had to tell them to go away! This thing! I wanted to book a room with it! “Where are you from, what kind of music are you into? Come on!” Proper, serious pleasure. Because they know they’re gonna die. Nobody goes to church. You think, we’re gonna die, make a fucking nice cake.

          🙂

      • Glen Raphael says:

        “Fat makes you fat.”

        Hmph. Ray Cronise has a theory that the body needs both carbs and fat to efficiently make use of what it consumes. This explains the puzzle that if you give the body almost no carbs (like an Atkins follower) or very little fat (like a vegan) either one of those strategies causes reliable weight loss – even though they seem like diametrically opposed strategies. This point is illustrated with a new Food Triangle in his paper on Metabolic Winter.

        (I recently lost a LOT of weight following a strategy mostly based on Ray’s ideas.)

        • onyomi says:

          There does seem to be something to the idea that you can *either* have a lot of fat or a lot of carbs, but not a lot of both. And most traditional societies seem to fall more into one category or the other, with all farming societies falling into the latter and many hunter-gatherers, especially hunter-gatherers in the far north, falling into the former.

          But I think the high-carb diet is healthier (most people have more energy and are less likely to have high cholesterol, high blood pressure, etc.), more viable long term, and more suited to sedentary lifestyles of the sort farmers had (whereas most hunter-gatherers burned tons of calories hunting and gathering).

          For example, given the choice between eating 70-80% of my calories from carbs and 20-30% from fat and protein or the reverse (may have to be more extreme to keep one in ketosis, like 90% fat and protein, 10% carbs), I think the former is much more do-able long term: you can eat rice, potatoes, bread, beans, etc. etc. whereas with the latter diet it’s pretty much just meat, meat, meat, eggs, meat. That’s fun at first, but gets old (and expensive) pretty fast in my experience.

          That said, there do seem to be certain people for whom the ketogenic diet can be a lifesaver, even as a long term strategy: for certain epileptics, for example, it can apparently dramatically reduce the number of seizures when they are in ketosis.

  34. Douglas Knight says:

    A typographical error: the block quote (from Hallquist) about zombies does not indicate that its second paragraph is a quote (from Yudkowsky), neither by quote marks nor by indentations. (Hallquist’s original used a block quote.)

  35. Deiseach says:

    I was exposed to Plantinga’s modal ontological argument at ~12 years old, and instantly noticed it could just as well “prove” the existence of an all-powerful, necessarily existent being who wants nothing more than for everything to be purple.

    Which does not necessarily disprove it. The assumption here is that we are supposed to think “Oh well, purple? That’s absurd!” and therefore the idea is disproved by a reductio ad absurdum. But why not purple? Why is it absurd to wish everything be purple? If you can tackle the question of a paperclip maximiser (and I know that’s used jokingly, but why not use what’s to hand?) on a level higher than “But it’s silly”, then you get to come back and say twelve-year-old you disproved Plantinga.

    Saying “It’s just plain silly” is the philosophical version of the disgust reaction in moral judgement, young padawan 🙂

    • Carl Shulman says:

      If you can use the same argument to seemingly ‘prove’ two contradictory things then that gives you a proper reductio, and the theist God (which is said to necessarily be good in ways other than concern for purple) and purple-God are contradictory.

      More generally the Plantinga argument relies on equivocating between two senses of possible: logical possibility (i.e. that there is a logically consistent non-contradictory ‘possible world’ in which it is true) and subjective possibility (assign some credence to it).

      Then we consider a claim that X, where X is defined to be logically necessary if true, i.e. that it would be logically inconsistent for it not to exist. In the absence of a logical proof one way or the other, the claim is subjectively possible, but we don’t know whether it is logically possible. The ontological argument goes:

      1. So you agree we can’t rule out X, so it’s possible [evoking subjective possibility for the intuition, but writing down logical possibility]?
      2. X is defined as logically necessary, so if it’s possible that X then not-X is inconsistent/there is a logical proof that X.
      Therefore,
      3. Not-X is impossible, and X is true.

      This is extremely clear for mathematical claims, like the value of the nth digit of the decimal expansion of pi. It is subjectively possible to me that the quadrillionth digit is 0, 1, 2, 3, 4, 5, 6, 7, 8, or 9, but only one is logically possible. If I followed Plantinga’s practice, then I have an ontological argument for each of those possibilities, with contradictory conclusions.

      Plantinga’s argument structure works in the same way with “it’s [subjectively] possible God doesn’t exist, so it’s [logically] possible God doesn’t exist, so it’s [logically] impossible for God to exist.”
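
      For anyone who wants the structure spelled out, here is one standard way of writing down the S5 schema described above; this is a reconstruction for illustration, not a quotation of Plantinga:

      ```latex
      % Let G = "a maximally great being exists", stipulated so that G <-> \Box G.
      \begin{align*}
        &\Diamond\Box G && \text{``it is possible that $G$'' (where the two senses of possibility blur)}\\
        &\Diamond\Box G \rightarrow \Box G && \text{theorem of S5}\\
        &\Box G && \text{modus ponens}\\
        &G && \text{since } \Box G \rightarrow G
      \end{align*}
      % Running the parallel schema on \Diamond\neg G ("it is possible that God
      % does not exist") yields \neg G under the same stipulation, which is the
      % contradictory conclusion noted above.
      ```

      The equivocation is confined to the first line: it is plausible only as a claim about subjective possibility, but the S5 step needs it as a claim about logical possibility.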

      • James Picone says:

        > More generally the Plantinga argument relies on equivocating between two senses of possible: logical possibility (i.e. that there is a logically consistent non-contradictory ‘possible world’ in which it is true) and subjective possibility (assign some credence to it).

        I think this needs to be signal-boosted. Plantinga’s modal-logic ontological argument is an amazingly obvious equivocation, sufficiently so that Plantinga is intellectually negligent in making the argument (that is, he either knew or should have known that the argument is an equivocation).

        It’s a bad sign that he isn’t generally considered ridiculous by philosophers.

        • FrogOfWar says:

          There’s a difference between considering Plantinga ridiculous and considering his Ontological proof ridiculous. I’ve never met a philosopher who took that argument seriously; it even gets a beating in an encyclopedia entry:

          http://plato.stanford.edu/entries/ontological-arguments/#PlaOntArg

          That said, though the argument isn’t good, if you actually read the original, Plantinga does not appear to be trading on an epistemic vs. metaphysical possibility equivocation. Everyday descriptions of the argument often do, though.

    • suntzuanime says:

      If there were an all-powerful, necessarily existent being who wants nothing more than for everything to be purple, everything would be purple. It’s not so much absurd as contradicted by observation.

      • HeelBearCub says:

        Ah, but everything is purple. We just don’t understand how it is purple, but that will be revealed to us someday.

      • Deiseach says:

        What is the difference between purple and goodness? See, we’re using terms as if they’re just plucked out of the air at random and one is as good as another.

        You may think theologians are full of it, but they have a reason for choosing “goodness” as a divine attribute rather than “purple”. It’s the same thing that annoys me with the “shellfish argument” about Leviticus or Deuteronomy: oooh, if you eat prawns then you can’t hold to the prohibition about bestiality or incest because you’re breaking one of the injunctions! You either keep them all or none of them! There’s certainly no such thing as distinctions, gradations, or the implication that one of these things is less serious than the other!

        I don’t ever recall anyone arguing against the American penal code on the grounds that unless you impose capital punishment for jaywalking, then you can’t really hold that murder is a crime. Because they’re all laws in the Big Book of Laws, right? And if you break one, you must be punished with the same punishment, right? Else you’re being – gasp! – inconsistent.

        And if ever, in one of these Many Worlds, we find a world that is all purple, I’m going to remind you all of this chat 🙂

        • suntzuanime says:

          I don’t think you understand how reductio works.

          • Deiseach says:

            Very likely. Indeed, probably not at all. Can you give me a quick summary and show me where I’m going wrong?

            (I still, however, maintain that there is a difference between purple and benevolence).

          • Saint_Fiasco says:

            If goodness is different from purple, then a good God and a purple God cannot exist in the same world.

            Yet the ontological argument as I understand it proves that they both exist.

            That’s an absurd conclusion, so we retrace our steps to find where we went wrong on this reasoning.

            I think we went wrong on the ontological argument.

            I think the difference is that you think I went wrong in taking the purple God as seriously as the good God?

        • wysinwyg says:

          In this case, “purple” is just one example of an infinitude of possibilities. If you insist on using a more relevant one, try “evil”. I watched a philosopher (Stephen Prothero?) argue William Lane Craig to a standstill by pointing out that all Craig’s arguments about original sin make as much sense under the assumption of an evil God as they do under the assumption of a good one.

          So if the argument simultaneously proves that God is perfectly good and God is perfectly evil (and that God is perfectly purple-loving and that God is perfectly doge-loving…)…

  36. RPLong says:

    I don’t think this blog post did EY, SA, or TH any favors. All three people came out looking much worse to me than what my previous opinion was. I think you’ve all set yourselves into a pattern of thinking that is ill-suited to the majority of the human experience.

    One quick example, so you know I’m not just prattling: An avid reader of LW once asked me to assign a probability to whether a sentient robot will murder a human being in… I forget, let’s say 20 years. Even setting aside the fact that technological advances are not a “probability” and focusing solely on the question of the robot – assumed to exist – choosing to murder a human being, this is NOT a question of probability! Volitional acts don’t happen “with some degree of randomness” except under assumptions which, if articulated, would sound like a psychotic break. I don’t bite into apples with a given probability, I CHOOSE TO EAT APPLES. Big difference.

    So this discussion is bogged down in whether or not EY added sufficient caveats or rejects the right kind of scientific rationality. That’s not really the question, is it? If LW readers are asking me to assign probabilities to murderous robots of the future, then clearly someone’s rationality has failed.

    Just my two.

    • blacktrance says:

      I don’t understand your objection. Let’s set aside robots and consider humans instead. Is it sensical to ask for a probability that at least 10,000 people will commit murder in the US next year? Presumably, each of those acts would be volitional, but it still makes sense to talk about the probability that each of them will happen.
      It also makes sense to talk about the probability that you’ll bite into an apple – it would be the probability that you choose to do so. I don’t know whether you like apples, but I can take statistics on apple consumption, make some assumptions about you based on the fact that you read SSC, and produce some probability estimate based on that.
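
      A minimal sketch of how such a number could be produced, assuming (purely for illustration) that the annual murder count is roughly normal around a historical average; the figures below are placeholders, not real crime statistics:

      ```python
      # Assign a probability to "at least 10,000 murders in the US next year"
      # by modelling the aggregate count, not any individual decision.
      from statistics import NormalDist

      annual_murders = NormalDist(mu=15_000, sigma=800)  # assumed mean and spread
      p_at_least_10k = 1 - annual_murders.cdf(10_000)

      print(round(p_at_least_10k, 6))  # essentially 1.0 under these assumptions
      ```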

      • RPLong says:

        There’s a difference between a frequency and a probability. Each year, X% of people commit a murder. We can ask, “What is the probability that next year, the number will be Y% instead?” The reason we can ask that is because we aren’t asking about the probability of any individual murder, we’re asking about the likelihood that a sample mean will differ from a historical population mean. That makes perfect sense.

        But to ask what is the probability of a single event that has never happened before is totally senseless because, to put it one way, there is no denominator on a ratio like that. And to ask what the probability is that, over a period of time, I might bite into an apple is just setting yourself up for failure. If I wanted to undermine your probability, I would just cram all my apples into a blender during that period of time and say, “HA HA! I bit into zero apples, even though I ate dozens of them! YOU LOSE!” Or I might just eat pears instead. But in any case, you wouldn’t be talking about a probability, you’d be talking about whether or not I am going to choose to do something. That’s not a probabilistic outcome, it just either happens or it doesn’t. There’s no randomness, no frequency, no distribution. It happens if and when I decide to make it happen.

        • blacktrance says:

          As Eliezer wrote, probability is in the mind. Whether you’re going to choose to do something has a probability – I can estimate whether it’ll happen based on information I have. While I may not know everything about what you’d do in this particular situation, I may know something about you and about apple consumption in general, and derive a probability from that. I don’t know what it would mean to “undermine [my] probability” – my prediction may be wrong, but it still may be the best I can do given the information I have.

          As for predicting murders, it also makes sense to talk about the probability of individual murders, e.g. what the probability of me getting murdered this year is. I would take the crime statistics for my area, adjust them based on my personal factors, and have a probability. What’s wrong with that?

          • RPLong says:

            Yes, I agree that you can import a sense of randomness into the discussion, but as said to “Anonymous” below, that’s just changing the question.

            Whether or not I am going to do something like commit a murder is not a probability. You may choose to look at it probabilistically as “the best you can do given the information you have,” but if that’s how you intend to decide whether or not I’m really going to commit a murder, then my suggestion is that your “pattern of thinking is ill-suited to the majority of the human experience,” as I said in my original comment.

            Let’s say it’s not me. Let’s say it’s your wife. You know she’s not a killer, but still, Z% of all people are murderers and “it’s always the person you least expect” in at least S% of cases. So there is a chance that your own wife is a murderer, right?

            But answer honestly: Is the reason you think your wife (or your girlfriend, or boyfriend, or whatever) is not a murderer because you assessed the probability and determined that it was low, or because you know for a fact that probability does not matter to that question?

          • blacktrance says:

            Of course there’s a chance that my girlfriend is a murderer. It’s a minuscule and negligible chance – the likelihood of any particular person being a murderer is already low, them being college-educated and upper-middle-class makes it even lower, having her personality makes it lower still, and so on. The problem is that “chance” in the colloquial sense isn’t equivalent to “chance” in a more technical sense. The colloquial use is that this chance is something that should be salient/important to me – if there’s a “chance” of something, I should be paying attention to it. But a 0.00000001% probability is still a chance in the technical sense, though not one that I should be worried about in a context like this.

            How could probability not matter to the question? If the probability of her being a murderer were, say, 25%, I’d be a lot more worried. If it were 99%, my life would definitely be different. The only way that probability wouldn’t matter is if the result doesn’t matter – in this case, if it wouldn’t matter whether she’s a murderer.

            If by “pattern of thinking is ill-suited to the majority of the human experience” you mean that people tend to be bad at reasoning with probability, I agree – but that’s no reason to be as bad as they are. If you mean something else, I’d like you to elaborate.

          • RPLong says:

            “the likelihood of any particular person being a murderer is already low, them being college-educated and upper-middle-class makes it even lower, having her personality makes it lower still, and so on.”

            No, this isn’t true. There is no causal relationship between a college education and an act of murder. You’re talking about prevalence rates, which aren’t the same thing as probabilities. The only way this becomes a “probability” is if you’re tasked with guessing whether a random person who fits the demographic profile of your girlfriend is a murderer.

            But that’s. A different. Question.

          • blacktrance says:

            Obviously, my girlfriend fits her own demographic profile, and that information can be used to determine a probability of her being a murderer.

          • RPLong says:

            Let’s put it this way: Without knowing it, you’ve actually changed the question from “Is my girlfriend a murderer?” to “What is the probability that I can guess whether a person is a murderer?” My point is that they’re different questions, no matter how badly you want to perceive them as being the same question.

            But I’ve made my point, and crapped all over this comment thread in the process, so I’ll bow out and give others their turn to speak up now.

          • Nornagest says:

            And here I thought Eliezer was beating a straw man when he talked about frequentist interpretations of statistics.

            Okay. Take the probability that a randomly selected person is a murderer. That is your prior. Then take all the information you have about your girlfriend — her age, education status, alibi for that time she left in the middle of the night and came back covered in fresh blood, et cetera — and for each element adjust your prior probability up or down based on the new marginal information. (Formally, you do this by applying Bayes’ Theorem, but that gets harder as the numbers you’re plugging in get more specialized. Our actual brains probably use some approximation of it, with a bunch of heuristics layered on top.) The number you get once you’ve exhausted the information you have is your subjective probability that your girlfriend is a murderer, which is probably close to zero but is not exactly zero.

            This is not some objective measure of randomness, which as you note is meaningless when applied to individuals. It is a subjective measure of certainty. But it turns out that you can do everything with that that you can do with measures of randomness, and more besides.
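
            A toy version of that update, with every number an invented placeholder and the usual simplifying assumption that the pieces of evidence are conditionally independent:

            ```python
            # Sequential Bayesian update via odds and likelihood ratios.
            # All numbers are made up for illustration.

            def bayes_update(prior, likelihood_ratios):
                """Multiply prior odds by each likelihood ratio, return the posterior probability."""
                odds = prior / (1 - prior)
                for lr in likelihood_ratios:
                    odds *= lr
                return odds / (1 + odds)

            prior = 0.005  # assumed base rate of "is a murderer" for a random person

            # Each ratio is P(evidence | murderer) / P(evidence | not murderer).
            evidence = [0.5,   # college-educated (assumed rarer among murderers)
                        0.5,   # upper-middle-class (assumed likewise)
                        0.2]   # personality and everything else known about her

            print(bayes_update(prior, evidence))  # close to zero, but not exactly zero
            ```

            The output is a statement about the information available, not a claim that anything random is going on.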

        • Eli says:

          Probability can be used for any measurable space whose total measure is strictly finite, and can thus be normalized to 1.0. Whether you interpret probability as a measure of frequency or information is irrelevant to whether the math works.

    • Randy M says:

      You can look at it this way: Do you believe that the circumstances necessary for a robot to murder a human* will come into existence in 20 years? What are the odds that you are wrong about your beliefs?

      *But my bigger objection would be that the value judgement that an act is murder being relevant to a machine is a philosophical question. Is he asking if a machine will result in a person’s death (surely happening already on some largely automated assembly line or such) or whether AI will advance to the point where we will believe that a machine will have the same understanding of moral actions (and yet the ability to disregard such) as a human? This isn’t a question to which an answer with a % sign seems the best response.

      • RPLong says:

        Probabilities only apply to randomness. When we talk about the probability of drawing a red ball out of an urn, we assume that every time I reach into the urn each individual ball, regardless of its color, has an equal chance of being drawn. But if the urn is so narrow that balls only fit into it single-file, and you know the order in which the balls of various colors were placed inside the urn, then it’s senseless to discuss the “probability” of drawing a red ball, because it’s no longer a random event.

        Non-random events don’t have probabilities.

        • Randy M says:

          What would work better to describe uncertainty? The relative frequency of you being incorrect about a fact?

          • RPLong says:

            When I’m uncertain about something, I just say, “I don’t really know for sure.” Then I either choose to guess, or choose not to guess. If I choose to guess, I take stock of the available information, yes, but I don’t delude myself into thinking that there is a cardinal number attached to my guess when I’m talking about situations in which cardinal numbers do not apply.

            Theists use physics right up until they don’t understand the physics anymore and then say, “The rest is a miracle of god!” Over-use of probability is a similar kind of thing. It’s just something LW-ers do to grapple with that whole “Incomplete Other” thing that fascinated Jacques Lacan so much.

            I’m not going to say that it’s true in all cases or your case or EY’s case, but hopefully you can see how this kind of thinking is susceptible to producing an obsessional neurosis. You can imagine some poor schmuck trying to estimate the year of his death using a Markov Chain Monte Carlo simulation and choosing when the best time to sire a child might be…

          • blacktrance says:

            > If I choose to guess, I take stock of the available information, yes, but I don’t delude myself into thinking that there is a cardinal number attached to my guess when I’m talking about situations in which cardinal numbers do not apply.

            Then what do you do when you bet?
            Suppose you don’t know whether some presidential candidate (let’s say Donald Trump) will win the 2016 election. It could happen, but it might not. You’d probably accept a bet in which you get several million if he wins and pay $1 if he loses, and you’d refuse the opposite bet. Somewhere between the two, there’s a threshold on one side of which are bets you’d accept and on the other side are bets you’d reject, and it would be numerical.

          • RPLong says:

            No, because my willingness to accept bets is not a continuous function of the odds. That’s another cognitive error going on here – the implicit view that human thinking can always be expressed by a continuous function. Sometimes it can be, other times it can’t be. (See Murray Rothbard’s “demand schedules” – as opposed to demand curves – for a great and fully intuitive example of this.)

            I will take any bet for which the expected value is absurdly high and reject any bet for which the expected value is absurdly low or negative, no matter what my view of the probability. And this is true many times over when it comes to guesses about non-probabilistic events.

          • blacktrance says:

            The expected value is determined by the utility of the positive outcome multiplied by its probability and the (dis)utility of the negative outcome multiplied by its probability. It makes no sense to talk about expected value without probability, because it’s one of the two factors (the other being the utility of an outcome) that determine what the expected value is.
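
            Written out for a simple two-outcome bet, with the simplifying assumption that utility is linear in the stakes (win W, lose L), that calculation is:

            ```latex
            \mathbb{E}[\text{bet}] = p \cdot W - (1 - p) \cdot L,
            \qquad
            \text{accept iff } \mathbb{E}[\text{bet}] > 0
            \iff p > \frac{L}{W + L},
            ```

            which is where the numerical threshold mentioned a few comments up comes from.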

          • “You can imagine some poor schmuck trying to estimate the year of his death using a Markov Chain Monte Carlo simulation and choosing when the best time to sire a child might be…”

            Not quite the same situation, but after my first marriage broke up I did some rough probabilistic calculations to decide whether my location (Blacksburg, VA, a small city with a large university in it) seriously reduced my chance of finding another wife. My conclusion was that the constraint was my search strategy, not the size of the population available to search. My conclusion turned out to be correct, although with a sample size that small that is very weak evidence that my analysis was.

            Does that qualify me as a poor schmuck? If not, how is it essentially different from your example?

    • Anonymous says:

      The problem with this reasoning is that while you control whether you bite into an apple, you do not control whether everyone else bites into apples.

      A further problem is that you cannot predict your future actions with 100% certainty despite you being the one who ultimately determines what they are – more than anyone else, most of the time, anyway. I can plan to do something, tell people I am going to do it, believe – correctly – that I am going to be able to do it. I still can’t be certain that I will actually do it.

      • RPLong says:

        1 – The question of who else bites into apples is irrelevant to the question of whether I do it. You can import randomness into my example to force a probability to apply to the question, but that’s just changing the question. The whole point is that there has to be some randomness involved; if there isn’t, then there is no probability.

        2 – I don’t have to predict my actions, I just have to know which way you’ve bet and then intentionally undermine your bet. I can abstain from apples long enough for you to lose if I want to.

        • Anonymous says:

          1: A question about “will any human do X within Y years” is a question more analogous to other people biting apples, not to you biting an apple.

          And, randomness is not required, just unpredictability. You don’t control the actions of everyone in the world. You can’t predict the actions of everyone in the world either. Therefore, questions involving the actions of everyone in the world are uncertain.

          2: This will probably work for not eating an apple within a short space of time. What about not ever eating an apple? Would you remember the bet for the rest of your life? Might you not forget, or maybe remember that you weren’t supposed to eat apples but not remember why, and figure that it probably wasn’t important? What about if we move it to a shorter time scale again, but instead of apples, it’s heroin, and you are a heroin addict? And how much is being bet? Maybe you could quit heroin for a billion dollars. Could you for one dollar?

          My point is that you need to account for the fact that your ability to determine your future actions is imperfect.

          • RPLong says:

            But again, you’ve just changed the question. I agree that probability applies to the narrow subset of situations you’re describing, but if we’re forced to confine ourselves to those situations, then you’ve conceded my point, which I thus restate: “I think you’ve all set yourselves into a pattern of thinking that is ill-suited to the majority of the human experience.”

    • Douglas Knight says:

      Do you object when bookmakers use the word “odds”?

      • RPLong says:

        No, because bookmakers confine their misuse of the term to actual monetary bets and are negotiating the terms of a transaction. When bookmakers decide to write philosophical treatises based on their misuse of the term, then I’ll start leaving comments on their blogs. 😉

    • J. Quinton says:

      This is a classic frequentist approach to probability; it leads to things like the gambler’s fallacy because naive frequentists (which is what our public, pre-college school system teaches) assign probability to the object in question. The bayesian approach to probability is that probability is an extension of logic; probability is the way in which you make sense of the world.

      In other words, probability for frequentists is ontological. It is a fundamental aspect of the object in question. Whereas probability for bayesians is epistemic. For bayesians it’s in the same sort of mental category as logic and language in that it’s only a means of helping us understand and describe the world; probability is just logic plus uncertainty, both of which only exist in your mind.

      Bayesians think that a coin flip is 50% because you don’t have access to all of the physical factors that went into flipping the coin. (Naive?) frequentists think that 50% is a fundamental aspect of a coin.

      This eternal struggle between the two views of probability probably (heh) won’t be resolved in a comment on a blog.
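
      For concreteness, here’s a minimal sketch of the distinction in Python. The frequentist-flavored summary below is a long-run frequency of the physical coin; the Bayesian-flavored summary is a state of knowledge that updates flip by flip. The bias value, the Beta(1, 1) prior, and the flip counts are all made up for illustration, not anything claimed in the comment above.

      ```python
      import random

      random.seed(0)

      # A physically biased coin: the fact about the object itself.
      TRUE_HEADS_RATE = 0.6

      def flip():
          return random.random() < TRUE_HEADS_RATE

      # Frequentist-style summary: the long-run frequency of heads.
      flips = [flip() for _ in range(10_000)]
      long_run_frequency = sum(flips) / len(flips)

      # Bayesian-style summary: a state of knowledge about the coin,
      # starting from a Beta(1, 1) prior ("no idea") and updating flip by flip.
      heads, tails = 1, 1
      for outcome in flips[:50]:  # only a little evidence so far
          if outcome:
              heads += 1
          else:
              tails += 1
      posterior_mean = heads / (heads + tails)

      print(f"long-run frequency (property of the coin): {long_run_frequency:.3f}")
      print(f"posterior mean after 50 flips (state of knowledge): {posterior_mean:.3f}")
      ```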

      • RPLong says:

        Hey, thanks for this very instructive comment! I wasn’t aware of the epistemic difference between the two approaches. You’re right – a much bigger question than blog comments, ha ha. I’ll shut up now. Thanks again.

      • Adam says:

        Frequentist statisticians don’t fail to understand the underlying determinism of physical processes like coin flips. The difference between frequentist and Bayesian statistics is the frequentist aims to produce a method for making guesses in the face of uncertain evidence with a guaranteed long-run success rate, whereas a Bayesian gives you a probability distribution as an answer. Maybe it’s a failure of my memory, but I remember very, very long exchanges between Andrew Gelman and Larry Wasserman in which neither one of them ever mentioned anything about the ontological status of probability.

      • James Picone says:

        This XKCD comic also seems relevant.

      • Eli says:

        Oh for fuck’s sake. It’s all actually about entropy. Entropy is a physical quantity. Entropy can reside in the coin or in the brain. When the entropy in the brain is causally entangled with the entropy of the coin, you are authorized to use Bayesian probability: treat the entropy in the brain as a measure of the entropy of the coin, and you’ll predict as well as can be done with the degree of entanglement you actually have.

        Of course, that Bayesian probability fundamentally only works because of the entanglement, and what’s really being measured is the entropy in the coin, which is frequentist.

    • Luke Somers says:

      (Edit: a bunch of ninjas beat me to this)

      Given that question, it looks like the distribution is not over what a given actor would do. The distribution is over what sorts of robotic actors we’re likely to encounter. You’re ‘setting aside’ the whole question!

      Also, at some level, choices depend on neurology, which depends on quantum mechanics, so the idea that choices are not completely certain and are actually probabilistic is technically correct, EVEN given a single actor. It’s not at all clear how much influence that has. Certainly hard-to-predict environmental factors would play a bigger role, and it’s also quite fair to use a probability distribution over those. This line of argument would apply to

      But again, the probability isn’t really talking about that even in the case of a specific actor. The issue is, we don’t know what kind of actor that is to high precision. Sure, there’s some volitional process going on in there. But we don’t get to observe that, so we remain ignorant of it, but we can assign a probability.

      Like, if I shuffle a deck of cards, do you say the top card is not random? It’s totally determined! Already there! Heck, I CAN EVEN LOOK AT IT! That’s more determined than volition (see above point)!

      But to you, it’s random.

      • RPLong says:

        The most concise way I can respond to your comment is as follows:

        If you are committed to viewing the world this way, then you will eventually discover that it is not humanly possible to keep track of all the equations in your “virtual Markov Chain” in order to arrive at a reasonable “number.”

        Even worse, attempts to reason this way (meaning, attempts to put this kind of thinking to work in your daily life as a living, breathing human being who encounters scenarios constantly, without the benefit of hours of algorithmic reassessment of priors) will be fraught with the cognitive bias that results from your choice of which factors “matter” and which “don’t matter.”

        If that last paragraph sounds odd, I can give you a clear example: You said, “Given that question, it looks like the distribution is not over what a given actor would do. The distribution is over what sorts of robotic actors we’re likely to encounter.” You’ve already biased your analysis by aprioristically determining what the important probabilities are.

        • Luke Somers says:

          I never said I had a philosophical commitment to using this technique in all cases. I explained how it was philosophically allowed, and that it was actually in use, in this particular case.

          Anyway, it’s not as hard or unusual as you’re representing it. I gave an example of the deck of cards. Your probability distribution is almost entirely over possible current states, not the evolution of that state. Similarly, if you consider picking up a hitchhiker, or getting married, or giving a loan. Is the other person trustworthy? Or every time you make a turn around a blind corner – is there something around the bend? Or, speaking of blindness, making a probing step in the dark – are you about to step into the wall, or through the door?

          Probability CAN be applied to any of these, and in some cases it is reasonably practical to do so. You are not obliged to do so in any particular case.

    • Emile says:

      What do you think of Prediction Book?

      On that website, people assign numbers to future events, and when the things to which you assigned the number “90%” tend to occur 90% of the time, you’re doing good; if they occur 60% of the time, then you’re doing badly.

      Being able to make accurate predictions seems like a useful skill, and don’t you think Prediction Book helps that? How would you describe Prediction Book?
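
      For what it’s worth, the scoring described above is easy to state as code. Here is a minimal sketch of that kind of calibration check, with made-up predictions; the bucketing below is my own illustration, not a description of how PredictionBook actually computes its statistics.

      ```python
      from collections import defaultdict

      # (stated probability, did the event actually happen?) -- made-up data
      predictions = [
          (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
          (0.6, True), (0.6, False), (0.6, False), (0.6, True),
          (0.2, False), (0.2, False), (0.2, True), (0.2, False),
      ]

      # Group predictions by the probability you stated, then compare that
      # stated probability to how often those events actually happened.
      buckets = defaultdict(list)
      for stated, happened in predictions:
          buckets[stated].append(happened)

      for stated in sorted(buckets):
          outcomes = buckets[stated]
          observed = sum(outcomes) / len(outcomes)
          print(f"said {stated:.0%}: happened {observed:.0%} of the time "
                f"({len(outcomes)} predictions)")
      ```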

      • RPLong says:

        I don’t know anything about Prediction Book – your comment is the first I’ve heard of it.

        But, if it involves assigning numbers to non-numerical phenomena, then I think it is cognitively flawed. The way you’ve described it, however, leads me to believe that it is more like a betting website, which is great entertainment indeed, but not a good way to arrive at things like truth and fundamental happiness.

  37. Randy M says:

    My knowledge of quantum mechanics is rather shallow. From what observations generally does the MWI stem? My guess is that it is an alternative interpretation to “the observer has an effect on the experiment”, Schroedinger’s Cat type observations, which scientists are understandably uncomfortable with. Is this the case or are the two unrelated?

    My impression is that EY is adamant about the MWI because of philosophical disposition away from a theory that seems anti-materialist. Is that way off base?

    • walpolo says:

      >>My knowledge of quantum mechanics is rather shallow. From what observations generally does the MWI stem? My guess is that it is an alternative interpretation to “the observer has an effect on the experiment”, Schroedinger’s Cat type observations, which scientists are understandably uncomfortable with. Is this the case or are the two unrelated?

      Yes, it’s a proposed solution to the “measurement problem”, which is basically another name for the Schrodinger Cat paradox.

      >>My impression is that EY is adamant about the MWI because of philosophical disposition away from a theory that seems anti-materialist. Is that way off base?

      That’s one of his reasons, but it can’t be the whole of his reasoning, since there are many other materialist interpretations of QM (non-local hidden variable and stochastic collapse interpretations being the big ones). Of course it’s possible that Yudkowsky is ignorant of some or most of these; he certainly doesn’t give them any real discussion in his sequence of MWI posts.

    • Luke Somers says:

      To answer your first question:

      What is MWI? It’s what you get when you suppose that Quantum Mechanics is correct and complete as a description of the rules governing the whole physical world all at once. Beyond that, no support is possible.

      Other interpretations add things or take them away from the framework to reach our day-to-day experience more directly than MWI can:

      Copenhagen adds this non-unitary, non-just-about-everything-conserving, random, etc. etc. etc. process. Ugly as !@$%!$%.

      Bohm adds an official real timeline. If you grok MWI, this is kinda meh.

      Relational Quantum Mechanics reduces the scope of QM down from the whole universe. I have no idea what the universe is supposed to even fundamentally be in this case.

      So, Eliezer seems to prefer minimal sets of universal laws over more complicated ones that basically make up stuff that doesn’t act like anything around it for no good reason, which knocks off Copenhagen at least. A separate argument would have to be made for Bohm. He never addressed Relational QM in detail, but he observed that it’s basically MWI.

      • Irenist says:

        @Luke Somers:
        AFAIK, something sorta like MWI matched the intuitions of some early QM pioneers, then Copenhagen became orthodoxy, and nowadays MWI is considered a viable alternative by many smart people.

        Okay. So here’s my question: As a matter of the sociology of science, assuming for argument’s sake that Copenhagen is as “ugly” as you claim, why did it ever get popular among scientists? (Is that correct btw? Copenhagen WAS the reigning orthodoxy for a while, right?)

        Like, I can understand why New Agers like Copenhagen, but it would seem that QM pioneers, reared on Newton and Maxwell, would recoil in horror from Copenhagen and settle on something like MWI. I’m not at all competent to evaluate the physics. So I’m only asking, as a matter of human history, how did Copenhagen ever acquire the official prestige it has had, given that it’s so “ugly”? Is that the whole “Science vs. Bayes” point? Even so, how, historically, did such an “ugly” theory ever appeal to actual working physicists?

        If you or anyone answer this, I will be both respectful and grateful.

        • walpolo says:

          The acceptance of Copenhagen by physicists is one of the best examples of scientific irrationality I know of. In his sequence on QM, EY should’ve just said, Look at this crazy garbage view of QM that basically all physicists used to subscribe to!

          A good book that basically argues exactly that is James Cushing’s book on quantum mechanics.

        • Protagoras says:

          You may underestimate the New Agey sentiments among the physicists responsible for Copenhagen. Bohr in particular is associated with a certain degree of mysticism, though he later claimed he was misunderstood.

        • Douglas Knight says:

          The history is tricky because the “Copenhagen Interpretation” means different things every 20 years. But no one admits that a new viewpoint has won, they just say that’s what everyone has always believed.

          Even today, it means two different things. To people who specialize in interpretations, it means subjectivism. Whereas most physicists think it is a kind of realism, that collapse upon measurement is a real process, but they don’t worry too much about what that means.

          Here are a few of the key steps.

          In 1925, Bohr hosted a conference in Copenhagen with all the founders of QM. Everyone who was there, pro or con, agrees that he said consciousness causes collapse, and most of them loved it, as Protagoras said. Einstein objected and Bohr immediately denied that’s what he meant and argued for subjectivism. Einstein didn’t like that, either, and they had a famous debate, but that’s another matter.

          Early on, it was clear that QM was pretty crazy and interpretation was important because it might shed light on how far it would go. At some point, QM got nailed down. I suspect that this was von Neumann’s 1930 book, with a very clean mathematical treatment. That fixed a Kuhnian paradigm and people shifted to working in the paradigm, writing down specific models and finding their specific consequences. The ambiguity of what a von Neumann observation was didn’t matter in practice and was discarded.

          At some point the mainstream got fed up with the new-agey stuff and declared “shut up and calculate!” This suppression of open debate may be what led to the diversity of interpretations of the phrase “Copenhagen interpretation” that everyone agrees is the orthodoxy. Also, the post-Sputnik influx of grad students made them focus on calculation to make it easier to grade. I heard this from David Kaiser, who wrote a relevant book about a later stage.

        • What Yudkowsky says Copenhagen is isn’t what Copenhagen is.

          • Irenist says:

            @TheAncientGeek:

            See, now this is where I get flummoxed. Luke Somers knows the physics, and says EY has it right–at least the “relevant” stuff. I assume you know the physics as well, and you’re saying EY has Copenhagen wrong. I neither know, nor am ever likely to know, the relevant physics. Is there a way I can evaluate these claims? (There may not be; just asking.)

          • CI dates back to the twenties/thirties. If it is the same thing as objective collapse, why did Penrose feel it necessary to introduce objective collapse in the eighties?

          • Luke Somers says:

            Copenhagen is very, very vague. You have Bohr-style wave-particle duality issues under the same name as Heisenberg-style everything-is-always-both. It simultaneously says that the cat is dead and not alive, or alive and not dead; and allows that classical treatments of large objects are approximations.

            Regardless, there is definitely a focus on ending the quantum treatment when the measurement occurs. This confuses people into thinking that QM involves spooky action at a distance, requires that observation is fundamental, requires that the dynamics of the universe are fundamentally random (rather than indexically random), etc., and the vagueness gives them ample cover against correction.

        • Irenist says:

          Thanks to everyone who answered my question! I appreciate both the book recs and the thoughts on Bohr very much.

        • Ray says:

          MWI is generally credited to Hugh Everett writing in 1957, so it’s in fact much later than Copenhagen (1927).

          To understand Copenhagen, I think you have to understand Logical Positivism. (And since you’re a Feser fan, I should point out I mean here the specific idea that a scientific theory should be formulated explicitly in terms of what an observer sees, not a more general lesswrongy “ideas should pay rent” verification principle.) Thus the Copenhagen Interpretation, as originally formulated, claims what it is doing is what any good scientific theory should do. Any claim about when collapse “actually happens” is meaningless “metaphysics.”

          Aside from Einstein (who was at least interpreted as challenging QM in general and not just its interpretation), the first demands for realism with respect to theoretical terms came from Schroedinger in his famous cat thought experiment. But if you look at what he said:

          “One can even set up quite ridiculous cases. A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat): in a Geiger counter, there is a tiny bit of radioactive substance, so small, that perhaps in the course of the hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer that shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The psi-function of the entire system would express this by having in it the living and dead cat (pardon the expression) mixed or smeared out in equal parts.

          It is typical of these cases that an indeterminacy originally restricted to the atomic domain becomes transformed into macroscopic indeterminacy, which can then be resolved by direct observation. That prevents us from so naively accepting as valid a “blurred model” for representing reality. In itself, it would not embody anything unclear or contradictory. There is a difference between a shaky or out-of-focus photograph and a snapshot of clouds and fog banks.”

          It’s pretty clear he isn’t picturing large scale superpositions as splitting into noninteracting worlds as Everett did (and indeed as Decoherence theory demonstrates is what should actually happen, at least where the 2nd law of thermodynamics is valid.)

          Before Everett had this insight, the major views that opposed Copenhagen, demanding more realism, were hidden variable theories (like DeBroglie-Bohm,) and interpretations where collapse was a real process triggered by Consciousness (or just some unspecified macroscopic level of organization), e.g. the Von Neumann-Wigner interpretation (Which nowadays gets confused with Copenhagen, since the Logical Positivism required to make the distinction between the two interpretations is now deeply unfashionable.)

          There are also modern interpretations called “objective collapse” or “dynamical collapse”, like GRW theory and the Penrose interpretation, which actually modify standard quantum mechanics to make branches of the wavefunction statistically likely to disappear at large scales.

      • MWI is what you get when you reject collapse, add a universal basis, and shelve the question of where the Born rule comes from.

        Objective collapse adds this non-local blah… CI doesn’t specify what or where collapse is, and can be specialised down into either subjective or objective collapse theories.

        rQM can be used for cosmological QM, with a “test observer”. True, it doesn’t allow you to naively reify your equations, but you can over-reify. It’s not incumbent on an interpretation to be maximally realistic.

        • Luke Somers says:

          Add a universal basis? What’s being added? All you’re doing is saying ‘consider everything’.

          I’ve never seen an interpretation that actually claims to explain where the Born Rule comes from better than MWI does.

          Under MWI, it’s just that the wavefunction is this artifact. Now, we can notice certain structures inside this artifact – structures that obey the rules of probability, if we interpret them under the Born rule.

          By the Generalized Anti-Zombie Principle, if there is a way that an artifact implements consciousness, that consciousness is real.

      • Professor Frink says:

        But isn’t matching the day to day experience the point of a scientific theory? If many worlds can’t “reach our day-to-day experience directly” what is the point?

        • Luke Somers says:

          I see that I was unclear when I said ‘more directly than MWI can’. I meant that it takes more work. In particular, you need decoherence, which was discovered several decades after the early days of QM. Other systems take short-cuts and bake the connection to everyday life into the rules (except Relational QM, which doesn’t appear to attempt to address this issue at all).

          To be clearer – if you say ‘Any component of the world’s wavefunction can be interpreted as being realized in proportion to its squared amplitude’ and ‘I’m a dynamically-contained entity living in a world of mutually-interacting components’, then that’s all you need.

          The first is the Born rule. As noted in a comment above, you can take this as an axiom of interpretation, even if it’s not phrased that way here. The second invokes decoherence to decide when you can ignore other worlds and thus recover our daily lives.

          The crucial thing is, both parts are subjective, not baked into the rules of the universe. They’re rules about how to find us in the universe.

  38. Jonathan says:

    Cutting carbs leading to weight loss and improvements in other health markers is now being accepted by mainstream conservative establishments such as the BMJ, the anti-quack journal of record in the UK … http://www.bmj.com/content/351/bmj.h4023.full?ijkey=AN2nBwW6h3wuQJK&keytype=ref

  39. Abel says:

    I remember reading the sequences back around 2008, maybe 2007 too if they were already out by then. The writing seemed relatively entertaining, and it was interesting to read in the light of some basic philosophical knowledge (2 years in high school, and some rereads of that material), and as a way to get introduced to a few other problems/points of view. The compatibilism defence is probably the one that somehow made the most lasting impression. Definitely something of a generic rebel vibe, but I didn’t get a particular anti-scientific impression (certainly not calling for systematically ignoring the work of people that call themselves “scientists” in favor of something else).

  40. Shmi Nux says:

    Re MWI, the QM sequence would be much improved if the claims were quantified and probably weakened to those by the experts such as Hawking and Sean Carroll (rather than Deutsch):

    http://www.preposterousuniverse.com/blog/2015/08/03/hypnotized-by-quantum-mechanics/

    > It remains embarrassing that physicists haven’t settled on the best way of formulating quantum mechanics (or some improved successor to it). I’m partial to Many-Worlds, but there are other smart people out there who go in for alternative formulations

    > What I like about Many-Worlds is that it is perfectly realistic, deterministic, and ontologically minimal, and of course it fits the data perfectly.

    (Note “partial” and “like” instead of “sure”)

    > Of course there are all those worlds, but that doesn’t bother me in the slightest. For Many-Worlds, it’s the technical problems that bother me, not the philosophical ones — deriving classicality, recovering the Born Rule, and so on.

    These technical problems would remain regardless of whether “MWI came first”. In the course of solving them, it might even happen that MWI is not a good model, whether it is the simplest or not.

    The harm that Eliezer’s MWI advocacy causes is basically declaring the case closed “because Bayes”, at least in the mind of a non-physicist LW reader.

  41. walpolo says:

    So here’s what’s so crazy about Yudkowsky’s enthusiasm for the Many-Worlds Interpretation – crazier than Deutsch’s views, although Deutsch is also irrationally confident.

    Yudkowsky recognizes that probability is an unsolved problem for the MWI–in other words, that there is no explanation for why we should see the statistical distribution of experimental results that we see in quantum mechanics, rather than some other arbitrary distribution. This is the problem of the “Born rule.” [http://lesswrong.com/lw/q8/many_worlds_one_best_guess/]

    This is a huge problem that no other interpretation of QM faces. Yes, Deutsch is extremely confident in the MWI. But that’s because he thinks his decision-theoretic picture of probability solves the Born rule problem. What’s more, it should be really clear to anyone familiar with QM that Deutsch’s solution, or one of the other proofs of the Born rule that are basically the same, details aside, is the only serious possibility in principle for solving the Born rule problem. So anyone who doesn’t like Deutsch’s solution should think the MWI is a dead end.

    But Yudkowsky doesn’t like Deutsch’s solution, and yet he extols the MWI as the only game in town. That is crazy.

    • Luke Somers says:

      Once you’ve decided that these dynamically independent bunches of reality fluid ought to result in probability of any sort, and these probabilities only depend on the amplitudes, then you are directly forced into the Born probabilities. There is no other option. Born pointed this out right away in the original paper.

      All of the philosophical difficulty is getting from a wavefunction ontology to the point where you think probability might have something to do with it.

      • walpolo says:

        >>Once you’ve decided that these dynamically independent bunches of reality fluid ought to result in probability of any sort, and these probabilities only depend on the amplitudes, then you are directly forced into the Born probabilities. There is no other option. Born pointed this out right away in the original paper.

        That’s essentially what Deutsch’s proof does, at least in its elaboration by D. Wallace: defines probability representation theorem-style in terms of the bets you’re willing to take, and shows that any rational agent who cares only about the amplitudes of outcomes will be forced to treat the squared amplitude as probability.

        Now, I’m not convinced that amplitude is the only thing that should matter for probability. (For example, whether the observer exists in a world ought to matter; worlds where the observer spontaneously disintegrates or otherwise dies instantly ought to have probability zero, but if you accept that then you’re accepting that the probability is at least partly a function of something other than the amplitude.) But I do think this is the most viable strategy for proving the Born rule in the MWI.

        Yudkowsky seems to disagree with me (and you) on that count, which is why it mystifies me that he is so confident in the MWI.

        • Luke Somers says:

          EY correctly notes that no other interpretation has a BETTER explanation for the Born probabilities than MWI, and it earns its victory on other grounds.

          If MWI also explains the Born probabilities, that’s just frosting.

          • walpolo says:

            Hidden variable interpretations can easily explain the Born probabilities: on these interps the probability is all just statistical mechanics. Bohm’s theory is the best known of these, but there are many examples (‘t Hooft and Steve Adler have both proposed different hidden-variable theories).

            Spontaneous collapse or continuous spontaneous localization theories also have a ready-to-hand explanation of the Born probabilities, since these theories are stochastic.

            If the competitor is Copenhagen, I agree MWI wins out, but that’s the only competitor EY discusses, and it’s also the worst interpretation.

    • Charlie says:

      Yeah, I agree that Deutch has it wrong. But there’s a very nice argument that dates back to Everett’s original paper, which is that the only available measure of states that’s only a function of the amplitude and has the correct properties when you do unitary evolution is the amplitude-squared measure. Since probability is a measure, this is extremely suggestive.

      • Professor Frink says:

        Why can’t you just count outcomes? Surely, for instance, spin up electron vs. spin down electron in a measurement must be in different worlds.

        Obviously outcome counting gives the wrong answer empirically, but it seems like it should also be a viable measure in principle.

        • Luke Somers says:

          I’m not sure what you mean. Counting outcomes like that IS the correct answer. It’s not a matter of figuring out what probability is in the lab. It’s the matter of figuring out what probability is, starting from a wavefunction.

          You’ve got a vector, and you need to split it up into N mutually orthogonal components. Now, come up with some measure on vectors that’ll be conserved in sum before and after, no matter how you split the original vector up.

          The one answer to that is to apply the Pythagorean theorem.
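
          For concreteness, a small numerical illustration of that claim: split a fixed 2-D vector into components along different orthogonal bases, and the sum of squared component lengths stays the same (that’s the Pythagorean theorem), while, say, the sum of the plain component lengths does not. The vector and the basis angles below are arbitrary, made-up numbers.

          ```python
          import math

          def component_lengths(v, angle):
              """Lengths of v's components along an orthonormal basis rotated by `angle`."""
              e1 = (math.cos(angle), math.sin(angle))
              e2 = (-math.sin(angle), math.cos(angle))
              c1 = v[0] * e1[0] + v[1] * e1[1]
              c2 = v[0] * e2[0] + v[1] * e2[1]
              return abs(c1), abs(c2)

          v = (0.3, 0.7)  # an arbitrary vector; normalisation doesn't matter here

          for angle in (0.0, 0.4, 1.1):
              a, b = component_lengths(v, angle)
              print(f"basis angle {angle:.1f}: "
                    f"sum of squares = {a**2 + b**2:.4f}, "
                    f"sum of lengths = {a + b:.4f}")
          ```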

          • Professor Frink says:

            I don’t understand. Yes, I can put a probability measure over the “worlds” proportional to the amplitude squared. This is certainly possible. Do this and I get a set of worlds and a set of probabilities associated with that world. This gives me the right answer.

            But people say that it’s the only viable way to do things, I don’t see why. For concreteness, imagine measuring a two outcome system (spin up/spin down). If the wavefunction has amplitude sqrt(1/3) spin up + sqrt(2/3) spin down, I could use the measure defined above. OR I could say “well, after a measurement there are two of me, one that sees spin up, and one that sees spin down, so the probability is 1/2”. I don’t understand why this second method, equal weighting of outcomes, is not viable. It’s wrong, but I don’t understand how we rule it out in the theory.

          • There are more than two of you, there are lots; one-third of which see spin up and two-thirds of which see spin down. The ones that see spin up are classically indistinguishable from one another, as are the ones that see spin down, but in terms of the quantum wavefunction that’s what it looks like.

            But yeah, there is still a philosophical issue, one perhaps significant enough to justify preferring Copenhagen if its issues don’t bother you as much.

          • Professor Frink says:

            I don’t think you can say there are lots of me? Certainly, I only see spin up, or spin down. I’m either in one “branch” or the other “branch.”

            I don’t THINK that you can say that there are two “classically distinguishable” mes but many, many quantum mechanically different mes, because that would imply a hidden variable theory?

          • I’m not sure how familiar you are with thermodynamics, but the “hidden” variable here is the microstate – it isn’t a hidden variable in the QM sense.

            Basically, the measuring device has to contain what in quantum optics we call a reservoir (not sure how widespread that usage is) i.e., there are lots of atoms that can be vibrating in lots of different ways, so the device has lots of different quantum states that are indistinguishable on the macroscopic level. These states are correlated in unpredictable ways with the result of the measurement and therefore, once you have looked at the result of the measurement, with you.

            Because of the correlations, you can’t think of the quantum states of the device as being independent of your quantum state any more; you and the device form a single system, which has lots of states. In one-third of those states the result was “spin up” and in the other two-thirds the result was “spin down”.

            Whether you can sensibly think of each of those states as a different you is a bit trickier. But it does correspond I think to the way we usually think about classical probability theory: consider all the possible outcomes, and divide them into the categories of interest. By way of analogy, consider playing a roulette wheel while blindfolded; just because you can either win or lose doesn’t mean the odds are 50/50, even though you don’t get to see which slot the ball went into.

          • walpolo says:

            It’s definitely not true in general that the number of worlds will correspond to the probability. I can increase the number of worlds by carrying out additional measurements, so I could set up an apparatus that performs a lot of additional measurements if the particle comes out spin-up, and no additional measurements if it comes out spin-down. This would not increase the probability of spin-up, but it would vastly increase the number of spin-up worlds.

            Read Deutsch or Wallace on this topic (or Yudkowsky’s series, this is one of the things he does get right). It’s very well known that the probability cannot correspond with the number of worlds. You’d get massively wrong predictions.

            In general, the best source on the MWI is David Wallace’s book The Emergent Multiverse, or for a more pop science treatment check out Schroedinger’s Rabbits.

          • You’re talking about macro-states, though, not micro-states. Making more measurements does not increase the number of micro-states, it just changes how you’re dividing them up.

            But yes, I was over-simplifying. You still have to use the quantum amplitude to bias your count, because (as Luke pointed out) that’s the only way to get a count that is independent of the choice of basis.

          • Professor Frink says:

            @Harry, the microstate can’t be the hidden variable. It would be a local hidden variable and is ruled out by Bell’s theorem.

            Consider taking one particle and measuring its x component followed by its z component over and over again. You’ll double the number of worlds at each measurement, but the number of microstates remains fixed. Microstates must be (somewhat) unrelated to “worlds.”

            So the question I have is this: when people say “only the amplitude is a good probability measure”, that doesn’t make sense to me. Why can’t counting worlds and giving them equal probability be a good measure? It can’t be related to change of basis, because worlds have to be in some sort of measurement/pointer basis (if you change basis, you can’t end up with a situation where someone got a different measurement result, or your theory is broken).

          • I don’t see how Bell’s theorem says anything about microstates. They aren’t hidden variables in that sense – they’re part of the wavefunction space, not something external to it. (Also, it isn’t as though a particular microstate is associated ahead of time with a particular result. They only become correlated with the result during the measurement process, and the correlations are both unpredictable and constantly changing.)

            The fact that the ratio of distinct microstates to distinct worlds decreases with each measurement isn’t a problem, either. We’ll never run out of microstates, because there are always far more particles in a measuring device than there is time left before the end of the universe to perform measurements with it. (There’s also a limit to how many measurement results we can keep track of, and we’re never dealing with a perfectly isolated system either.)

            But I have to admit I’m not sure how to answer your question.

            One possible argument, FWIW:

            If you make a 50/50 up/down measurement, and then a left/right measurement if and only if the first result was “down”, your measure would give equal probability of 1/3 to “up”, “down/left” and “down/right”.

            But if you don’t make a second measurement either way, the probability of “up” would be 1/2 and the probability of “down” would be 1/2.

            Doesn’t the discrepancy between the probability of “up” in these two cases invalidate the measure? By the definition of what we mean by probability?
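
            A toy version of that arithmetic, assuming (as above) that the second measurement happens only when the first result is “down”. The branch weights are just the standard squared amplitudes for even splits; the point of the sketch is only the discrepancy between naive branch counting and the Born weighting, and the function names are mine.

            ```python
            from fractions import Fraction

            def branch_count_prob(branches):
                """Naive alternative: give every branch equal weight, ignoring amplitudes."""
                n = len(branches)
                return {branch: Fraction(1, n) for branch in branches}

            # Branches labelled by measurement record, with their squared amplitudes.
            # Protocol A: measure up/down only.
            protocol_a = {"up": Fraction(1, 2), "down": Fraction(1, 2)}
            # Protocol B: measure up/down, then left/right only if the result was "down".
            protocol_b = {
                "up": Fraction(1, 2),
                "down,left": Fraction(1, 4),
                "down,right": Fraction(1, 4),
            }

            for name, branches in [("A", protocol_a), ("B", protocol_b)]:
                counting = branch_count_prob(branches)
                p_up_counting = sum(p for b, p in counting.items() if b.startswith("up"))
                p_up_born = sum(p for b, p in branches.items() if b.startswith("up"))
                print(f"protocol {name}: P(up) by branch counting = {p_up_counting}, "
                      f"by Born weighting = {p_up_born}")
            ```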

          • We’ll never run out of microstates, because […]

            Actually I think there’s a more fundamental reason than the ones I gave: measuring devices have to be powered. An isolated measuring device can only store a finite amount of usable energy, so can only make a finite number of measurements. (For example, in the classic two-slit experiment, each particular grain of photographic film can only be activated once, so you can’t make more measurements than there are grains.)

            I strongly suspect (but don’t know how to prove or disprove!) that the thermodynamics of the situation will ensure that the number of available microstates, as a function of the amount of stored energy, is always high enough to ensure that the worlds remain distinct. (For example, the number of microstates in a piece of photographic film is a large number taken to the power of the number of grains.)

          • walpolo says:

            >>You’re talking about macro-states, though, not micro-states. Making more measurements does not increase the number of micro-states, it just changes how you’re dividing them up.

            The micro-state/macro-state distinction you’re talking about doesn’t apply very cleanly to the MWI, from what I can tell. Measurements correspond to worlds splitting. Worlds split when decoherence separates the state into multiple quasi-classical parts. These quasi-classical parts of the state are the worlds, and probability has to be a measure of how likely it is that you’re in one world as opposed to another.

            >>So the question I have is when people say “only the amplitude is a good probability measure” that doesn’t make sense to me. Why can’t counting worlds and giving them equal probability be a good measure?

            Usually what people say is that you can only count worlds in idealized cases. In realistic cases, the preferred basis you mention is determined by decoherence, and it’s only determined approximately: it might be basis B, or it might be basis B rotated by epsilon, decoherence isn’t exact enough to determine which. So there’s no completely precise matter of fact about which basis is preferred, and hence no precise fact about how many worlds there are.

            But there are other measures that work, so I think your overall point is sound. David Albert gives this rather silly example: weight the branches by the product of the amplitude and the observer’s mass in that branch. It’s a weird choice of measure, but it’s not impossible.

  42. walpolo says:

    While the subject of nutrition and low-vs-high-carb diets is salient:

    Does anyone have an informed opinion on The China Study (often cited as evidence that veganism is the healthiest possible diet)?

  43. E. Harding says:

    Hallquist says that Less Wrong is “against scientific rationality”. Well, we’re “against scientific rationality” in the same sense that my hypothetical Soviet who says “We need two Stalins! No, fifty Stalins!” is against Stalinism as currently implemented

    -Or, alternatively, Julius Evola was anti-fascist.
    Or Rothbard was anti-Milton Friedman.
    Or Sumner is anti-Bernanke.
    Or the MMTers are anti-Krugman.
    Or the Tea Party is anti-Republican.
    Or Avigdor Lieberman is anti-Netanyahu.
    Or Zhirinovsky is anti-Putin.
    Or BDS is anti-Norman Finkelstein.
    Or Ahrar ash-Sham is anti-Islamic State.
    Or Sargon of Akkad is anti-Feminist.

  44. Anonymoose says:

    My main problem with Topher Hallquist is his unwillingness to take many controversial ideas seriously. Certain right-wing ideas are *empirical claims.* Refusing to entertain them amounts to not caring sufficiently about what reality looks like.

  45. Pku says:

    Question for the quantum physicists out there: Do the different interpretations of QM even make any distinct predictions? Or are they just two ways to interpret what are (mathematically) the same laws?
    If it’s the first, why hasn’t this been decided experimentally? (The one idea I’ve heard for an experiment is “nuke yourself to see if you survive, since in MW you will in some worlds and those will be the only ones you’re aware of” – but this still suffers from questions about the philosophy of consciousness.)
    If it’s the second, why is this even a meaningful question? Isn’t it kind of like arguing over whether Zorn’s lemma follows from the axiom of choice or the other way around?
    (As far as I can tell it seems like the latter – especially since it’s a question I see interested amateurs and popular science books talk about a lot more than the professionals, matching the pattern I see for the axiom of choice rather than, say, the Riemann hypothesis.)

    • Alex Z says:

      There are some interpretations of quantum mechanics that make different predictions. Some have even been falsified by experiment. (see local hidden variables)

    • vV_Vv says:

      It’s mostly the latter. In fact, as you note, the issue doesn’t seem to particularly bother most quantum physicists and it is mostly interesting to philosophers of science and amateurs.

      However, some interpretations do make different predictions from standard quantum mechanics, but the effects are beyond what we can experimentally measure with current technology. In other cases, it is unclear whether such differences exist.

      Standard QM has some degree of ambiguity in what constitutes an “observation” or an interaction with a “macroscopic system”. For all practical purposes, this doesn’t matter, since it is always clear to anybody applying QM to make predictions, but depending on the details there could be in principle differences in borderline cases that never occur in practice.

    • Adam says:

      Also not a quantum physicist here, but is there really anything about it that allows for branches of reality in which proximate nuclear explosions are survivable?

      • Oscar_Cunningham says:

        Sure, in QM pretty much every event has a (possibly exponentially small) probability of occurring. So it could happen that the entire nuclear blast heads away from you, or that you are instantly teleported away from the blast, or whatever.

  46. Troy says:

    What is the difference between Hallquist believing that he disproved one of the world’s most famous philosophers when he was twelve years old, and Eliezer believing that he solved the problem of consciousness when he was thirty-something?

    Why Scott, it’s because Plantinga is a Christian, of course.

  47. Collun says:

    This is off-topic for this particular discussion, so my apologies if I’m already breaking some rules. I’ve been trying to read through some of the archives to find a specific article, but I’m not having any luck. Does anyone know of the article where Scott utilized three categories of acceptability (something like “Nearly Universally Accepted”, “Maybe Accepted”, and “Controversial”) to compare the viewpoints of feminists and MRAs (might have been another example, actually)?

  48. irrational says:

    I don’t know if this is particularly profound, but it strikes me that EY’s great contribution is in organizing (and in some cases inventing) a set of ideas that go beyond the basic naive principle of “rational – good, two legs – bad”. He is a sort of St. Paul of the Rationalist Church. He is not, however, the Messiah who is without sin:). I mean it in the following sense: it would be somewhat surprising if a person who came up with these ideas was particularly good at executing them in practice. Most of his work comes off as more polemical than balanced. You (Scott) might well be the Messiah, however:) I have never met anyone who appeared to be so open to questioning his own beliefs and bending over backwards to be fair, and I think most people who read this blog do so because of this quality. You shouldn’t be blind to the fact that many rationalists are unable to uphold this level of self-scrutiny, even if they are your friends and otherwise smart guys.

  49. Mark says:

    Scott: Super nitpick, but I wouldn’t use “Kolmogorov complexity” that loosely. I agree with what you’re trying to say, but Kolmogorov complexity has a mathematically rigorous meaning that is decoupled from the intuition it resembles. It’s like saying “Bayes is what tells us that parapsychology is a prime number.”

    That said, I only sort of understand Kolmogorov Complexity, and would be happy to be schooled.

  50. Faradn says:

    If you don’t mind my asking, why aren’t you doing cryonics? I mean I haven’t signed up, but I have barely given it any thought, so I’m curious about your thoughts on it.

  51. Greg Pandatshang says:

    Dear Codex readers,

    Can anyone point me in the direction of a source that explains concisely how and why Newcomb’s Problem (or the “one-boxing problem”, as I metonymously call it) is relevant or interesting in any way? I’ve read this:

    http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/

    but I have come away utterly unable to grasp why Eliezer thinks that an intelligent person would ever need or want to have a solution to this thought experiment. Why figure out how to win at a game that has never been played and almost certainly will never be played by anyone? Thus, I was reduced to a state of boredom by Eliezer’s impassioned pleas to have a methodology that will pick the winning strategy. He is very smart, but why is he passionate about this imaginary supernatural game? How could anyone be passionate about it? I guess the key thing question is: what is the real-life situation that it resembles (and why not just make that the thought experiment, rather than Newcomb’s fanciful tale)? I have to believe that somebody has at some point wondered similar things about one-boxing and hopefully developed a bit of insight about it. (I could be taunted into an argument about the merits of one-boxing, but what I am requesting is some reading material: either a link or brief non-debate-like comments).

    • Protagoras says:

      As Lewis argues in “Prisoners’ Dilemma is a Newcomb Problem” (in the second volume of his Philosophical Papers), the Prisoners’ Dilemma is a Newcomb problem. Versions of the Prisoners’ Dilemma come up everywhere.

      • Greg Pandatshang says:

        I haven’t found Lewis’s paper online, but I have found some papers critiquing it. They might give me a biased view of his arguments, but I’ll try reading them, thanks.

        • Protagoras says:

          OK, here’s a quick summary. Imagine that there’s a delay between when you choose whether you want one or two boxes, and when you open the boxes. What’s in the opaque box is still completely unaffected by your decision, though. Instead, Omega is simultaneously playing the same game with another person, chosen to be as similar to you as possible. If that person picks only their opaque box, Omega sneaks a million into your opaque box before boxes are opened, and if you pick only your opaque box, Omega sneaks a million into your counterpart’s opaque box before boxes are opened. That scenario is pretty much just a prisoner’s dilemma. Lewis argues, quite convincingly as far as I’m concerned, that none of the differences between that scenario and the standard Newcomb problem matter for the decision theory issues the cases are meant to illuminate.
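
          For concreteness, here’s a minimal sketch of the payoff structure in that delayed-Newcomb setup, using the standard $1,000 / $1,000,000 figures; the code and names are just an illustration of the mapping, not anything from Lewis’s paper.

          ```python
          # "one" = take only the opaque box, "two" = take both boxes.
          TRANSPARENT = 1_000
          MILLION = 1_000_000

          def payoff(my_choice, their_choice):
              """My payout: $1,000,000 in my opaque box iff my counterpart one-boxes,
              plus the transparent $1,000 if I take both boxes."""
              total = MILLION if their_choice == "one" else 0
              if my_choice == "two":
                  total += TRANSPARENT
              return total

          for mine in ("one", "two"):
              for theirs in ("one", "two"):
                  print(f"I {mine}-box, counterpart {theirs}-boxes: I get ${payoff(mine, theirs):,}")

          # Two-boxing dominates (it's $1,000 better whatever the counterpart does),
          # yet mutual one-boxing beats mutual two-boxing: the Prisoner's Dilemma shape.
          ```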

    • James Picone says:

      My layman’s understanding is that some decision theories/game-theoretic calculation have a lot of problems dealing with Newcomb’s problem. The point is that one-boxing is obviously ideal, so decision-theories that suggest two-boxing are buggy and shouldn’t be relied upon unless we can figure out what the bug is and fix it.

      • Greg Pandatshang says:

        If one never encounters a Newcomb’s problem, then one-boxing is neither ideal nor non-ideal.

    • Newcomblike problems actually come up all the time! They’re not at all some weird edge case that only decision theorists care about (or at least they shouldn’t be). Nate Soares has a good post on this that I highly recommend: http://mindingourway.com/newcomblike-problems-are-the-norm/

      Also check out Parfit’s Hitchhiker (http://wiki.lesswrong.com/wiki/Parfit's_hitchhiker) for a more “realistic” variant on Newcomb’s problem.

      • Adam says:

        What the heck is ‘mainstream philosophy’s decision theory’ here? I got an entire philosophy degree at one point in my life way back when and never encountered the notion that the optimally rational thing to do in any situation is lie to save $100. Forget the hitchhiker. If that was a real decision theory, why ever pay back any loan if you weren’t certain you’d need further credit within the next seven years? Why not just kill the guy and take his car so you could save $100?

      • Greg Pandatshang says:

        Aha! Yes, that is a more interesting story. I can see where it has some structural similarities with Newcomb’s problem. However, as a thought experiment, they seem to have a major difference, which is that Parfit’s describes a situation which could conceivably happen, while Newcomb’s describes a situation which could never, ever happen, which, I would say, makes it nearly useless as a thought experiment.

        Parfit’s tells an interesting little story, but what is the dilemma? “Should you be good at lying?” I suppose it’s “should you keep your promises?” Well, the latter is certainly a topic that no shortage of people have seen fit to muse about for many thousands of years!

        • Adam says:

          If I’m going to be more charitable than I was above, the dilemma is that a strawman utility maximizer is stuck lying because he can save $100 since he’s already been saved and has nothing further to gain from keeping his promise. This is a strawman because it falsely assumes there is no utility to be had from keeping a bargain and rewarding the person who saved your life. In practice, very nearly any real human would pay the guy even if he didn’t ask for any money. A decent decision theory should be able to account for this, but my impression of philosophy is that it largely does account for this. Acting like social cohesion is wholly irrational and money is all that matters doesn’t sound like any real philosophy. It sounds like a parody of Objectivism. Rationality is distinct from sociopathology.

    • Stuart Armstrong says:

      Problems like the Newcomb problem can come up for beings that can be copied, and the copies run in a virtual environment (in this case Omega’s estimation algorithm).

      AIs would be beings that could be copied, and the copies run in a virtual environment.

      What’s needed is a general decision theory that works across copying situations. The Newcomb problem is a good place to test such decision theories, because it is very simple (while also bringing up issues of precommitment and copy-altruism).

    • Deiseach says:

      To my ignorance, the problem of the Problem seems to be:

      Box B must always be full of money, because if you pick box B and it’s empty, that is because Omega predicted you would not choose B.
      But if Omega makes that prediction, then Omega would be wrong (because you picked B).
      Omega cannot be wrong because it has been right on all previously observed occasions.
      So you should pick box B.

      Because if you pre-commit to “Whenever I’m in this type of situation or faced with this type of problem, I will always and ever only pick one box not two”, then Omega recognises or analyses or however it works its predictions that you are the type of person who will always pick box B and so it has to put the million into the box.

      That’s how it works, yes?

      But the problem with that is (quite apart from “You can pick A and B, or you can pick B only, but you can’t pick A only”) it’s assuming Omega is always completely accurate in its predictions for an unlimited number of predictions. Omega is never going to pick a human whose choice is “But I only want box A, not both boxes, and not box B only”. Omega is never going to be a jerk about leaving box B empty all the time in order to avoid having to pay out the $1,000,000 and get away with it by saying “Yeah, well, I predicted you’d pick box A – oops, my bad!”

      Omega can’t do that by the set-up, it has to leave the box empty or full of money by predicting what choice its random selected human will make.

      Is it set down that Omega is indeed infallible, or simply that on previously observed occasions Omega has been correct? Omega could be right 100 times out of 100 and wrong on the 101st time.

      So we’re relying on an assumption that Omega is indeed absolutely infallible, like the argument about coin tossing: will a coin that previously came up heads ten times come up heads on the eleventh toss? This is saying “Omega made the correct prediction on the previous 100 tosses, so you should bet it will make the correct prediction on the 101st toss, so you should pre-commit to picking B, then Omega will predict you will pick B, and B will contain $1,000,000”.
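
      One way to make that worry concrete is to treat Omega’s track record as a per-prediction accuracy p rather than as infallibility, and compare expected payouts. A toy calculation along those lines (the dollar figures are the standard Newcomb amounts; the accuracy values are made up):

      ```python
      MILLION = 1_000_000
      TRANSPARENT = 1_000

      def expected_value(choice, p_correct):
          """Expected payout if Omega predicts your choice correctly with probability p_correct."""
          if choice == "one":
              # Box B is full only when Omega correctly predicted one-boxing.
              return p_correct * MILLION
          # Box B is full only when Omega wrongly predicted one-boxing.
          return (1 - p_correct) * MILLION + TRANSPARENT

      for p in (0.5, 0.51, 0.99, 1.0):
          print(f"accuracy {p:.2f}: one-box EV = {expected_value('one', p):>12,.0f}, "
                f"two-box EV = {expected_value('two', p):>12,.0f}")

      # One-boxing comes out ahead once p exceeds about 0.5005, so the argument
      # doesn't actually need Omega to be infallible, just reliably better than chance.
      ```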

  52. I see I’m exceedingly late to this party, but I have to object to this:

    Rocket science is a learnable skill, but if you want to have it you should probably spend at least ten years in college, grad school, NASA internships, et cetera. You should probably read hundreds of imposing books called things like Introduction To Rocket Science. It’s not something you just pick up by coincidence while you’re doing something else.

    Rocket science is literally one equation that drops right out of Newton, and you totally can pick it up by coincidence while learning, say, undergraduate physics. What you’re thinking of is rocket engineering, where you try to take that one equation and make machinery that embodies it as efficiently as possible.
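
    For what it’s worth, the one equation in question is presumably the Tsiolkovsky rocket equation, delta-v = v_e · ln(m0 / mf), which does drop out of Newton plus conservation of momentum. A minimal sketch with made-up numbers:

    ```python
    import math

    def delta_v(exhaust_velocity, wet_mass, dry_mass):
        """Tsiolkovsky rocket equation: the velocity change an ideal rocket can achieve."""
        return exhaust_velocity * math.log(wet_mass / dry_mass)

    # Made-up illustrative numbers: 3 km/s exhaust velocity, 90% of launch mass is propellant.
    print(f"delta-v = {delta_v(3000.0, 100_000.0, 10_000.0):.0f} m/s")
    ```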

    [/science snobbery]

  53. So why doesn’t Bayes recommend Bayesian quantum mechanics ? Its a thing.

  54. Mark Plus says:

    Cryonics operates in a progressing technological frontier, and I have said for years that cryonicists have to make it work. Some neuroscientists and cryobiologists think that cryonics deserves exploration as a strategy to try to turn death from a permanent off-state into a temporary and reversible off-state by approaching the problem as a challenge in applied neuroscience. They have set up the Brain Preservation Foundation to educate the public about this prospect and to raise money for incentive prizes to encourage scientists to push hard on the envelope of current and reachable brain preservation techniques.

    As for Michael Shermer, his thinking about cryonics seems to have evolved from what he wrote several years ago. He and the fellow skeptic Susan Blackmore have associated with this foundation as advisers, so they apparently consider the foundation’s premise scientifically defensible:

    http://brainpreservation.org/

    http://brainpreservation.org/content/advisors

  55. Eli says:

    The basic problem with trying to diss on LW is that in order to do so, you have to know more about science and statistics than most people on the site already do, possibly including, in some ways, Eliezer. Of course, once you go and actually do that, even with respect to one tiny field, you find that LW-type people are the only ones you can bear to talk to about such topics, because everyone else doesn’t even understand the basics.

    The active ingredient in cryonics support is not unusual certainty it will work, but unusual methods for dealing with moral and epistemological questions – an attitude of “This only has like a 10% chance of working, but a 10% chance of immortality for a couple of dollars a month is an amazing deal and you would be an idiot to turn it down” instead of “this sounds weird, screw it”.

    One quibble: that’s not why some of us aren’t signed up. I’m not signed up because I give it a less than 0.1% chance of actually working and a coin flip’s chance that I’d like the future society that resurrects me enough to want to live there, with a strongly dropping probability for wanting to live there immortally.

    So with the probability of cryonics being both functional and desirable at 0.05% and dropping, I have previously donated money to the Brain Preservation Foundation, on the grounds that if someone can present me with something that has been shown to work in rat models and then scaled up to certain treatments on human test patients in clinical trials, then we can talk about the underlying issue, which is in fact that life is not actually so pleasant right now that I want to do it forever.

    And actually, Hallquist really has no right to criticize the anti-philosophy tone on LW, since not only he himself but every sane person I know, myself included, actually agrees with it. As in, go read what Luke wrote, now, because you should endorse all that, and then some.

    Turns out that not only is a priori reasoning a bit weird and difficult compared to a posteriori, it actually contains loads of holes and confusions that can only really be eliminated by going a posteriori.

  56. Dyson says:

    If I am to believe that a theist’s beliefs are rational, I need to be presented with his or her reasoning. Until that is done, there is no way to establish rationality, so I will not believe that he or she has a rational belief about theism until that burden of proof is met.

  57. Anonymous says:

    Is Robert Lustig’s ‘orange soda’ theory on sugar not generally accepted in the medical community? As described in this lecture: https://www.youtube.com/watch?v=0z5X0i92OZQ

    The gist of it seems to be that simple sugars without fiber prevent us from feeling full, are converted to fat rapidly, and are addictive. The lecture seemed to support the thesis pretty well and I had been under the impression that this was a non-controversial opinion.

  58. David J. Balan says:

    Here’s a funny thought that just occurred to me. Economics is one of the few disciplines in which one of the important elements of rationality, namely formal decision theory, is explicitly taught. But it is not taught alongside of domain-specific knowledge the way Scott suggests it should be. That is, it is not taught as “here is something important that you should take some trouble to learn in order to get better at economics.” Instead, it’s “here is something that, by the assumptions of many economic models (not all), *everybody already knows and already acts upon* and you should go learn it in order to be able to include it in your models.” And then the trainee economists go and learn some decision theory, which they do because they are required to assume that everybody already knows it, and maybe they get a little more rational in the process, but only in an accidental bank-shot kind of a way.

    • Douglas Knight says:

      This is a general trend with economics. It claims to be a description of the world, but most of it is more useful as advice.
