Topher Hallquist recently wrote a long article accusing Less Wrong and the rationalist community of being “against scientific rationality” and of having “crackpot tendencies”.
The piece claims to be about “the Less Wrong community”, but mostly takes the form of a series of criticisms against Eliezer Yudkowsky for holding beliefs that Hallquist thinks are false or overconfident. In some respects this is fair; Eliezer was certainly the founder of the community and his writings are extremely influential. In other respects, it isn’t; Margaret Sanger was an avowed eugenicist, but this is a poor criticism of Planned Parenthood today, let alone the entire reproductive rights community; Isaac Newton believed that the key to understanding the secrets of the universe lay in the dimensions of Solomon’s Temple, but this is a poor critique of universal gravitation, let alone all of physics. I worry that Hallquist’s New Atheism background may be screwing him up here: to critique a movement, merely find the holy book and prophet, prove that they’re fallible, and then the entire system comes tumbling to the ground. Needless to say, this is not how things work outside the realm of divine revelation.
On the other hand, it seems like the same argument that suggests Hallquist shouldn’t say such things would suggest I shouldn’t care much about arguing against them. I wish I lived in a universe where this was true, but “guilt by association” is a thing, the Internet has more than its share of people who have conceived this deep abiding hatred toward all rationalists, and “crackpot” and “anti-intellectual” are especially sticky accusations around these parts. Past experience tells me if I let this slide then at some point I’m going to be mentioning I’m interested in rationality and the automatic response will be “Oh, those are those anti-intellectual crackpots who hate science” and nothing I say will convince them that they are wrong, because why listen to an anti-science crackpot? Some things need to be nipped in the bud.
Also, a lot of Hallquist’s criticism is genuinely wrong and unfair. Also also, I like Eliezer Yudkowsky.
This is not to say that Eliezer – or anyone on Less Wrong – or anyone in the world – is never wrong or never overconfident. I happen to find Eliezer overconfident as heck a lot of the time. I have told him that, and he has pointed me to his essay on how if you really understand what confidence means in a probabilistic way, then you keep track of your uncertainty internally but don’t worry too much about the social niceties of broadcasting how uncertain you are to everyone. My opinion of this is the same as my opinion of most other appeals to not needing to worry about social niceties.
If Hallquist had made this reasonable critique, I would have endorsed it. Instead, I find his critique consistently misrepresents Eliezer, most of the ideas involved, and the entire Less Wrong community. I am going to fisk it hard, which I don’t like to do, but which seems like the only alternative to allowing these misrepresentations to stand. If you want to skip the (very boring) fisking, avoid part II and go straight from here to part III.
So, to start with, I count four broad Eliezer-critiques:
1. Eliezer believes there’s an open-and-shut case for the Many Worlds Interpretation of quantum mechanics.
2. Eliezer believes that some philosophical problems are really easy, and philosophers are morons for not having settled them already.
3. Eliezer is certain that cryonics will work.
4. Eliezer believes that “dietary science has killed millions of people” and cites borderline-crackpot Gary Taubes.
The first critique, about quantum mechanics, is potentially the strongest. Yudkowsky’s confidence in his preferred Many Worlds interpretation is remarkably high for a subject where some of the world’s top geniuses disagree. On one hand, it is no higher than that of many experts in the field – for example, Oxford quantum pioneer David Deutsch describes the theory’s lukewarm reception as “the first time in history that physicists have refused to believe what their reigning theory says about the world…like Galileo refusing to believe that Earth orbits the sun” and calls arguments against it “complex rationalizations for avoiding the most straightforward implications of quantum theory”. On the other hand, perhaps one could argue that a level of confidence appropriate in an Oxford professor is inappropriate in a self-taught amateur. I don’t know anything about quantum mechanics and don’t want to get into it.
Neither does Hallquist; he admits that Many Worlds is a reasonable position, but makes a different accusation. He says Yudkowsky has failed to tell his readers to investigate other views:
Years ago in college I wrote a book debunking claims by Christian apologists claiming to “prove” that Jesus rose from the dead using historical evidence. At the very end of the book I included a little paragraph about how you shouldn’t just take my word for anything and should do your own research and form your own conclusions. In retrospect, that paragraph feels cheesy and obvious, but seeing the alternative makes me glad I included it.
Yudkowsky could have, after arguing at length for the many worlds interpretation of quantum mechanics, said, “I recommend going and studying the arguments of physicists who defend other interpretations, and when you do that I think you’ll see that physicists are screwing up.” That might have been reasonable. Many physicists accept many worlds, and I can accept that it’s sometimes reasonable for a dedicated amateur to have strong opinions on issues that divide the experts.
But in the very first post of his quantum physics sequence, Eliezer warns:
Everyone should be aware that, even though I’m not going to discuss the issue at first, there is a sizable community of scientists who dispute the realist perspective on QM. Myself, I don’t think it’s worth figuring both ways; I’m a pure realist, for reasons that will become apparent. But if you read my introduction, you are getting my view. It is not only my view. It is probably the majority view among theoretical physicists, if that counts for anything (though I will argue the matter separately from opinion polls). Still, it is not the only view that exists in the modern physics community. I do not feel obliged to present the other views right away, but I feel obliged to warn my readers that there are other views, which I will not be presenting during the initial stages of the introduction.
Okay, but that’s not exactly the same thing as telling his readers that they need to read other books and do other research and see whether he’s right or wr…
Go back and look at other explanations of QM and see if they make sense now. Check a textbook. Alternatively, check Feynman’s QED. Find a physicist you trust, ask them if I got it wrong, if I did post a comment. Bear in mind that a lot of physicists do believe MWI.
That’s from this comment. So Yudkowsky clearly did the thing Hallquist is accusing him of not doing.
This is to his credit, but in my opinion supererogatory. Despite his anecdote about the paragraph in his book, Hallquist does not end his own essay on Less Wrong with “There are many people who disagree with me on this, be sure to check some pro-Less Wrong views to get the opposite side of the story.” Nor does he include such a sentence in the vast majority of his voluminous blog posts and writings. I don’t blame him for that – I don’t use such disclaimers either. There is a general conversational norm that to assert something is to say that you believe it and will provide evidence for your belief, not to say “You must believe me on this and may never go find any other sources to get any other side of the story.” You listen to me, you get my perspective, if for some reason I think you might not be aware that other perspectives exist I’ll tell you that they do, but I’m not going to end every blog post with “But be sure to read other sources on this so you can have your own opinion”.
Nevertheless, that is the standard Hallquist demands of Yudkowsky. And Yudkowsky meets it. And Hallquist ignores him doing so.
Here’s philosophy. Note that I’m editing liberally to reduce length; I don’t think I’ve removed any essential parts of the argument but you might want to go over there and check:
In another post, Yudkowsky lodges a similar complaint about philosophy:
Philosophy is just not oriented to the outlook of someone who needs to resolve the issue, implement the corresponding solution, and then find out – possibly fatally – whether they got it right or wrong. Philosophy doesn’t resolve things, it compiles positions and arguments. And if the debate about zombies is still considered open, then I’m sorry, but as Jeffreyssai says: Too slow! It would be one matter if I could just look up the standard answer and find that, lo and behold, it is correct. But philosophy, which hasn’t come to conclusions and moved on from cognitive reductions that I regard as relatively simple, doesn’t seem very likely to build complex correct structures of conclusions.
I agree that progress in academic fields is sometimes slowed by the irrationality of the participants. People don’t like admitting being wrong. Unfortunately, knowing this isn’t much help unless you’ve discovered a magic formula for overcoming this flaw in human nature. More on that later. But thinking irrationality is the only reason why progress is slow ignores the fact that often, progress is slow because the questions are just really hard.
The reason I’m saying all this is because when philosophers act like they’re not really trying to resolve debates, it’s because they know such attempts have a track record of not working. That doesn’t mean we will never put philosophy on a solid footing, but it does mean that anyone who shows up claiming to have done so single-handedly deserves a fair dose of skepticism.
The zombie debate is as good an example as any of this, so let’s talk about that. David Chalmers’ claim is that there could exist (in other possible worlds with different psychophysical laws–not the actual world) beings that are physically identical to us, but who lack consciousness. The intuition that such “zombies” are possible leads Chalmers to a view that at least looks a lot like epiphenomenalism (the belief in a separate mental realm affected by, but which does not affect, the physical realm).
Epiphenomenalism strikes a lot of people as crazy–me included! But Chalmers realizes this. So in The Conscious Mind he tries to do two things (1) argue his view is not quite epiphenomenalism (2) argue that some of the apparent advantages of certain other views over epiphenomenalism are illusory.
Does he succeed? I don’t know. But what makes me sympathetic to Chalmers is the sense that what he calls the hard problem of consciousness is a real problem, and alternative solutions aren’t any better. And Yudkowsky, as far as I can tell, isn’t one of those people who says, “the so-called ‘hard problem’ is a fake problem.” He agrees that it’s real–and then claims to have a secret solution he’ll sell you for several thousand dollars.
I think it’s enormously unlikely that Yudkowsky has really found the secret solution to consciousness. But even if he had, I don’t think anyone could know, including him. It’s like an otherwise competent scientist refusing to submit their work for peer review. Even top experts are fallible–and the solution is to have other experts check their work.
My response is going to sound mean and kind of ad hominem. That’s not really where I’m going here, and I promise I’m going somewhere a little more subtle than that. So, keeping that in mind – Hallquist once said on his blog:
I seem to have some philosophy-related skills in abundance. I’m good at spotting bad philosophical arguments. I was exposed to Plantinga’s modal ontological argument at ~12 years old, and instantly noticed it could just as well “prove” the existence of an all-powerful, necessarily existent being who wants nothing more than for everything to be purple. My experience with philosophy professors suggests that sadly, the knack for seeing through bad arguments is far from universal, even among the “professionals.”
What is the difference between Hallquist believing that he disproved one of the world’s most famous philosophers when he was twelve years old, and Eliezer believing that he solved the problem of consciousness when he was thirty-something?
Likewise, in another blog post Hallquist wrote about how Aquinas’ Arguments Just Suck and have no redeeming value:
Now, the broad point here is that while these arguments are supposedly derived from Aristotle, there doesn’t seem to be some secret Aristotelian assumption that would make them work. They’re just plain old bad arguments. I feel comfortable saying this, because respected living philosophers often give arguments that just stink, and being a contemporary of those philosophers I’m confident that the issue isn’t some peculiarly 21st century assumption.
Remember that Aquinas’ arguments convinced nearly all the brightest people in the Western world for five hundred years, and that many PhD philosophy professors still believe them with all their heart. Hallquist says they just suck and that there is no chance he might be missing something or misunderstanding their terms. In fact, he was so sure there was no chance he was just a modern reader misunderstanding things that when a philosophy professor wrote a book claiming that modern people’s contempt for Aquinas is based on a misunderstanding, Hallquist read twenty pages of it, decided there was no chance the remainder could possibly contain anything to change his mind, and stopped.
(I read the entire book, and if I’d stopped at page twenty I would have saved myself several valuable hours of my life I could have spent doing something productive, BUT THAT’S NOT THE POINT!)
Presumably, if Aquinas’ arguments are really stupid, but everyone believed them for five hundred years, this would imply there is something wrong with everyone. It might be worth saying “When someone makes a stupid argument, is there anything we can do to dismiss it in less than five hundred years?” But take that step, and you’re venturing into exactly the territory Hallquist is criticizing Eliezer for crossing into.
So what is my point beyond just an ad hominem attack on Hallquist?
Philosophy is hard. It’s not just hard. It’s hard in a way such that it seems easy and obvious to each individual person involved. This is not just an Eliezer Yudkowsky problem, or a Topher Hallquist problem – I myself may have previously said something along the lines of “anybody who isn’t a consequentialist needs to have their head examined”. It’s hard in the same way politics is hard, where it seems like astounding hubris to call yourself a liberal when some of the brightest minds in history have been conservative, and insane overconfidence to call yourself a conservative when some of civilization’s greatest geniuses were liberals. Nevertheless, this is something everybody does. I do it. Eliezer Yudkowsky does it. Even Topher Hallquist does it. All we can offer in our own defense is to say, with Quine, “to believe something is to believe that it is true”. If we are wise people, we don’t try to use force to push our beliefs on others. If we are very wise, we don’t even insult and dehumanize those whom we disagree with. But we are allowed to believe that our beliefs are true. When Hallquist condemns Yudkowsky for doing it, it risks crossing the line into an isolated demand for rigor.
The most charitable I can be to Hallquist is that his gripe is not with Yudkowsky’s hubris in holding strong beliefs, but with an apparent reluctance to publish them. Once again, from the end of the quoted material:
What makes me sympathetic to Chalmers is the sense that what he calls the hard problem of consciousness is a real problem, and alternative solutions aren’t any better. And Yudkowsky, as far as I can tell, isn’t one of those people who says, “the so-called ‘hard problem’ is a fake problem.” He agrees that it’s real–and then claims to have a secret solution he’ll sell you for several thousand dollars.
I think it’s enormously unlikely that Yudkowsky has really found the secret solution to consciousness. But even if he had, I don’t think anyone could know, including him. It’s like an otherwise competent scientist refusing to submit their work for peer review. Even top experts are fallible–and the solution is to have other experts check their work.
So perhaps the difference between Hallquist (and the rest of us) and Yudkowsky is that the latter doesn’t believe in peer review and openness because he “hasn’t published” his “secret” solution to consciousness?
Hallquist’s only source for Eliezer having such a solution is the Author Notes for Harry Potter and the Methods of Rationality, Chapter 98, where somewhere in the middle he says:
I am auctioning off A Day Of My Time, to do with as the buyer pleases – this could include delivering a talk at your company, advising on your fiction novel in progress, applying advanced rationality skillz to a problem which is tying your brain in knots, or confiding the secret answer to the hard problem of conscious experience (it’s not as exciting as it sounds).
Even if, as Hallquist claims, he has private information that this isn’t a joke, this hardly seems like the central case of Eliezer having a philosophical opinion. One might ask: has Eliezer ever written about his other philosophical beliefs, the ones that seem most important to him?
The Less Wrong Sequences are somewhere between 500,000 and a million words long (for comparison, all three Lord of the Rings books combined are 454,000, and War and Peace is 587,000). Eliezer may be one of the most diligent people alive about publicizing his philosophical opinions in great detail. In some cases, he explicitly frames his Sequence work in terms of getting feedback – for example, in the same comment on the Quantum Physics Sequence I linked earlier, he writes about his approach to double-checking his quantum work:
I myself am mostly relying on the fact that neither Scott Aaronson nor Robin Hanson nor any of several thousand readers have said anything like “Wrong physics” or “Well, that’s sort of right, but wrong in the details…”
And this works – far from being unwilling to debate academic philosophers about his opinion of Chalmers’ epiphenomenalism, he argues the subject with David Chalmers himself in the comments of that Less Wrong post, which is a heck of a lot more than I’ve ever done when I disagree with an academic philosopher.
On the other hand, since he placed a single sentence in an HPMOR Author’s Note about a solution to the hard problem of consciousness which he hasn’t written about, Hallquist accuses him of being against publicizing his work for review or criticism. Once again, I think these complaints are startlingly unfair and a remarkably flimsy ground on which to set out to tarnish the reputation of an entire community.
Hallquist’s next complaint is that Eliezer is strongly in favor of cryonics; he starts by noting that Eliezer criticizes Michael Shermer for mocking the role of molecular nanotechnology in cryonics, then writes:
Full disclosure: I’m signed up for cryonics. But the idea that nanomachines will one day be able to repair frozen brains strikes me as highly unlikely. I think there’s a better chance that it will be possible to use frozen brains as the basis for whole brain emulation, but I’m not even sure about that. Too much depends on guesses both about the effects of current freezing techniques and about future technology.
Eliezer, meanwhile, is sure cryonics will work, based, as far as I can tell, on loose analogies with computer hard drives. Faced with such confident predictions, pointing out the lack of evidence and large element of wish-fulfillment (as Shermer does) is an eminently reasonable [reaction]. A “rationalism” that condemns such caution isn’t worthy of the name.
I can’t stress enough the enormous difference between trying to do some informed speculation about what technologies might be possible in the future, and thinking you can know what technologies will be possible in the future based on just knowing a little physics. Take, for example, Richard Feynman’s talk “There’s Plenty of Room at the Bottom”, often cited as one of the foundational sources in the field of nanotechnology.
Today, the part of Feynman’s talk about computers looks prophetic, especially considering the talk was given several years before Gordon Moore made his famous observation about computer power doubling every couple of years. But other things he speculates about are, to say the least, a long way off. Do we blame Feynman for this?
No, because Feynman knew enough to include appropriate caveats. When he talks about the possibility of tiny medical robots, for example, he says it’s a “very interesting possibility… although it is a very wild idea.” He doesn’t say that this will definitely happen and be the secret to immortality. And some futurists, like Ray Kurzweil, do say things like that. That’s the difference between having a grasp of the difficulty of the topic and your own fallibility, and well, not.
Yet Yudkowsky is so confident in his beliefs about things like cryonics that he’s willing to use them as a reason to distrust mainstream experts.
I hate to keep getting into these tu quoque style arguments. But I do feel like once you have signed up for cryonics, you lose the right to criticize other people for being crackpots for being signed up for cryonics for a slightly different reason than you are. If you think molecular nanotechnology is highly unlikely but whole brain emulations aren’t, please at least be aware that from the perspective of everyone else in society you are the equivalent of a schizophrenic guy who believes he’s Napoleon making fun of the other schizophrenic guy who thinks he’s Jesus.
But Hallquist’s main criticism isn’t just that Yudkowsky believes in cryonics, or nanotechnology, or whatever. It’s that he’s overconfident in these things. As per the quote, he’s “sure cryonics will work” and says the equivalent of “this will definitely happen and be the secret to immortality”.
At this point, it will probably come as no surprise that Yudkowsky has never said anything of the sort and has in fact clearly said the opposite.
One of my favorite results from the Less Wrong Survey, which I’ve written about again and again, shows that people who sign up for cryonics are less likely to believe it will work than demographically similar people who don’t sign up (yes, you read that right) – and the average person signed up for cryonics only estimated a 12% chance it would work. The active ingredient in cryonics support is not unusual certainty it will work, but unusual methods for dealing with moral and epistemological questions – an attitude of “This only has like a 10% chance of working, but a 10% chance of immortality for a couple of dollars a month is an amazing deal and you would be an idiot to turn it down” instead of “this sounds weird, screw it”. Once you think of it that way, signing up doesn’t mean “I’m sure it will work” but rather “I’m not 100% sure it won’t.” Thus, Eliezer’s former co-blogger Robin Hanson, who frequently joined forces with Eliezer in passionate appeals for their mutual readers to sign up for cryonics, gives his probability of cryonics working at 6%.
I don’t have as accurate a picture of Eliezer’s beliefs. The best I can do is try to interpret this comment, where he describes the success of cryonics as a chain of at least three probabilities. First, the probability that the core technology works. Second, the probability that cryonics organizations don’t go bankrupt. Third, the probability that humankind lasts long enough to develop appropriate resurrection technology.
Eliezer gives the first probability as 80-90%, the second probability as “outside the range of my comparative advantage in predictions”, and the third probability as “the weakest link in the chain” and as “something I’ve refused to put a number on, with the excuse that I don’t know how to estimate the probability of doing the ‘impossible’”. So his probability of cryonics working seems to be 80-90% * something-he-doesn’t-know * something-he-thinks-is-near-impossible. If we are willing to totally ignore his request not to put numbers on these, perhaps something like 90% * 75% * 10%? Which would land him right between Robin Hanson’s 6% and the survey average of 12%.
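Spelled out as a quick back-of-envelope sketch (to be clear, only the first number is Eliezer’s; the 75% and 10% are my own guesses, supplied over his explicit refusal to quantify those links):

```python
# Back-of-envelope chain of probabilities for cryonics working.
# Only the first number is Eliezer's stated range; the other two are
# my own guesses, supplied despite his refusal to put numbers on them.
p_core_tech = 0.90      # core preservation technology works (Eliezer: 80-90%)
p_org_survives = 0.75   # cryonics organizations stay in business (my guess)
p_resurrection = 0.10   # humanity survives to develop resurrection tech (my guess)

p_total = p_core_tech * p_org_survives * p_resurrection
print(f"{p_total:.1%}")  # ~6.8%, between Hanson's 6% and the survey average of 12%
```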
I think Eliezer is overconfident about the first of his three probabilities, the one about the core technology working. But Hallquist ignores this relatively modest issue and instead chooses to sensationalize Eliezer into being “sure cryonics will work” and “[thinking] this will definitely happen”, when his actual probability is unknown but probably closer to 6% than 100%. Once again I think this complaint is startlingly unfair and a remarkably flimsy ground on which to set out to tarnish the reputation of an entire community.
[my own disclaimer: I am not signed up for cryonics and don’t currently intend to, but I respect people who are]
Oh God, we’re going to have to get into diet again. You can skip this part. Seriously, you can.
Eliezer writes that “dietary scientists ignoring their own experimental evidence have killed millions and condemned hundreds of millions more to obesity with high-fructose corn syrup” and links to this article by Taubes.
I do not want to defend Gary Taubes. Science has progressed to the point where we have been able to evaluate most of his claims, and they were a mixture of 50% wrong, 25% right but well-known enough that he gets no credit for them, and 25% right ideas that were actually poorly known enough at the time that I do give him credit. This is not a bad record for a contrarian, but I subtract points because he misrepresented a lot of stuff and wasn’t very good at what might be called scientific ethics. I personally learned a lot from reading him – I was able to quickly debunk the wrong claims, and the correct claims taught me things I wouldn’t have learned any other way. Yudkowsky’s reading of him seems unsophisticated and contains a mix of right and wrong claims. But Hallquist’s reading seems to be a prime example of reversed stupidity not being intelligence. He writes:
The use of the diet example is even more embarrassing than the other claims I’ve looked at so far. The line about “dietary scientists ignoring their own experimental evidence” links to an article by Gary Taubes. Taubes champions the diet claims of Robert Atkins, who literally claimed that you could eat unlimited amounts of fat and not gain weight, because you would pee out the excess calories. This, needless to say, is not true.
After reading two of Taubes’ books, I haven’t been able to find anywhere where he addresses the urine claim, but he’s very clear about claiming that no amount of dietary fat can cause weight gain. How Taubes thinks this is supposed to be true, I have no idea. His attempted explanations are, as far as I can tell, simply incoherent. (Atkins at least had the virtue of making a coherent wrong claim.)
I’m startled by Hallquist’s claim that he read two Taubes books and couldn’t find Taubes’ explanation for why people don’t gain weight on a high-fat diet. Taubes lays out this mechanism very clearly as the major thesis of his book.
Taubes believes the human body is good at regulating its own weight via the hunger mechanism. For example, most Asian people are normal weight, despite the Asian staple food being rice, which is high-calorie and available in abundance. Asians don’t get fat because they eat a healthy amount of rice, then stop. This doesn’t seem to require amazing willpower on their part; it just happens naturally.
In a similar vein is one of Taubes’ favorite studies, the Vermont Prison Experiment, where healthy thin prisoners were asked to gain lots of weight to see if they could do it. The prisoners had lots of trouble doing so – they had to force themselves to eat even after they were full, and many failed, disgusted by the task. Some were able to eat enough food, only to find that they were filled with an almost irresistible urge to exercise, pace back and forth, tap their legs, or otherwise burn off the extra calories. Those prisoners who were able to successfully gain weight lost it almost instantly after the experiment was over and they were no longer being absolutely forced to maintain it. The conclusion was that healthy people just can’t gain weight even if they want to, a far cry from the standard paradigm of “it takes lots of willpower not to gain weight”.
Other such experiments focused on healthy thin rats. The rats were being fed as much rat food as they wanted, but never overate. The researchers tried to trick the rats by increasing the caloric density of the rat food without changing the taste, but the rats just ate less of it to get the same amount of calories as before. Then the researchers took the extreme step of surgically implanting food in the rats’ stomachs; the rats compensated by eating precisely that amount less of normal rat food and maintaining their weight. The conclusion was that rats, like Asians and prisoners, have an uncanny ability to maintain normal weight even in the presence of unlimited amounts of food they could theoretically be binging on.
Modern Westerners seem to be pretty unusual in the degree to which they lack this uncanny ability, suggesting something has disrupted it. If we can un-disrupt it, “just eat whatever and let your body take care of things” becomes a passable diet plan.
I sometimes explain this to people with the following metaphor: severe weight gain is a common side effect of the psychiatric drug Clozaril. The average Clozaril user gains fifteen pounds, and on high doses fifty or a hundred pounds is not unheard of. Clozaril is otherwise very effective, so there have been a lot of efforts to cut down on this weight gain with clever diet programs. The journal articles about these all find that they fail, or “succeed” in the special social science way where if you dig deep enough you can invent a new endpoint that appears to have gotten 1% better if you squint. This Clozaril-related weight gain isn’t magic – it still happens because people eat more calories – but it’s not something you can just wish away either.
Imagine that some weird conspiracy is secretly dumping whole bottles of Clozaril into orange soda. Since most Americans drink orange soda, we find that overnight most Americans gain fifty pounds and become very obese.
Goofus says: “Well, it looks like Americans will just have to diet harder. We know diets rarely work, but I’m sure if you have enough willpower you can make it happen. Count every calorie obsessively. Also, exercise.”
Gallant says: “The whole problem is orange soda. If you stop drinking that, you can eat whatever else you want.”
Taubes’ argument is that refined carbohydrates are playing the role of Clozaril-in-orange-soda. If you don’t eat refined carbohydrates, your satiety mechanism will eventually go back to normal just like in Asians and prisoners and rats, and you can eat whatever else you want and won’t be tempted to have too much of it – or if you do have too much of it, you’ll exercise or metabolize it away. When he says you can “eat as much fat as you want”, he expects that not to be very much, once your broken satiety mechanism is fixed.
Taubes is wrong. The best and most recent studies suggest that avoiding refined carbohydrates doesn’t fix weight gain much more than avoiding any other high-calorie food. However, the Clozaril-in-orange-soda model, which is not original to Taubes but which he helped popularize, has further gained ground and is now arguably the predominant model among dietary researchers. It’s unclear what exactly the orange soda is – the worst-case scenario is that it’s something like calorically-dense heavily-flavored food, in which case learning this won’t be very helpful beyond current diet plans. The best-case scenario is that it’s just a disruption to the microbiome, and we can restore obese people to normal weight with a basic procedure which is very simple and not super-gross at all.
But whether or not you agree with it, this Clozaril-in-orange-soda story is indisputably Taubes’ model; Hallquist seems to miss this and instead makes vague gestures towards a discredited 70s theory that calories are excreted as ketones in the urine.
Because Hallquist doesn’t understand Taubes’ main point, his criticisms miss the mark. He wrote a five-part Less Wrong series on Gary Taubes, in which he tries to figure out what Taubes’ theory of obesity is and as best I can tell somehow ends up simultaneously saying Taubes is dishonest for accusing mainstream researchers of thinking diet is a matter of willpower, and saying Taubes is silly because diet really is just a matter of willpower. If you don’t believe me, read the post, where he says:
Taubes goes on at great length about how obesity has other causes beyond simple calorie math as if this were somehow a refutation of mainstream nutrition science. So I’m going to provide a series of quotes from relevant sources to show that the experts are perfectly aware of that fact.
Which he does. But then later he says:
So what’s going on here? I think the answer lies in Taubes’ eagerness to portray mainstream nutrition experts as big meanies who blame fat people for being fat…
But this puts Taubes in a bind: now if he says how much we eat has an effect on our weight, he’s a big meanie too. It doesn’t work for him to say fat people can’t help overeating because of something wrong with their metabolism, and this in turn causes them to gain weight, because he’s committed himself to the principle that blaming behavior equals blaming a character defect. So instead, we get wild rhetoric about how stupid the experts are with no coherent view underneath it.
A more sensible approach would’ve been to emphasize that akrasia is an extremely common problem for humans, and that people who don’t suffer from akrasia in regards to diet probably suffer from akrasia about something else. But that wouldn’t have made for as exciting a book.
So unless I am reading this wrong, he thinks the correct answer is to say that we should blame behavior, but that we should do it in a nice way where we say kind things about how everybody has willpower problems and it’s nothing to be ashamed of. I think he thinks he’s agreeing with mainstream consensus here, but mainstream consensus has already moved on to “Screw willpower, STOP DRINKING ORANGE SODA”.
Next, Hallquist attacks Taubes’ claim that “We don’t get fat because we overeat; we overeat because we’re getting fat”, saying that he’s “trying to be charitable” but “surely it wasn’t meant to be taken literally” and that he’s “playing with meanings” just so he can “portray nutrition experts as big meanies”.
But look at what those nutrition experts actually say. For example, Dr. David Ludwig, MD, PhD, Professor of Nutrition at Harvard, writes articles in the Journal of the American Medical Association with titles like Increasing Adiposity – Cause Or Consequence Of Overeating?, which say things like:
Since [the early part of the century], billions of dollars have been spent on research into the biological factors affecting body weight, but the near-universal remedy remains virtually the same, to eat less and move more. According to an alternative view, chronic overeating represents a manifestation rather than the primary cause of increasing adiposity. Attempts to lower body weight without addressing the biological drivers of weight gain, including the quality of the diet, will inevitably fail for most individuals…a focus on total diet composition, not total calories, may best facilitate weight loss.
And although the journal article is relatively balanced and might be dismissed as just a scholarly investigation of a weird idea, the same authors wrote up the same argument for the New York Times in a more obviously persuasive fashion.
Hallquist, again thinking he’s defending a consensus position, attacks Gary Taubes’ claim that government guidelines promoting low-fat diets were responsible for the increase in sugar and refined carbohydrate consumption. He calls this “a huge red flag”, says Taubes is “unaware of [reality] or tries to hide it from his readers” and engages in “irresponsible rhetoric”, and wonders “how anyone reading this could avoid suspecting something was up.”
Meanwhile, let’s go to more experts writing in the Journal of the American Medical Association – we’ll keep Ludwig but add Dr. Dariush Mozaffarian, MD, MPH, Dean of Tufts University School of Nutrition – explaining why the new dietary guidelines have done a sudden about-face, removed previous restrictions on fat, and added more restrictions on refined carbohydrates:
With these quiet statements, the DGAC report reversed nearly 4 decades of nutrition policy that placed priority on reducing total fat consumption throughout the population. In 1980, the Dietary Guidelines recommended limiting dietary fat to less than 30% of calories. This recommendation was revised in 2005, to include a range from 20% to 35% of calories. The primary rationale for limiting total fat was to lower saturated fat and dietary cholesterol, which were thought to increase cardiovascular risk by raising low-density lipoprotein cholesterol blood concentrations. But the campaign against saturated fat quickly generalized to include all dietary fat. Because fat contains about twice the calories per gram as carbohydrate or protein, it was also reasoned that low-fat diets would help prevent obesity, a growing public health concern.
The complex lipid and lipoprotein effects of saturated fat are now recognized, including evidence for beneficial effects on high-density lipoprotein cholesterol and triglycerides and minimal effects on apolipoprotein B when compared with carbohydrate. These complexities explain why substitution of saturated fat with carbohydrate does not lower cardiovascular risk.
Most importantly, the policy focus on fat reduction did not account for the harms of highly processed carbohydrate (eg, refined grains, potato products, and added sugar)—consumption of which is inversely related to that of dietary fat…Based on years of inaccurate messages about total fat, a 2014 Gallup poll shows that a majority of US residents are still actively trying to avoid dietary fat, while eating far too many refined carbohydrates.
Hallquist reserves special mockery for Taubes’ claim that the government’s low-fat mania reached such intensity that nutritional initiatives promoted soda:
This portrayal of mainstream nutrition science is as false as Atkins’ claim about peeing out excess calories. Besides the obvious – who on earth ever believed Coca-Cola was a health food?
Once again, the United States’ top nutrition scientists, explaining the new consensus dietary guidelines in the New York Times:
The “We Can!” program, run by the National Institutes of Health, recommends that kids “eat almost anytime” fat-free salad dressing, ketchup, diet soda and trimmed beef, but only “eat sometimes or less often” all vegetables with added fat, nuts, peanut butter, tuna canned in oil and olive oil. Astoundingly, the National School Lunch Program bans whole milk, but allows sugar-sweetened skim milk. Consumers didn’t notice, either. Based on years of low-fat messaging, most Americans still actively avoid dietary fat, while eating far too many refined carbohydrates.
Hallquist critiques Yudkowsky’s worry that dietary scientists have been too soft on high-fructose corn syrup. Once again, the latest dietary guidelines almost halve the permitted amount of HFCS compared to the previous set of guidelines.
Hallquist derides Yudkowsky’s claim that dietary science has “killed millions”. An article by one of Britain’s top doctors, in the British Medical Journal, which by the way is the third highest-impact medical journal in the world, asks Are Some Diets Mass Murder? and argues that
[A] consequence of the fat hypothesis is that around the world diets have come to include much more carbohydrate, including sugar and high fructose corn syrup, which is cheap, extremely sweet, and “a calorie source but not a nutrient.” More and more scientists believe that it is the surfeit of refined carbohydrates that is driving the global pandemic of obesity, diabetes, and non-communicable diseases. They dispute the idea that we get fat simply because energy in exceeds energy out, saying instead that the carbohydrates “trigger a hormonal response that drives the portioning of the fuel consumed as storage as fat…” The successful attempt to reduce fat in the diet of Americans and others around the world has been a global, uncontrolled experiment, which like all experiments may well have led to bad outcomes.
Obesity kills about 370,000 people per year. If there were indeed serious flaws in the dietary guidelines for the past thirty years, and if the issues corrected in the latest guidelines and freely admitted by modern scientists made the problem even 10% worse, then the “millions of deaths” figure is not an exaggeration.
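To make that arithmetic explicit, here is the same back-of-envelope multiplied out (the 10% figure is the hypothetical from the paragraph above, not a measured quantity):

```python
# Multiplying out the paragraph's own back-of-envelope numbers.
# The 10% excess is the hypothetical from the text, not a measurement.
deaths_per_year = 370_000   # approximate annual deaths from obesity
excess_fraction = 0.10      # suppose flawed guidelines made the problem 10% worse
years = 30                  # roughly how long the old guidelines were in force

excess_deaths = deaths_per_year * excess_fraction * years
print(f"{excess_deaths:,.0f}")  # 1,110,000 excess deaths over thirty years
```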
Taubes is wrong about a lot of things. There is a lot of room for someone to criticize Taubes, and indeed, I have done so repeatedly. Hallquist tries to criticize Taubes, but fires so indiscriminately that he manages to reserve some of his strongest condemnation for claims of Taubes which are actually true and widely recognized as such.
Hallquist writes of Yudkowsky:
Remember, this is one of Yudkowsky’s go-to examples for why you shouldn’t trust the mainstream too much! And it’s not just wrong, it’s wrong in a way that could have been caught through common sense and basic fact-checking. But I guess common sense is just tribalistic bias, and who needs fact-checking when you’ve got superior rationality? The nicest thing you can say about this is that, when he encourages his followers to form strong opinions based on the writings of a single amateur, he’s only preaching what he practices.
I am usually in favor of being nice to people who get things wrong, because things are hard and goodness knows I am wrong often enough. But I am not in favor of being nice to people who get things wrong and are smug and mean to everyone else about them, because punishing defectors is the only way things ever get done in this world. So:
Topher. You seriously do not understand Taubes. You somehow read his book while by your own admission missing the entire mechanism he was trying to explain. You then go on to call a bunch of propositions ludicrous, idiotic, not-even-wrong, et cetera, when those propositions are widely acknowledged as true by the scientific community you think you are defending. You are nevertheless setting yourself up as an expert and trying to explain these subjects to other people. Many people told you all this when you first posted on Less Wrong, and you ignored them and kept trying to do it. Mozaffarian, Ludwig, Friedman, etc. are the United States’ top nutritional scientists, and they are telling you this. I am telling you this. Everyone is telling you this, and you are putting your fingers in your ears and shouting “EVERYTHING IS SO OBVIOUS, I CAN’T BELIEVE OTHER PEOPLE GOT THIS WRONG, IT’S ALL SO EASY, EVERYONE EXCEPT ME IS AN IDIOT.”
Eliezer Yudkowsky has had some pretty silly ideas about diet. I know this because when he has them, he comes to me and asks me if they are correct, and I tell him. At one point, he bought and sent me a book he was interested in so that I could review it and tell him if it made sense. I told him it was wrong, and he listened. If you had asked me if you were right in your criticisms of Taubes, I would also have reviewed them and explained them to you. You didn’t, because you were so certain that you had to be right that you didn’t need to consult with anybody else, despite the fact that you are an amateur with no medical knowledge.
Thus is it written: “Why do you look at the mote in Eliezer Yudkowsky’s eye, and ignore the beam in your own?”
The last I heard about Eliezer’s dietary philosophy was his OKCupid profile, where under “Food” he wrote: “Flitting from diet to diet, searching empirically for something that works.”
SUCH OVERCONFIDENCE. SO CERTAINTY. VERY ANTI-SCIENCE.
Okay, now that I’ve gotten my nitpicks out of the way, what about the actual meat of Hallquist’s criticism?
Hallquist claims that Less Wrong is fundamentally anti-science. All of his criticisms of Eliezer Yudkowsky were to show examples of him behaving in anti-science ways, but he also thinks that Eliezer comes right out and admits it:
Now that I’m thousands of words and about as many tangents into this post, let me circle back to something I said early in the post: pointing out the flaws in mainstream experts only gets you so far, unless you actually have a way to do better. This isn’t an original point. Robin Hanson has made it many times. (See here for just one example.) But I want to emphasize it anyway.
It’s the main reason I’m unimpressed with the material on LessWrong about how the rules of science aren’t the rules an ideal reasoner would follow. This is a huge chunk of Yudkowsky’s “Sequences”, but suppose that’s true, so what? We humans are observably non-ideal. Throwing out the rules of science because a hypothetical ideal reasoner wouldn’t need them is like advocating anarchism on the grounds that if Superman existed, we’d have no need for police.
I think this is more than a superficial analogy. To borrow another point from Hanson, most of us rely on peaceful societies rather than personal martial prowess for our safety. Similarly, we rely on the modern economy rather than personal survival skills for food and shelter. Given that, the fact that science is, to a large extent, a system of social rules and institutions doesn’t look like a flaw in science. It may be the only way for mere mortals to make progress on really hard questions.
Yudkowsky is aware of this argument, and his response appears to mostly depend on assuming the reader agrees with him that physicists are being stupid about quantum mechanics–that, combined with a large dose of flattery. “So, are you going to believe in faster-than-light quantum ‘collapse’ fairies after all? Or do you think you’re smarter than that?” asks one post.
This is combined with an even stranger argument, an apparent belief that it should be possible for amateurs to make progress faster than mainstream experts simply by deciding to make progress faster. Remember how the imagined future “master rationalist” complains “Eld scientists thought it was acceptable to take thirty years to solve a problem”? This is a strange thing to complain about. Either you have a way to make progress quickly or you don’t, and if you don’t, you don’t have much choice but to accept that fact.
Back in the real world, wishing away the difficulty of hard problems doesn’t make them stop being hard. This doesn’t mean progress is impossible, or that it’s not worth trying to improve on the current consensus of experts. It just means progress requires a lot of work, which most of the time includes first becoming an expert yourself, so you have a foundation to build on and a sense of what mistakes have already been made. There’s no way to skip out on the hard work by giving yourself superpowers.
I agree that you don’t make progress faster just by “wishing away the difficulty” or “giving yourself superpowers” or “deciding to make progress faster”.
On the other hand, if Yudkowsky thought that becoming more rational was a matter of “wishing away the difficulty”, he wouldn’t have written a larger-than-Lord-of-the-Rings introduction to the subject. He would have just wished.
Developing and learning an Art Of Thinking Clearly isn’t just “wishing away the difficulty” of settling on true ideas faster, any more than developing and learning rocket science is “wishing away the difficulty” of going to the moon. Thinking clearly is super-hard, but perhaps it is a learnable skill.
Rocket science is a learnable skill, but if you want to have it you should probably spend at least ten years in college, grad school, NASA internships, et cetera. You should probably read hundreds of imposing books called things like Introduction To Rocket Science. It’s not something you just pick up by coincidence while you’re doing something else.
If thinking clearly is a learnable skill, where are the grad schools for it? Where are the textbooks? Not in philosophy programs – Hallquist and I both agree about that. What all of this “only domain-specific knowledge stuff matters” effectively implies is that “thinking clearly” is so easy you can pick it up by coincidence while working on pretty much anything else – something we believe about practically no other skill. If you trusted a rocket scientist who had never read a single rocket science textbook to be any good at rocket science, you’d be insane, but we routinely trust the subjects we most need to think clearly about to people who have never read a How To Think Clearly textbook – and I can’t blame us, because such textbooks, or at least good evidence-based textbooks of the same quality as the rocket science ones, simply don’t exist.
The Sequences aren’t an assertion that you can wish away a problem. They’re a cry for textbooks.
But Hallquist has a counterargument:
The big difference between what [scientists] do and what Yudkowsky advocates is that probability theory is much less useful here than a good knowledge of cell biology.
If we want to get all hypothetical, we can imagine some kind of theorizing contest between a totally irrational person with an encyclopaedic knowledge of cell biology, versus a very rational person who knows nothing at all about the subject. Who would win? Well, who cares? Whoever wins, we lose. We lose because I want the people working on curing cancer to be good at both cell biology and thinking clearly, to know both the parts of science specific to their own field and the parts of science that are as broad as Thought itself. I have seen what happens when people know everything about cell biology and nothing about rationality. You get AMRI Nutrigenomics, where a bunch of people with PhDs and MDs give a breathtakingly beautiful analysis of the complexities of the methylation cycle, then use it to prove that vaccines cause autism. By all means, know as much about methylation as they do! But you’ve also got to have something they’re missing!
I want people who know as much about the methylation cycle as the Nutrigenomics folks, while also understanding the idea of privileging the hypothesis. I don’t want to defy experts, I want to give experts better tools.
In fact, even that framing isn’t quite right. Every day I have patients come to me and ask questions like “are benzodiazepines safe and effective?” or “is therapy better than SSRIs?” or “will this drug increase my risk of dementia?” or “does untreated bipolar increase my risk of converting to rapid-cycling?” or a host of other questions. And I ask my mentor, who’s one of the top psychiatrists in the state, and he gives me a nice, straightforward answer, and then I ask my mentor at the other hospital I go to, and he’s also one of the top psychiatrists in the state, and he gives me precisely the opposite answer. And when I mention to either of them that the other guy disagrees, they just assure me that if I do the research myself I’ll find that their point of view is obviously and self-evidently correct. And meanwhile, my patients are pressing me for answers and telling me that if I get this wrong it will ruin their life. And I can’t say “Wait fifty years until enough studies are done to be totally sure.”
“Don’t worry too much about learning rationality, just listen to the experts” is all nice and well up until the point where someone hands you a lab coat and says “Congratulations, you’re an expert!” And then you say: “Well, frick.” And when that day comes you had better already have learned something about the Art Of Thinking Clearly or else you have a heck of a steep learning curve ahead of you.
Hallquist says that Less Wrong is “against scientific rationality”. Well, we’re “against scientific rationality” in the same sense that my hypothetical Soviet who says “We need two Stalins! No, fifty Stalins!” is against Stalinism as currently implemented. It is in the right direction. But it needs to go further. This is why all of the posts Hallquist finds to support his assertion that Less Wrong is “against scientific rationality” are called things like Science Isn’t Strict Enough. I’m “against scientific rationality” insofar as when my patients demand answers to semi-impossible questions and say their lives depend on it, I want to have scientific rationality on my side, and another tool, and a third tool if I can think of it, and as many extra tools as it takes before I stop being terrified.
If you don’t trust the quantum mechanics sequence to make the point for you – and maybe you shouldn’t – I explain my own version of this revelation in the highly-Eliezer-inspired The Control Group Is Out Of Control. Science is what hands us an unusually well-conducted meta-analysis proving that psi exists with p < 1.2 * 10^-10, crowning fifty years of parapsychological research that finds positive results about as often as not. Bayes is what tells us that parapsychology makes no sense, has an ungodly level of Kolmogorov complexity, and is going to require a heck of a lot more than a good meta-analysis before we accept it. In that sense, “switching allegiance from Science to Bayes” isn’t some cataclysmic event where we forsake Galileo thrice before an onyx altar; it’s something we all do already under the right circumstances. The point is figuring out how to formalize it, so that we don’t mess up and dismiss a result that’s counterintuitive but true. I respect Yudkowsky’s decision not to use an example like this, because if he used this example people would assume he was only talking about parapsychology and real science is totally safe, but I think he was going for the same principle.
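For anyone who wants the Bayes side of that spelled out, here is a toy calculation with entirely made-up numbers, treating the meta-analysis p-value loosely as a likelihood (itself a simplification), just to show why an extreme prior swamps even an extreme p-value:

```python
# Toy Bayesian update with made-up numbers, showing why an extreme
# p-value alone can't rescue a hypothesis with an extremely low prior.
prior_psi = 1e-20           # made-up prior reflecting psi's complexity penalty
p_data_if_psi = 0.5         # chance of such a meta-analysis if psi is real (made up)
p_data_if_no_psi = 1.2e-10  # the nominal p-value, read loosely as the chance of
                            # this result arising by chance alone if psi is fake

posterior = (prior_psi * p_data_if_psi) / (
    prior_psi * p_data_if_psi + (1 - prior_psi) * p_data_if_no_psi
)
print(f"{posterior:.2g}")   # ~4.2e-11: still overwhelmingly against psi
```

And in practice the denominator is even more lopsided than this, because the chance of the result arising without psi is dominated not by the nominal p-value but by the probability of methodological error, which is exactly the argument of The Control Group Is Out Of Control.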
I have immense respect for Topher Hallquist. His blog has enlightened me about various philosophy-of-religion issues and he is my go-to person if I ever need to hear an excruciatingly complete roundup of the evidence about whether there was a historical Jesus or not. His commitment to and contribution to effective altruism is immense, his veganism puts him one moral tier above me (who eats meat and then feels bad about it and donates to animal charities as an offset), and his passion about sex worker rights, open borders, and other worthy political causes is remarkable. As long as Topher isn’t talking about diet or Eliezer Yudkowsky’s personal qualities, I have a lot of trust in his judgment.
But these things I like and respect about Topher are cases where he’s willing to go his own way. He views open borders as a pressing moral imperative even though you’ll have a hard time finding more than a handful of voters, sociologists, or economists who support it. He’s signed up for cryonics even though 99% of the population think that makes him crazy. He donated to fight AI risk way back when it was hard to find any AI experts willing to endorse the cause, and so gains extra credibility and moral authority now that many of them have. Heck, I even respect his ability to put down a terrible Aquinas book on the twenty-somethingth page when I trudged all the way through.
And – and this is a compliment, so I hope he takes it as one – I wish he would try to help spread his own good qualities. We need more people who are able to evaluate difficult moral and intellectual arguments and come to apparently-bizarre but in-fact-very-important conclusions, even when there is not a knock-down scientific argument proving them correct quite yet.
And a necessary consequence of having people who are able to go beyond the things that have knock-down scientific proofs, and go beyond the things that everyone by consensus agrees to be true, and who are able to discuss weird ideas like effective altruism and cryonics and the Singularity, is that occasionally some people will venture too far and say something genuinely out of line (remember: decreasing your susceptibility to Type I errors will always increase your susceptibility to Type II errors, and vice versa!). When this has happened in the rationality community, I have tried again and again to politely but firmly correct these people.
I would like to have Topher as a partner in this effort, but instead, I find him to be trawling the entire corpus of everything people in the rationalist community have ever said or done for quotes he can take out of context to “prove” that they are “crackpots” and that they universally “hate experts”. It’s led to him rushing through books he doesn’t really understand so he can get to the fun part where he points out how crackpotty everyone else is for not rejecting the book fast enough. It’s led to him gradually burning bridges with a lot of people who should be on his side by being needlessly hostile to them. It’s led to him turning Yudkowsky’s opinion that science needs to be stricter and stronger into Eliezer being “against scientific rationality” and “anti-intellectual” and “pro-crackpot”, peppered with a laundry list of out-of-context gripes. It’s not a productive way to have the discussion and, more importantly, it’s not true. And it’s not fair to the efforts that the rationalist community keeps putting in to improve themselves and their thought processes.
If Eliezer Yudkowsky ever showed up and said “I have perfected this Art, now I am never wrong,” then I would happily join Hallquist in laughing hysterically.
If Eliezer Yudkowsky showed up and said “I have tiny pieces of this Art and some promising leads on who can help us find more, let’s work on it together,” well, I’ve spent the past couple of years taking that offer and so far I don’t regret it.
And if Eliezer Yudkowsky showed up and said “I thought I had pieces of the Art, but I was wrong, I don’t know anything about it, nobody does,” then I would still go to my grave believing that whether or not we know it, such an Art should exist, that even if it’s near-impossible we should be chipping away at the impossibility as much as we can in the hopes of getting a couple of tiny shards of something useful that we can cherish as precious.
But I think it isn’t as bad as all that. We do have some tiny preliminary seeds of such an Art. I think such an Art involves learning to appreciate your cognitive biases on a gut level. I think it involves understanding the relevant basics of probability theory and calibration. I think it involves knowing when to use the Inside View or the Outside View, how to avoid getting bogged down in meaningless semantic arguments, and how to overcome your resistance to changing your mind in the face of new evidence. It also involves knowing how to read studies, learning to get a feel for the process of science and find out who is and isn’t a credible expert, learning when science does and doesn’t work and how to repair the latter category, learning to avoid the well-known pitfalls, and learning how to build communities where good epistemology can flourish.
It also involves a bunch of other things that I don’t know and Eliezer doesn’t know and maybe no one in our community knows, but once we find out, we intend to steal them, and you should help.