"Talks a good game about freedom when out of power, but once he’s in – bam! Everyone's enslaved in the human-flourishing mines."

Why I Am Not Rene Descartes

I.

Imagine that somebody wrote:

Some of my friends support Ron Paul. I think that’s wrong. After all, he’s a libertarian, and Wikipedia says a libertarian is a person who believes in free will. But free will is impossible in a deterministic universe. Ron Paul’s belief in free will is clearly why there are so few Swiss people among Ron Paul supporters, since Swiss people are Calvinists and so understand determinism better.

This is sort of how I feel reading Why I Am Not A Rationalist on Almost Diamonds.

I’m having trouble not quoting it in full:

I’m not a rationalist because I’m an empiricist. I find no value in “logical” arguments that are based in intuition and “common sense” rather than data. Such arguments can only perpetuate ignorance by giving it a shiny veneer of reason that it hasn’t earned.

I boggle that we haven’t sorted this out yet. I particularly boggle that atheists of my acquaintance promote rationalism over empiricism. The tensions between basic rationalism and empiricism parallel the tensions between church theology and the philosophy of science. We have no problem rejecting church theology as not being grounded in evidence. Why do so many atheists praise rationalism?

Let me stop here and make it clear that I’m not rejecting logic or critical thinking. Goodness knows that I’ve spent hours just this summer helping people share useful heuristics that will, in general, help them get to the right answers more often. I’ve led workshops and panels on evaluating science journalism and scientific results. When I’ve spoken to comparative religion classes in the past, I’ve talked about religious skepticism with an emphasis on the basics of epistemology.

The problem isn’t logic or critical thinking. The problem is a tendency to view those skills as central to getting the right answers. The problem is a tendency to view them as the solution. They’re not, and the idea that they are is in distinct contrast with the way humanity has actually grown in knowledge and understanding of the world.

Rationalism is, at heart, an individualist endeavor. It says that the path to getting things right lies in improving the self, improving the thinking of one person at a time. It’s not surprising that the ideology and movement appeal largely to the young, to men, to white people, to libertarians. It focuses primarily on individual action.

That’s not how we’ve come to learn about our world, though. It’s not how science or any other field of scholarship works. Scholarship is a collaborative process. And I don’t just mean peer review and working groups, though those are important as well.

Scholars add to our knowledge of the world by building on the work of others. They apply tools and methods developed by others to new material and questions. They study the work of other scholars to inspire them and give them the background to ask and answer new questions. They evaluate the work of others and consolidate the best of it into larger theoretical frameworks. Without the work of scholars before them, scholars today and evermore would always be recreating basic work and basic errors.

All too often, I find rationalists taking this repetitive approach. They think but they don’t study. As a consequence, they repeat the same naïve errors time and again. This is particularly noticeable when they engage in social or political theorizing by extrapolating from information they learned in secondary school and 101-level college classes, picked up in pop culture, or provided by people pushing a political cause. Their conclusions are necessarily as limited as their source material and reflect all its cultural biases.

As best I can tell, it is conflating a misunderstood version of rationalism (Descartes) with a misunderstood version of rationalism (Yudkowsky) and ending up with something unrelated to either, in the most bizarre possible way. But since there are commenters there who seem to agree with it, better nip this in the bud before it spreads.

II.

Rationalism (Descartes) is not simply the belief that sitting and thinking is more useful than observation. Descartes-style rationalism is complicated, but involves the claim that certain concepts are known prior to experience. For example, it is possible to understand mathematical truths like 1 + 1 = 2 separately from our experience of observing people add one apple to another. It is also possible to know them more completely than our knowledge of the external world, since our external senses can only tell us that all additions of one plus one have equalled two so far, but our reason can tell us something we have never observed – that it is necessarily true that everywhere and for all time 1 + 1 will equal two.

This so obviously gets bogged down in definitions of what is or isn’t “prior to experience” or a “concept” that philosophers today have mostly moved on to bigger and better things like hitting people with trolleys. It has nevertheless gotten a mild boost of interest recently with Chomsky’s claim that some features of human language are innate, and evolutionary psychology’s claim that certain preferences like fear of spiders may be innate. You can learn much more than you wanted to know at the Stanford Encyclopedia of Philosophy.

But in no case does the debate ever resemble Almost Diamonds’ naive conception of some people thinking they have to study the world and other people sitting and speculating in armchairs and playing at self-improvement because they don’t want to get their hands dirty in the real world. In fact, Descartes himself was a devoted experimentalist – probably too devoted. His intense interest in anatomy combined with his belief that animals lack souls made him one of the most prolific vivisectionists of all time. We can thank him for such useful pieces of scientific information as “If you cut off the end of the heart of a living dog and insert your finger through the incision into one of the concavities, you will clearly feel that every time the heart shortens, it presses your finger, and stops pressing every time it lengthens.” One can accuse the author of this statement of a lot of things, but “not willing to get his hands dirty” isn’t one of them.

Likewise, Leibniz, the other most famous partisan of rationalism of all time, also made notable contributions to physics, geology, embryology, paleontology, and medicine. Either he was out exploring the world, or he had some armchair. Particularly ironically for Almost Diamonds’ thesis, he was one of the most prominent advocates of research as a collaborative endeavor, and founded various scientific discussion societies around Europe as well as calling for a giant international database of all scientific findings. All of this was entirely consistent with, and informed by, his rationalism.

The people on r/philosophy also do a good job of explaining this mistake in that blog’s conception of rationalism (the people on r/badphilosophy do an, um, less good job).

III.

But I think Almost Diamonds is mostly talking (also talking?) about rationalism (Yudkowsky), ie internet rationalism. After all, she mentions “the rationalist movement” and says they’re about “understanding cognitive biases” and “appeal largely to…libertarians”.

This is even more wrong.

At least rationalism (Descartes) is sort of about some kind of disconnect with empirical evidence. In the context of rationalism (Yudkowsky) this is about the same level of error as expecting Ron Paul to be a philosopher preaching free will just because “libertarianism” can mean something in metaphysics. Rationalism (Yudkowsky) and rationalism (Descartes) share a name, nothing more.

Almost Diamonds says:

I’m not a rationalist because I’m an empiricist. I find no value in “logical” arguments that are based in intuition and “common sense” rather than data…I boggle that we haven’t sorted this out yet. I particularly boggle that atheists of my acquaintance promote rationalism over empiricism.

Meanwhile, the founding document of rationalism (Yudkowsky), the Twelve Virtues of Rationality, states:

The sixth virtue is empiricism. The roots of knowledge are in observation and its fruit is prediction. What tree grows without roots?…Do not ask which beliefs to profess, but which experiences to anticipate.

It adds:

You cannot make a true map of a city by sitting in your bedroom with your eyes shut and drawing lines upon paper according to impulse. You must walk through the city and draw lines on paper that correspond to what you see. If, seeing the city unclearly, you think that you can shift a line just a little to the right, just a little to the left, according to your caprice, this is just the same mistake.

This makes the same point as Almost Diamonds.

Now, granted, some movements can have Official Founding Beliefs that they don’t follow. Many rationalists take the Virtues very seriously (one just let me know it is hanging on the wall of his group house) but perhaps like some of Jesus’ more lovey-dovey commandments or the inconvenient parts of the Constitution, they are honored more in the breach than in the observance?

I don’t think so. Rationalists have taken this idea and run with it, which is why we are so obsessed with things like making beliefs pay rent in experience, discussion of “Bayesian updating”, and even making monetary bets on our beliefs to train ourselves to make sure they conform to real world outcomes. It’s why the rationalist proverb, upon being given a cool theory, goes “Name three examples”.
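
For readers unfamiliar with the jargon: “Bayesian updating” here is nothing exotic, just Bayes’ rule applied to your own beliefs. A minimal sketch, with all the numbers invented purely for illustration:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    joint_true = prior * p_evidence_if_true
    joint_false = (1 - prior) * p_evidence_if_false
    return joint_true / (joint_true + joint_false)

# Start 20% confident in a theory. It makes a risky prediction:
# 90% likely if the theory is true, 30% likely if it is false.
# The prediction comes true -- confidence should rise, but not to certainty.
print(round(bayes_update(0.20, 0.90, 0.30), 3))  # 0.429
```

The point of doing this explicitly, rather than by gut feel, is exactly the “beliefs paying rent” discipline described above: the evidence moves your confidence by a definite amount, no more and no less.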

A typical (okay, I lied, highly extreme) example is Gwern, who consumed pretty much every chemical and then carefully recorded its effects on his sleep, emotions, performance on cognitive tests, et cetera and then performed Bayesian analysis on it. There’s obviously something wrong with that, but it’s not lack of empiricism!

Diamonds:

All too often, I find rationalists taking this repetitive approach. They think but they don’t study.

Twelve Virtues again:

The eleventh virtue is scholarship. Study many sciences and absorb their power as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole. If you are gluttonous you will become vaster than mountains.

And in accordance with this, I will put the rationalist movement up, mano a mano, against any other movement on the entire Internet in terms of the quality of scholarship and empiricism.

Like, holy @#$%, we have Luke Muehlhauser, who can’t write a simple life hacks post on productivity without fifty-seven different journal article citations, and who writes at great length about how to study a field of research effectively, the best textbooks on every subject, software tools for efficient scholarship, etc, etc, etc.

We have a community-wide survey that collects information on one hundred thirty-six demographic categories, for over fifteen hundred community members, and then a tradition of obsessively arguing about the implications of the results for several weeks every year.

We know that 20% of rationalists over the age of 35 have PhDs. 54% have either a PhD, an MD, or a Master’s!

And not to toot my own horn, but there’s a reason this blog’s series of impromptu literature reviews is called “Much More Than You Wanted To Know” and has investigated the literature on things like marijuana legalization and SSRI effectiveness, fifty or sixty studies per review, to a degree that’s gotten some coverage on major news sites including Andrew Sullivan’s blog and Vox.

And…wait a second! The author of that blog knows Kate Donovan! How do you know Kate Donovan and still accuse rationalists of “not studying”?!?! DO YOU EVEN HAVE EYES?

Finally, Almost Diamonds says:

Even in the modern rationalist movement, which speaks more to collecting evidence than classical rationalism, I have yet to see any emphasis on epistemic humility.

But the Twelve Virtues says:

The eighth virtue is humility. To be humble is to take specific actions in anticipation of your own errors. To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty. Who are most humble? Those who most skillfully prepare for the deepest and most catastrophic errors in their own beliefs and plans. Because this world contains many whose grasp of rationality is abysmal, beginning students of rationality win arguments and acquire an exaggerated view of their own abilities. But it is useless to be superior: Life is not graded on a curve. The best physicist in ancient Greece could not calculate the path of a falling apple. There is no guarantee that adequacy is possible given your hardest effort; therefore spare no thought for whether others are doing worse. If you compare yourself to others you will not see the biases that all humans share. To be human is to make ten thousand errors. No one in this world achieves perfection.

Really? “Yet to see any emphasis on epistemic humility”? The most important mission statement of the rationalist movement says that one of the movement’s twelve founding principles is humility, then waxes rhapsodic about it. Seriously, we’re the people who keep calling ourselves “aspiring rationalists” to remind ourselves that we’re not nearly as rational as we should be yet! We’re the people who obsessively calibrate with PredictionBook et cetera to remind ourselves just how high our error rate is. We’re the people who keep a community-wide mistakes repository (with Gwern once again going above and beyond). THERE ARE FORTY-ONE DIFFERENT POSTS ON LESS WRONG TAGGED ‘OVERCONFIDENCE’!
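
What “calibrating” means in practice: bucket your predictions by stated confidence and check how often each bucket actually came true. A toy sketch, with predictions made up for the example (not real PredictionBook data):

```python
from collections import defaultdict

def calibration_table(predictions):
    """Map each stated confidence level to the observed fraction
    of predictions at that level that actually came true."""
    buckets = defaultdict(list)
    for confidence, came_true in predictions:
        buckets[confidence].append(came_true)
    return {conf: sum(outcomes) / len(outcomes)
            for conf, outcomes in sorted(buckets.items())}

# A well-calibrated predictor's 70%-confident claims come true ~70% of the time.
guesses = [(0.7, True), (0.7, True), (0.7, False),
           (0.9, True), (0.9, True), (0.9, True), (0.9, False)]
print(calibration_table(guesses))
# 0.7 bucket: 2 of 3 came true; 0.9 bucket: 3 of 4 came true
```

If your 90% bucket only comes true 60% of the time, that gap is your overconfidence, quantified.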

Almost Diamonds dislikes rationalism because she believes in an emphasis on empiricism over armchair speculation, careful scholarship over ignorance, and epistemic humility. But she’s just described the rationalist movement almost to a ‘T’! She’s attacking the rationalist movement for not living up to her ideal philosophy which is…the precise philosophy of the rationalist movement!

This is mean, but I’m going to say it. Almost Diamonds describes the rationalist movement in a way that even the most cursory glance at any rationalist site or document would disprove. Her opinion seems to be based entirely on a distorted idea of the dictionary definition of the word “rationalism”.

It’s almost like she’s, I don’t know, sitting in an armchair speculating about what rationalism must be, rather than going out and looking for evidence.

ಠ_ಠ

IV.

Also, can I just mention that one of the commenters on that blog says that the problems with the rationalist movement are a lot like the problems with frequentist statistics, and what would really help them is if they investigated Bayesianism? I swear I am not joking. I swear this is a thing that happened.

V.

But aside from all this, I do think there’s an important point that needs to be made here. That is – given that empiricism and scholarship are obviously super-important, why are they not enough?

The very short answer is “A meta-analysis of hundreds of studies is what tells you that psychic powers exist. Critical thinking is what helps you figure out whether to trust that result.”

The longer answer: rationality is about drawing correct inferences from limited, confusing, contradictory, or maliciously doctored facts. Even the world’s most stubborn creationist would have to realize the truth of evolution if you could put her in a time machine and make her watch all 3 billion years of life on Earth. But more rational people can realize the truth of evolution after reading a couple of good biology textbooks and having some questions answered. And Darwin could realize the truth of evolution just by observing the natural world and speculating about finches. There’s something I do better than the creationist and Darwin does better than me, and it’s not “have access to data”.

Life is made up of limited, confusing, contradictory, and maliciously doctored facts. Anyone who says otherwise is either sticking to such incredibly easy solved problems that they never encounter anything outside their comfort level, or so closed-minded that they shut out any evidence that challenges their beliefs.

Given this state of affairs, obviously it’s useful to have as much evidence as possible, in the same way it’s useful to have as much money as possible. But equally obviously it’s useful to be able to use a limited amount of evidence wisely, in the same way it’s useful to be able to use a limited amount of money wisely.

I recently reviewed thirty-five studies on racism in the criminal justice system, a very controversial topic. But I would suggest that almost nobody would change their opinion about this based on simple number of studies reviewed. That is, if a person who has read five studies and believes the system is racist encountered another person who has read ten studies and believes the system is fair, she would not simply say “Well, you’ve read more studies than I have, so I guess you’re right and I’m wrong.” She would probably say “That’s interesting, but I need to double-check the methodologies of those studies, make sure they mean what you think they mean, make sure you haven’t specifically selected only studies that prove your view, and make sure you haven’t fallen into one of a million other possible failure modes.”

The part where you have read 5 studies but I have read 10 is the empiricism that Almost Diamonds would say is the only meaningful skill that exists. The part where we want to make sure they’re good studies and I understood them right is rationality. I would trust the opinion of a rational person who knows one study far more than that of an irrational person who knows fifty. If you don’t believe me, I invite you to check out the hundreds of studies published in creation science and homeopathy journals every year.

Or what if you’re working in an area where you don’t even have hypotheses yet? It’s your job to explain or predict something that’s never been explained or predicted before. Sure, you’ve got to have a background level of expertise and scholarship, but no matter how many x-ray crystallographers you have somebody has to be the one to say “You know, our data would make sense if this molecule were in the shape of a helix.” What if you’re trying to predict the future – like in what year fusion power will become a reality, or whether a stock is going to go up and down – and you’ve already reviewed all of the relevant evidence? What then?

If somebody says that rationality is all nice and well, but not really important because you can just use the facts, then this is the surest sign of somebody who doesn’t possess the skill and doesn’t even realize there is a skill there to be possessed. They have inoculated themselves with the cowpox of doubt, trained themselves on easy problems so long that they’ve dulled their senses and forgotten that problems that require more thought than just looking up the universally-agreed-upon scientific consensus in Wikipedia even exist.

There is one paragraph for which I will give Almost Diamonds credit: she is partly right when she says rationalism is fundamentally an individualist endeavor. I mean, only in the sense that martial arts is an individual endeavor – you can train with lots of people, you have to train with lots of people, you’ve got to learn the craft from others and stand on the shoulders of giants – but in the end you’ve got to punch the other guy yourself.

Thousands of scientists have worked their entire lives to get you the evidence in favor of evolution. But thousands of creationists have worked their entire lives to obfuscate and confuse that evidence. Thousands of scientists have studied the criminal justice system, but many of them aren’t very good at it, many of them disagree with one another, and very likely none of them have worked on the exact subsubproblem that you’re interested in. Other people can present the facts to you, but in the end you’re the one deciding what and who to believe. Just like everybody dies alone, everybody decides on their beliefs alone. And rationality is what allows them to do that accurately.

If you’re investigating a problem even slightly more interesting than evolution versus creationism, you will always encounter limited, confusing, contradictory, and maliciously doctored facts. The more rationality you have, the greater your ability to draw accurate conclusions from this mess. And the differences aren’t subtle.

A superintelligence can take a grain of sand and envision the entire universe.

Einstein can take a few basic facts about light and gravity and figure out the theory of relativity.

I can take a bunch of conflicting studies and feel sort of confident I’ve at least figured out the gist of the topic.

Some people can’t take a movement that emphasizes on its founding document “OUR VIRTUES ARE EMPIRICISM, SCHOLARSHIP, AND HUMILITY” and figure out that it considers empiricism, scholarship, and humility to be virtues.


567 Responses to Why I Am Not Rene Descartes

  1. Anonymous says:

    In the interest of steelmanning, perhaps you should consider it a critique of the proclaimed values of rationalism vs the enacted ones. You do address this to some degree, but on some points you don’t address it. Re: Humility for example.

    • llamathatducks says:

      Yeah, that’s what I was thinking too. (I’m not part of the community except in that some of my friends are and I’m now unexpectedly into this blog and I do actually like the founding values, so I don’t know if the critique becomes correct or reasonable if taken that way, but it’s a reasonable interpretation.)

      As an outsider who values many of the same things you do but is a little reluctant to identify with y’all, I’ll share a bit of why. Honestly the first time I heard about this ideology, the name itself turned me off, because while I understand the stated philosophy is that you TRY to be as rational as possible, the name kind of makes it sound as though you think you actually are. Which is overconfident. And because this is a self-serving claim, it sounds a bit arrogant. I haven’t really been able to shake that, even though I understand the emphasis on the “aspiring” bit. Because of this, almost regardless of how much I like the founding ideas, I am scared of applying the word to myself.

      Now that I’ve read a bit more of what rationalists write, I’ve noticed another thing. While people occasionally admit that they will make mistakes, the overall tone used is highly confident-sounding. Pretty broad statements, sometimes, presented kind of matter-of-factly in ways that sound hard to disagree with. Very few hedge words to soften the message or make it sound less like objectivity incarnate. This can make me feel nervous and uncomfortable.

      Perhaps that’s a weird response given how many other groups use extremely confident language. But for some reason this has been my impression.

      (Edit) Oh and the other thing is that I feel uncomfortable around any movement that has a central figure or founder who seems to have some form of authority and is alive. Which is maybe silly? I don’t know.

      • Ilya Shpitser says:

        “Which is maybe silly?”

        It is not silly at all, it is a very sound heuristic.

        My critique of the rationalist community is that it needs to stop wanting EY as a guru (and EY needs to stop wanting to be a guru).

        • zn289 says:

          +1

          EY is a smart guy with interesting, valuable ideas and an outsized ego. His outsized ego has been a thorn in the side of the spread of his interesting, valuable ideas for about as long as he’s been trying to spread them.

          • Eli says:

            EY is a smart, very nerdy guy who never learned to “hide his powerlevels”, hide his actual goals, and just reveal as much of his real self to people as they’re ready to see. At least, in my estimation.

            Mind, I’m not claiming that’s a pleasant way to live, but it does actually work, as a way of signaling. You can go through life this way, getting stuff done without getting undeserved infamy as a kook by your early 20s.

            On the other hand, when one guy comes out as an incredibly nerdy comp-sci/stats/AI kook, it seems to result in a social group where everyone of that ilk can hang out and enjoy each-other’s company, so something good did come of it.

      • shokwave says:

        My experience with several years of Less Wrong and rationality communities is that aspiring rationalists strive to use the correct number of hedge words in their statements.

        Rationality would probably say something like, “there is no virtue in excessively softening your message”. And indeed, upon checking this on LessWrong (virtues of scholarship!) it does:

        “…Physicists have this arrogant idea that their models should work all the time… But think of the social audacity of trying to be right all the time! I seriously suspect that if Science claimed that evolutionary theory is true most of the time but not all of the time—or if Science conceded that maybe on some days the Earth is flat, but who really knows—then scientists would have better social reputations … When you argue a lot, people look upon you as confrontational. If you repeatedly refuse to compromise, it’s even worse. Consider it as a question of tribal status: scientists have certainly earned some extra status in exchange for such socially useful tools as medicine and cellphones. But this social status does not justify their insistence that only scientific ideas on evolution be taught in public schools … They ought to be more humble, and compromise a little.”

        http://lesswrong.com/lw/gq/the_proper_use_of_humility/

        • 27chaos says:

          My experience is that the “correct” number of hedge words is whatever makes a statement difficult to argue with yet still forceful sounding. Sentences are used which have both a weak and a strong meaning, and those are equivocated between as the commenter desires. It’s more about skill with rhetoric than rationality.

          Thankfully, this blog is much better.

        • zn289 says:

          “Physicists have this arrogant idea that their models should work all the time”

          And yet that’s exactly what happened with Newtonian physics. The problem with this kind of attitude is that the universe is consistently much more complicated than you think it is. It’s not about ceding status, it’s about realizing that what’s in your brain is a small subset of the world’s total information and even if you have all the relevant information, you haven’t necessarily put it together properly.

          • Anonymous says:

            But we found out about the limitations of Newtonian physics precisely because we tried to apply it to everything. The attitude is not “our model should always work so who cares about evidence”. It’s “our model should always work, so even if it fails on some edge case we still have work to do”. The rigidity of scientific theories is one of their most useful properties.

      • Matthew says:

        This has been mentioned (and mentioned, and mentioned) before, but there are a lot of people, probably a large majority of people, associated with LessWrong-style rationality who disagree with Eliezer on at least one of: his view of quantum mechanics, AI foom, donating to MIRI, the handling of the basilisk, polyamory, and probably some other things I’m not remembering right now. Eliezer only has authority up to the point where he says something dubious, at which point he is as likely to get rhetorically jumped all over as anyone else.

        • llamathatducks says:

          Sure, but notice how on this post lots of people are defending rationalism specifically by pointing to Yudkowsky’s writings. This dynamic is sufficient to make me feel uncomfortable.

          • Eli says:

            There’s lots of other people who understand the relevant math and cognitive science just fine, but don’t have the time or devotion to write out several books’ worth of popularization material and philosophical interpretation.

        • Jiro says:

          there are a lot of people, probably a large majority of people, associated with LessWrong-style rationality who disagree with Eliezer on at least one of: his view of quantum mechanics, AI foom, donating to MIRI, the handling of the basilisk, polyamory, and probably some other things I’m not remembering right now.

          Thus my theory: LW is actually being counterproductive. The founders think rationality leads to LW ideas. So they try to get people to be more rational. Unfortunately, many of those people *actually* become more rational, which leads them to reject LW ideas.

          It’s like having a flat earth believer who falsely thinks that encyclopedias support flat earth theory. So whenever someone doubts the flat earth, he tells them to read the encyclopedia to discover the truth. They do, and so they end up disagreeing with him.

          • Matthew says:

            I disagree that LessWrong exists primarily to convince people of AI FOOM or the benefits of cryonics. It definitely wasn’t created for polyamory-related reasons.

          • suntzuanime says:

            I have enough faith in Eliezer at least to believe that, conditional on his beliefs being wrong, he doesn’t want to convince people of them. Helping people believe true things is just about the most virtuous possible way to try to convince people of things you believe to be true.

        • Susebron says:

          Yes, and that’s a good thing. But to someone who’s looking over the movement cursorily, that’s not the impression they’ll get. To an outsider, EY looks very much like an active, charismatic leader who the community follows. This is not a good thing, particularly given how far away he is from most people in inferential distance. Sure, thinking “EY = LW = rationalists = crazy” is irrational, but the rationalist movement of all people should know that people are generally irrational.

      • LTP says:

        I am similarly somebody who likes many individuals of the Rationalist Movement (Scott Alexander, obviously, and Julia Galef are two that come to mind), but who is very off-put and skeptical of many of the claims of the movement itself.

        I’m with you, the lack of humility (despite purporting to be humble) is a problem. As an aspiring philosopher, I’ve really been put off by the community’s treatment of philosophy (I’m actually put off by a lot of other stuff, too, but I’ll stick to this example). Witness Yudkowsky dismissing entire sub-fields of, and book-length arguments from, philosophy with short blog posts. Boldly declaring he and his community have “solved” or “moved beyond” many philosophical problems that are still considered hot issues in the field. The community also makes a lot of epistemological commitments with minimal argument or outside sourcing of arguments for them. Now, there’s nothing wrong with that to see what follows if you acknowledge you’re doing that, but please don’t act like you’ve “solved” those issues by assuming answers.

        It’s amateur-philosophy that condescends to professional philosophy.

        I’m all for teaching critical thinking skills. I’ve learned a few things in that area from the community. But there’s a lot of stuff mixed in that makes it hard to take the community seriously, even if I like some individuals who are a part of it.

        • suntzuanime says:

          Witness Yudkowsky dismissing entire sub-fields of and book-length arguments from philosophy with short blog posts. Boldly declaring he and his community have “solved” or “moved beyond” many philosophical problems that are still considered hot issues in the field.

          When you actually understand something you can often explain the core of it fairly concisely, as opposed to the reflexive squid-ink cloud of verbiage necessary to make your confusion look like there’s wisdom hidden inside it.

          • peterdjones says:

            If you think you have discerned something about a complicated subject that the experts have missed, you should assume you are mistaken. THAT IS THE WHOLE POINT OF THE RATIONAL VIRTUE OF HUMILITY.

          • LTP says:

            Or, it could be the Dunning-Kruger effect.

          • suntzuanime says:

            Rational humility is defeasible. Sometimes experts are just wrong. Especially when “experts” get that status by writing big long confusing books that are hard to argue with instead of any sort of model that cashes out in real observations.

          • peterdjones says:

            The fact that an assumption is defeasible is built into the meaning of the word assumption… it doesn’t need to be spelt out.

          • suntzuanime says:

            My point is that you assume he failed to assume it, whereas it seems to me that he defeated the assumption.

          • peterdjones says:

            He checked that he had really understood it using his really-understand-ometer? The closest thing in the real world to a really-understand-ometer is the opinion of an expert.

        • Paul Torek says:

          As a no-longer-aspiring philosopher (got the PhD but don’t work in the field), I have to come down somewhere in between you and Yudkowsky. There are an awful lot of wrong trees in professional philosophy with an awful lot of barking up them going on. Sometimes moving beyond a problem is exactly the right response.

          Sometimes philosophical positions are assumed, or alternatives are neglected, with much trouble or potential trouble in store. On the other hand, for an example dear to my heart, Yudkowsky’s solution of the free will problem is (most of) the solution to the problem. And the brevity of those writings is a virtue.

          • LTP says:

            The post on free will sort of makes my point on his hubris, though. If Yudkowsky’s goal is to educate the public on his views, that’s fine. However, instead of citing philosophers who agree with him on free will — and who have more extensive works on the issue and have responded to strong objections from colleagues — he writes down his views and acts like he has single-handedly solved the issue. Indeed, he juxtaposes his positions with “a philosopher”:

            “So a philosopher would say: Either we don’t have free will, or free will doesn’t require being the sole ultimate Author* source of your own decisions, QED.”

            … and yet he is ignoring the myriad of philosophers who don’t fall into that strawman binary and who have come to similar yet stronger positions on the issue than he does. Later in the post, he lays out a version of a compatibilist deterministic view of free will, acting like nobody has ever suggested his version before, or at least that nobody has thought of it the exact way he has. As if philosophers just hadn’t considered the issue. In fact, many compatibilist philosophers *have* tried to figure out what that feeling of freedom really amounts to. If I were to interpret him charitably, he is just saying some philosophers take that view, but he still doesn’t make that clear to the layman, and he doesn’t cite people who came to stronger versions of the same conclusion decades before Yudkowsky started his blog.

            I could make similar points about the way he states his views on reductionism and the computational theory of the mind.

            Furthermore, on all these issues, Yudkowsky just dismisses philosophers who disagree with him rather than engaging with their works in detail. He seems to only “steel man” arguments that share certain premises with him, but not ones that challenge his fundamental beliefs. If he were to “steel man” David Chalmers’ argument on consciousness, for instance, he wouldn’t have a couple short posts that add up to an undergrad philosophy paper on the subject, he would have much more: dozens of pages with extensive citations of Chalmers and others from both sides of the issue in the academic community. Now, it isn’t reasonable to expect an amateur to write such responses. However, if he isn’t going to do that, he either needs to be humble enough to cite philosophers who have responded to Chalmers at length that he agrees with, OR he needs to state his views with much less certainty.

            ETA: I am referencing this post mostly: http://lesswrong.com/lw/rc/the_ultimate_source/

          • Rob Bensinger says:

            Eliezer’s criticism of Chalmers’ view is correct, and stated more clearly than I’ve seen any professional philosopher state it. That doesn’t mean he successfully solved the hard problem, but he did refute one of the most prominent positions on it, without piggybacking on other philosophers’ responses to Chalmers. Maybe it would have been an efficient use of time for him to engage more with the literature, but as it stands, asking him to be more self-doubting in his criticism of Chalmers is asking him to believe or assert probabilistic falsehoods.

          • peterdjones says:

            Yudkowsky has refuted Chalmers on what? The existence of qualia? The nature of qualia? The existence of a hard problem? The existence of zombies? The implications of zombies?

          • Rob Bensinger says:

            He showed that Chalmers-style ideas in the neighborhood of panpsychism and Russellian monism fall victim to the most powerful objections to epiphenomenalism, by articulating and defending a causal theory of knowledge that excludes quiddities as grounds for knowledge.

          • Irenist says:

            Rob Bensinger,
            That’s an interesting thought.

            I saw EY as engaging with Chalmers’ panpsychism (and by implication other monisms like Russell’s) in the Zombie subsequence on a different level than discrediting quiddities. Are you talking about the anti-essentialism of the Reductionism sequence, generally, as the place where EY does this? Thanks.

          • Rob Bensinger says:

            Epiphenomenalism is already a pretty widely rejected idea among philosophers; even Frank Jackson eventually rejected it. Eliezer’s argument against Chalmers is, roughly, ‘The paradox of phenomenal judgment shows your view is epiphenomenalist, and therefore just as unbelievable as epiphenomenalism.’ I believe Chalmers’ response to this point was ‘no, my view isn’t epiphenomenalist’, the implication being that quiddities aren’t epiphenomenal.

            At this point the debate reaches a stalemate, unless we remember that the central thesis of the entire rest of the Sequences is that belief accuracy is a physical Bayesian process. If you don’t physically interact with a variable in a way that consistently changes your brain’s state, then it’s physically impossible for your belief to be predictably accurate. There are many philosophers who have objected to Chalmers’ view on the grounds that it makes reference to or knowledge of qualia impossible, but the Sequences provide an actual theory that places quiddities in the same category as more conventional epiphenomena. Eliezer’s discussion of zombies is an aside in the course of his exposition of his causal theory of knowledge (/objective-Bayesian theory of rationality), and the force of his argument (in particular, why his charge of ‘epiphenomenalism’ isn’t just a terminological mistake — why Chalmers is committed to something with the same problems as traditional epiphenomenalism) is only clear in that context.

          • Irenist says:

            Rob Bensinger,

            That was a really helpful clarification. Thanks!

          • peterdjones says:

            “Epiphenomenalism is already a pretty widely rejected idea among philosophers; even Frank Jackson eventually rejected it. ”

            I am not a fan of epiphenomenalism either.

            But rejecting epiphenomenalism is not sufficient to solve the Mind Body problem.

            “Eliezer’s argument against Chalmers is, roughly, ‘The paradox of phenomenal judgment shows your view is epiphenomenalist, and therefore just as unbelievable as epiphenomenalism.’ I believe Chalmers’ response to this point was ‘no, my view isn’t epiphenomenalist’, the implication being that quiddities aren’t epiphenomenal. At this point the debate reaches a stalemate,”

            Yep.

            “…unless we remember that the central thesis of the entire rest of the Sequences is that belief accuracy is a physical Bayesian process. If you don’t physically interact with a variable in a way that consistently changes your brain’s state, then it’s physically impossible for your belief to be predictably accurate. ”

            That’s more of a description of how physicalists see things than a proof of how they logically must be.

            In any case, it is not a decisive refutation of an explicitly nonphysical theory to point out that an aspect of it is physically impossible.

            “There are many philosophers who have objected to Chalmers’ view on the grounds that it makes reference to or knowledge of qualia impossible,”

            You paraphrased physically impossible as impossible. That is an important shift.

            ” but the Sequences provide an actual theory that places quiddities in the same category as more conventional epiphenomena”

            I am not sure what quiddities are supposed to be, but the Sequences notably don’t offer any account of qualia at all.

            For my money, Chalmers done right is a thoroughgoing dual aspect theory. That allows the required non-arbitrary connection between mental states and reports; it is just that the connection is not composed entirely of physical causality.

          • Rob Bensinger says:

            But rejecting epiphenomenalism is not sufficient to solve the Mind Body problem.

            Eliezer didn’t claim to solve the hard problem of consciousness; he just claimed to refute Chalmers’ argument against physicalism. That still leaves open what consciousness is, what results in it, etc.

            That’s more of a description of how physicalists see things than a proof of how they logically must be.

            No. All interactionists, epiphenomenalists, and panpsychists must also accept that if the brain is reliably correlated with a complex fact, then there is a causal chain linking the brain-state to the fact. To reject this claim isn’t just to reject physicalism; it’s to posit a miraculous coincidence that can get arbitrarily large as brains become increasingly sophisticated psychologists and philosophers of mind. You don’t get to just say ‘I’m not a physicalist’ and then posit completely inexplicable miracles of arbitrary complexity. Not without your theory getting penalized correspondingly. Non-physicalist theories need to be parsimonious and evidence-based too.

            You paraphrased physically impossible as impossible.

            No, philosophers claim that Chalmers’ view makes knowledge of qualia metaphysically/logically impossible, not just physically impossible.

            I am not sure what quiddities are supposed to be

            http://consc.net/papers/panpsychism.pdf

            For my money, Chalmers done right is a thoroughgoing dual aspect theory.

            If the phenomenal aspect doesn’t causally constrain the physical aspect, then it’s a miracle that the two are correlated; e.g., it’s a miracle that phenomenal pleasure correlates with cognitive pleasure, rather than (say) phenomenal pain correlating with cognitive pleasure.

            On the other hand, if the phenomenal aspect does causally constrain the physical aspect, permitting only one model, then zombies and inverts are logically impossible. So you lose any reason to reject physicalism.

          • peterdjones says:

            Robby

            > he just claimed to refute Chalmers’ argument against physicalism.

            Join the queue. That’s got to be one of the most unpopular arguments in contemporary philosophy.

            > No. All interactionists, epiphenomenalists, and panpsychists must also accept that if the brain is reliably correlated with a complex fact, then there is a causal chain linking the brain-state to the fact

            Reports of qualia aren’t reports of external facts; they are the brain reporting on itself. However, vanilla notions of causality require cause and effect to be different things.

            Miracles and coincidences need to be avoided. Dual aspect theory avoids them without embracing a causal link between the mental the physical. The two aspects are correlated, because they are two aspects of the same thing.

            EY does not have a decisive refutation of Chalmers because the target isn’t clear… the property dualism construal works differently from the dual aspect construal. He also doesn’t have a novel objection.

          • Tom Richards says:

            I’ve never really understood why it’s the existence of mental substances that tends to be doubted by people addressing the mind-body problem, rather than physical ones…

        • Alex Godofsky says:

          The community also makes a lot of epistemological commitments with minimal argument or outside sourcing of arguments for them.

          Bingo. I remember when I first encountered lesswrong and went to read the sequences and some other essays. At one point I reached an essay in which Yudkowsky very confidently stated that one particular interpretation of quantum mechanics was necessarily the one we should believe, and that it was totally ridiculous to believe any others. Without having any strong opinions on which one is right, I do know that there is no experiment we can actually perform to distinguish the different interpretations, and thus to have such a definitive view seems incredibly arrogant. That was the point where I decided I wasn’t going to get anything more useful from the essays and quit reading.

          • Anonymous says:

            There’s no experiment you can perform to distinguish general relativity from my theory that undetectable gnomes move things around. Should we be agnostic between these two theories?

          • Alex Godofsky says:

            Have you mathematically formalized your gnome theory so that it produces identical predictions to general relativity in all circumstances? Then I would look askance at anyone who had really strong feelings about it one way or the other and argued vociferously against it, and I would rightfully conclude that those people didn’t have many useful insights that are worth my attention.

          • LLDOB says:

            So, in a similar vein, if I don’t like Wittgenstein’s logical atomism, I can conclude that propositions 3-7 of the Tractatus aren’t worth wasting my time on as anything more than a historical document and probably don’t have very good insights? In my experience, “correct” answers that you agree with on issues related to philosophy are really hard to come by.

          • Alex Godofsky says:

            I haven’t read Wittgenstein so I can only infer the argument you’re making from context. But a good Bayesian should, upon witnessing a person make one silly, pointless argument, update in favor of that person’s future arguments being more likely to be silly and pointless.
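
            (An aside: the update described above can be sketched numerically with Bayes’ rule. The numbers below are purely illustrative, not drawn from the discussion.)

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E), where
#   H = "this writer is generally unreliable"
#   E = "the writer just made one silly, pointless argument"

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of H after observing evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.2                  # prior belief that the writer is unreliable
p_silly_if_unreliable = 0.6  # unreliable writers often argue badly
p_silly_if_reliable = 0.1    # reliable writers rarely do

posterior = bayes_update(prior, p_silly_if_unreliable, p_silly_if_reliable)
print(round(posterior, 3))  # 0.6: one bad argument triples the suspicion here
```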

          • Luke Somers says:

            That no conceivable experiment could distinguish these interpretations makes it MORE justifiable to pick one over another based on the kinds of criteria used in the sequences – not less.

          • Alex Godofsky says:

            I am arguing from instrumental rationality, which I thought was reasonably clear from the fact that I never alleged that Yudkowsky’s argument was wrong (I have no opinion) but rather that it was not useful to me.

          • Luke Somers says:

            Not useful to YOU, sure. But for anyone actually concerned with quantum mechanics, it does make a difference. Collapse interpretations lead you up blind alleys and rifle your pockets for good experiment ideas.

          • Alex Godofsky says:

            … we’ve already stipulated that there isn’t an experiment to distinguish the two.

          • Ilya Shpitser says:

            Luke, what are you even talking about?

            This preoccupation with MWI is completely bizarre. The interpretation does not matter, almost by definition. If you like Bayes so much, the Bayesian thing to do is to not exclude possibilities a priori but maintain a distribution over everything still in play. (Of course, unless someone has a Nobel-worthy idea, you will not be able to update this distribution.)

          • ADifferentAnonymous says:

            The argument is that this is a Russell’s teapot situation. That is, Copenhagen has a completely superfluous postulate whose truth cannot be proven or falsified by experiment, and we should reject such postulates.

          • vV_Vv says:

            @ADifferentAnonymous

            MWI has the same number of postulates as Copenhagen and the other common interpretations.

            In the last year there have been some attempts to derive the Born rule (the controversial postulate) from the other ones under MWI. AFAIK the issue is still not settled, but if these attempts work, this is indeed good news for MWI. But that’s irrelevant as far as Yudkowsky’s arguments go.
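
            (Background for readers: the Born rule mentioned above says that measurement probabilities are the squared magnitudes of a normalized state’s amplitudes. A toy sketch, with an invented two-outcome state:)

```python
import math

def born_probabilities(amplitudes):
    """Born rule: outcome probability p_i = |a_i| ** 2 for each amplitude."""
    return [abs(a) ** 2 for a in amplitudes]

# An equal superposition, (1/sqrt(2)) * (|0> + |1>):
state = [1 / math.sqrt(2), 1 / math.sqrt(2)]
probs = born_probabilities(state)

print([round(p, 3) for p in probs])  # [0.5, 0.5]
assert abs(sum(probs) - 1.0) < 1e-9  # a normalized state's probabilities sum to 1
```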

        • Eli says:

          Oh boy, and now you’ve got me to deal with, who somehow manages to hate mainstream philosophy even more than Eliezer.

          Essentially, let’s ask the question of philosophy: “What do you think you know and how do you think you know it?”

          And the resultant answer is usually, “We believe in a priori knowledge derived from pure reason and intuition.” Which is, from the cognitive-scientific perspective, absolute bunk. All your models of logic and causality, to my knowledge, every method of “a priori reasoning” in your mind, are actually learned.

          While the philosophical questions of life are indeed important questions, and while I often disagree with Eliezer on various beliefs or positions for what I consider good reasons, I feel a need to go even further than him in saying: the field of philosophy makes the fundamental error of believing that truth comes from inside rather than outside, and from deductive rather than inductive reasoning. The result is that the field of philosophy is largely useless at answering or dissolving the Larger Questions of Life, instead finding an infinite variety of ways to go around in circles telling each other in more and more elaborate ways that nothing can really be known at all…

          Until eventually some scientist somewhere does some experiments that definitively answer the issue, at which point the divide within philosophy between naturalists and atheistic mystics becomes even starker and the arguments over definitions get even louder.

          • Irenist says:

            Essentially, let’s ask the question of philosophy: “What do you think you know and how do you think you know it?”

            And the resultant answer is usually, “We believe in a priori knowledge derived from pure reason and intuition.” Which is, from the cognitive-scientific perspective, absolute bunk. All your models of logic and causality, to my knowledge, every method of “a priori reasoning” in your mind, are actually learned.

            “Usually”? The idea that the “usual” philosophical answer denies that knowledge is initially “learned” through the senses before being known further through abstraction betrays an ignorance of the history of philosophy.

            Both Aristotle and Aquinas, e.g., endorsed the Peripatetic Axiom:
            “Nothing is in the intellect that was not first in the senses.” They were both pretty mainstream thinkers in their day, and thus representative of what “usual” ought to mean.

            Locke, Hume and the Empiricists (i.e., half of the mainstream in their day, Cartesian/Leibnizian/Spinozan rationalism being the opposed half of the mainstream) built their entire epistemology on the idea of the mind as a tabula rasa that receives all its impressions from sense data, with no innate knowledge.

            Then Kant, the most prominent of modern philosophers, came along and said that we do have innate ideas, but they only reflect the structure of our minds, not the structure of the world.

            Here and now, analytic philosophers are the mainstream. And Quine’s position in “Two Dogmas of Empiricism” (that the supposedly a priori/analytic isn’t really distinguishable from the a posteriori/synthetic in any consistent way) is certainly an influential position in the analytic philosophy “mainstream.” And analytics generally bend over backwards to accommodate and integrate the insights of the hard sciences as much as possible.

        • Illuminati Initiate says:

          What exactly is the difference between professional and amateur philosophy, besides positions in universities? It does not seem to me that “professionals” are any less likely to say contradictory or incoherent things than “amateurs”. I’m not trying to insult the “professionals” here; it just seems that philosophy is not something that requires, or is even helped by, any sort of formal training.

          • LTP says:

            “It does not seem to me that “professionals” are any less likely to say contradictory or incoherent things than “amateurs”.”

            On what basis do you make that statement? How much contemporary philosophy have you read?

            Certainly professionals put forward contradictory or incoherent beliefs just as amateurs do. However, from what I’ve seen of both, the mistakes amateurs make tend to be of the more obvious sort, i.e. ones that have been made before and would easily be recognized as mistakes by a professional, or just obviously bad ones. On the other hand, professional mistakes tend to be much less obvious and not to have been made before. Usually the problems with them are not seen until another professional points them out, and it may be years or decades before somebody discovers the crucial objection. The ways a professional’s argument is unsound or invalid are not obvious to contemporaries, contra many amateur mistakes. Yes, there are hacks in the profession, but there have historically been a lot of scientists who proposed theories that are, in hindsight, crazy.

            The importance of professional training in philosophy is two-fold. First, learning the history of philosophy will, on the one hand, open one’s mind, and on the other hand, make one aware of the mistakes philosophers in the past made. Second, you will have your ideas and arguments challenged. You won’t be able to make the simplistic and easily demolished amateurish arguments for a position. My philosophy professors are brutal in their commentary on my papers, for instance, and I always agree with their critiques.

            An amateur might, through their arguments, commit themselves to contradictory positions x and y without seeing it (because x and y are merely implied by their argument, but not explicitly laid out). A professional will be more likely to see that they must either choose between x and y, or find a novel way to reconcile them (which will probably alter one or both of x or y in certain ways). Even if they don’t see it themselves, there is peer-review.

            I’m not saying amateurs can’t do good philosophy (though not many do). Saul Kripke, for instance, published his first paper in a professional journal at 17 IIRC, and he went on to become one of the most important philosophers of the late 20th century. But that doesn’t discredit philosophy, just as the amateur scientists and amateur mathematicians who have found success don’t discredit science or math.

            Anyway, I hope I’ve been somewhat coherent in this post.

          • Illuminati Initiate says:

            Yeah, I think I was being too… broad?… in my statements, I kind of take this one back. Sorry.

            To be fair to myself, though, I was not intending to disparage people who do philosophy “professionally”.

            I would still argue that philosophy is not something that requires training the same way sciences do, but you are right that training can help.

          • LTP says:

            Agreed.

            One of the reasons the sciences require a great deal of training is that you often need access to very expensive labs with advanced technology to learn how to do relevant science, whereas with philosophy (and other subjects like math, for that matter) you mostly just need books to learn it, at least in theory. I still think formal training is very helpful for the vast majority, though.

      • Matt says:

        Have you considered the possibility that rationalists (or skeptics or atheists) are accused of arrogance not because they are more confident in their belief than others, but because they confidently hold *unusual* beliefs?

        • Anonymous says:

          Confidence has nothing to do with it. It’s people refusing to engage fully with the work of people they criticize, and dismissing entire areas of thought and research based on a half-remembered Wikipedia page. That’s arrogance, and when you do that, you fall victim to Dunning-Kruger and end up like anti-vaxxers and creationists — cheerfully delusional, convinced that your opponents have nothing of value to say.

          • Anonymous says:

            Do you also think a person should study the Bible or the Quran before he or she concludes that the theologians have nothing of value to say?

          • Anonymous says:

            (same anon as parent) Sort of. An atheist has no reason to read theology, but one who claims that there can’t be a God because of all the evil in the world has essentially made a theological claim, and needs to address a whole heaping mess of philosophical and theological arguments. (See “theodicy”.) You can’t just make your argument and then throw up your hands and yell “I win!” If your arguments stray into an area where there’s an established body of thought/expertise, then that needs to be addressed thoroughly.

          • Illuminati Initiate says:

            If God is defined as all powerful and good (a strange definition, see below), I don’t see any way in which the problem of evil argument could be wrong. What is the explanation?

            I suspect I know what they are, and the problem here is confusion over the nature of morality. God is being defined here using a subjective term. If you do that, you end up with the conclusion that it is logically possible for God to exist for some people but not for others*. Theists attempt to refute the problem of evil by saying “Well, I say it’s not evil!”, but God still cannot possibly exist for the people who brought it up, because they think all that stuff is evil. Using subjective definitions of allegedly real objects is silly like this. So the disagreement here is actually a value dispute rather than a logical argument. And there is no reason to research the values of your opponents in a value dispute.

            *objective morality is utterly nonsensical, it is a non-concept. Not only have I never seen a good argument for it, I can not even conceive of or comprehend the possibility of one.

          • Tom Hunt says:

            Argument from incredulity is not an argument. There are plenty of people who make arguments for objective morality, of varying levels of soundness. While this is perhaps unwarranted, the tone of your dismissal leads me to believe you haven’t actually sought out or read any of them.

            For what it’s worth, according to my interpretation of Feser’s interpretation of Aquinas anyway, the traditional (or one traditional) response is that every evil on Earth is directed in the service of infinite goodness in Heaven, or some such thing. (Also, the definition of God as ‘good’ is quite a bit more complex than you’re imagining, and doesn’t have much to do with any individual’s perception of morality.)

          • Protagoras says:

            That response really doesn’t work, though, however traditional it may be. God is supposed to be omnipotent, which seems like it ought to mean that whatever “infinite good” God can bring out of evil could have equally well been brought about via non-evil means. Of course, perhaps “omnipotent” doesn’t actually mean without restrictions (that’s basically how the free will defense works, whatever its proponents claim), but if Feser intends such a restricted meaning, he never explains it.

          • Troy says:

            If God is defined as all powerful and good (a strange definition, see below), I don’t see any way in which the problem of evil argument could be wrong. What is the explanation?

            There are different versions of the problem of evil. The strongest formulations argue that the existence of any evil at all entails the non-existence of God. These formulations are clearly too strong: from the fact that God is all good, all knowing, and all powerful, it does not follow that God will not allow or create evil. As Alvin Plantinga observes, the former premises do not entail the conclusion in any straightforward logical manner (you couldn’t, say, formalize this argument in 2nd-order predicate logic to make it valid). Perhaps the entailment is supposed to be semantic, in virtue of the meanings of the terms; but it’s not clear how this is supposed to go. Consider an analogous case: from the fact that I always desire ice-cream (all-goodness) and always am able to get ice cream (omnipotence/omniscience) it does not follow that I am always getting ice cream. I could have reasons to not always get ice cream even though I always want it. Similarly, God could genuinely hate all evil and yet still allow or create it if he had sufficient reasons to.

            Weaker formulations of the argument focus not on evil in general, but on the actual kinds and magnitudes of evil in our world. And they claim not that these entail God’s non-existence, but that they constitute evidence against the existence of God. I’m inclined to agree that the evils we observe are evidence against theism, but as with any theory we need to consider all the evidence and I don’t think that the problem of evil is decisive when held up against natural theological arguments and other evidences for theism.

          • Illuminati Initiate says:

            @Tom Hunt

            You’re right about incredulity and I apologize for my perhaps overly-hostile tone.

            But I am highly skeptical of the claim that anyone has successfully argued against the is-ought divide, and if anyone has such an argument I would truly want to see it. Most arguments for objective morality I have seen tried to ground morality in some sort of “it is advantageous for individuals or societies to follow it” argument, but those do not give any reason why you should objectively care about your own well-being either, or why you should accept their definition of well-being as what you want. The idea of an objectively correct preference of any sort still seems absurd to me, in the way that someone claiming to have an argument that 1+1=3 does.

            I’ll try and find some, but I doubt I will.

          • drunkenrabbit says:

            @Illuminati Initiate
            I agree, most attempts to bridge the is/ought divide fall fairly flat. Divine Command Theory defines good as synonymous with the will of God, which is the one objective basis for grounding morality (although it would be irrelevant to an atheist). Therefore, the term “good” isn’t at all subjective, but rather real, and bears immediate relation to humans and human behavior.

            Troy nailed the problem of evil thing. The traditional counterargument to the weak version he put forward is Leibniz’s theory that we live in the best of all possible worlds. Which some people (like Voltaire) strenuously disagreed with, but it’s a hard statement to prove or disprove, since we don’t have any points of comparison.

          • Troy says:

            I am highly skeptical of the claim that anyone has successfully argued against the is-ought divide, and if anyone has such an argument I would truly want to see it.

            I’d recommend Judith Thomson’s Normativity. She nicely articulates how normative concepts and judgments pervade ordinary thought, and in the process pretty soundly (IMO) demolishes several popular 20th century meta-ethical positions.

          • Tom Hunt says:

            @Protagoras:

            From what I can tell (from Feser’s summary of Aquinas, I am not a theologian, &c. &c.), God is “omnipotent” in the same kind of sense that he is “all-good”, i.e. as the output of a convoluted logic proof which establishes its conclusion in a meaningful but circumscribed kind of way. Thus, yes, God is “omnipotent”, but that doesn’t mean he is capable of effecting any outcome you describe in words here. As far as I can tell, any coherent theological position must accept a still-circumscribed “omnipotence” of this kind; a truly unrestricted one is logically incoherent. The traditionalist God, as I understood him from Feser, is only even a volitional being in this same analogical/circumscribed sense; in a lot of ways it’s more productive to describe him as a set of tendencies of the universe, rather than an individual sapient being.

          • Protagoras says:

            That’s all very well, but Feser will also insist that he is of course not a pantheist (as would Aquinas before him); indeed, he is committed to some pretty specific and narrow Christian doctrines. It doesn’t make much sense for a “set of tendencies of the universe” to perform miracles on behalf of chosen favorites, or (more strikingly) to have a human son/incarnation.

          • Illuminati Initiate says:

            @Troy, OK thank you, I will look into Normativity.

            As to your ice cream analogy: if you constantly wanted ice cream, and could make infinite ice cream at any time, the only reason you would not is if you had other preferences that outweighed the desire for ice cream, such as not suffocating in ice cream or not having your stomach burst from all the ice cream you ate. This analogy only works for the problem of evil if this universe is already completely optimized for utility, such that an omnipotent being could not improve it – like the free will defense, which, if I understand correctly, says that “meaningful choice” is inherently good and any divine attempt to reduce other kinds of bad stuff in the universe would actually decrease utility by reducing “meaningful” choice. The thing is, this goes right back to what I was talking about: I certainly do not think “meaningful” choice is inherently good, and the universe is definitely not optimized for my morality by an omnipotent being. Therefore no God that is omnipotent and benevolent by my morality can exist. So it boils down to a values dispute, and the silliness of defining an allegedly real object in subjective terms.

          • drunkenrabbit says:

            That only holds if you insist on defining “good” and “bad” as subjective. According to divine command theory, the will of God is synonymous with good. Thus, if God desires people to have free will, then free will is good. If that’s true, then the fact that you “certainly do not think ‘meaningful’ choice is inherently good” is simply evidence that your views and desires are defective, because they differ from God’s. If good exists in any metaphysical way (divine will, platonic forms, etc) then “your morality” isn’t a meaningful or useful concept.

            Edit: Not saying it’s necessarily true, just saying that it’s a way of solving the is-ought problem that’s logically consistent.

          • Troy says:

            This analogy only works for the problem of evil if this universe is already completely optimized for utility such that an omnipotent being could not improve it,

            The analogy was intended to show that an argument of the form

            (P1) S (always) desires X.
            (P2) S (always) knows how to get X.
            (P3) S can (always) get X.
            (C) Therefore, S (always) gets X.

            is not in general valid – that is, not valid for all values of S and X. I take this form to be close to the traditional presentation of the logical problem of evil, e.g., the oft-cited Epicurean paradox.

            One might object that the argument is valid if we add some further premise, of the form (P4) S has no overriding reasons to not get X. (You put this in terms of optimizing for utility. This is too specific; it presupposes some kind of consequentialism which the theist might reject. For example, perhaps God obeys certain deontological side-constraints in his actions, so that he acts in ways that do not maximize utility.)

            Even with (P4), that the argument is valid is still a substantive claim, relying on assumptions about the nature of action, desires, etc. But the theist can grant, for the sake of argument at least, that (P4) makes the argument valid, and then simply deny (P4) in the argument from evil. (P1)-(P3) plausibly follow from classical theism, but (P4) does not.

            You might respond by giving arguments for (P4). For example, you might note that you can see no good reasons for God to allow evil, and infer that there are no good reasons. The theist would presumably dispute this inference; the dialectic would then proceed in much the same way as it in fact has in late 20th century philosophy of religion.

          • Illuminati Initiate says:

            Edit: this comment was to drunkenrabbit.

            Right, I’m aware of divine command theory, but I’ve never seen any attempt at justifying why God’s morality is any more objectively correct than mine or yours, or even how such a notion makes sense. And if goodness is defined as God’s preferences, not just determined by them, then I think we may have a language dispute here: if benevolence is defined as helping God’s preferences get satisfied, then clearly God could be benevolent by that definition. But that’s not how I was using the word “good”, or how most people seem to use it. “Red” means a color in English and a fishing net in Spanish.

            …I’m actually starting to think that this whole debate is being confused by the usage of the words good and evil, and I’m losing grasp of my point in the language. If someone wants to define my preferences as evil, that’s OK; I’m gonna go be evil then.

            By the way, just in case anyone misinterprets me, I am a metaphysical moral anti-realist, but NOT a normative moral subjectivist.

          • drunkenrabbit says:

            Right, “God is good” can mean both “God is equivalent to goodness” and “God possesses all the attributes which I intuit as being good”, and those are very different things. I think your average churchgoer on Sunday morning would have a view closer to the latter, although it wouldn’t be philosophically/theologically correct.

          • Illuminati Initiate says:

            You might respond by giving arguments for (P4). For example, you might note that you can see no good reasons for God to allow evil, and infer that there are no good reasons. The theist would presumably dispute this inference; the dialectic would then proceed in much the same way as it in fact has in late 20th century philosophy of religion

            This is sort of my point, except that I think this debate is not primarily about whether or not, given a utility function/rules list, an omnipotent being could improve the universe, but rather about what the utility function/rules list is in the first place – and some of these are compatible with empirical observations and P1, P2, P3, and P4 all being true, while others – such as mine – are not.

            But again, I’m actually starting to think my language use has been somewhat confused here. And really this debate does not have much substance, as nothing I’m saying is an argument against the existence of a God identical to the one theists believe in. The problem of evil does not prove that there is no God – just that if there is, they are not benevolent by my standards.

        • Anonymous says:

          Indeed. They are accused of being arrogant by people who like to label others as arrogant, partly because they hold beliefs that you, as an average high-status member of society, are not supposed to care or argue about.

          Various political/social activists state opinions that are based on much shakier ground with way more confidence and much less doubt, and while they are often accused of many things, arrogance is usually not the first choice of words, despite the fact that they are way overconfident. They themselves claim that they are compassionate (or similar); that’s why they feel they shouldn’t be in doubt about their facts (or, as quite often happens, “facts”).

          It seems to me that many people want to have an opinion about certain things that are not in the center of their attention. However, they do not feel that they should expend the effort to argue about them, as it is not central to them. Thus, when they encounter somebody who makes claims (about these topics) they disagree with, they use “arrogance” as a way to give themselves a reason to bail out of the argument without having to argue against those claims.

          Politics is a popular thing to care about – many people think about it quite often – so it does not sit on that periphery of attention.

        • Anonymous says:

          One ought not to dismiss the possibility that it *is* arrogant to confidently hold unusual beliefs. It’s kind of like thinking you can beat the market.

          • Illuminati Initiate says:

            The market is allegedly (not going to argue about that here) the best way to satisfy preferences for goods and services, because of game-theory-type stuff I’m not going to pretend to fully understand. But preferences for goods and services are fundamentally different from the correctness of philosophies. There is no analogous system to select for it.

            Now, if you wanted to use market-like forces to determine the comforting-ness of philosophies, that could maybe work for the same reason it works (partially) for material goods and services. It would definitely be a good way to determine the memetic fitness of philosophies.

            This does not mean that popularity does not provide very slight evidence in favor of a position. It does, just because more thought has gone into it, but it is typically vastly outweighed by any other piece of evidence or argument. Especially when you have empirical reasons to think people are likely to be wrong about the sort of thing you’re talking about. Especially especially if you think that the position would have a memetic advantage independent of its correctness.

          • drunkenrabbit says:

            I think that’s overextending the metaphor. Simply put, an amateur who thinks they’ve licked a serious philosophical problem is like a day trader who thinks they can beat the market in the long run (not really doable). Or a new math student who thinks they’ve figured out how to square the circle. Or Ayn Rand trying to do philosophy. If you think you fundamentally “get” something obvious that nobody before you did, odds are you’re just wrong.

      • 27chaos says:

        +1 on tone

    • gedymin says:

      This is a spot-on comment.

      Especially taking into account your other example, Descartes. Despite being the first to formulate the empirical scientific method with such clarity, he apparently ignored it in much of his work on physics. The best scientists of his time (Galileo, Huygens) were much better at the practice of empirical science. As a result, Descartes is primarily remembered for his contributions to math and philosophy. His views on the “mechanistic universe” (everything in nature is mechanical, including forces such as gravity, which is caused by the entanglement of invisible particles) sound ridiculous today.

      My point is that declaring the principles of rationality is not the same as following them. (To be clear, I mean the movement as such, not EY personally.)

      • Scott Alexander says:

        Sorry, how does the mechanistic universe sound ridiculous? It seems to me an improvement on what came before (though quantum physics throws a wrench in everything); most things are pretty mechanistic, at least in a classical sense, and although I still don’t know much about gravitons, I thought their existence was still considered plausible?

        • peterdjones says:

          Cartesian mechanism is about explaining force in terms of direct contact between solid bodies. That turned out to be exactly the wrong way round, but someone had to try it.

        • Kevin says:

          The existence of gravitons is plausible, but we’ll essentially never be able to detect them directly because they interact so weakly. Gravitational waves can be considered a coherent state of many gravitons, so if/when we detect those, that will be the best we can do. (There are issues with writing down a quantum field theory version of general relativity, but this is a pretty technical, though important, problem.)

    • Buck says:

      Seconding this.

    • Cyan says:

      I don’t understand how someone could read:

      We’re the people who obsessively calibrate with PredictionBook et cetera to remind ourselves just how high our error rate is. We’re the people who keep a community-wide mistakes repository (with Gwern once again going above and beyond). THERE ARE FORTY ONE DIFFERENT POSTS ON LESS WRONG TAGGED ‘OVERCONFIDENCE’!

      and then write what you wrote.

      • peterdjones says:

        Otoh .. lesswrong has extant claims to have solved free will and modal realism and the interpretation of QM…..

        • Samuel Skinner says:

          Free will is trivially easy to solve; most of it is a definition issue due to conflating multiple conflicting things under the same term.

          • peterdjones says:

            Is that a fact?

            I could note that mainstream philosophers have managed to tease out several aspects of free will, such as Alternative Possibilities and Ultimate Origination.

          • grendelkhan says:

            For a long time some of the greatest minds humanity could produce were devoted to doing scholastic philosophy. This does not mean that the questions they pondered were worth pondering, or as complicated as they made them. Sometimes questions, even very old questions that very clever people have wrestled with, really can be dissolved.

            I’m unimpressed with a lot of what gets hailed as deep and important mainstream philosophy; much of it boils down to a simple banging on intuitions or conflation of words. Like the Chinese Room thought experiment or the concept of p-zombies. (Either consciousness is not an epiphenomenon, or we’re living in a horror story; there is no outcome pleasantly congruent with our intuitions.)

            “Alternative possibilities” seems to restate the question in terms of the word ‘could’; “ultimate responsibility” (I think that’s what you mean?) seems to rest on underlying indeterminism in physics. It seems like a category error to see moral responsibility as somehow tied to the low-level substrate on which the universe is constructed, as though it wouldn’t exist in human minds built out of Lego blocks and gears, but would if they were built on probabilistic quantum physics. Neither view makes the crucial leap of seeing the sense of free will as existing in the mind rather than exposing underlying truths about the universe.

          • peterdjones says:

            Claims about how things seem to such-and-such a person carry varying levels of weight depending on how much research they have done.

            In the spirit of “one man’s modus tollens is another man’s modus ponens”, the argument that “I can’t see why philosophers are arguing about this subject, therefore philosophers argue about pointless things” has a flipside in the argument that “philosophers argue about this, so there is a point to it.”

            Even if APs are not relevant to moral responsibility, they are relevant to other issues, such as whether our decisions can shape the future in some significant way.

      • OP says:

        Please respond to the suggestion of steel-manning your opponent’s argument, rather than picking on an example to dismiss my suggestion. Translation: steel-man my suggestion to steel-man.

        First, if my example is off base, then consider whether my suggestion still stands without it. Second, I think my example does stand as it is.

        Considering how we all understand “false humility” here, couldn’t we be more critical of defenses like “it’s in our document of declared values”, “we use the word frequently”, and “we have a comment thread about mistakes we made”? It seems these defenses could just as easily be applied to religious orders, etc. Much stronger evidence would be examples of rationalists updating on strongly held beliefs.

        …Which is not to say such examples or even patterns don’t exist. The point is that I’m trying to help Scott write a more convincing article, not nitpick a fight. So let’s keep the discussion civil.


    • rrb says:

      Yeah. Comments on rationalist blog posts are a great place to find deductions from economic or evolutionary theory that draw conclusions completely contrary to real world evidence. Making unreliable deductions from popular models without looking at real-world evidence actually is a common way rationalists fail, despite the fact that the underlying philosophy of LW!rationalism contains some of the best criticisms of this kind of failure.

      • Scott Alexander says:

        Give three examples.

        • rrb says:

          It’s not really common. I made this comment right after arguing with a troll on OvercomingBias, which is why I think I made it.

          Maybe I’ll get you those three examples when I have time to go through some posts later. Three’s not that much.

          EDIT: Can’t even find one non-troll example before losing patience.

          • macrojams says:

            I appreciate your follow up. This exchange is a perfect example of how these things should be handled: a claim, a challenge, a search for evidence, and an honest report on that evidence even when it failed to back up the original claim. +1

          • Paul Torek says:

            Your sneaky plan to demonstrate, rather than argue, rationalist humility, is working like a charm. I.e., what macrojams said.

        • peterdjones says:

          For my money, that pattern is more reactionary than rationalist… there are some gruesome examples on OB at the moment.

          • Eli says:

            I know I harp on this, but why should we rationalists allow ourselves to intersect, as a community, with reactionaries, when we wouldn’t do the same for, say, Communists?

            (Said the Communist, I know. Still: reactionaries aren’t, by communal self-definition, interested in things like cognitive biases, statistical rigor, or Bayesian epistemology. In fact, judging by one of the Effective Altruist posts on LessWrong today, they don’t really intersect with the preference-utilitarian-technocratic politics of the rationalist clique either.)

          • Anonymous says:

            Neoreaction is more metacontrarian and more fun.

          • ozymandias says:

            We’re next to neoreactionaries in socialspace and were from very early on in the history of both groups (Robin Hanson used to have Heartiste on his blogroll, for instance). I’m not sure if there’s any explanation deeper than that.

          • Zubon says:

            I don’t know about avowed Communists, but I see more commune-forming and anti-capitalist advocacy in the online rationalist community than I do elsewhere.

            That probably should not surprise me given (1) a seed population from the San Francisco bay area, (2) a population left-tilted enough that moderate-left Scott gets daily accusations of being a far rightist, and (3) most of the anti-capitalism seems to be on Tumblr. Of course, it is hard to be certain whether my mind is being more or less typical here.

          • suntzuanime says:

            We wouldn’t do the same for Communists? Surely we’d do the same for Communists.

          • Anonymous says:

            Heck, communists, unlike neoreactionaries, even get their own category in the LessWrong survey.

          • Nick says:

            I’d actually be really interested in seeing a new movement akin to the Neoreaction* come from the far left. Preferably something sympathetic to anarchism and syndicalism that takes (non-“tumblr strawman”!) social justice and feminism etc. seriously, but I’m not too picky. I’ve been making do with some of the more original thought from NR and the “post-rationalist” people, but there’s not nearly enough.

            *By “akin to the Neoreaction” I mean all the good qualities Scott praised ages ago (I can’t find the quote) like originality of thought and engagement with rationalist ideas.

          • drunkenrabbit says:

            @Nick
            It might be hard for it to gain adherents – the silly parts of the internet far-left (witch-hunting and drama) seem to be the main attraction. Just as metacontrarianism, holiness-signalling, and obscurantist prose sometimes seem to be the real core of NRx.

          • Nick says:

            drunkenrabbit,

            Yeah, you’re probably right. I wonder, though, whether that’s mostly a problem of the current political climate. The left seems to be in a state of recognized ascendancy (even after the Republican wins during the midterm), so I have some hope that, if it takes a beating for a few elections and does some soul-searching, a movement like what I want might arise. But that’s really shaky speculation, since I don’t know that we can say e.g. the Tea Party or the Neoreaction is really a response to that ascendancy.

          • Jaskologist says:

            What if SJ is the left-wing version of NRX?

          • Nick says:

            Jask, I’d accept the comparison if you can point me to an SJ equivalent of nydwracu or anyone at MoreRight or even Moldbug. But I don’t know of any! And besides that, I think SJ isn’t quite on the mark. I mean, I don’t know of anyone who would treat SJ as a political/economic/social/whatever philosophy the way someone would communism or anarchism or capitalism, and if such people do exist I don’t know why they see SJ that way.

          • Ilya Shpitser says:

            Probably smart Marxists are the left’s equivalent of the Nrx.

          • drunkenrabbit says:

            Ilya, you have any reading recommendations as far as smart Marxists go?

          • Illuminati Initiate says:

            I’ve been thinking about what an NRX-like left-wing ideology would look like for a while actually (I was essentially trying to out-metacontrarian them for fun and ended up possibly convincing myself of stuff), though what I was thinking of was more technocratic than anarchistic. It was inspired in part by the Moloch post. I might try to post an explanation in the next open thread, but I’m still thinking about it. It’s also quite possible I’ll abandon the whole thing by then.

          • Nick says:

            Seconding drunkenrabbit’s interest in smart Marxists to read, and Illuminati Initiate, I think a fair number of people would be interested in your proposal!–if only because we all like to out-metacontrarian each other.

          • Scott Alexander says:

            Have toyed with the idea of creating left-wing neoreaction for a while. I think it would basically just be neoreaction, but with a much stronger focus on how democratic societies/civilizational incompetence mostly just harm the disadvantaged. For example: the breakdown of the family has mostly bypassed rich whites but strongly affects poor blacks; banning things like IQ tests and drug tests causes employers to just hire rich whites, because then they can use class and race as proxies for things they can’t measure directly; poor enforcement of law and order means riots in minority communities, which means rich people and businesses flee and leave them destitute (see: Detroit, Ferguson ten years from now).

            Charlie Stross has done some of this work, especially recasting the Cathedral as equally hostile to any checks on their power from labor/the anti-corporate movement as they are to any checks on their power from aristocracy or tradition.

            A lot of it would probably be reinventing Ross Douthat, though.

          • Jaskologist says:

            I actually think NRX is the right-wing version of Marxism. Sometimes it seems very Marxist indeed, given the very strong materialistic historical determinism.

          • Multiheaded says:

            @Scott:

            There is, I think, nothing inherently left-wing about paternalistically doing things for the “disadvantaged”. I think that the people suggesting that Left-NRx would be SJ-ish/Machiavellian/Alinskyite are closer to the mark. Like, it’s more strategic to focus on power, control, incentives, and struggles than on temporary and considerably subjective good policy measures, etc.

            P.S.: Screw it, I’m on hiatus again. So many arguments here are kinda tiring.

    • Tyle says:

      I agree with Anonymous: Stephanie’s article is about experiences she has had in which self-proclaimed ‘rationalists’ behave badly – exhibit overconfidence, make claims which are rhetorically clever but insufficiently grounded in the data, etc – and she explains that these experiences have driven her away from the rationalist movement.

      So I enjoyed Scott’s article defending the usefulness of logic and critical thinking, but I thought that it was pretty clearly a non-sequitur to Zvan’s article.

  2. Frog Doe says:

    Wrong discussion – you’re focusing on “rationality”, but the key word is “movement”. My understanding after reading the post is that they don’t want to be part of a movement that skews toward “the young, to men, to white people, to libertarians”, for whatever reason.

    Of course, “Thousands of scientists have worked their entire lives to get you the evidence in favor of evolution. But thousands of creationists have worked their entire lives to obfuscate and confuse that evidence.”, so it’s not like you’re making particularly non-tribal arguments either.

    • Scott Alexander says:

      I feel like “creationism is wrong” is sufficiently non-controversial as to not be much of a tribal signal around these parts. I need something to use as an example!

      • Frog Doe says:

        There is a world of difference between “creationism is wrong” and “the life goal of creationists is to obscure the truth, those devils”. Factual statements versus accusations of willing diabolism.

        • Scott Alexander says:

          I’m not saying the explicit endorsed goal of creationists is to confuse the evidence. I’m just saying that the effect of there being large creation science institutes is to make the waters muddier.

          • Deiseach says:

            Forgive me, I don’t know the language of card games, but I believe it goes something like “I’ll see your Creation Science Institute and raise you a Jesuit winning the Carl Sagan Medal this year”? 🙂

            This year’s Carl Sagan Medal, presented by the American Astronomical Society (AAS), has been awarded to Brother Guy Consolmagno, SJ, of the Vatican Observatory.

          • That’s exactly how it came across, here.

          • Scott Alexander says:

            I did not say anything bad about the Jesuits or anything contradictory to them winning medals?

          • Deiseach says:

            I did not say anything bad about the Jesuits

            Oh, please feel free to do so! I’m Dominican in my sympathies so I can stand to see The Society get a bit of a kicking from time to time 🙂

            No, I actually do feel some sympathy for you, Scott. It just tickles me that rationalism was being compared to theology in the Great Rationalism Versus Empiricism Knock-Down Drag-Out Hair-Pulling Allowed Blog Post of Doom, and it obviously gored your ox.

            And amongst your responses was an appeal to the Scriptures Founding Document of the movement, and the Law and the Prophets founder, Yudkowsky. And talking about Virtues (with a capital “V”). As in, the only time I ever hear anyone speaking of capital-V Virtues is the three theological and four cardinal virtues.

            Never mind that this is probably the exact thing that would have an empiricist rolling her eyes at: ‘oh great, he’s quoting abstractions again. Yeah, measure me out a pound of humility there and we’ll put it through the HPLC for analysis, right?’

            And the distinction between “Rationalism (Yudkowsky) and rationalism (Descartes)”, which, when I make the same distinction vis-à-vis Catholicism (and more broadly, historic global Christianity) and a certain strain of American Protestantism, I get told that that is a difference which makes no difference.

            So I hope you can see why I’m sitting here with a broad grin on my face.

            But also re: the Jesuit awarded for science – really, ‘creation science institutes’ are so weak an opponent, they’re not even made of straw, they’re squishy mud. It only signalled “let’s all pile on and go ‘ha ha’ at the dummies. because this is an easy target everyone will laugh at!”, rather than advancing anything of your argument.

        • RCF says:

          I think that creationism is, in fact, predominantly a form of dishonesty.

          • Deiseach says:

            I think that creationism is, in fact, predominantly a form of dishonesty.

            Well, I blame Martin Luther 🙂

            Luther’s theological preoccupations meant he required a very high view of Scripture to support the solution he came up with, which requires a high degree of trust in the bare word of salvation (Luther’s neurosis about his salvation being eased by taking the word that all who believe shall be saved as meaning that if he only truly believed, he could be saved and did not need to do anything else on his own efforts – I object to his notion of “forensic justification”, but that’s a theological quibble). And his (and the other Reformers’) insistence on “the plain word of Scripture”, as against the claims of Tradition and the necessity of interpretation, meant that inerrancy had to be insisted upon – which led to the corollary, for some at least, that you had to insist every single word of Scripture is inspired.

            Now, there are nuances within the inerrantist position (some accept the use of figurative language), and this is a very old debate, going back to the first centuries of Christianity. But to give the creationists as fair a hearing as I can: if by “dishonesty” you mean deliberately asserting things they know to be false, I have to disagree. To be consistent with their position on Biblical inerrancy, they must take the literalist meaning every time. Now, my own personal opinion is that even this does not require anyone to stick to Archbishop Ussher’s dating scheme of around 6,000 years or so from the date of the creation of the Earth, but some obviously feel differently.

            On the other hand, I do think there are some out there who are quacks, chancers and obsessives. So there may well be a few who are deliberately lying, but that’s to fleece the rubes who genuinely and ignorantly take their word for it.

            On the third hand, I’m not at the mercy of the American educational system, so it’s easy for me to be unconcerned about this topic.

          • RCF says:

            It’s not clear to me what your disagreement consists of. You say that you disagree, but the only basis for that disagreement that you present is the tautology that to be a Biblical inerrantist, one must consider the Bible to be inerrant.

            Part of the question of whether creationism is predominantly a form of dishonesty is whether the word “creationism” refers to the advocacy of creationism, or of the acceptance of creationism. Perhaps the majority of people who believe in creationism honestly believe in it (to the extent that the amount of motivated reasoning required to accept it can be called “honest”), but to consider that a refutation of the assertion that creationism is predominantly a form of dishonesty is a bit like saying that because most patrons of psychics honestly believe in psychic powers, psychics are not frauds.

          • Tom Hunt says:

            It’s more than possible to honestly advocate a mistaken position. Even on the narrow interpretation that those who actively preach and advocate creationism (in the modern, young-earth sense) are the ones you’re accusing of dishonesty, that seems an extraordinary claim requiring much greater evidence, when the known ability of humans to… well… be wrong explains all the observables equally well. (Not that there aren’t probably dishonest creationist preachers, but I factor outliers out of all sides because there are no sides free of them.)

      • Deiseach says:

        And now I’m going to use on you the same answer that was used on me back in the “whales are not fish” comments.

        But Scott, some persons calling themselves rationalists believe the kinds of things Almost Diamonds says they believe. Sure, she may not be arguing against your kind of rationalist, but how useful is that when there are lots of people calling themselves rationalists out there that are not your kind of rationalist? The arguments she uses are exactly the kind of arguments necessary and most suited to make those rationalists realise they are in error and to lose their faith in their false beliefs!

        And welcome to what it feels like to be a Catholic 🙂 when people are going on about “Those crazy creationists”, for example. “Not all Christians are Biblical literalists”, one pipes up, and gets told more or less “Pshaw, what care we? We’re talking about those crazy creationists, not your brand!” and then they go on to talk as if crazy creationists were all there were.

        I mean, I can quote St. Augustine on “The Literal Meaning of Genesis” and quote the Catechism on the Four Senses of Scripture till I’m blue in the face, and for some people it will still be “Christianity always and everywhere = what some Americans believe today”.

        • Nick says:

          I went back and read the whale argument and I feel for you, Deiseach. The broader takeaway from this is probably that the tinman (or weak man, or whatever you want to call it) is a real thing and a serious problem for good, productive debate.

          • Deiseach says:

            Appropriating local gods as saints.

            Boy, did you pick the wrong argument to use on me! I have a particular grudge on this topic, and I’m going to tell you why: because, as an Irishwoman, I am sick to the back teeth of the whole “Celtic” craze, to the point where I’ve seen people online doing fluffy fanart of the Morrigan, and one girl whom I personally think is out of her tree talked about invoking the Morrigan!!!! Their notion of who or what the Morrigan is, is more derived from fantasy novels than the source myths, where she is not a nurturing, empowering figure. If you really did invoke her, you’d find yourself with a lot of trouble in your life in the “blood and violence and fighting for my life” sense.

            Attend to a little tale of local gods and saints from my own small green island. My late grandfather had devotion to St Bridget (of Kildare, please do not confuse her with the later St Bridget of Sweden). There is a holy well associated with her in his native parish, and we still (though it’s probably dead on its feet by now) have the tradition in Ireland of the brat Bhríde – where, on the eve of St Bridget’s Day (1st February*) you hang out on your front door a cloth (black in colour if possible) and it is blessed by Bridget as she does her rounds, and then it is a cure against headaches (my grandfather, according to my mother, used to suffer with headaches and he would tie the brat Bhríde around his head for relief).

            (*Yes, I am perfectly well aware 1st February is Imbolc. Don’t get me started on the Wheel of the Year and the Sabbats, where a mish-mash of Irish and Welsh mythology has been leavened with a few Scandinavian traditions such as Yule to make a pseudo-liturgical calendar which apes, not to say rivals, the liturgical year of the Church, and which is about as authentic by comparison to real folk practice and pre-Christian traditions as processed cheese is by comparison to genuine Cheddar or Brie).

            So far, so pretty pagan-sounding, right?

            Now, if you hit up the Internet to find out about Bridget, you will find a truckload of burble: local tourism trying to sell Celticky-type bobbins to attract the Yanks, and various New Age, neo-Pagan, Wiccan and other types who like to go on about Bridget the Goddess and how the bad old church appropriated her and made her into a ‘saint’ to win over the locals (never mind that the history of how Ireland became Christian is very different to the usual ‘imperially imposed from above’ narrative of continental Europe).

            You may even read stuff about the triple Bridget and a whole heap of “she’s the patron goddess of poetry, medicine, herds, this that and the other” and the three Bridgets, daughters of the Dagda and so forth.

            And most of this is woven out of whole cloth, in a conflation of: the Protestant propaganda about Catholicism being idol worship (of the kind you can still find going today about how Mary is really Ishtar, Isis, Semiramis and the divil knows what other pagan goddesses); the 19th century Decadent movement, where everyone and his dog was setting up Elevated Orient Temples of the Old Religion and finding inspiration in all kinds of pantheons; the Robert Graves “White Goddess” mythos; and Gerald Gardner, the Iolo Morganwg of 20th century Wicca – not to mention the pick-and-mix ethos of modern Wicca/neo-Paganism/’New Age’ traditions, which cheerfully admit to cherry-picking the bits they like from every kind of tradition and making up their own rituals and doctrines.

            All of which is a long-winded way of saying that the “Bridget is a pagan goddess appropriated as a saint” is the equivalent of the “Columbus set sail to prove the world was round” notion; that is, it has no basis in fact and is a mix of urban myth and semi-digested propaganda (if you’re an old-style Protestant, any stick will do to beat the pagan Catholics; if you’re a modern pagan, any stick will do to beat the smug Christians).

            Now, if you read source documents like Gerald of Wales’ Topographia Hibernica, you will certainly find accounts of practices which do look teasingly like remnants of pagan traditions (intriguingly, in this case, like the Vestals of Rome). But for things like “Three Daughters of the Dagda” and all the rest of it – there are very few source texts, and none of them have this kind of handy listing. A lot of this is scholarly speculation along the lines of “well, the name of Bridget contains elements meaning such-and-such, and there’s another goddess-figure with similar elements, so the two may be related”. Saying “Of course the 6th century church in Ireland took over the goddess” is less rigorous scholarship that can be backed up, and more assumption along the lines of “the patriarchy takes women’s power for its own, so this is what must have happened”.

            Nobody seems to consider that maybe there was a woman named Bridget who was a Christian and who was made a saint the old-fashioned way because that’s too simple.

            It would be like saying “Since Scott Alexander has the same name as Alexander of Macedon, obviously this ‘Scott’ figure was an appropriation of the heroic leader as a figurehead for the purposes of this Slate Star Codex cult, and the deliberate creation of a whole cultus around a mythical healer-philosopher-scientist-polymath scribe-champion in a hieros gamos polyamorous union (one member of which, as gender-transcending sacred consort, is obviously the representation of sovereignty and divine wisdom bestowing guidance and the favour of Elua on the Chosen One), which would be persuasive to the ordinary people who were accustomed to thinking of Alexander as a hero and victorious commander” 🙂

          • Samuel Skinner says:

            “I agree about both of these. But they remain vague (what does “believing in God” amount to? what kinds of behavior does God want to promote?), and don’t necessitate the kinds of policy prescriptions you’re making.”

            The 10 commandments aren’t that vague: no adultery, no other gods, etc. It’s pretty clear.

            I’m also not seeing what the confusion is. I’m saying God would design the bible to get his goal across as effectively as possible- the specifics aren’t important to whether or not the tools are effective.

            “The example still works even if I don’t get tired out or am not limited in time. The one option satisfies or partly satisfies my desire. Similarly, God’s using other means to attain a goal satisfies or partly satisfies his desire to attain that goal. Hence, he is less likely to do various other things to attain that goal.”

            Nope. God never has to settle for “just good enough”. Seriously, light is both a particle AND a wave; God can totally do two mutually incompatible approaches and combine the features he wants.

            “But, most fundamentalists would claim that God did do those things and was justified in so doing. So as I said, you’re not representing the fundamentalist line here.”

            If God tells you he is all good and that you should tell everyone that, you don’t argue with him. If he wants you to believe it, you try. It doesn’t mean your position requires God to be all good or for God to actually be all good.

            “I think that the weight of the evidence supports the historicity of some parts of the OT and not others.”

            Do you accept the 10 commandments? Because they are from Exodus, which the evidence suggests is not historical. Do you accept the Garden of Eden? Because without it, Christ’s sacrifice doesn’t make sense. In fact, if you take out the stories that aren’t supported, it becomes questionable why you’d think Judaism was inspired by God at all.

            “I also think that the OT should be read through a lens of Christ’s teachings. This is compatible with God revealing himself to the Hebrews in various ways that are accurately recorded in the OT, as well as the authors of the OT misinterpreting other revelations or mistaking some things for divine revelation that were not.”

            And why doesn’t the same exact rule apply to the New Testament? You are stating the Muslim position to a tee.

            “The evidence for the Christian miracles is much stronger than for other religions. Several of the works here — http://historicalapologetics.org/collection/annotated-bibliography/ — discuss this question. Campbell’s essay contains a fairly decisive discussion of Hume’s famous examples, and Paley’s book discusses Islam.”

            Well the argument is
            “1.That there is satisfactory evidence that many professing to be original witnesses of the Christian miracles passed their lives in labours, dangers, and sufferings, voluntarily undergone in attestation of the accounts which they delivered, and solely in consequence of their belief of those accounts; and that they also submitted, from the same motives, to new rules of conduct, and
            2.That there is not satisfactory evidence that persons professing to be original witnesses of other miracles, in their nature as certain as these are, have ever acted in the same manner, in attestation of the accounts which they delivered, and properly in consequence of their belief of those accounts.”

            I’m not seeing why that is evidence. Mormonism exists and had a similar beginning. For this to work, you need to believe people won’t die for something false – which would imply suicide cults are impossible and the communist movement could never have produced people like those in “Darkness at Noon”.

            I was also referring to how fundamentalists accept those miracles but reject all the other ones Catholicism claims, where saints use God’s power (which is a point in favor of fundamentalism, since it involves fewer regular suspensions of the laws of physics).

            “And I maintain that they are.”

            Really? So you’d notice if the universe didn’t exist? By definition it is impossible to notice.

            “If you admit that they are evidence for both, then you are granting my point that some observations are more probable on Christian theism than they are otherwise.”

            What? If they are evidence for religion that doesn’t increase the odds of Christianity relative to other religions, only relative to atheism.

            “This is not the place to have an in-depth discussion of the anthropic principle. I think that Leslie’s firing squad analogy shows that this objection cannot be right; exactly why it goes wrong gets into technical issues in probability theory and the problem of old evidence which would take us too far afield.”

            The firing squad isn’t a rebuttal. In that case you exist before the firing squad fires and after.

            In the case of the universe you only exist after the universe begins. There is no “before” because time is a property of the universe!

            “I never claimed that my epistemic bedrock is tradition. I’m an evidentialist and a strong foundationalist.”

            I’m arguing with Catholicism/Orthodox/Coptic versus fundamentalism because Catholicism claims tradition is its bedrock.

            It is important to have a bedrock because the Bible didn’t come to us directly. The books of the New Testament were selected from many different competitors and versions. You either accept that God was behind the process (which is the fundamentalist position) or you accept that the Church and its experts had the power to choose correctly (which is tradition). Relying solely on evidence doesn’t work because a significant amount of evidence has already been discarded; you have to justify that.

          • Troy says:

            Samuel: You seem to have posted in the wrong place.

            I have professional responsibilities to attend to, and so do not have the time to continue most of this conversation further. I don’t think we’re getting anywhere at any rate, since you seem determined to misinterpret me. Case in point:

            “If you admit that they are evidence for both, then you are granting my point that some observations are more probable on Christian theism than they are otherwise.”

            What? If they are evidence for religion that doesn’t increase the odds of Christianity relative to other religions, only relative to atheism.

            I never said that religious experiences increase the odds of Christianity relative to other religions. E is evidence for H iff P(H|E&K) > P(H|K), which holds just in case P(E|H&K) > P(E|~H&K). My claim was just that religious experiences raise the overall probability of (non-fundamentalist) Christianity. They raise the probability of other religions too. This is contrary to your earlier claim that (non-fundamentalist) Christianity “makes no predictions,” which I (charitably) took to mean that it and its negation gave all evidence the same probability, thus making it impossible to confirm or disconfirm.
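            Troy’s criterion is the standard Bayesian notion of confirmation, and it can be sanity-checked numerically. A minimal sketch, in which the prior and likelihood numbers are purely illustrative assumptions (nothing from the thread), showing that the posterior exceeds the prior exactly when P(E|H) > P(E|~H):

```python
# Toy check of the confirmation criterion:
# E is evidence for H (P(H|E) > P(H)) exactly when P(E|H) > P(E|~H).
p_h = 0.3      # prior P(H) -- illustrative number only
p_e_h = 0.8    # likelihood P(E|H) -- illustrative
p_e_nh = 0.2   # likelihood P(E|~H) -- illustrative

# Law of total probability, then Bayes' theorem.
p_e = p_e_h * p_h + p_e_nh * (1 - p_h)
p_h_e = p_e_h * p_h / p_e

# The equivalence: posterior rises iff H makes E more likely than ~H does.
assert (p_h_e > p_h) == (p_e_h > p_e_nh)
print(p_h_e > p_h)  # prints True: E confirms H without singling H out
```

            Note that the same evidence can confirm several mutually exclusive hypotheses at once relative to their shared negation, which is exactly Troy’s point about religious experiences raising the probability of more than one religion.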

            A final point, because I am weak willed and cannot help myself:

            “I never claimed that my epistemic bedrock is tradition. I’m an evidentialist and a strong foundationalist.”

            I’m arguing with Catholicism/Orthodox/Coptic versus fundamentalism because Catholicism claims tradition is its bedrock.

            I am a Catholic. There is nothing incompatible between Catholicism and evidentialism. I believe that traditional Christian theism, which Catholics share in common with many other Christians, is the view best supported by the overall evidence. Tradition is evidence, but it’s not indefeasible and trust in tradition is justified by more epistemically basic considerations. I do not believe in “discarding” any evidence. Whatever you’re attacking, it’s not my view and it’s not the only possible Catholic view.

            If you’re interested in reading an evidentialist defense of traditional Christian theism, I would recommend Richard Swinburne’s corpus. Swinburne is, for what it’s worth, Orthodox.

        • RCF says:

          Making accusations of “tinman” when those “tinmen” are people who actually take your founding scriptures seriously is rather silly. I don’t care how common cafeteria Christianity is; Biblical literalism is still a central form of Christianity. “You call yourself a rationalist, but some people call themselves rationalists and they hold positions you don’t like” is nothing like “you say you follow the Bible, but you don’t really, and there are some people who take the Bible much more seriously”; comparing the two is not at all legitimate.

          • This comment is astounding. You’re doing a thing that I notice internet atheists do a lot, namely they assume that real Christianity is fundamentalist Protestant Christianity, and if you aren’t a fundamentalist Protestant you’re just faking it. Your entire argument rests on the premise that Christianity must be based solely on the Scriptures, untainted by Tradition, and that the Scriptures must be interpreted “literally”. In other words, you’re a fundamentalist Protestant, and you’re treating every other kind of Christian as a defective Protestant.

            Listen: literalism of the sort you describe is not the historical norm in Christianity, and the fundamentalist reading of Scripture is not actually what most Christians in most places and times believed. St. Augustine is the next most important figure in Western Christianity after St. Paul, and he clearly disregards the literal reading of Genesis in his writings. Deiseach’s other example comes from the catechism of the Catholic Church, an official document issued by the office of the Pope himself, and calling this “cafeteria Christianity” is just outrageous. The fundamentalists are the ones practicing a weird mutant offshoot of Christianity, but just because the mutant is the most familiar to you doesn’t mean that it’s actually the original.

          • Samuel Skinner says:

            It is alien to Catholicism, but Protestantism is explicitly a rejection of the idea that tradition is a source; it relies solely on scripture.

            Of course, pointing to Catholics disregards the people who originally used the text (the Jews), and they interpreted it literally – Christians are the weird mutant offshoot to them.

          • Jaskologist says:

            That’s not true of Protestantism, either. Luther and Calvin both quoted extensively from the church fathers to back up their ideas. Sola Scriptura is not solo scriptura. The complete rejection/ignorance of tradition is relatively new, and much less widespread than you might think.

            As for the Jews, read some Philo and then come back and tell me that they were all literalist all the time.

          • RCF says:

            “This comment is astounding. You’re doing a thing that I notice internet atheists do a lot, namely they assume that real Christianity is fundamentalist Protestant Christianity”

            I’m not assuming anything. I am CONCLUDING that Biblical literalism is central Christianity. Referring to (distortions of) my CONCLUSIONS as “assumptions” is really fucking rude, so please stop doing it.

            “Your entire argument rests on the premise that Christianity must be based solely on the Scriptures, untainted by Tradition”

            No, it doesn’t. Adding things to scripture is very different from removing things from scripture.

            “and that the Scriptures must be interpreted “literally”.”

            It’s not that the scriptures must be interpreted literally, but that pure Christianity does not simply ignore the scripture because it’s inconvenient.

            “In other words, you’re a fundamentalist Protestant”

            ???

            How could you possibly conclude that from what I wrote?

            “and you’re treating every other kind of Christian as a defective Protestant.”

            “defective” has derogatory connotations. If you don’t follow the Bible, then the Bible is not your scripture. That’s not a defect, but it’s not Christian either.

            “Listen: literalism of the sort you describe is not the historical norm in Christianity”

            If the vast majority of communists claimed to follow the Communist Manifesto, but opposed any government ownership of means of production, that wouldn’t change the fact that government ownership of means of production is a central aspect of communism. The fact that the majority ignores their supposed scripture does not disprove the claim that following the scripture is central.

            “Deiseach’s other example comes from the catechism of the Catholic Church, an official document issued by the office of the Pope himself, and calling this “cafeteria Christianity” is just outrageous.”

            Disagreeing with you is “outrageous”? Wow, you sure are full of yourself. Newsflash: simply declaring opposing points of view “outrageous” is not a counterargument. Catholicism most definitely is cafeteria Christianity. They pick and choose what parts of scripture to follow.

            “The fundamentalists are the ones practicing a weird mutant offshoot of Christianity”

            So, your claim is that the original writers of the Bible did not take it literally?

            “but just because the mutant is the most familiar to you doesn’t mean that it’s actually the original.”

            I never presented the “mutant” being most familiar as being the basis of my argument, and I am not arguing about what is the “original”, so that’s a double strawman.

            In fact, going through your post, pretty much everything is a rewording, if not outright misrepresentation, of my position.

            My position is very, very simple. It’s right there in the first sentence of my previous post:

            “Making accusations of “tinman” when those “tinmen” are people who actually take your founding scriptures seriously is rather silly.”

            Now how about you address my actual position, rather than equivocating yourself over to something easier to attack?

            Look, either you base your religion on the Bible, or you don’t. There’s no such thing as “sort of” basing it on the Bible. The moment you say “I’ll follow the Bible, unless it violates my common sense”, then the ultimate basis of your religion is your “common sense”, and bringing the Bible into it is simply a disingenuous attempt to imbue your religion with a fictitious sacredness.

            As the-book-you-sometimes-pay-attention-to says, “A man cannot serve two masters”. If you follow both Scripture and Tradition, you must have some rule for deciding which to follow when, and it’s that rule that you’re really basing your religion on.

          • suntzuanime says:

            If you don’t follow the Bible, then the Bible is not your scripture. That’s not a defect, but it’s not Christian either.

            I’m sorry, but that’s just inaccurate. To be a Christian is to be a follower of Christ, and Christ did not write the Bible. Christ did not even approve the Bible. Christ passed authority on to Peter “the rock on which I will build my church”, and Peter and those apostles went on to build a Church, which then compiled the records of Jesus’s teaching into a Bible. So yes, the Bible has divine authority behind it, but to say that the Church lacks divine authority because the Bible says so is to put the cart before the horse. If you think the Bible disagrees with the Church, you are either mistaken or neither of them are trustworthy. If you think people are not faithful Christians because they trust Christ’s Church to tell them what Christ meant, that’s entirely unfair and can reasonably be considered “outrageous”.

          • Samuel Skinner says:

            “That’s not true of Protestantism, either. Luther and Calvin both quoted extensively from the church fathers to back up their ideas. Sola Scriptura is not solo scriptura. ”

            From your wiki link
            “Lutheranism teaches that The Bible contains everything that one needs to know in order to obtain salvation and to live a Christian life.[31] There are no deficiencies in Scripture that need to be filled with by tradition, pronouncements of the Pope, new revelations, or present-day development of doctrine”

            That really does sound like what I was saying. Since they aren’t a fundamentalist branch, that is probably their traditional doctrine; correct me if I’m off.

            “As for the Jews, read some Philo and then come back and tell me that they were all literalist all the time.”

            Also from wiki
            “His allegorical exegesis was important for several Christian Church Fathers, but he has barely any reception history within Judaism.”

            That doesn’t sound like non-literal interpretation was widely followed by Jews.

            sunt
            “If you think the Bible disagrees with the Church, you are either mistaken or neither of them are trustworthy. If you think people are not faithful Christians because they trust Christ’s Church to tell them what Christ meant, that’s entirely unfair and can reasonably be considered “outrageous”.”

            That doesn’t work. There are multiple Christian Churches that claim that mantle and have tradition to back it up (Catholic, Coptic, Orthodox). Which one is Christ’s Church?

          • suntzuanime says:

            The Roman Catholic Church is in communion with the various Eastern Orthodox Churches, and they can all be seen as basically extending the Church created by Christ. The Protestants deliberately broke with and rejected that Church, and many Protestant sects reject the Church as the core of “Christian” life altogether!

            Which Catholic Church is favored by God, if He is playing favorites, is an empirical question, but if you had to guess, you’d say the Roman Catholic Church, right? I mean if I were placing my Pascal’s Wager that seems like where the smart money would be.

          • Look, either you base your religion on the Bible, or you don’t.

            I don’t base my religion on the Bible. Why would I do a silly thing like that? I base my religion on Christ and on the Church that he founded. Christianity cannot be “based on the Bible” in the sense that you describe, because Christianity predates the Bible. The historical norm is that the Scriptures are interpreted through the traditions and practices of the church; the notion that the Bible is the sole source of belief and should be read “literally” is a historical novelty. You’re welcome to identify with or reject that strand of Christianity if you’d like, but you cannot possibly pretend that this is the only possible approach.

            I’m not going to waste our time responding to most of the rest of your points. Your entire post rests on the assumption that Christianity simply is fundamentalist Biblicism. That belief is simply false – historically, logically, and theologically – as has been pointed out multiple times.

          • Irenist says:

            RCF

            Biblical literalism is still a central form of Christianity…. “you say you follow the Bible, but you don’t really, and there are some people who take the Bible much more seriously”

            Why does a Christian (or a Jew, for that matter) have to be a literalist to be seen by you to take the Bible seriously? It would seem at least as plausible that engagement with the best available scholarship on the literary genres of the Bible (which would argue for taking the Book of Job as literature rather than history, e.g.) is a form of taking the Bible seriously.

            Indeed, as Deiseach and Mai La Dreapta have pointed out, St. Augustine himself didn’t argue for taking Genesis literally in the way that modern creationists do.

            But St. Augustine was a bishop, a monk, and the author of a bookshelf full of Bible commentaries who converted to Christianity based on “taking up and reading” the Bible. The Bible dominated the man’s entire later life. Any test for “taking the Bible seriously” that fails St. Augustine appears to me to be too idiosyncratically designed to be useful.

            ETA: St. Paul and St. Peter both “removed” lots of the Old Testament law, according to the writings and actions ascribed to them in the New Testament; arguably, so did Jesus Christ in the Gospels. Is the Bible even taking *itself* seriously given your usage?

          • Deiseach says:

            you say you follow the Bible, but you don’t really, and there are some people who take the Bible much more seriously

            You know who I’ve heard say that? Fundamentalist-style Protestants (of the American variety usually, though we have our own brands here in Europe – the late Ian Paisley, for instance) who deny that Catholics are Christians on the very grounds that we don’t take the Bible seriously enough (we’ve corrupted the pure Gospel and substituted man-made works of idolatry instead).

            So I’m still hearing the “Rationalism is The One True Pure Religion!” argument here, to be honest. And you’re making my argument for me that, no matter what non-American, non-Protestant Christians or non-Fundamentalist Protestants may say, for some (whether Christian, non-Christian, or atheist) the only “real” Christianity is That Old Time Religion that grew out of the 19th century American Great Awakening. As for all the rest of us in the rest of the world (Copts? Oriental Orthodox? Syro-Malabar?): you’re not and never have been proper Christians, which is why American churches run missions to Europe and other areas of the world.

          • Samuel Skinner says:

            “The Roman Catholic Church is in communion with the various Eastern Orthodox Churches, and they can all be seen as basically extending the Church created by Christ. ”

            And if they disagree on an issue? How can you rely on them if they gave multiple different answers to the same question?

            “Which Catholic Church is favored by God, if He is playing favorites, is an empirical question, but if you had to guess, you’d say the Roman Catholic Church, right? I mean if I were placing my Pascal’s Wager that seems like where the smart money would be.”

            No, the Coptic Church. Unlike the Orthodox or Catholic Church, it was never forced from its original country, didn’t have antipopes, and isn’t an extension of the state. You only pick the Catholic Church if you believe secular power is evidence of God’s favor, which doesn’t mesh with the small scale of the Old Testament.

            Irenist
            “It would seem at least as plausible that engagement with the best available scholarship on the literary genres of the Bible (which would argue for taking the Book of Job as literature rather than history, e.g.) is a form of taking the Bible seriously.”

If you believe the Bible was divinely inspired, what do the literary genres matter?

            “Indeed, as Deiseach and Mai La Dreapta have pointed out, St. Augustine himself didn’t argue for taking Genesis literally in the way that modern creationists do. ”

            I’m sorry, you don’t get to do that. Either the literary genre matters (and St Augustine’s opinion doesn’t) because we care about the intention of the authors or theologians have a gift that lesser men lack.

            “St. Paul and St. Peter both “removed” lots of the Old Testament law, according to the writings and actions ascribed to them in the New Testament; arguably, so did Jesus Christ in the Gospels. Is the Bible even taking *itself* seriously given your usage?”

Yes. They believed the law was literally what God desired people to do, and they believed they had been provided a new covenant which overwrote the old.

            Deiseach
            ” You’re not and never have been proper Christians, which is why American churches run missions to Europe and other areas of the world.”

            And? You don’t find appropriating local Gods as saints a little bit off the beaten track for Christianity?

          • Troy says:

If you believe the Bible was divinely inspired, what do the literary genres matter?

            The literary genre matters because it is evidence for what the author of the text was trying to express with it. Why does it make a difference whether the author was divinely inspired or not?

          • Irenist says:

            @Samuel Skinner:

            Troy already ably answered your question about why literary genre matters. Whether the human authors of the Bible were divinely inspired or not is irrelevant to whether literary genre tells us about the text. If the Book of Job was (a) a literary composition and (b) inspired by God, that just means that someone trying to hear the Word of God in the Book of Job will have better luck reading it as literature than as history. If there is no God, but the human author intended it as literature, then the same idea applies–a secular modern Biblical scholar will try to situate the authorial context for the Book of Job within Near Eastern wisdom literature, not within historical chronicle or love poetry or whatever.

            I’m sorry, you don’t get to do that. Either the literary genre matters (and St Augustine’s opinion doesn’t)

            There’s no conflict. Augustine’s exegesis was characterized by keen awareness of generic and rhetorical considerations. To take one example from a book of Augustine’s largely given over to such matters:

I would have learned men to know that the authors of our Scriptures use all those forms of expression which grammarians call by the Greek name “tropes,” and use them more freely and in greater variety than people who are unacquainted with the Scriptures, and have learned these figures of speech from other writings, can imagine or believe. Nevertheless those who know these tropes recognize them in Scripture, and are very much assisted by their knowledge of them in understanding Scripture.

            De doctrina christiana, Bk. III, chap. 29.

            because we care about the intention of the authors or theologians have a gift that lesser men lack.

            Well, theologians may not have any more intelligence than others, but they have more knowledge of their field (e.g., more knowledge of the traditional methodologies of Biblical hermeneutics) than those who haven’t studied in as much depth. Now, one can quibble with the worth of that knowledge (“Who cares whether the Gospels have the generic characteristics of Hellenistic biography or Hebrew chronicle? What’s important is that there’s no God!”), but it is a form of knowledge, and even highly intelligent rationalist types will be more effective if they base their opinions on as much knowledge as is available.

            You don’t find appropriating local Gods as saints a little bit off the beaten track for Christianity?

            A definitional dispute as to which sects are “really” Christian would be a waste of time. Historically, Catholicism and Orthodoxy have been alleged to have appropriated local gods as saints. For purposes of, say, Protestant theology, defining those churches as “not true Christians” might be useful. For purposes of a discussion at SSC, it’s unproductively “no true Scotsman”-ish.

          • Jaskologist says:

How could literary genre not matter? Expecting to be able to understand something without knowing its genre makes about as much sense as expecting to be able to read a manuscript without knowing the language it is written in. Divine authorship has nothing to do with it.

          • Samuel Skinner says:

            Troy (and Irenist + Jask)
            “The literary genre matters because it is evidence for what the author of the text was trying to express with it. Why does it make a difference whether the author was divinely inspired or not?”

Because if it is divinely inspired it doesn’t matter. If God is communicating through the writers, then he can make the message true even if the genre is not nonfiction. That is what all-powerful and all-knowing means! Why should the people who had an incomplete revelation be better at understanding what category the information they received falls into? The number of people living has increased over time, so if God is trying to maximize those saved it should be tailored for the present.

If God is real, the Bible is no more a book than The Gallic Wars is an impartial account; they are both written to achieve a specific goal.

            “but it is a form of knowledge, and even highly intelligent rationalist types will be more effective if they base their opinions on as much knowledge as is available.”

Nick just below us is pointing out that your interpretation of St Augustine is misleading; you should respond to him.

            “A definitional dispute as to which sects are “really” Christian would be a waste of time.”

Why? Christians are those who recognize Jesus Christ as their lord and savior. It is really simple. The issue, however, is which institution is the correct one to build off of that.

            Since you are dealing with different conflicting interpretations of Christianity you have to have a way of differentiating which is the correct one (unless you think they are equally valid; good luck with that).

Ones that recognize non-Christian gods as saints (and skirt close to violating the Ten Commandments) are probably less central than ones that don’t.

          • Nick says:

            Samuel,

            What I said doesn’t actually conflict with what Irenist said. He said “St. Augustine himself didn’t argue for taking Genesis literally in the way that modern creationists do.” while I said “when St. Augustine said “the literal meaning” he didn’t actually mean “literal” the way we do when we talk about fundamentalists”. So, we’re both saying that Augustine doesn’t want us to interpret things the way that fundamentalists/modern creationists do. Afaict Irenist and I are in total agreement about this, indeed in all the matters in question here.

          • Troy says:

Because if it is divinely inspired it doesn’t matter. If God is communicating through the writers, then he can make the message true even if the genre is not nonfiction.

            I am not sure what you mean that God “can make the message true.” Do you mean that he can change things in the world to match the message, or he can change the meaning of the message to match the world?

            Why should the people who had an incomplete revelation be better at understanding what category they information they received falls into?

            Because they wrote the text. You seem to be operating with a verbal-plenary model of inspiration: God told the authors of the Bible exactly what words to write down. One can believe that the Scriptures are divinely inspired without believing this. One might think, for example, that God moved the authors of Scripture to write in order to communicate certain key ideas, but that he let them write in their own words, languages, idioms, etc.

            The number of people living has increased over time so if God is trying to maximize those saved it should be tailored for the present.

            This is silly a priori speculation into God’s motives and the best way to achieve them. Do you think that God should have dictated the Bible in, say, English and Chinese, using 21st century metaphors and idioms, to people who would not have understood it? (And what of our descendants who will speak different languages and live in a different culture?) Why would people copy down and venerate texts that made no sense in their cultural context? How are previous generations and minority groups supposed to understand a message tailored to “the majority” of people in history?

            It’s obvious that the text of the Bible was shaped by the cultural contexts in which it was written. To understand that text we have to learn about that context, as well as learn the languages in which the Bible was originally written. No one but some fundamentalist Protestants who think that God is speaking to them personally through the King James Bible disagrees with this.

If God is real, the Bible is no more a book than The Gallic Wars is an impartial account; they are both written to achieve a specific goal.

            Even granting the premise, I don’t see what follows. Mere Christianity and The Chronicles of Narnia may have been written with something like the same goal in mind — to defend a Christian worldview — but obviously you’ll go completely wrong in understanding them if you don’t understand their genre.

          • Samuel Skinner says:

            “Afaict Irenist and I are in total agreement about this, indeed in all the matters in question here.”

My mistake. Of course, the real people you actually want are the people who wrote Genesis: what did the Jews have to say about it?

            Troy
            “I am not sure what you mean that God “can make the message true.” Do you mean that he can change things in the world to match the message, or he can change the meaning of the message to match the world?”

            Either. God is all powerful in Christianity; neither is a barrier for him.

            “Because they wrote the text. ”

So? Christianity is based on being an update of Judaism. If you hold Christianity to be true, the Jews didn’t understand it as well as they thought they did, because God came out with a new revelation and dumped the idea of a chosen people for a universal faith.

            “You seem to be operating with a verbal-plenary model of inspiration: God told the authors of the Bible exactly what words to write down.”

I’m not. God is all-knowing, so he knows what the authors will write down; that is the point of being all-knowing. The effect is essentially the same: God knows what to tell them to get a given result.

            “This is silly a priori speculation into God’s motives and the best way to achieve them.”

            Yes, he totally sent his only son into this world to be tortured to death because he was bored. The entire point of Christianity is that God is providing a chance of salvation for all humanity, not just the Jews.

God is perfectly capable of not bothering with an afterlife, not bothering with creating the universe, not bothering with souls, or any of the other things God is credited with doing. Since he is all-powerful, he also doesn’t need them for use elsewhere.

            ” Do you think that God should have dictated the Bible in, say, English and Chinese, using 21st century metaphors and idioms, to people who would not have understood it?”

            Why would he? If they were in a language no one could read, they wouldn’t have been preserved.

            ” (And what of our descendants who will speak different languages and live in a different culture?) ”

            It is too bad Christianity doesn’t speculate about an end of secular time.

            “Why would people copy down and venerate texts that made no sense in their cultural context? How are previous generations and minority groups supposed to understand a message tailored to “the majority” of people in history? ”

            Since most people weren’t literate (and if you agree only Christianity is true and other religions are false) AND it is possible to get people to follow things that make no sense, that isn’t a problem.

            “It’s obvious that the text of the Bible was shaped by the cultural contexts in which it was written. ”

            So? It is supposed to be divinely inspired by an individual who can see the future and is all knowing and all powerful. You can’t separate the reading out from that background.

            “No one but some fundamentalist Protestants who think that God is speaking to them personally through the King James Bible disagrees with this.”

            And? Are you seriously arguing the correct view is based upon majority view?

            “Even granting the premise, I don’t see what follows. Mere Christianity and The Chronicles of Narnia may have been written with something like the same goal in mind — to defend a Christian worldview — but obviously you’ll go completely wrong in understanding them if you don’t understand their genre.”

C. S. Lewis never claimed to be able to see the future. That is the big difference.

          • Troy says:

            Samuel, I do not mean to be rude, but I really cannot tell what you are trying to argue. My request for clarification pertained to what you were asserting within the context of your argument. I can grant for the sake of argument that God can do either of the things you say; what I don’t understand is what’s supposed to follow from that premise.

            Later on you respond to my question about language by saying that “If they were in a language no one could read, they wouldn’t have been preserved.” Then you respond to almost exactly the same point which I made in my own post (since, obviously, I was not actually claiming that God should have done this; rather, I was putting this forward as a reductio of your position) by saying that “Since most people weren’t literate (and if you agree only Christianity is true and other religions are false) AND it is possible to get people to follow things that make no sense, that isn’t a problem.”

            At several points you seem to go out of your way to interpret me uncharitably. For example, I am not saying that the correct view on any topic is whatever the majority think; rather, by noting that most Christians would agree that we need to learn the language and cultural context of the Bible to understand it I am providing evidence as to the contents of orthodox Christianity, which I took to be what you were arguing about. Again, on God’s motives in acting in history, I am not saying that he is not concerned with saving souls; I am saying that this is not his only motive and that you are making numerous assumptions about the best way to go about this goal.

            Your main point, as I understand it, has something to do with God’s omniscience and how this would shape the Bible. Do you think that God’s omniscience somehow means that he’s going to ensure that the Bible is not shaped by the culture in which its authors wrote? If so, why? I don’t see the inference. If not, then what does it matter that God can see the future?

          • Samuel Skinner says:

            “Samuel, I do not mean to be rude, but I really cannot tell what you are trying to argue.”

            I’m putting forward the fundamentalist argument.

            “Later on you respond to my question about language by saying that”

            And? What is wrong with that?

            “For example, I am not saying that the correct view on any topic is whatever the majority think; rather, by noting that most Christians would agree that we need to learn the language and cultural context of the Bible to understand it I am providing evidence as to the contents of orthodox Christianity, which I took to be what you were arguing about. ”

            I’m defending the fundamentalist position because people here seem to think it is unfair to use the fundamentalist position (which after all considers itself Christianity).

            “Again, on God’s motives in acting in history, I am not saying that he is not concerned with saving souls; I am saying that this is not his only motive and that you are making numerous assumptions about the best way to go about this goal.”

            And here you state his other motives (that are on the level of the whole Crucifixion don’t forget) and provide the assumptions I am making.

            “Your main point, as I understand it, has something to do with God’s omniscience and how this would shape the Bible. Do you think that God’s omniscience somehow means that he’s going to ensure that the Bible is not shaped by the culture in which its authors wrote? If so, why? I don’t see the inference. If not, then what does it matter that God can see the future?”

            God knows that the bible is going to be shaped by the culture of the authors who wrote it. He can tell it to them straight and have future generations have to systematically decode the bible based upon the cultural distortion.

            Or (and here is the fundamentalist version) he can use the fact he is all powerful and all knowing to correct for the distortion so that the message comes through clearly.

          • Ray says:

            Since people are bringing up Augustine a lot here, I think it’s worth pointing out that the man pretty clearly cited the authority of scripture as demonstrating that both humanity and the world did not predate his writing by more than 6000 years.

            http://www.newadvent.org/fathers/120112.htm (see especially chapter 10.)

            Now granted, holding this interpretation against speculations of ancient Greek philosophers and Babylonian and Egyptian histories whose early chapters are as obviously mythical as Genesis 1-11 is not so great a sin as holding similar views in the face of modern science. Nonetheless Augustine’s stated interpretation of scripture does not differ from that of a modern Protestant Fundamentalist in any way that would affect the compatibility of those scriptures with modern science.

          • Troy says:

            Samuel:

            And? What is wrong with that?

            Nothing: I agree with the point. Indeed, I made the point myself. What I was saying is that your making the point appeared to contradict your later objecting to my making of this point.

            I’m defending the fundamentalist position because people here seem to think it is unfair to use the fundamentalist position (which after all considers itself Christianity).

            Okay, that helps. For my part, I’m not sure that the position that you’re labeling fundamentalist is ultimately coherent. At any rate, I take it that what the Christians in this thread are primarily concerned with maintaining is that one need not be a fundamentalist in this sense to be an orthodox Christian (as evidenced, say, by the positions of Church Fathers, the contents of the creeds, etc.).

            It is also worth noting that the term “fundamentalist” is ambiguous. Historically it refers to a movement based around adherence to the “five fundamentals” (from Wikipedia):

            – Biblical inspiration and the inerrancy of scripture as a result of this
            – Virgin birth of Jesus
            – Belief that Christ’s death was the atonement for sin
            – Bodily resurrection of Jesus
            – Historical reality of the miracles of Jesus

            You’ll notice that Biblical literalism (as distinct from inerrancy), or the idea that the Bible can be readily understood by anyone who picks it up, is not among these. (I also note in passing that I, and I suspect most of the other Christians commenting on this thread, affirm the last four; the only item I deny on there is the inerrancy of Scripture.) So you’re defending a position that goes above and beyond historical fundamentalism.

            God knows that the bible is going to be shaped by the culture of the authors who wrote it. He can tell it to them straight and have future generations have to systematically decode the bible based upon the cultural distortion.

            Or (and here is the fundamentalist version) he can use the fact he is all powerful and all knowing to correct for the distortion so that the message comes through clearly.

            What would “correcting for the distortion so that the message comes through clearly” amount to? What form could God possibly put the Bible into such that it could never be misinterpreted? I can’t even write a simple assignment for my class without somebody misunderstanding my directions.

          • Troy says:

            Ray:

            That is true, but it’s also worth noting that Augustine explicitly argues against taking the Genesis creation accounts literally. For example, he does not think that the “six days” refer to six literal days; instead, he argues that God created the world in an instant. (See his The Literal Meaning of Genesis.)

            Indeed, elsewhere in that book he endorses some kind of proto-evolutionary hypothesis about the origins of different forms of life on Earth, although the interpretation of what he meant by this is somewhat disputed, if I remember correctly.

          • Ray says:

            Troy

            Indeed, elsewhere in [on the literal meaning of genesis] [Augustine] endorses some kind of proto-evolutionary hypothesis about the origins of different forms of life on Earth, although the interpretation of what he meant by this is somewhat disputed, if I remember correctly.

Without knowing which part you are referring to I can’t be quite sure what Augustine meant, at least not without doing a lot more hunting than really seems worth it. (I recall investigating similar claims about Aquinas and finding that the cited passages almost certainly referred to Aristotelian spontaneous generation, not evolution.) But even if we grant that the proto-evolutionary reading was Augustine’s original intent, you still haven’t said anything which separates him from e.g. Ken Ham:

            http://2.bp.blogspot.com/_0zu-653BZ-A/TTthdUGbu6I/AAAAAAAAAPE/GxEgRLAzU7o/s1600/Creation%2BOrchard.jpeg

            http://3.bp.blogspot.com/_0zu-653BZ-A/TTtfaka34kI/AAAAAAAAAO8/fEBm30xtVgM/s1600/Horse%2BDevelopment.JPG

            In any event, Augustine is unequivocal in interpreting scripture as saying that both the world and humanity are less than 8000 years old (relative to the present day.) That alone puts his reading at odds with modern science. Whether he thinks those 8000 years were preceded by a literal six day creation period, an instant in which everything was fully formed, or an instant in which life was merely given the potentiality to become fully formed, he’s still equally wrong.

          • Nick says:

            Ray,

See my latest comment downthread; you’re right that Augustine’s position on the details of creation probably doesn’t differ much from a YEC of today, but how he arrived at that position makes a lot of difference. He didn’t have reason to think a more straightforward interpretation was false; but we now have absurd amounts of evidence that the Earth is very old and the universe considerably older than that, that speciation occurs, good reason to think that life arose without some sort of divine intervention, and so on. The YEC, in other words, has all the tools to know better but doesn’t. (Although yes, there are often understandable reasons why the YEC doesn’t know better, like indoctrination and all. The larger point here is that churches should not succeed and fail based on the interpretation of one man, because they have a (sorta) diverse hermeneutical tradition of thousands of years.)

          • Nick says:

            Troy,

            I think you have the right of it about God being unable to give the Bible in such a way that the meaning is always clear and that this isn’t a problem for omnipotence. This is a really confusing point that I think most people just develop an intuition for. But I want to take a stab at explaining it to Samuel (with a gratuitous math metaphor of course).

            With a function, we can’t assume that, just because the domain and range are all reals, you can get any y for any x. It’s the difference between assuming something can do anything and assuming it can do anything in any way. It seems clear that this applies to omnipotence, insofar as to be able to do the latter would lead to contradictions, like being able to construct a square by constructing a circle instead. So sure, God can say anything he wants to the Jews, but that doesn’t mean that for any way of taking the text you will always get out of it what God wanted to convey.
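A minimal sketch of that function point, using squaring as the example (purely illustrative):

```python
# The squaring function has real inputs and real outputs, yet no input
# ever yields a negative output: having the reals "available" as a range
# does not mean every real value is actually reachable.
def f(x: float) -> float:
    return x * x

outputs = [f(x) for x in range(-1000, 1001)]

# Every output is non-negative; -1 is never attained by any input.
assert all(y >= 0 for y in outputs)
assert -1 not in outputs
```

In the same way, "God can say anything" does not entail "any way of reading the text yields what God wanted to convey."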

          • Troy says:

            That alone puts his reading at odds with modern science.

            I wholly agree; my point (similar to Nick’s) was that his general hermeneutical strategy suggests that he would be willing to accept the conclusions of modern science were they available to him. Put otherwise, his conclusions were due to bad (or undeveloped) science, not bad theology or Biblical interpretation.

            On Ken Ham: my understanding is that the idea of contemporary YECs that God created “kinds” that underwent “micro-evolution” is a comparatively recent development among special creationists. If I’m not mistaken, in Darwin’s time special creationists did not even accept (or at least did not all accept) speciation, hence Darwin’s having to argue that artificial selection had created new species.

On Augustine’s proto-evolutionary theory: I originally learned of this from the novel A Canticle for Leibowitz, and looked it up some time ago. A quick Google search reveals this blog post discussing the matter: http://www.johndcook.com/blog/2011/03/07/augustine-leibowitz-and-evolution/

          • Ray says:

            Nick:

            The larger point here is that churches should not succeed and fail based on the interpretation of one man, because they have a (sorta) diverse hermeneutical tradition of thousands of years.)

I brought up Augustine, since he was already under discussion as an example of someone who was willing to interpret scripture to accommodate contemporary philosophy. (E.g., he interpreted it in light of the contemporarily well-known fact that it makes no sense to think of morning as following evening when referring to the Earth as a whole, since it’s always morning somewhere on a round Earth.)

            However, Augustine was absolutely typical of Catholic and Orthodox writers throughout antiquity and the middle ages. See e.g. http://en.wikipedia.org/wiki/Young_Earth_creationism#Jewish_and_Christian_dates_for_creation

            I think on this point, the Creationist Protestant is correct in accusing anyone who accommodates their reading of the book of Genesis to modern scientific chronology of innovating, both from scripture and established tradition.

            Troy

            On Ken Ham: my understanding is that the idea of contemporary YECs that God created “kinds” that underwent “micro-evolution” is a comparatively recent development among special creationists.

            Yes, I think that is correct. That’s part of the reason I am skeptical that Augustine originally intended to express similar proto-evolutionary thoughts. On more careful examination, I think the relevant text is On the Literal Meaning of Genesis, chapter 17. He says something brief and vague about the unformed creation being “marked off from formed creation in order that it may not find its end in an unformed state but rather be set aside to be formed later by other created beings of the corporeal world.”

In context, it’s not clear Augustine has a specific natural history in mind. His main concern seems to be responding to problems in his theodicy, due to the creation of darkness/night before the fall. If he did have a specific natural history in mind, it’s not clear he’s referring to anything more exotic than one generation of mankind or animal life succeeding the previous. (This at the very least would involve the creation of new souls which had not come into existence at the moment of creation, unless you’re saying Augustine was an Averroist or a Mormon.)

          • Samuel Skinner says:

            Troy
            “For my part, I’m not sure that the position that you’re labeling fundamentalist is ultimately coherent.”

It isn’t. I’m not a fundamentalist, after all. It is, however, as coherent as Catholicism (which has its sophisticated theology, saints channeling God all over the place, and relics).

            “At any rate, I take it that what the Christians in this thread are primarily concerned with maintaining is that one need not be a fundamentalist in this sense to be an orthodox Christian”

I don’t care what is considered orthodox. The entire question of Christianity is about saving people’s souls, and THAT is the only question that matters. My point with fundamentalist Protestants is to point out that their premises and conclusions are coherent, and that you can’t use the grounds Catholics use to reject them without being inconsistent.

            “You’ll notice that Biblical literalism (as distinct from inerrancy), or the idea that the Bible can be readily understood by anyone who picks it up, is not among these.”

Biblical literalism is the claim that the literal meaning of the text is correct, not that anyone can understand it (although the two tend to overlap). Given that the idea behind Christianity is the salvation of all humanity, the idea that the Bible isn’t accessible to everyone is a little bit humorous.

            “(I also note in passing that I, and I suspect most of the other Christians commenting on this thread, affirm the last four; the only item I deny on there is the inerrancy of Scripture.)”

That is inconsistent. If the text isn’t divinely inspired and correct, then there is no reason to believe it over other religions which also talk about miracles.

            “What would “correcting for the distortion so that the message comes through clearly” amount to?”

            If God can see the future and knows that there will be errors in the copying, he can make the original flawed so the copying errors result in the correct text.

            “What form could God possibly put the Bible into such that it could never be misinterpreted?”

            I don’t know- I’m not the all powerful, all knowing creator of the universe who has endowed humans with souls that are supposed to resonate with the divine. I’m sure an individual with over 14 billion years of experience could find a solution.

            Nick
            “I think you have the right of it about God being unable to give the Bible in such a way that the meaning is always clear and that this isn’t a problem for omnipotence. ”

            I’m not claiming he aims to make it always clear. I’m claiming he makes it clear to the intended audience. Remember most people in human history were illiterate- the intended audience is stacked with people born after the industrial revolution.

            Yes, I am technically arguing God designed the bible to be in Mandarin, Spanish, French and English.

            “So sure, God can say anything he wants to the Jews, but that doesn’t mean that for any way of taking the text you will always get out of it what God wanted to convey.”

            Yeah, if only he could go in person to clear things up…

          • Troy says:

            The entire question of Christianity is about saving peoples souls and THAT is the only question that matters.

            Have you read the New Testament? How about just the gospels? Jesus’ ministry focuses on rather more than just “saving people’s souls.”

            That is inconsistent. If the text isn’t divinely inspired and correct, then there is no reason to believe it over other religions which also talk about miracles.

            I said that it was not inerrant, not that it was not divinely inspired.

            You are indeed representing the fundamentalist line here: if we can’t trust the Bible about X, we can’t trust it about anything. But this is a non-sequitur. There could be good reasons to believe what the Bible says about, say, Jesus’ miracles even if it is wrong about, say, whether the mustard seed is the smallest of all seeds; just as there is good reason to believe the general testimony of other historical documents that are in error about certain particular matters of fact.

            If God can see the future and knows that there will be errors in the copying, he can make the original flawed so the copying errors result in the correct text.

            This does not follow. First, copying errors are not uniform, so he could not make them all result in the “correct” text. Second, to what extent God can control precisely what happens in the world in general depends on one’s views about human free will, indeterminism, and the nature of divine foreknowledge. These are heavily disputed topics in philosophy and philosophy of religion.

            “What form could God possibly put the Bible into such that it could never be misinterpreted?”

            I don’t know- I’m not the all powerful, all knowing creator of the universe who has endowed humans with souls that are supposed to resonate with the divine. I’m sure an individual with over 14 billion years of experience could find a solution.

            It seems to me eminently plausible that what you are asking of God is either metaphysically impossible or only attainable by massively disrupting the world we live in, in such a way as to have myriad other negative consequences. In general, it sounds as if you want God to micromanage human history in a way that would be incompatible with any kind of genuine human autonomy.

          • Samuel Skinner says:

            “Have you read the New Testament? How about just the gospels? Jesus’ ministry focuses on rather more than just “saving people’s souls.””

            Saving souls is the main justification of Christianity. If it couldn’t do that, it doesn’t have a reason for existing.

            “I said that it was not inerrant, not that it was not divinely inspired.”

            You know who else thinks the bible is divinely inspired? Muslims! They think Jesus was also a prophet but the message was garbled. Believing it is divinely inspired is not sufficient to be a Christian.

            “You are indeed representing the fundamentalist line here: if we can’t trust the Bible about X, we can’t trust it about anything.”

            Here is the thing- the bible has to be different from other religious texts. If the difference between the Bible and the Buddhist canon is “I trust the former,” then how do we distinguish which one people should choose? If God exists he will try to make the true religion different from all other religions.

            “There could be good reasons to believe what the Bible says about, say, Jesus’ miracles even if it is wrong about, say, whether the mustard seed is the smallest of all seeds;”

            Why? Why should it be less reliable when it comes to items we can check versus ones we can’t?

            “just as there is good reason to believe the general testimony of other historical documents that are in error about certain particular matters of fact.”

            That is a bit misleading. The authors of those documents aren’t supposed to be inspired by God. God should be able to do better than the default.

            “This does not follow. First, copying errors are not uniform, so he could not make them all result in the “correct” text. ”

            So? He doesn’t need to have them all result in the correct text. He just has to have the “corrected” version be the one that is eventually adopted for official usage. I’m sure there were many different versions of the bible in English, but the King James bible is now the standard.

            “Second, to what extent God can control precisely what happens in the world in general depends on one’s views about human free will, indeterminism, and the nature of divine foreknowledge. These are heavily disputed topics in philosophy and philosophy of religion.”

            God is all knowing. He can make prophecies that come true. He can see the future. He sent himself to die on the cross knowing what decisions people would make at least three decades in advance.

            How on earth is there any dispute?

            “It seems to me eminently plausible that what you are asking of God is either metaphysically impossible or only attainable by massively disrupting the world we live in, in such a way as to have myriad other negative consequences.”

            Metaphysically impossible? That is fine, God only cares about the physically impossible. As for massively disrupting the world… yeah, God exterminated all living things once before. I’m sure getting the bible so that it translates well into Spanish, English, French and Mandarin is rather minor in comparison.

            “In general, it sounds as if you want God to micromanage human history in a way that would be incompatible with any kind of genuine human autonomy.”

            What are you talking about? In my version, God is predicting the errors scribes will make over hundreds of years, so that the final version is the one he was aiming for when the industrial revolution and mass literacy come around. After the initial push there is zero micromanagement (unlike Catholicism, which postulates people with divine powers running around willy nilly).

          • Jaskologist says:

            @Samuel Skinner

            I don’t even know what you’re trying to argue anymore, but as far as I can tell, you’ve constructed a strawgod. Whatever God can do, it doesn’t follow that he did do that. No Christian believes God has done the things you are describing, including staunch KJV-onlyists.

          • Samuel Skinner says:

            “I don’t even know what you’re trying to argue anymore”

            As I’ve said, I’m defending the protestant literalist position. I’m an atheist, but the point is that if you accept the premises, then the conclusions follow, and (the reason this came up) the premises are essentially the same as Catholicism’s. An all powerful, all knowing God who manifested as Jesus Christ to bring salvation; these aren’t exactly controversial Christian positions.

            “but as far as I can tell, you’ve constructed a strawgod. ”

            You can’t construct a strawman for your own position.

            “Whatever God can do, it doesn’t follow that he did do that. ”

            No, it doesn’t. I’m not claiming God did either since I happen to be an atheist.

            People complain that “literalism” is unfair to versions that rely on tradition. I’m pointing out that the basic assumptions of Protestantism are shared by Catholicism and that there is no way to pick one over the other based purely on logic or evidence. Sure, literalism rejects evidence from science, but “tradition” ends up including lots of miracles, which is a bigger rejection. Creationism only requires one miracle that we can’t see, while tradition’s miracles require suspending natural laws repeatedly, but only where they can’t be rigorously documented.

            “No Christian believes God has done the things you are describing, including staunch KJV-onlyists.”

            No one believes God can see the future and used it to ensure the bible has the correct message to carry to mankind? Really? Because that is a rather essential part of Christian belief, given the books of the bible weren’t compiled until around 50-70 years after Jesus’s death. I’m not seeing how a factor of 100 is relevant to a being described as transcending time.

          • Troy says:

            You know who else thinks the bible is divinely inspired? Muslims! They think Jesus was also a prophet but the message was garbled. Believing it is divinely inspired is not sufficient to be a Christian.

            Indeed, I never said it was. Christians also believe that Jesus was the Son of God and was resurrected. Muslims believe neither of those things.

            God is all knowing. He can make prophecies that come true. He can see the future. … How on earth is there any dispute?

            http://plato.stanford.edu/entries/providence-divine/

            Why? Why should it be less reliable when it comes to items we can check versus ones we can’t?

            The Bible is not a science textbook. One can coherently trust it on matters of theology while not believing that God ensured that its authors got correct matters horticultural.

            I’m an atheist, but the point is that if you accept the premises, then the conclusions follow, and (the reason this came up) the premises are essentially the same as Catholicism’s. An all powerful, all knowing God who manifested as Jesus Christ to bring salvation; these aren’t exactly controversial Christian positions.

            Your conclusions do not follow from those premises, which I indeed accept. They follow only with extremely dubious bridge premises that primarily consist of speculation as to what God would do if he wanted to attain X.

          • Scott Alexander says:

            RCF: tone warning. You are close to getting banned.

          • Samuel Skinner says:

            “Indeed, I never said it was. Christians also believe that Jesus was the Son of God and was resurrected. Muslims believe neither of those things.”

            Yes, but you need to believe the bible is more than just divinely inspired in order to have your beliefs about Jesus justified by it. You can be Jewish and believe the bible is divinely inspired after all. You need to believe it is accurate.

            “http://plato.stanford.edu/entries/providence-divine/”

            That is because they assume God is all good. The story of Job makes it pretty clear he isn’t.

            “The Bible is not a science textbook. One can coherently trust it on matters of theology while not believing that God ensured that its authors got correct matters horticultural.”

            (note- this isn’t related to the protestant branch)
            So if the bible had scientific items correct, that would increase the probability it was divinely inspired? If the existence of evidence causes you to update in favor of a belief, the opposite should cause you to lower your belief.

            “Your conclusions do not follow from those premises, which I indeed accept. They follow only with extremely dubious bridge premises that primarily consist of speculation as to what God would do if he wanted to attain X.”

            Why not? Why don’t the conclusions follow?

            You accept past theologians can be wrong due to limitations of factual knowledge…
            and now we know the majority of literate people live in a relatively short span of history that coincides with the time period Christianity spread across the globe.

            God really wants people to worship him. He founded two religions, voluntarily became human in order to be executed, set up an afterlife, etc.

          • Troy says:

            Why not? Why don’t the conclusions follow?

            Your argument, such as I can make it out, trades on numerous speculations about God’s motives (e.g., what he’s trying to achieve in creating the world, becoming incarnate, inspiring people to write the Bible, etc.), how those motives weigh off against each other, and the optimal way to achieve those motives. Some of these speculations have some support from Christian tradition; others do not. (Indeed, speculations about how God would, if he were rational, go about achieving his ends sit very uncomfortably with much Christian thought, which tends to emphasize the cognitive gap between us and God.)

            Let’s suppose that given that all these speculations are right, your conclusions follow. I remain untroubled, because the probability of the conjunction of all your speculations conditional on Christian theism is very low. So the conditional probability of fundamentalism given Christian theism also remains very low.

          • Samuel Skinner says:

            “(Indeed, speculations about how God would, if he were rational, go about achieving his ends sit very uncomfortably with much Christian thought, which tends to emphasize the cognitive gap between us and God.)”

            If that is true, all speculation is pointless and you shouldn’t even be trying to respond, because you are as ignorant as me. We have to work from the assumption that motivations are comprehensible.

            “I remain untroubled, because the probability of the conjunction of all your speculations conditional on Christian theism is very low”

            Why? It requires

            God to want to save souls
            God to want to use the bible to do this
            God to know the future

            The idea that God made the bible so it is accessible to the most people (to save as many as possible) instead of just a select few is a conclusion, but it is justified by the premises.

            Of course this leads to believing the end times will happen and that the past is a lie, but Catholicism leads to believing that God handed out magic powers in the past and that the past is a lie for heathens (or the work of demons) so it actually postulates less interference by God (but of a larger magnitude).

            “So the conditional probability of fundamentalism given Christian theism also remains very low.”

            So? Fundamentalism is only one version of Christian theism. My claim isn’t that fundamentalism is right, but that it is as valid/likely as other forms of Christianity and that it uses the same assumptions- you can’t claim that rejection of fundamentalism has no relation to your branch of Christianity, in other words. I’m also a bit leery of any Christian branch claiming its justification is “tradition,” because the Jews exist, and making a complete break and then playing the tradition card strikes me as massively hypocritical.

          • Troy says:

            If that is true, all speculation is pointless and you shouldn’t even be trying to respond because you are as ignorant as me. We have to work of the assumption that motivations are comprehensible.

            False dilemma. You can lower your credences in various hypotheses about God’s motivations/methods/etc. without taking yourself to be completely in the dark about them. I can more accurately predict the motivations and actions of my wife than a stranger from my own culture, I can more accurately predict those of that stranger than those of a stranger from another culture, and I can more accurately predict those of that stranger than those of God. But that doesn’t make God completely inscrutable, any more than it makes the stranger from another culture completely inscrutable.

            Why? It requires

            God to want to save souls
            God to want to use the bible to do this
            God to know the future

            (1) This goal is vague, and its content will depend on one’s soteriology. Not all Christians will agree on what is necessary to “save souls,” and consequently God’s potential means in doing this are quite large, and not limited to giving people Bibles: e.g., the Church, general revelation, personal revelations, other religions, post-mortem revelation. (Obviously some Christians will dispute the possible efficacy of these as means of “saving souls.” Certainly many fundamentalists would. But part of the debate we’re having is whether all Christians need share such fundamentalist assumptions; there are theologians who will defend each of these as a means for God to redeem human beings.)

            (2) God might have other goals besides “saving souls” that would conflict with doing what you say he should (e.g., preserving human autonomy) and other means to save people besides the Bible (e.g., the Church, general revelation, personal revelations).

            (3) Divine foreknowledge does not imply ability to do just anything (e.g., the metaphysically impossible), and there are debates about its interaction with human free will.

            So I maintain that your conclusions about how God will go about achieving the goal of “saving souls” are highly improbable conditional on Christian theism.

            My claim isn’t that fundamentalism is right, but that it is equally valid/likely as other forms of Christianity and that it uses the same assumptions

            And that is precisely what I am denying. Your arguments lend very little support to the claim that Christian doctrine implies fundamentalism, and thus do not appreciably raise the probability of fundamentalism given Christianity. This probability is further lowered by its conflict with observed facts, e.g., modern science. These facts do not (for the most part) disconfirm non-fundamentalist versions of Christianity, because these non-fundamentalist versions do not predict their negations. So non-fundamentalist forms of Christianity remain much more probable than fundamentalism.

          • Samuel Skinner says:

            “False dilemma. You can lower your credences in various hypotheses about God’s motivations/methods/etc. without taking yourself to be completely in the dark about them.”

            Except what is replacing them is “this problem is unknowable,” not another hypothesis. If the size of the unknowable is large enough, you are claiming speculation is pointless. If it isn’t, then there is no point bringing it up, because it doesn’t change the relative probabilities of different choices.

            “and I can more accurately predict those of that stranger than those of God. ”

            You can more accurately predict the actions of God than those of a stranger. I predict no miracles or divine intervention in the future. That is trivial and encompasses God’s interaction with the physical world.

            “This goal is vague, and its content will depend on one’s soteriology.”

            How is it vague? The content of it varies, but once you agree on what is requisite for salvation, the goal is pretty clear. If by works, encourage works; if by faith, encourage faith.

            “Not all Christians will agree on what is necessary to “save souls,” and consequently God’s potential means in doing this are quite large, and not limited to giving people Bibles: e.g., the Church, general revelation, personal revelations, other religions, post-mortem revelation.

            But part of the debate we’re having is whether all Christians need share such fundamentalist assumptions; there are theologians who will defend each of these as a means for God to redeem human beings.)”

            So? Just because God has other options doesn’t mean he will ignore or misuse this one. If God can use the bible to save people’s souls, he will use the bible to save people’s souls. The existence of other tools doesn’t negate my argument; only arguing the bible has no power to save souls (or changes to the bible would somehow worsen those tools) is a counter.

            “God might have other goals besides “saving souls” that would conflict with doing what you say he should (e.g., preserving human autonomy)”

            God designed mankind. We don’t have autonomy outside of the limits he designed. The idea also has scant support from the bible, where God intervenes to get people to do what he wants. Noah’s flood is sort of the ur example of God not caring about autonomy at all.

            ” Divine foreknowledge does not imply ability to do just anything (e.g., the metaphysically impossible), and there are debates about its interaction with human free will.”

            The debates are meaningless- they presume God is all good which conflicts with the bible. It isn’t possible to form a coherent opinion on God if you assume he is all good, knowing and powerful- the three simply conflict too much and no attempt to rectify them works.

            “Your arguments lend very little support to the claim that Christian doctrine implies fundamentalism, and thus do not appreciably raise the probability of fundamentalism given Christianity.”

            You use sources to support your position that claim God is all good, despite the fact that the bible repeatedly shows that isn’t true, and you claim that this means God can’t see the future.

            Of course if you reject the accuracy of the bible and the idea of an all powerful, all knowing God, then fundamentalism doesn’t make sense!

            “This probability is further lowered by its conflict with observed facts, e.g., modern science. ”

            So? Other branches of Christianity also conflict with modern science. The Virgin Birth and the Resurrection are things you agreed to holding and “God raising people from the dead and impregnating people” is no more or less interventionist than God creating the world to look older than it is. Catholicism also has countless miracles and saints relics which are openly magic.

            It only looks less conflicting with science because the claims have been trimmed down to things in the past that haven’t been observed.

            “These facts do not (for the most part) disconfirm non-fundamentalist versions of Christianity, because these non-fundamentalist versions do not predict their negations. ”

            Because non-fundamentalist versions do not make any predictions! You can’t use “made predictions until they were falsified” as evidence of the initial accuracy of the belief, rather than of the ability of the Church to adapt.

            You can of course make a fundamentalist version that doesn’t deny modern science; the bible is true, but God decided to add a couple of billion years to the start because he is outside time so the order he did that didn’t matter to him.

          • Troy says:

            How is it vague? The content of it varies, but the once you agree on what is requisite for salvation, the goal is pretty clear.

            People agree on neither its content nor the necessary means to it. Hence, just saying that “God aims at saving souls” does not tell us very much. You have to specify what you mean by that, and when you do you’ll be siding with some Christians over others.

            So? Just because God has other options doesn’t mean he will ignore or misuse this one. If God can use the bible to save people’s souls, he will use the bible to save people’s souls. The existence of other tools doesn’t negate my argument; only arguing the bible has no power to save souls (or changes to the bible would somehow worsen those tools) is a counter.

            If I want to get some exercise, and I can either go for a walk or for a swim, I’m less likely to go for a walk than if that were my only option. Of course I could do both, but the probability that I go for a walk given that I want exercise is lower in the first case than in the second.

            The debates are meaningless- they presume God is all good which conflicts with the bible. It isn’t possible to form a coherent opinion on God if you assume he is all good, knowing and powerful- the three simply conflict too much and no attempt to rectify them works.

            If this is your position, you are no longer representing the fundamentalist Christian. You’re now representing the Internet atheist. Naturally, I disagree with what you say here, and accept the classical conception of God as all-powerful, all-knowing, and all-good; but addressing this would take us afield from the current debate.

            Other branches of Christianity also conflict with modern science. The Virgin Birth and the Resurrection are things you agreed to holding and “God raising people from the dead and impregnating people” is no more or less interventionist than God creating the world to look older than it is.

            I believe that the evidence supports the historicity of the Gospel miracles: see, for example, http://www.lydiamcgrew.com/Resurrectionarticlesinglefile.pdf.

            Evolutionary theory is logically inconsistent with a literal reading of the Genesis account. Modern medicine (say) is not logically inconsistent with the Gospel accounts because those accounts report miracles that are claimed to go beyond the productive power of nature. Pointing out that they don’t happen in the normal course of nature isn’t pointing out anything new; their whole point is that they only happen with divine intervention.

            Because non-fundamentalist version do not make any predictions!

            This is simply false. Orthodox Christian theism is confirmed by the fine-tuning of the universe, religious and mystical experiences, and the historical evidence for the Gospel miracles, all of which are more strongly predicted by orthodox Christian theism than they are by naturalism.

            Anyway, this is moving the goalposts. My claim is that non-fundamentalist versions of Christianity are much more likely than fundamentalist versions. While I also believe that those versions are overall quite probable, establishing that is not necessary to refute your claim that fundamentalist and non-fundamentalist forms of Christianity are on a par.

          • Samuel Skinner says:

            “Hence, just saying that “God aims at saving souls” does not tell us very much. You have to specify what you mean by that, and when you do you’ll be siding with some Christians over others.”

            Okay, God cares about people believing in him (see the book of Job and 10 commandments). God also cares about people’s behavior (see the 10 commandments).

            Unless you reject that, it is pretty clear God cares about both of those; you can argue about what is necessary, but both of them are things God wants.

            “If I want to get some exercise, and I can either go for a walk or for a swim, I’m less likely to go for a walk than if that were my only option. Of course I could do both, but the probability that I go for a walk given that I want exercise is lower in the first case than in the second.”

            God is all powerful. You are not. God doesn’t have to worry about one choice crowding out other choices, because he can do all of them simultaneously unless they are mutually exclusive (and if he wants he can change the physical laws to allow that- after all, we already have things that are particles and waves simultaneously).

            “If this is your position, you are no longer representing the fundamentalist Christian. ”

            Do you believe that God killing everyone in Noah’s flood was not only a good act, but the best possible good act? Do you believe murdering the firstborn of the Egyptians was the same?

            If you answered yes, then it is impossible to talk about God’s actions, because he can justify anything under “all good,” and an explanation that can justify anything justifies nothing at all- it has zero predictive power.

            If you answered “those never happened,” then you end up dumping most of the Old Testament; if you reject Exodus, do you still accept the 10 Commandments?

            “You’re now representing the Internet atheist. Naturally, I disagree with what you say here, and accept the classical conception of God as all-powerful, all-knowing, and all-good; but addressing this would take us afield from the current debate.”

            That would be meaningful if you weren’t egregiously doing the same thing- the foundation of Christianity is about God doing two things simultaneously, and this is an example of something you gave as beyond God’s power!

            “I believe that the evidence supports the historicity of the Gospel miracles: see, for example, ”

            Is there anything there that cannot be applied to other religions?

            “Evolutionary theory is logically inconsistent with a literal reading of the Genesis account.”

            So? Theistic evolution is also inconsistent with evolution. It only avoids open conflict by retreating and claiming that there is no difference between its results and science.

            “Modern medicine (say) is not logically inconsistent with the Gospel accounts because those accounts report miracles that are claimed to go beyond the productive power of nature. Pointing out that they don’t happen in the normal course of nature isn’t pointing out anything new; their whole point is that they only happen with divine intervention.”

            Really? So you accept miracles occur in all other religious traditions?

            ” Orthodox Christian theism is confirmed by the fine-tuning of the universe, religious and mystical experiences, and the historical evidence for the Gospel miracles, all of which are more strongly predicted by orthodox Christian theism than they are by naturalism.”

            Nope. For them to be confirmations, they must be less likely if Christianity were wrong.

            Fine tuning doesn’t work- if Christianity were wrong, the Universe would still be fine tuned (unless you believe universes with non-fine tuned laws of physics AND intelligent life are possible, in which case our universe isn’t fine tuned!)

            Mystical experiences and the gospels are no more evidence for Christianity than for any other religion.

            Unless you are defending “religion exists” (because even pagans had mystical experiences), mystical experiences are worth very little. Same with miracles; every culture experienced them.

            “Anyway, this is moving the goalposts. My claim is that non-fundamentalist versions of Christianity are much more likely than fundamentalist versions.”

            It isn’t moving the goalposts. You are claiming something supports non-fundamentalist versions and I’m pointing out that is an illusion. It is also important because the claimed bedrock is tradition, but if you change to accommodate evidence, you are recognizing something superior to tradition.

          • Troy says:

            Okay, God cares about people believing in him (see the book of Job and 10 commandments). God also cares about people’s behavior (see the 10 commandments).

            I agree about both of these. But they remain vague (what does “believing in God” amount to? what kinds of behavior does God want to promote?), and don’t necessitate the kinds of policy prescriptions you’re making.

            God is all powerful. You are not. God doesn’t have to worry about one choice crowding out other choices because he can do all of them simultaneously unless they are mutually exclusive

            The example still works even if I don’t get tired out or am not limited in time. The one option satisfies or partly satisfies my desire. Similarly, God’s using other means to attain a goal satisfies or partly satisfies his desire to attain that goal. Hence, he is less likely to do various other things to attain that goal.

            Do you believe that God killing everyone in Noah’s flood was not only a good act, but the best possible good act? Do you believe murdering the firstborn of the Egyptians was the same?

            I don’t believe that God did those things. But, most fundamentalists would claim that God did do those things and was justified in so doing. So as I said, you’re not representing the fundamentalist line here.

            If you answered “those never happened,” then you end up dumping most of the Old Testament;

            I think that the weight of the evidence supports the historicity of some parts of the OT and not others. I also think that the OT should be read through a lens of Christ’s teachings. This is compatible with God revealing himself to the Hebrews in various ways that are accurately recorded in the OT, as well as the authors of the OT misinterpreting other revelations or mistaking some things for divine revelation that were not.

            My position here is not very different than many early Christian commentators on the OT, who interpreted, e.g., the conquest narratives in the OT largely allegorically.

            I’m more than happy to admit that I don’t know what to say about everything in the OT; following the evidence is hard and I don’t think it’s necessary to have answers to all the questions one might ask about the Bible or God’s activities in history to be an orthodox Christian.

            Is there anything there that cannot be applied to other religions?

            Yes. The evidence for the Christian miracles is much stronger than for other religions. Several of the works here — http://historicalapologetics.org/collection/annotated-bibliography/ — discuss this question. Campbell’s essay contains a fairly decisive discussion of Hume’s famous examples, and Paley’s book discusses Islam.

            What other miracle claims have in common with the NT miracle claims is simply that they are also miracle claims. This is not sufficient for evidential parity.

            Really? So you accept miracles occur in all other religious traditions?

            I don’t think that miracles in other traditions are (in general) incompatible with science. But I think the weight of evidence is against them.

            For them to be confirmations, they must be less likely if Christianity were wrong.

            And I maintain that they are.

            Mystical experiences and the gospels are no more evidence for Christianity than any other religions.

            I agree, and I never said otherwise. “E is evidence for H1” is compatible with “E is evidence for H2.” If you admit that they are evidence for both, then you are granting my point that some observations are more probable on Christian theism than they are otherwise.

            Fine tuning doesn’t work- if Christianity were wrong the Universe would still be fine tuned (unless you believe universes with non-fine-tuned laws of physics AND intelligent life are possible, in which case our universe isn’t fine tuned!)

            This is not the place to have an in-depth discussion of the anthropic principle. I think that Leslie’s firing squad analogy shows that this objection cannot be right; exactly why it goes wrong gets into technical issues in probability theory and the problem of old evidence which would take us too far afield.

            It is also important because the claimed bedrock is tradition, but if you change to accommodate evidence, you are recognizing something superior to tradition.

            I never claimed that my epistemic bedrock is tradition. I’m an evidentialist and a strong foundationalist.

          • Samuel Skinner says:

            “Samuel: You seem to have posted in the wrong place.”

            Do I really need to explain why this counts as passive aggressive?

            “I have professional responsibilities to attend to, and so do not have the time to continue most of this conversation further. ”

            How are you taking more than 10-30 minutes posting? These aren’t incredibly difficult issues and there is little need to look items up.

            “This is contrary to your earlier claim that (non-fundamentalist) Christianity “makes no predictions,” which I (charitably) took to mean that it and its negation gave all evidence the same probability, thus making it impossible to confirm or disconfirm. ”

            I should have been more precise: it makes no testable predictions (have you never argued with atheists before? “Charitable” is odd unless you’ve never run into Russell’s Teapot). If people can be given mystical experiences by MRIs, people won’t abandon religion, because they don’t recognize it as a falsifiable event.

            Fundamentalism makes testable predictions (and is wrong) but you can’t use that against it in favor of other versions of religion because they have retreated from making testable predictions as fast as they can.

            “There is nothing incompatible between Catholicism and evidentialism.”

            Aside from the repeated miracles that Catholicism endorses, you mean? Because that contradicts the evidence that violations of the laws of nature do not happen.

            ” I believe that traditional Christian theism, which Catholics share in common with many other Christians, is the view best supported by the overall evidence.”

            And I’ve been arguing that fundamentalist Protestantism is also supported to the same level of accuracy. If the Bible is divinely inspired and literally true, then it is stronger evidence than all other sources, which is why fundamentalists accept it over everything else.

            “Tradition is evidence, but it’s not indefeasible and trust in tradition is justified by more epistemically basic considerations.”

            Which is why you are Jewish, right? Tradition can’t be evidence for Catholicism and not for other religious traditions, which makes it a null when it comes to deciding between them.

            ” I do not believe in “discarding” any evidence. Whatever you’re attacking, it’s not my view and it’s not the only possible Catholic view.”

            If you accept the New Testament, you believe in discarding evidence. The New Testament was chosen from a variety of Gospels. Now you can claim that not all of the discards were written by the apostles, but if you believe that God can inspire people (which seems to be your position in relation to the Old Testament), it is unclear why you’d discard them. There is certainly dispute about how old several of them are, and it is unclear whether the church had better information or whether there was simply a political power play (which, given the feuding of different sects, isn’t too unlikely).

        • Jaskologist says:

          (You may also need to taboo “literally.” All Christian variants take 1 Cor 15:3-8 literally; I’ve never seen a Christian or Jew take Psalm 119:105 literally, no matter how fundamentalist.)

          • Nick says:

            Actually, on this topic, it’s probably worth mentioning that when St. Augustine said “the literal meaning” he didn’t actually mean “literal” the way we do when we talk about fundamentalists; he meant it the way that Catholics and whoever else mean it when they interpret Scripture. Some people have tried to fix this terminology nightmare by saying that fundamentalists have a literalist interpretation, but that doesn’t help. But seriously, and I’m directing this quote at RCF, this is what Augustine says in the first paragraph of the first chapter of the first book of his work The Literal Meaning of Genesis:

            In all the sacred books, we should consider the eternal truths that are taught, the facts that are narrated, the future events that are predicted, and the precepts or counsels that are given. In the case of a narrative of events, the question arises as to whether everything must be taken according to the figurative sense only, or whether it must be expounded and defended also as a faithful record of what happened. No Christian will dare say that the narrative must not be taken in a figurative sense. For St. Paul says: “Now all these things that happened to them were symbolic.” And he explains the statement in Genesis, “And they shall be two in one flesh,” as a great mystery in reference to Christ and to the Church.

            And now that I think about it we are all probably due for this reminder: http://squid314.livejournal.com/337475.html because it’s about 75% of the reason that arguing with traditional Christians is really disorienting.

      • SUT says:

        While “young Earth” creationism is certainly guilty of a head-in-the-sand attitude, the “old Earth” variant can be marinated in enough mathematical agnosticism to be appreciated by anyone from the Rationalist movement.

        There are essentially two parameters we need to infer:
        1. N: How many bits of information are necessary for the origin of life? (OOL research has said 160 bits; the smallest free-living organism is 500,000 bp.) The chance of that information arising by accident is 1 / 2^N, and in our finite universe there is a limit to how much extreme unlikeliness can be absorbed.

        2. Y: Years until we find aliens. Seriously. At a near point in technology’s development, we’re going to be able to classify millions of exoplanets and disqualify (or maybe confirm) them as harboring life. The technology is just scientific measurement instruments, not spaceships, and Y is on the decades scale for a modest first attempt. In other words, within a lifetime we could have a lot of Bayesian prior updating to do.

        While I think finding alien life would be the most exciting possible outcome, like ever, I think the possibility of a barren Universe is something not enough people have wrestled with. A finding that N is high enough that life seems unlikely to ever arise anywhere, combined with an empirical finding after Y years that we’re 0 for 1,000,000 on finding any life outside us, would give a real boost of support to the idea that life is more than a happy accident, which seems to be the default origins story from the secular side of the “evolution vs. creationism” debate.
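        The arithmetic behind this comment is simple enough to sketch. The sketch below is illustrative only: N = 160 bits is the commenter’s OOL figure, and the trial counts are made-up placeholders, since nobody knows how many chemical “trials” the universe actually ran. The point is that whether chance origination looks likely depends entirely on how the trial budget T compares to 2^N.

```python
# Toy model of the "1 / 2^N" argument: if abiogenesis needs N specific
# bits to fall into place by luck, each trial succeeds with probability
# 2^-N, so T independent trials yield about T * 2^-N expected events.
# Every concrete number below is an illustrative assumption, not data.

def expected_origins(n_bits: float, trials: float) -> float:
    """Expected number of chance origin-of-life events in `trials` tries."""
    return trials * 2.0 ** (-n_bits)

# Commenter's figure: ~160 bits of necessary information.
# Assumed trial budget: 1e22 stars times 1e30 molecular-scale
# trials per star over its history -- pure guesswork.
trials = 1e22 * 1e30
print(expected_origins(160, trials))  # thousands of expected events
print(expected_origins(500, trials))  # effectively zero
```

        With these made-up numbers, 160 bits is easily cleared while 500 bits is hopeless, which is essentially where the disagreement with the replies below lies: everything turns on the unknown values of N and T.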

        While it’s complete speculation for me to say we’ll go 0-for-a-million on a scientific experiment run decades or centuries in the future, the point is not to come to a final conclusion now. The point is that we should keep our conclusions open, knowing that we’ll receive that valuable information in the future. (It’s actually very reasonable to say this experiment will be run: given its importance there will be motivation, and given current tech trends there will be feasibility.)

        To add in one more level of epistemology: if you’re reading this, you probably won’t get a definite answer before you die, unless artificial life extension becomes a reality. Evolution.

        • Matt says:

          “OOL Research has said 160bits, smallest free-living organism is 500,000 bp”

          Completely irrelevant. How much of that genome codes for functions unnecessary for an organism alone in an otherwise sterile world (where there is food)? No need to defend against predation or infection. No need to be competitive, so no need for movement (food is not going anywhere). No need for proteins (and the necessary machinery to translate DNA into enzymes) since ribozymes can replace them. They may be inefficient, but you’re not competing. So how much of those 500,000 bp would the first organism have even needed? Is there a breakdown of the way different functions take up space in those 500,000 bp? Just curious, what organism is being referenced?

        • Josh says:

          We don’t understand the process of abiogenesis well enough to know what the individual steps are, much less estimate likelihoods of those steps. Even if we were to eventually discover conclusive evidence that abiogenesis is so unlikely that only one in 10^50 universes would contain life, we would still expect to find intelligent life in *our* universe via the anthropic principle. Moreover, OEC is essentially just passing the buck. How did whatever ‘seeded’ the earth with life come into existence?

        • RCF says:

          So, your two arguments are (1) life is unlikely and (2) we won’t find any more life.

          (1) is based on wild speculation. Further, the size of the observable universe is about 2^500 bits, so saying that life takes 160 bits isn’t as high a bar as it might seem. And who’s to say the observable universe is all there is? Finally, this doesn’t really provide an argument for creationism; it’s an argument against evolution, a distinction that creationists seem to have a serious problem recognizing.

          (2) is an argument from evidence-I-expect-to-see, which is ridiculous. Saying “we should keep our conclusions open knowing” is just nonsense. We should base our beliefs on the evidence, not on imaginary evidence. Also, (2) kinda contradicts (1): if life really is unlikely, then not seeing any other life is exactly what we should see. If life is unlikely, then seeing life on lots of planets is evidence for intelligent design. So not finding life must be evidence against intelligent design.
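          This closing point is a standard likelihood-ratio argument: an observation E confirms a hypothesis H exactly when P(E|H) > P(E|¬H), so if finding abundant life would count for design, failing to find it must count against. A toy Bayes update makes that concrete; the likelihood numbers below are entirely made up for illustration.

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E) from a prior on H and the two likelihoods of E."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# Made-up likelihoods: design makes widespread life somewhat probable,
# blind chance makes it very improbable.
prior = 0.5
p_life_design, p_life_chance = 0.5, 0.01

# Observing abundant life raises P(design)...
up = posterior(prior, p_life_design, p_life_chance)
# ...so observing NO life (the complementary event) must lower it.
down = posterior(prior, 1 - p_life_design, 1 - p_life_chance)
print(up, down)  # up is above the prior, down is below it
```

          Whatever numbers you pick, if observing E raises the posterior above the prior, then observing not-E must lower it; no hypothesis can be confirmed by every possible outcome.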

  3. Anonymous says:

    Is …she even referring to Lesswrong or any particular real world movement? Or did she just hear the word “rationalism” and get to “rationalization” via the etymology?

    • Markus Ramikin says:

      Yeah, I wonder too. Yudkowsky and LW aren’t mentioned by name at all. In fact, there seems to be nothing and no one specific except some “acquaintances”.

      This is mighty convenient, because now lots of people can find themselves agreeing with the article by recalling someone who talks about reason and logic but seems to in fact be a confused thinker, and yet nobody whom the article strikes at either by intention or as collateral damage can defend themselves, because they weren’t named. “Why are you assuming I am talking about your community? I just meant the people who do the bad things I describe, and if you aren’t like that, why are you being defensive?”

      Chances are that wasn’t the intention – I don’t know that person or blog – but generally I dislike posts that are poorly grounded in examples and don’t refer to their targets explicitly. Who the “rationalists” are is wonderfully underspecified. They seem to have a certain content of straw, though.

      • Anonymous says:

        (same anon)

        > “Why are you assuming I am talking about your community? I just meant the people who do the bad things I describe, and if you aren’t like that, why are you being defensive?”

        That was actually *precisely* my thought process upon reading her post and Scott’s rebuttal… and it’s still kind of my thought process, even after you call it out.

        Because of Scott’s good track record of not making those sorts of mistakes, I assume there’s some important context I’m missing, but if I had no evidence and didn’t know how high quality this blog is, I would immediately decide Scott just picked a random lost and confused blogger who happened to use the word “rationalist” when perhaps a better word would be “armchair philosopher” and decided it referred to Yudkowsky et al.

        People who know the word “Lesswrong” associate to it strongly whenever they hear the word “rationality”, but most people who know the word “rationalist” just think “person who thinks things can be explained through rational thought”. They don’t know who Yudkowsky is, and even if they know who Descartes is they wouldn’t explicitly jump to him via rationality. “Rationalist” is not typically referring to any particular group, “founding document”, or anything like that. If I randomly told my friends I was a “rationalist”, they’d mostly just take it to mean that I didn’t believe in fairies and witchcraft.

        • Markus Ramikin says:

          No, I hear you. I would like to know why Scott chose to respond as he did, too, what makes him confident it’s about LW and EY. It seems to me he’s either mistaken, like you suggest, or he identified the target correctly but he’s unwisely taking the bait.

          > “Rationalist” is not typically referring to any particular group, “founding document”, or anything like that.

          To be fair, the article does talk about rationalism as a movement. That implies a community, a website, not just armchair philosophers in general. If not LW, is it RationalWiki? Randi’s forum? Rationalist Society of Australia? Who knows.

    • Anonymous says:

      The mention of “cognitive biases” is strong evidence that she means LW.

  4. Qiaochu Yuan says:

    Quoting the Twelve Virtues is super motte-and-bailey, though. It’s like quoting Gloria Steinem or whoever in a discussion about internet feminists.

    • Scott Alexander says:

      Well, what else can I quote? I mean, I can’t just say “the movement is totally about empiricism, trust me on this one”. 12 Virtues is a document written by Eliezer, it’s something a lot of people take seriously, and I think it was written around the same time Eliezer started writing the Sequences, as a statement of his philosophy.

      I think the equivalent would be someone saying “The problem with feminists is that they’re not interested in women’s issues” and someone being so boggled and confused that they can’t think of what to do, and so they just point out that a Gloria Steinem book mentions women’s issues like three hundred gazillion times. Like, okay, Gloria Steinem might not be the best representative of internet feminists, but what else are you supposed to do with such an accusation?

      • Frog Doe says:

        Well, you can look at the stated positions of high status people in the rationalist community on controversial issues? It might be useful to remember that to a lot of people you look more like a weird sex cult that is incidentally interested in AI and cryonics.

        • Anonymous says:

          Data point: I spend a fair amount of time around the rationalist community and it still looks like a weird sex cult that is incidentally interested in AI and cryonics.

          • nkh says:

            Wait, where does the “sex” part of “weird sex cult” come in? I mean, I can totally see the “weird cult” impression, but somehow I’ve never particularly associated sex with LW or any of the other rationalist sources I’ve come across.

          • anon1 says:

            nkh: Presumably it comes from the fact that a disproportionate number of LWers, including EY, are polyamorous.

          • nkh says:

            anon1: Ah, I see, thanks.

          • David Hunt says:

            An analogy I’d like to draw is with the silver fox breeding experiments: selecting hard on some complex trait (‘interest in rationality’) often brings along a bunch of unexpected things you’re apparently selecting on with it.

        • Zubon says:

          I don’t know how much higher status you get on Less Wrong than citing Scott, Eliezer, and Luke. I don’t know how much more controversial you get than pointing to any week’s worth of SSC posts.

      • Bugmaster says:

        No no, I think Qiaochu Yuan has a very good point.

        If you read pretty much any foundational document on feminism, you will form the impression that feminism is all about equality, reducing oppression, treating women as first-class citizens, all that good stuff. But if you look at the actual behavior of actual feminists, you will see that mostly what they do is engage in tribal warfare, ideological witch-hunting, and persecuting people who are less powerful than themselves (e.g. trans people).

        Similarly, if you look at the foundational documents of the Republican Party, you will find a lot of talk about small government, individual liberties, fiscal responsibility, etc. However, the average rank-and-file Republican ignores all that stuff in favor of trying to stop gay people from being married and copy-pasting the Bible everywhere.

        Almost Diamonds is saying that, if you look at an average self-proclaimed rationalist, you will see a supremely arrogant guy who makes all kinds of unsubstantiated claims from his armchair. You cannot refute Almost Diamonds by quoting foundational documents; you have to focus on the actual behavior of actual rationalists. And you have already done so to some extent, e.g. with Green and Luke, but that’s just two data points. Some sort of a meta-analysis of rationalist literature and online behaviors might be in order.

        • Scott Alexander says:

          How about “If you make a post defending rationalism on a rationalist blog, dozens of rationalists will suddenly show up arguing you’re not being sufficiently charitable to the person attacking them.”?

          • Anonymous says:

            That is… actually a really good example of how the rationalist community’s “boots on the ground” behave very differently from the criticism in the article. You should edit that in.

          • Anonymous says:

            This is a great example 🙂 Of course, this kind of behavior leads to certain weaknesses, but I like it a lot. Other web communities should become more like this one 🙂 Sadly, that probably won’t happen 🙁

        • Anonymous says:

          >the average rank-and-file Republican ignores all that stuff in favor of trying to stop gay people from being married and copy-pasting the Bible everywhere.

          Are you an average self-proclaimed rationalist?

        • hawkice says:

          It’s worth jumping in and pointing out that a surprising percentage of the RNC’s platform speaks to their support of “American exceptionalism”; even having read a great deal on the subject, I am still mostly confused about what it even means.

          Most “platform”-style bits of writing are about getting the ideological foot-soldiers something to get excited about. Why do they not match up with observable details of a movement? For basically the same reason that so many people are disillusioned upon achieving their dream of being a game programmer: the details of what you do every day are not as exciting as you might imagine, but if you never excite anyone you don’t have a movement. It is the ontological paradox of hypocrisies, and I think we can generally excuse it as long as people pull back the curtain on the sex-cult stuff fairly quickly (I use that as an example only because I love LW/SSC/EY, and picking on feminists or republicans for their gritty on-the-ground stuff is pretty actively distracting).

          • Samuel Skinner says:

            American Exceptionalism means we aren’t like Europe. It is a way of attacking liberals’ desire to be more European and of defending American things like gun rights, free-market capitalism, social mobility, and fast food.

            As for the platform not matching the movement: the platform has to hit every single issue and give an answer that is different from the opposing party’s, decisive enough to rally around but vague enough not to alienate any possible supporters. Items are rarely talked about at length, because that requires going into details (which may be contentious), so most are the same length.

          • peterdjones says:

            Exceptionalism CAN function very well as a way of immunizing oneself from evidence… speaking of empiricism.

        • Nick says:

          But Almost Diamonds is advocating her own philosophy over whatever she thinks the rationalist philosophy is, and Scott has shown that everything Almost Diamonds is advocating just is part of the rationalist philosophy, so she’s not actually offering the rationalists anything better, is she? And, as an exemplar of this philosophy, she’s very confused and so not living up to it well, whereas the rationalists have Gwern and Luke and Scott and all. So I don’t see how it matters what the average rationalist does in this case, because she wasn’t pointing to the average empiricist.

          As an aside: Intuitively I was sure that Scott’s disagreement with and correction of Almost Diamonds on this is totally legitimate, and so I’m really surprised that people are defending her here, esp. the analogy to feminism. Maybe it’s an object vs. meta thing. But more importantly, I had great difficulty explaining above why I disagree with you, and Scott seems genuinely confused too, so I suspect that maybe the problem here is just that we don’t have great conceptual tools on hand to explain this. So I’d be interested in a followup post by Scott on this.

      • pneumatik says:

        The answer is to ask the accuser what their evidence is. Which self-identified feminists never talk about women’s issues? Or, which self-identified rationalists is Almost Diamonds referring to? I think the fallacy here is assigning people to a group and then stating that everyone in the group acts the same. It’s the exact same type of argument that’s going on with the hashtag about gaming journalism and misogynistic death threats: in that situation there seem to be people who are concerned about gaming journalism and also people making misogynistic death threats, both using the same hashtag. It’s the same situation as “Are all members of $RELIGION terrorists?” Similarly, you can have good rationalists and bad rationalists. If Almost Diamonds can list the people and behaviors that she’s talking about in her blog post, and they’re really acting the way she says, then you can rebut her by saying those people are bad rationalists or not rationalists according to your own definition of rationalist, but you can’t keep the bad/not-rationalists from self-identifying as rationalists.

        I’ve found this concept of people being able to self-identify as whatever they want to be a helpful addition to my model of how people behave. It’s also helpful for understanding how a group can act in contradictory ways so easily.

      • kieran M says:

        If I wanted to defend Almost Diamonds, I would probably link to Eliezer talking about cryonics. Cryonics is a field where there isn’t really any empirical evidence, because it’s based on lots of assertions about what the future will be like (I know there’s some exploration of whether brains can survive freezing, but that’s really only one fly in an ointment crawling with maggots). Many rationalists, including their “leader”, have decided in favour of cryonics, essentially because of a utility calculation triggered by a thought experiment.

        That is, while rationalism (rationality, whatever the hell we want to call the movement) can and should be used to carefully consider empirical evidence and weigh it correctly, it’s also used to make arguments about the future based almost entirely on thought experiments. Basically, the whole idea of Friendly AI is based on a thought experiment.

        My feeling is that that notion is what the essay is targeting (although remarkably imprecisely!), hence the motte and bailey of pointing at the virtues.

        My steel manned version of the argument would be:

        “Rationality is clearly an important principle to apply when considering evidence. Blindly doing numbers without thinking about them is obviously going to cause you pain, and you need to make sure you are doing the right numbers! However, it seems like one aspect of the movement focuses on applying rationality towards unlikely conclusions: places where empirical evidence is thin on the ground.

        Personally I don’t think its helpful to try to apply this kind of thought to areas where we lack substantial evidence, because mistakes are simply too easy to make when there isn’t evidence to clarify our thinking.”

        Note that this argument is explicitly disagreed with by Eliezer in Science vs Bayes.

      • Oldman says:

        I agree that the 12 Virtues is probably the most reasonable document to quote. However, Rationality isn’t a movement that requires people to agree with a single document – hence I can see why Almost Diamonds might characterise the Rationality movement by what they observe.

        Evidence against Empiricism: There are a lot of discussion threads on less wrong that don’t seem to have any real world application.

        Evidence against humility: Eliezer is called arrogant really often, so regardless of whether this is a fair accusation, it is at least a common perception.

        Evidence against scholarliness: Okay, this one is harder, but I think someone (maybe Julia) recently wrote something about how Rationalism is not a superpower – saying that rationalists frequently and incorrectly dismiss experts with greater learning in favour of a ‘Rationalist’ argument.

        I absolutely agree with what Scott has written, and I do think that Almost Diamonds is incorrect. Nonetheless I think it would be better to be more charitable about where Almost Diamonds may have drawn their evidence.

      • veronica d says:

        @scott — As a somewhat outside observer I’ll say this: folks around here fall short on humility a lot. Likewise, I see plenty of people willing to speak with confidence on topics where they know little. There is a willingness to do scholarship in areas where they enjoy the message. However, there is a pretty broad rejection of the humanities, likewise the social sciences. Social justice positions are frequently based on “the dumbest thing ever said on Tumblr” instead of, for example, Julia Serano.

        But then, those are the comments, not you. Nevertheless, I can easily see how this community is very off-putting.

        • Scott Alexander says:

          Do you think we fall more short on humility than anyone else?

          • LTP says:

            Should that matter? Given that rationalists claim to be more self-aware about their reasoning and fallacies and biases, *if* they are no more humble than other groups, then they are not succeeding in their stated goals, no?

          • veronica d says:

            Actually that’s a good question.

            Averaged over similarly sized groups, and if I try to think about it objectively, then I suppose you all do very well. But then, the not-so-humble moments often hit points that are sore for me. There are many times I go, “Some humility would help here.” Likewise, there will always be that one person, like some NRx, spouting terrible stuff that just poisons everything.

            By which I mean, I still post here, so revealed preferences and all.

            (And I reiterate I’m talking about the comments. Not you. You’re awesome.)

          • suntzuanime says:

            I don’t think everyone who posts in the comments here is necessarily a member of the “rationalist movement”. This blog has a wider and more diverse reach than Less Wrong.

          • 27chaos says:

            I think yes, especially if humility is allowed to include the idea of “considering alternative possibilities” about the future, like the main page advocates.

            Here’s another example that just popped into my mind: LWers often advocate doing a retrodiction, imagining that you’re in the future and one of your ideas has failed and then looking back into the past to see what errors you’ve made. So why haven’t any of them thought to do this for AI risk? Or applied it to literally any other area?

            Somewhat similarly, CIA folks do sensitivity analyses, where they alter the assumptions their predictions were based on and see what the expected results of deviations from those assumptions might look like.

            Why don’t we ever see people talking about making changes to one’s mind that are neither centered on trivial applications of Bayes theorem nor on instances of cognitive bias? There are more skills of epistemic rationality than just these.

            If it’s so important to avoid getting stuck in a biased rut of thought, why does all the discussion on LessWrong sound alike?

          • Samuel Skinner says:

            “So why haven’t any of them thought to do this for AI risk?”

            Because the failure state for that is either we can’t program AIs or programming friendliness is easy. Since people can’t currently agree on ethics and morality the second seems unlikely (after all, it isn’t enough to solve this problem- you need to get the people making the AI to agree with you). The first is unlikely to persist indefinitely- there is a large economic/military payoff and we know human minds work.

            “If it’s so important to avoid getting stuck in a biased rut of thought, why does all the discussion on LessWrong sound alike?”

            Because they share a culture and jargon? Most talk on hobby sites sounds alike for the same reason.

      • Deiseach says:

        I think the equivalent would be someone saying “The problem with feminists is that they’re not interested in women’s issues” and someone being so boggled and confused that they can’t think of what to do, and so they just point out that a Gloria Steinem book mentions women’s issues like three hundred gazillion times

        You’ve not heard of feminist versus womanist versus mujerista, then? The whole point of which, insofar as I understand it, is that the Gloria Steinem-ilk feminists do not understand women’s issues outside a narrow concentration on things that were of interest to, and issues for, upper middle class, white, university educated, Western (predominantly North American) women.

      • Jacob Steinhardt says:

        I have to agree with Qiaochu here. I don’t think “Rationalists care about the 12 virtues” is an equivalent statement to “Feminists care about women’s issues”. The 12 virtues is in some sense written to look as good as possible; while the rationalist movement isn’t nearly as single-issue as feminism is, I think an at least more comparable statement would be something like “Rationalists care about raising the sanity waterline”. Conversely, the equivalent to “Rationalists care about the 12 virtues” for feminism would be “Feminists care about equality” or “Feminists care about human rights”. It’s the difference between appealing to a mission statement and appealing to the abstract virtues that you would like people to associate with that mission statement.

      • Spectralshift says:

        I can’t identify with the “rationalist” movement that I experience on a day-to-day basis, largely because of conflated beliefs – of which there is no mention in your blog post.

        How can empiricism be used to describe the central conflated beliefs, like AI and super-AI, existential risk (in the vein of MIRI), cryonics and so forth? It’s not that these aren’t worth arguing over, but these are the major themes in the rationalist movement. Count the current threads on LW. The Virtues were written a long time ago relative to the current movement’s posts.

        Anyone I point to the movement is going to read about creating gods, the end days, pseudo-hell and giving away all your money… unless I selectively pick a small subset of other ideas. Otherwise it’s almost literally the same pitter-patter as your average religion, but new and improved with a modern flair (and better rhetoric).

        A great deal of good content is produced by the rationalist community, this blog being a great example, but the movement is heavily conflated with some really weird stuff that stays because some very talented people use well polished rhetoric to maintain it. I’m not even claiming they are wrong, but in the context of empirical standards…?

        This is not a defense of the Almost Diamonds blog post, which I do disagree with, but I also disagree with a defense built by selecting superstars and selectively choosing memes (e.g. the “virtues” and surrounding sequences over the “basilisk” and surrounding future predictions). I believe it is a case of selection bias by both parties.

        • Scott Alexander says:

          “How can empiricism be used to describe the central conflated beliefs, like AI and super-AI, existential risk, and so forth?”

          – MIRI: “The Lessons, Errors, and Insights Of Famous AI Predictions, And What They Mean For The Future”

          – MIRI: Predicting AGI: What Can We Say When We Know So Little?

          – Stuart Armstrong: AI Timeline Predictions: Are We Getting Better?

          – Stuart Armstrong: AI Prediction Case Study: The Dartmouth Conference

          Etc.

          Yes, the value of empirical evidence changes when you’re talking about far-future things that can’t be tested, but MIRI has still spent dozens of person-hours evaluating the success of past AI predictions, trying to quantify the rate of technological advance, trying to calibrate itself, et cetera as part of their work.

          • Spectralshift says:

            Thank you for the links. I appreciate MIRI’s attempts to calibrate their predictions and see great value in this (as a principle especially). Just to clarify, my reference to MIRI was more about existential risk via AI more than a criticism of MIRI.

            My original quote was meant more about the spread of the AI memes through the rationalist movement. If anything, the empirical evidence you presented here shows that we can’t even predict short-run outcomes, never mind dismantling systems into computronium or some such (not implying MIRI claims this!). This isn’t evidence about the future of AI so much as about our ability to predict AI outcomes, which appears pretty dismal (going as far as predicting when we will be able to predict AI outcomes).

            I’ve been very impressed with your analysis of difficult (tribal) topics. Are you putting yourself outside the group, outside your knowledge base, outside your local norms and looking at the rationalist community as a whole, in the present, when you reject the criticisms?

            I ask because I would see the links you offered as evidence for Almost Diamonds’ theme as much as evidence against. There is a fundamental difference between the raw predictions on AI and, say, a meta-study of biblical authors about “the end days”… but that difference seems lost to a significant portion of the community. It’s sometimes lost on me too, given the strength of some pretty dubious claims.

          • Samuel Skinner says:

            AI risk is based on the following items:

            – you can make an intelligent computer (justification- humans exist and are intelligent)

            – that computer will not have a human value system (justification- evolutionary psychology)

            – there are a number of value systems that are amoral and so would be harmful to humanity if an AI had them (any goal system that allows optimization, but doesn’t consider humans makes it efficient to exterminate us)

            – an AI can rapidly learn, gather information and transmute those into the ability to affect the real world

            I’m not seeing anything really questionable.

          • 27chaos says:

            I think the risk of UFAI is lower than LessWrong typically assumes, because solving some kind of approximation of friendliness is necessary for AI to exist at all. An AI that interprets “keep me warm” as “light everything on fire and keep it that way for eternity” is never going to be built, because too many problems would already have come up in testing its predecessors.

          • grendelkhan says:

            @27chaos: Sometimes it bothers me that people point to a giant heap of LW writing and say that you’d understand why your question is silly if you just spent a few months chugging it, but it really does hit the common objections.

            “An AI that interprets “keep me warm” as “light everything on fire and keep it that way for eternity” is never going to be built because too many problems would have already come up in testing its predecessors.”

            There’s a long, long history of machine-learning algorithms performing fine in test but doing weird things in production because the trainers didn’t anticipate a particular situation. This is a long-running serious problem in machine learning.
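
            A minimal sketch of that failure mode (hypothetical data, not any real system): fit a model on the situations the trainers anticipated, then feed it inputs from outside that range. The in-sample error looks great; the out-of-sample error is enormous.

```python
# Train/production mismatch in miniature: the true relationship is
# quadratic, but a linear model fit on inputs from [0, 1) looks fine
# there -- and fails badly on inputs from [5, 7] it never saw.
# All data here is synthetic, purely for illustration.

def true_process(x):
    return x * x  # the "real world" the trainers didn't fully model

# Training data from the anticipated region [0, 1).
train = [(k / 100, true_process(k / 100)) for k in range(100)]

# Ordinary least squares for y = a*x + b, done by hand.
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def mse(points):
    return sum((a * x + b - y) ** 2 for x, y in points) / len(points)

test_error = mse(train)  # in-distribution: tiny
production = [(x, true_process(x)) for x in (5.0, 6.0, 7.0)]
prod_error = mse(production)  # unanticipated inputs: huge
```

            Nothing in the training metrics warns you: the model is genuinely a good fit on the distribution it was trained on.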

          • Spectralshift says:

            I’m not seeing anything really questionable.

            Logical and I’m not in disagreement. This is about evidence supporting the probability of (varying degrees of) specific claims. The logic chain you mention is broad enough to be relatively safe, but still illustrates the theme of Almost Diamonds – armchair beliefs based on little to no evidence resulting in undue confidence in those beliefs. Yes, we can argue over what constitutes evidence and I accept that it is a bit grey when talking about a very generic and open ended AI threat, but the evidence on our ability to predict is that we are not very good at it and should have very low confidence in any AI claim (trading broad beliefs for confidence).

            There is a difference between saying “AI will get better” and saying “Intelligence explosion that supersedes all human knowledge in an instant, that will discover physics unknown to us and be unstoppable”, which is uncomfortably close to the singularity meme. There is a broad range of possible outcomes, such as human intelligence being integrated with computers before we even get close to a primitive (but possibly dangerous) AI.

            Before being accused of saying that I’m building a strawman, keep in mind that the 2013 less wrong survey shows that 14.2% believe the most likely global catastrophe risk is unfriendly AI. Unfriendly AI is a very specific claim of runaway technology, so I’m not quite sure what a more general “threat of advancing technology” would rank.

            The rationalist community is pretty unique – I can point to that 14.2%, built on the hard work of many people in a thread where we steelman a weak argument. It deserves credit for what it does well. I don’t believe the AI narrative fits that pattern, which is why I used it as an example.

          • Samuel Skinner says:

            “‘Intelligence explosion that supersedes all human knowledge in an instant that will discover physics unknown to us and be unstoppable’, which is uncomfortably close to the singularity meme.”

            Which I’m not claiming. I’m also pretty sure the claim is it will discover the ability to make working nanotech, not that an AI will invent zero-point energy.

            The singularity meme is that there is a point we can’t predict past because technological change will be occurring in ways that are completely different from the past. It doesn’t require instant knowledge- the version I’ve heard is that you have computers design the chips so the time spent in development drops as they run faster until it is equal to however fast they can replace the old factories.

            “There is a broad range of possible outcomes, such as human intelligence being integrated with computers before we even get close to even a primitive (but possibly dangerous) AI. ”

            Do you mean uploads? Because that runs into similar problems (namely people not keeping human values).

            There are certainly scenarios that avoid AI takeoff. After all, unlike most end of the world disasters, it is extremely situation specific- if an AI emerged in the 1970s, the military would just have to mark the infected computers with a warning and have a soldier nearby to shoot anyone stupid enough to connect them to a network because of how primitive the net was. You can put the genie back in the bottle if you eliminate anonymity in the internet- have everything tracked, use cameras and cell phones to link it to specific people and tag anything that isn’t tied to a real person or a verified program.

            The problem is the things that can stop an AI tend to require a broad social coordination in order to pull off.

            “Before being accused of saying that I’m building a strawman, keep in mind that the 2013 less wrong survey shows that 14.2% believe the most likely global catastrophe risk is unfriendly AI. Unfriendly AI is a very specific claim of runaway technology, so I’m not quite sure what a more general “threat of advancing technology” would rank. ”

            And? How do you know they fear your version of the argument and not the more supported (but less threatening) version (I’m not a lesswrong member- I have no idea the general view over there)?

            I’ll be honest about my stance- I think it is a threat, but I don’t donate to MIRI because I think the best way to deal with the threat of an AI is simply to inform people about it. It isn’t like other issues- AI threat can be stopped by getting people not to make and release AIs on the internet (and if the AI’s developers want money you could offer a Randi prize equivalent).

          • 27chaos says:

            > “There’s a long, long history of machine-learning algorithms performing fine in test but doing weird things in production because the trainers didn’t anticipate a particular situation. This is a long-running serious problem in machine learning.”

            But this supports my point! You’re not going to successfully build a GENERAL intelligence if you’re still using processes that yield these kinds of difficulties. Solving the unpredictability of machine learning is going to be necessary to build even a stupid AI, because without solving it, broad cross-domain intelligence isn’t possible.

          • Anonymous says:

            Generality is not a binary thing. Governments, militaries, and companies will try to create something as good as they can before their competitors do. There will be a lot of pressure to create AI as quickly as possible. And as long as it does its main job mostly correctly, such bugs will not stop the development. Even today, there is no software without bugs. Self-improving AI will almost certainly have bugs in its code. Nobody does mathematical proofs for the software they write. Similarly, the unpredictability of machine learning you talk about will not be solved before the appearance of general-ish AI, as even a buggy AI would be good enough for many things.

          • Spectralshift says:

            @Samuel Skinner

            Which I’m not claiming. I’m also pretty sure the claim is it will discover the ability to make working nanotech, not that an AI will invent zero-point energy.

            Not you, but the movement in general. I’m still posting in the context of the original blog post.

            Nanotech isn’t considered particularly dangerous IIRC; I think the most supported fear was engineered pandemics. The commonly accepted argument is that an intelligence explosion would put AI so far beyond us that we can’t even guess the vector of attack.

            The unknown physics was a bit tongue in cheek due to the number of writings that essentially do that. It’s part of the singularity narrative however.

            The singularity meme is that there is a point we can’t predict past

            That’s more the proper definition than the meme. The meme is intelligence explosion and the total change/collapse of civilization. I checked to make sure I wasn’t pushing my own view by taking a look at wikipedia – as a cross section of beliefs, I think it supports my interpretation of the meme.

            Do you mean uploads? Because that runs into similar problems (namely people not keeping human values).

            Or cybernetics that extend your intelligence, a few iterative steps beyond the smartphone. That’s more to highlight that there are alternative progressions into a rapidly changing future.

            And? How do you know they fear your version of the argument and not the more supported (but less threatening) version (I’m not a lesswrong member- I have no idea the general view over there)?

            “My version” is their version; I think “global catastrophe” is a sufficient cutoff. That’s ~14%, with some unknown amount of the other ~86% believing it to be a significant but less likely risk.

          • Samuel Skinner says:

            Your complaint is there are people who are taking the unsupported and hysteric view. There isn’t much to argue about there; people do that, but we don’t know how many or what percentage of the AI movement. I don’t think global catastrophe is a sufficient cutoff for that survey; people put down socialist for their political views on less wrong when they mean social democrat or moderate interventionist.

            “Or cybernetics that extend your intelligence, a few itnerative steps beyond the smart phone.”

            I doubt that will occur; people are leery about having their brain messed with, and the benefits have to significantly outweigh just owning a handheld device. Also, the limiting factor is how well we understand intelligence and the human brain, which is also the limiting factor for constructing an AI – an AI will almost certainly be built first because code has a much shorter lead time than poking inside people’s skulls.

          • Spectralshift says:

            @Samuel Skinner


            Your complaint is there are people who are taking the unsupported and hysteric view. There isn’t much to argue about there; people do that, but we don’t know how many or what percentage of the AI movement. I don’t think global catastrophe is a sufficient cutoff for that survey; people put down socialist for their political views on less wrong when they mean social democrat or moderate interventionist.

            My assertion is that there are conflated beliefs within the rationalist movement that fit the theme of Almost Diamonds’ post (contrary to Scott’s post about the Virtues, etc.), and AI (notably unfriendly AI) is one of them. Even your broad logical argument for AI is unsupported by evidence. This is a common issue for any futurist, of course, but a lack of evidence should lead to lower confidence – instead, we have serious overconfidence in various outcomes. I also assert that the normal rationalist narrative, at least intra-group, is to make much more specific claims that should have even lower confidence.

            As for the survey, it isn’t the “hysterics” that I am referring to, it’s the specific claim. Stating that the most likely global catastrophe risk is unfriendly AI is significant unless you believe the general population also believes this. That demonstrates a minimum portion of the LW population who consider it a serious risk; a specific claim about the probability and minimum scope of unfriendly AI.

            “Hysteric” is a complaint about the narrative and rhetoric used within the rationalist community, which is also an issue on these topics, but not related as much to Almost Diamonds’ post.


            I doubt that will occur; people are leery about having their brain messed with and the benefits have to significantly outweigh just owning a handheld device. Also the limiting factor is how well we understand intelligence and the human brain which is also the limiting factor for constructing an AI- an AI will almost certainly be built first because code has a lot shorter lead time than poking inside peoples skulls.

            (Devil’s advocate:) I doubt AI will occur; people don’t trust machines and will likely self-limit, but the growing reliance on technology will lead to cybernetics (retaining human control) – which will progressively become less invasive. All of this is already occurring.

            I think this fits the armchair criticism in the OP though.

          • Samuel Skinner says:

            “Even your broad logical argument for AI is unsupported by evidence. ”

            Like what? That programming human morality is hard? We have evidence (no one can agree on a single standard). Most of the assertions are backed by evidence.

            ” Stating that the most likely global catastrophe risk is unfriendly AI is significant unless you believe the general population also believes this. ”

            Why? It could be that all the other possible global catastrophes are viewed as having a much smaller risk – remember, the wording was
            “which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?”

            and several of the options were incredibly questionable in that regard (economic/political collapse and a natural pandemic are not going to kill that many people).

            ” I doubt AI will occcur, people don’t trust machines and will likely self limit”

            People are already handing over control to algorithms and following their outputs. People are willing to trust machines if it makes them money; for this trend to change we need people to view AI as different from simpler mechanisms which is unlikely given incremental progress.

            “but the growing reliance on technology will lead to cybernetics (retaining human control) – which will progressively become less invasive. All of this is already occurring.”

            Where? Where are people getting brain implants to increase their intelligence? That is not occurring at all.

            “I think this fits the armchair criticism in the OP though.”

            It is only “armchair” because explaining every single part of AI threat exhaustively is really time consuming and has been done before.

          • peterdjones says:

            @Samuel
            “– you can make an intelligent computer (justification- humans exist and are intelligent)
            – that computer will not have a human value system (justification- evolutionary psychology)
            – there are a number of value systems that are amoral and so would be harmful to humanity if an AI had them (any goal system that allows optimization, but doesn’t consider humans makes it efficient to exterminate us)”

            Motte and Bailey. An AI will almost certainly lack human values… after all, a pocket calculator also lacks them… but to be dangerous it also needs to have inhuman values, and to pursue them incorrigibly, and to be adaptive about how it pursues them. In particular, it is not clear that it is even possible for a self-modifying AI to have stable goals.

    • vV_Vv says:

      My thoughts exactly.

      There seems to be a huge disconnect between what Yudkowsky preaches in “Twelve virtues” and what he actually practices:

      – Scholarship, when lots of the material in the Sequences is poorly researched, often ignores or misrepresents opposing views, or “reinvents the square wheel”.

      – Humility, from a guy who makes grandiose claims about his own abilities without evidence of actual proportionate achievements, to the point of claiming that physicists who don’t accept the many-worlds interpretation of quantum mechanics as obviously true are “nuts”, or arguing that he could do better than Enrico Fermi at estimating the probability of nuclear fission chain reactions to be possible.

      – Empiricism, from a guy who dabbles with pseudo-scientific stuff like cryonics, DIY ketogenic diets, etc. Even his research in AI safety, allegedly his field of expertise, used to be quite crackpottish, until recent years when Muehlhauser managed to back him with people who can actually do science.

      • drethelin says:

        How the fuck is trying a diet yourself and recording the results NOT empiricism? Empiricism =/= worshipping what the FDA publishes as daily diet requirements. Empiricism is fundamentally the principle of “Try it and see what happens”

        • vV_Vv says:

          There might be health risks associated with a DIY keto diet, and the chances of getting meaningful results from an experiment with a sample size of 1, even if you are only interested in the effects on yourself, are probably quite small unless the effect magnitude is high.
          Various confounders, or even random noise, that would approximately cancel out in a large sample might end up dominating the result in a sample-size-one experiment.
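
          To put toy numbers on this (the effect size and noise level are invented purely for illustration): suppose a diet truly causes 1 kg of loss, but day-to-day confounders add noise with a standard deviation of 3 kg.

```python
# Why n=1 self-experiments are fragile: with a true effect of -1 kg
# and noise of SD 3 kg, a lone self-experimenter often sees the wrong
# sign, while a 100-person average almost never does.
# All numbers are invented for illustration.

import random

random.seed(42)

TRUE_EFFECT = -1.0  # kg (negative = weight lost)
NOISE_SD = 3.0      # kg of confounders/noise per measurement

def one_measurement():
    return TRUE_EFFECT + random.gauss(0, NOISE_SD)

# Fraction of lone experimenters who would conclude the diet causes GAIN.
trials = 10_000
wrong_sign_n1 = sum(one_measurement() > 0 for _ in range(trials)) / trials

# Versus the mean of a 100-person sample, repeated 200 times.
def group_mean(n=100):
    return sum(one_measurement() for _ in range(n)) / n

wrong_sign_n100 = sum(group_mean() > 0 for _ in range(200)) / 200
```

          With these numbers, roughly a third of single measurements point the wrong way, while the 100-person average essentially never does.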

          • Vulture says:

            Tell that to gwern.

          • Ilya Shpitser says:

            I actually did tell that to gwern. He’s generally a sensible guy, but he wasn’t really interested in hearing me out on this. His reply, if I remember right, is that his conclusions are generally valid but non-transferable to others.

            Also gwern’s “style of statistics kung fu” is much more scruffy than neat.

          • Scientists have to actually decide what things to run expensive controlled experiments on. And they can’t always run expensive controlled experiments to figure out which other expensive controlled experiments to run. In most cases they decide what to test based on pure thought experiments — a sample size of 0. In other cases they do a quick, informal self-study. This is not, in itself, ‘pseudoscience’; Tversky and Kahneman frequently figured out which cognitive biases to examine by running a 0-person experiment, followed by a 1-person or 2-person one.

        • Anonymous says:

          If you tried a diet which nutrition scientists recommend based on a number of controlled trials as a method of losing weight, and it didn’t make you lose weight, and you concluded that the nutrition science is a scam, then your reasoning would be wrong.

          However, it would be still infinitely saner than if you decided nutrition science was a scam because the diet was unpleasant so you stopped it.

      • suntzuanime says:

        Dabbling with stuff, rather than accepting the authority of people who call it “pseudoscience” seems perfectly empiricist. This applies more to the ketogenic diets thing than cryonics, which is not a field very amenable to trial and error at this point.

        • vV_Vv says:

          Dabbling with stuff, rather than accepting the authority of people who call it “pseudoscience” seems perfectly empiricist.

          Right. Let’s try homoeopathy! 🙂

          Ok, that was hyperbole; keto diets aren’t that implausible, but it is still the case that running after every fad that is sufficiently high-status within your social group, without much evidence to back it up, is not the proper way of doing empiricism. It’s more like “cargo cult” empiricism, where you reproduce superficial aspects of scientific research without the necessary rigor and theoretical understanding.

          As for cryonics, the fact that it is not amenable to trial and error, combined with the weakness of the speculative arguments for it, is the reason why the whole practice is pseudoscientific.

          • suntzuanime says:

            Mad scientist self-experimentation is certainly no more cargo cult than “official” science’s worship of p<0.05 and oh by the way if you try to replicate an experiment you're Hitler.

          • vV_Vv says:

            It’s true that some scientific fields have replication issues, but this is not a blanket justification for doing amateur science, especially when you are playing with your own health.

          • suntzuanime says:

            “I am a free American” is all the justification you need for doing amateur science playing with your own health. It’s other people’s health you need justification for playing with.

          • vV_Vv says:

            I’m not arguing that you should be legally prevented from experimenting with your own health, I’m arguing that it is not necessarily a scientific and rational endeavour.

          • suntzuanime says:

            Whether it’s scientific doesn’t depend on the dangers involved. Whether it’s rational depends on whether you’re talking epistemic or instrumental rationality.

            Eliezer’s handwaving equivocation between epistemic and instrumental rationality is his greatest sin, from my perspective.

          • houseboatonstyx says:

            @vV_Vv

            -1 for rhetoric

            If experimenting on oneself to see what works for you is using too small a sample, then the more people who are trying this experiment and reporting their results, the larger the sample. If they are in your social group, then their results are more likely to apply to your case because less confounded (if I’ve got ‘confounded’ right).

      • 27chaos says:

        See also: http://lesswrong.com/user/Eliezer_Yudkowsky/comments/

        Compare the number of times he admits he’s wrong to the number of times he dismisses someone else’s views as wrong without actually making an argument or explanation.

        I do agree with the people saying it’s fine to try keto or whatever. But Yudkowsky is not a humble prophet.

        • Criticizing others without explaining why they’re wrong seems like a poor use of time, and admitting you’re wrong a lot less often than you are wrong seems condemnation-worthy. On the other hand, I’m not seeing why the same is true for ‘admitting you’re wrong a lot less often than you call someone else wrong’. If the ideally humble person makes noteworthy mistakes a lot less often than they see others making noteworthy mistakes, is the humble thing to do to lie, or to be selectively quiet? The sin you’re calling out sounds more like what rationalists call ‘modesty’ than what they call ‘humility’ (and I think the distinction is important).

          • 27chaos says:

            I’m working from the prior that Eliezer is a human being whose predictions are of near average quality.

            He generally uses his influence to criticize others’ noninfluential ideas, when a better use of it would be to have others criticize his own highly influential ideas. Entertaining other points of view and then merging them into your own is a crucial skill of rationality, but instead he actually discourages people from voicing their opinions because he so freely criticizes others.

            Even if my prior is incorrect, I’m quite certain that he could have found something of value in others’ criticisms and learned from them if he was willing to do so. Geniuses are wrong and partially wrong all the time, even if they’re very rational. If you’re not finding and admitting things you’re wrong about a few dozen times a day, you’re not trying hard enough to learn things. The fact he can look through the forums long enough to see all those mistakes of others but almost none of his own is thus a warning sign.

            Perhaps he does learn from his mistakes all the time. But if so, it’d sure be nice if he’d start saying so explicitly whenever the opportunity arises. I like when people lead by example, and don’t see any good reason he’s not already doing this.

      • Anonymous says:

        You should be humble with respect to the fact that you may not know the truth. You do not have to be humble with respect to other people. You can claim that you are unsure about many things and at the same time claim that most other people are even worse and you would still be humble in this sense. You do not have to respect other people to be humble.

    • macrojams says:

      I think it is less a motte within a bailey and more an intersection over all the sets, which is of course much more restricted in scope. The 12 virtues are as good a candidate as any for something anyone in the rationalist movement would endorse.

    • HmlssNSpc says:

      That’s not the argument Almost Diamonds is making though. She’s not saying “This is what rationalists claim to believe, and here are some examples of them failing to follow their own guidelines.” She’s poorly criticizing the movement’s ideals, not the behavior of its followers.

    • moridinamael says:

      One could argue that Almost Diamonds’ post was a case of motte-and-bailey in the first place.

      “I can’t be a member of this group because many members of the group have failings X, Y and Z.”

      But every person has failings! You can’t hold the movement responsible for its members’ failure to live up to its tenets. Either point out that a lot of rationalists are bad at following their own advice, or make an honest criticism of the movement, but don’t pretend you’re criticizing the movement when you’re really just criticizing loudmouths.

      Incidentally – if you try to follow the Twelve Virtues, you’re going to find it very hard to post anything, ever. I probably delete half of the posts I start writing after I think of flaws in my argument, or my inability to back up what I’m saying, or a lack of certainty. Contrast with somebody who doesn’t really care if their statements are correct or well-supported, who just posts whatever comes to mind. This type of person is going to have a much louder presence – unfortunately, since their contributions are almost certainly less valuable. Thus, online discourse becomes dominated not by the best examples of the movement, but rather, often by the worst. There are fewer than a handful of LW posters who post both frequently and with high quality. Most post either frequently or with high quality.

      This is a phenomenon that generalizes way beyond LW. I’m sure the loudest feminists aren’t the most thoughtful ones. Etc.

    • peterdjones says:

      A lot of movements have a motte-and-bailey structure, where the Founders put forward a carefully crafted philosophy, which the Followers then exaggerate and cherry-pick.

      It doesn’t follow from that that it is always a fallacy to attack the Followers’ version. The Founders may have the theoretically correct version, but the Followers have the practical, impactful version … relevant to questions like “what are the likely real-world effects of this movement” and “what is likely to happen to someone who becomes a rank-and-file follower of this movement”.

  5. Steve Sailer says:

    Thanks. I’ve been thinking about this lately, and it appears to me that people in the rationalist movement tend to have outstanding working memories that allow them to juggle variables in a highly abstract fashion. It’s an admirable skill, one that I wish I had.

    In contrast, I’m not strong at thinking about x and y and z in the abstract, but I’m good at accessing a large number of real world examples and drawing analogies among them. And, for whatever reason, I’m not terribly interested in thinking about toy analogies involving small, uncontroversial examples. Instead, I’m drawn to the major (and therefore controversial) topics of 21st Century intellectual discourse, such as race, gender, sexual orientation, and whatever else Scott periodically announces is banned from his blog, but which he, himself, immediately returns to (see Scott’s next posting for an example) … for the perfectly understandable reasons, that race, gender, etc. are, currently, the Big Leagues of Thinking, and what else is more interesting?

    • Kiya says:

      My understanding is that discussion of race and gender is banned only from the SSC open threads — comments about race and gender are welcome on posts to which they are relevant. If you want to talk about race and gender on open threads, Ozy does run simultaneous race and gender open threads.

      Do you really think that the Big Leagues of Thinking is made entirely of sociology? Those poor folks in the hard sciences are doomed to work on inconsequential problems irrelevant to this modern age?

    • Vaniver says:

      whatever else Scott periodically announces is banned from his blog, but which he, himself, immediately returns to (see Scott’s next posting for an example)

      I think you’re talking about the open threads; I don’t recall Scott saying that he won’t talk about those issues himself (check out paragraph six of his top posts list).

      As someone who very much appreciates your perspective on those topics, I’ve got to say I endorse Scott’s ban of race and gender in the open threads because it reduces low-info and low-planning discussions of race and gender, which I haven’t seen do any good, while allowing high-info and high-planning discussions of race and gender to flourish elsewhere.

    • Public health, tech, and institution design seem bigger-league, so long as you can keep the runaway Civil Rights engine directed elsewhere. Institution design suffers from lack of empiricism though (you get a bunch of utopia stories and then millions die if you ever get your chance to enact them). Arnold Kling and others make me think there’s some value to describing possibilities, even though you’re stuck in politically-reachable local optima.

    • zn289 says:

      “for the perfectly understandable reasons, that race, gender, etc. are, currently, the Big Leagues of Thinking, and what else is more interesting?”

      These topics are widely discussed. Same for the lives of celebrities. Would you say the lives of celebrities are the big leagues of thinking?

      Race/gender/etc. are interesting for all the wrong reasons, just like the lives of celebrities.

    • Scott Alexander says:

      I recently learned about the idea of idiosyncrasy credits – that is, if you have one or two opinions that differ from the mainstream, then you are a reasonable person who is a little nonconformist, the mainstream accepts you as basically okay, and you have lots of room to push your nonconformist position. If you have ten or twenty opinions that differ from the mainstream, then the group considers you a total crackpot and you have no ability to influence the group in favor of any of them.

      Race and gender related issues seem to me to burn idiosyncrasy credits at about ten times the rate of any other issue. I consider my idiosyncrasy credits a valuable resource, so I avoid them except when absolutely necessary and make a conscious effort to talk about other things as much as possible to build my credits back up.

      Having random people talk about race and gender in the comments of my blog in a way people can connect to me still burns up credits (“Scott’s blog? That’s a place where people talk about X! You shouldn’t go there and you should dislike him personally!”) but I don’t even get to express my own opinion and half the time I’m losing idiosyncrasy credits for a position I disagree with.

      I let Ozy handle it because their social justice credentials are impeccable so they have more idiosyncrasy credits than they could ever need and can take the hit in an “Only Nixon can go to China” way.

      But I also have this feeling that talk about race and gender is an addictive memetic toxin. Draw a scale where on one side you have cellular messaging systems (incredibly boring and complex biochemistry that’s so painful to think about that people generally avoid it even though understanding it might cure cancer) and on the other side you have celebrity sex scandals (almost impossible to get people to stop talking about) and race/gender is way toward the sex scandal side of the chart. I think that indulging that kind of thing is dangerous, sort of like only eating candy. I agree it can be useful, but I’d rather not get my brain into the habit.

  6. jaimeastorga2000 says:

    I wouldn’t call “Twelve Virtues” the founding document of LessWrong rationality or its most important mission statement. It was a really bad attempt by Eliezer to crib the writing style of Eastern philosophy and create a document that would go viral, but which resulted in an incredibly boring and rather obscure piece.

    • Scott Alexander says:

      Well, *I* like it! I was trying to calligraphy it and turn it into an illuminated manuscript a while ago, but I got bored around the third virtue and now it’s sitting in a drawer.

      • It’s a pretty document, mainly thanks to the way each paragraph’s ending leads in to the next virtue.

      • Markus Ramikin says:

        For what it’s worth, I also think it’s one of the least valuable things I’ve read by EY.

      • Andrew says:

        I remember that the Twelve Virtues used to be printed out and handed out back before CFAR started doing that with HPMOR.

      • Matthew says:

        Just imagine if persistence or patience had been the 13th virtue.

      • Edan Maor says:

        I’ll second what a lot of people are saying, in that I really disliked the “Twelve Virtues”.

        More importantly, in my 3 years of being around LW and the rationality community, this is one of the few times I’ve even seen it mentioned. That’s not to say the ideas aren’t taken seriously by the community – just the document itself.

    • suntzuanime says:

      I didn’t think it was so bad, it was a powerpoint version of his whole philosophy, and that’s a useful thing to have around. The flowery language he used is kind of corny, but you’ll be happier if you learn to appreciate the corny things in this life.

      • jaimeastorga2000 says:

        I didn’t think it was so bad, it was a powerpoint version of his whole philosophy, and that’s a useful thing to have around.

        What a coincidence; I hate powerpoint, too!

    • Bugmaster says:

      FWIW I pretty much agree. While the ideas presented in the document are solid, the pretentious style detracts from its overall effectiveness.

    • Vulture says:

      I also think that the 12 Virtues does pretty well at the spiritual language thing :).

    • Aris Katsaris says:

      I share your low opinion of “Twelve Virtues”. The paragraphs are too long, the phrasing not quite memorable enough — the numbering of the virtues (even though I understand the end of each paragraph is designed to flow into the naming of the next ‘virtue’) is largely meaningless…

      So all in all, I believe “Twelve Virtues” to be a failed effort; could have been written better.

    • veronica d says:

      Agreed. Much of Yudkowsky’s writing has that pretentious “I’m trying to sound deep” quality. It works from time to time, but in the “Twelve Virtues” it just seemed silly.

    • rrb says:

      It is quoted fairly often isn’t it? I like how you nested “obscure” into “bad”, “boring”, and “cribbing Eastern philosophy.” It’s not obscure, and it’s like you thought putting “obscure” into a list of more plausible bad things about it would make it more convincing.

      • rrb says:

        just realized you meant something completely different by “obscure” than I thought you did–I thought you meant “not widely read,” and were arguing it was a bad example of rationalist values because nobody reads it.

  7. zz says:

    Well, in theory, we value scholarship, and we certainly do it better than anywhere else I know of on the internet, but still not enough. Scholarship is mentally taxing, time-consuming, and often replaces the simple answers of your naive model with eight thousand words. To be sure, it’s worth it (it’s often the fastest way of improving your model, and improved model > not-improved model, even when it doesn’t give you easy answers anymore), but “read the damn textbook” is a boring and slow solution, so even though it’s often the best solution, it’s far from the norm it should be.

    • Scott Alexander says:

      “Better than anywhere else I know of on the internet but still not enough” is enough for me to want to respond to a hit piece saying we don’t do it.

      I agree everyone needs to practice every rationalist virtue more all the time. CONSTANT VIGILANCE!

    • Anonymous says:

      I would say that Mathoverflow and certain Stackexchange sites are even better, however, they are purposefully restricted to domain experts only.

  8. Ilya Shpitser says:

    “then performed Bayesian analysis on it”

    He has? How much of his analysis is Bayesian? From what I have seen of it, it is not Bayesian at all.

  9. Keller says:

    There’s a point that I was surprised you didn’t raise. Slightly to my surprise, Almost Diamonds didn’t also make a false accusation of the LW crowd being disproportionately cis. This says much about my low expectations for the post, but it also forces her to answer the question: why white but not straight and cis? I think the answer is better explained by ‘the group is filled with CS/math people’ (I rely solely on my own suspicion and anecdata that these people are disproportionately not straight and cis) than ‘the group is really individualistic, and so oppressed people don’t want to join’.

    • Anonymous says:

      IIRC, LW surveys indicate that there are quite a lot (compared to general population) of trans folks on Lesswrong.

    • suntzuanime says:

      Hypothesis: racial minorities are part of natural oppressed groups – they are generally the children of racial minorities, grow up in social networks full of racial minorities, and so forth. To them, individualism seems like a disingenuous strategy to ignore the oppression of their group. Sexuality and gender minorities, however, face a different dynamic. They are generally the children of straight cis people and grow up in social networks full of straight cis people. For them, individualism is a survival strategy, at least until they can get free and join the Queer Community.

      This would possibly account for differing views of individualism between racial minorities and sexuality/gender minorities, which could explain why individualist groups like, let’s stipulate, the rationalist community would tend to be both white and queer.

      There’s also the issue that “taking weird ideas seriously” is one of the things that makes people wary of the rationalist community, and if your own identity is considered a weird idea and you’re frustrated by people not taking it seriously, you may not be as put off by that.

      This hypothesis predicts that as trans acceptance becomes more normal, the rationalist community will become less trans. It also predicts that since gay acceptance is already reasonably normal, we should see a much stronger effect among trans people than among gay people.

      • ozymandias says:

        The rationalist community is approximately twenty times (!) more trans than the general population and only twice as homosexual.

        We also have more nonbinary trans people than binary trans people, which is the reverse of most trans groups IME.

        • Anonymous says:

          Where do you get your baseline numbers? Might it just be the age of LW?

          nonbinary is almost certainly something really different about LW. probably transhumanism.

        • Steve Sailer says:

          Transgenderism and transhumanism correlate to some degree. Perhaps it has something to do with dissatisfaction with the human condition.

  10. mugasofer says:

    I noticed the “Twelve Virtues” thing too, but I still think it stands on its own; Scott refutes each point with a Twelve Virtues quote and several observations about people’s behaviour.

    For example, “Humility” isn’t just a virtue, it’s also demonstrated by all those listing their mistakes and the fact that the entire movement calls themselves “aspiring rationalists” to remind everyone that they’re not actually there yet, just aspiring.

  11. But in no case does the debate ever resemble Almost Diamonds’ naive conception of some people thinking they have to study the world and other people sitting and speculating in armchairs and playing at self-improvement because they don’t want to get their hands dirty in the real world.

    This sounds like the debate between Plato and Aristotle. You need to be an especially extreme variety of rationalist in order to think that analyzing sensory information is actively harmful to human inquiry.

  12. lmm says:

    Empiricism demands looking at the community as it is rather than its founding documents. And I think there is a case that you and Gwern are exceptions, that there are an awful lot of people here and on LW (and I don’t except myself) who do too much armchair philosophising, who make gross generalisations about fields they haven’t actually studied, and who sound awfully arrogant about it.

    There is room for the community to do a better job at holding its members to account. I’m not at all convinced that membership in this community as it exists is particularly conducive to winning.

    • Thomas says:

      Upvote.

    • peterdjones says:

      Cluefullness demands that you be clear about what questions you are trying to answer before deciding whether to judge a movement by its documents or community.

      • houseboatonstyx says:

        The distinction shows up well in discussions by lay Catholics about whether “the Church” means “what the documents and the Bishops say” or “what we laymen are actually doing.”

  13. maxikov says:

    > And in accordance with this, I will put the rationalist movement up, mano a mano, against any other movement on the entire Internet in terms of the quality of scholarship and empiricism.

    I probably agree but:

    > Life is not graded on a curve.

    And to be fair, LW core sequences could benefit from more citations. It is also the case that they’re sometimes a bit sloppy with terminology, especially philosophical, which makes it seem like EY has been reinventing the wheel at many points. LW’s enthusiasm about cryonics also goes rather against the consensus among biologists and doctors.

    To be even more fair, widely recognized scholars do that too. For example, Popper’s inventing “essentialism” instead of “realism”.

    • Izaak Weiss says:

      That’s why I no longer refer people to the core sequences when I want to introduce them to rationality.

      I send them here.

      • Anonymous says:

        SSC is more mainstream-sounding and less geeky than LW. If your friends aren’t very geeky, SSC is probably a better starting point for them.

  14. Luke Muehlhauser says:

    For the record, I think your writings are a better example of scholarship than mine are. I cite lots of sources, sure, but back when I was writing LessWrong posts I was also less careful about examining and critiquing the merits of every study I cited in support of major points. I’ve become a lot more skeptical of psychological research, and research in general, since then.

    I should also say that while epistemic humility is a proclaimed virtue of LessWrong-influenced rationalists, I don’t think it’s one we’re particularly good at as a group. Part of this seems to be a founder effect (in the memetic sense not the genetic sense, obviously). That said, I’d be surprised if the top 20% of LessWrongers were worse on this score than the top 20% of thinkers associated with, say, the skeptics movement.

    • Vulture says:

      Well, at least stereotypically the skeptics movement is almost literally the worst on that dimension, so not sure how much that would say.

  15. Anonymous says:

    Whatever happened to the Determinist Catholic Puerto Ricans?

  16. Wes says:

    I actually got into a short FB argument with Stephanie Zvan about this. She refused to clearly state the target(s) of her criticism, but it appears to be some combination of Less Wrong and small groups of self-described rationalists that she’s encountered in atheist/skeptic circles. Her attempts to justify her piece made about as much sense as the original piece, which is to say, not at all.

    • wulkywilkenson says:

      Is this exchange public? If I’m reading you correctly, she never actually named Less Wrong, or anyone associated with it?

  17. Anonymous says:

    “Descartes-style rationalism is complicated, but involves the claim that certain concepts are known prior to experience.”

    I realize that this post wasn’t really about early modern rationalism, but the above quote represents a pretty confusing conflation of separate rationalist theses (what does it mean to know a concept?), which one could call “psychological rationalism” and “epistemic rationalism”.

    Psychological rationalism is the view that some of our concepts are not derived from experience; we possess them innately. As such, psychological rationalism represents the rejection of the Lockean blank slate.

    Epistemic rationalism is the view that some of our (substantive) judgments are justified or known not by experience.* That is, some of our (substantive) judgments are justified a priori. The “substantive” hedge is there because many empiricists accept this of areas like math but claim that such areas consist only of relations between ideas or analyticities.

  18. Pepe says:

    The best textbook on every subject link doesn’t work.

  19. wulkywilkenson says:

    I don’t see much evidence that the Almost Diamonds post was even referring to the Less Wrong crowd. Not that I have a better candidate for who she *was* referring to, but I’ll throw down a 20 on the author never having heard of Less Wrong.

    • BenSix says:

      But if this is the case she is at fault for the confusion that arises. I think rationalism can encourage epistemic overconfidence but if one is going to accuse rationalists of demonstrating this one should identify the culprits.

      Indeed, it is almost as if it is a good idea to specify the targets of one’s criticisms.

    • Earnest Peer says:

      A lot of people from the skeptics/atheists/etc.-crowd also call themselves rationalists, and (by virtue of being an even more heterogenous bunch than the LW-style rationalists) are fairly easy to criticize, especially if you’re being vague about your target.

      • Anonymous says:

        Could you give examples of skeptics or atheists who call themselves rationalists without meaning LW?

        • Irenist says:

          “Rationalist” and “Atheist” seem to be used as nearly synonymous by some Australian atheist groups. I think there’s a group in Italy and another in India that does the same thing? I googled “rationalist atheist,” and the first few pages of results didn’t seem very LW related.

          • Anonymous says:

            What I really meant was: examples of skeptics/atheists who call themselves rationalists to contrast themselves from others, just as Zvan calls herself not a rationalist for contrast. So the examples of people using the words interchangeably don’t help. I guess it is not clear what Peer meant, but you can’t address that.

        • syllogism says:

          Well there’s the Rational Wiki community…Who are highly critical of EY and LW.

  20. Eli says:

    Also, can I just mention that one of the commenters on that blog says that the problems with the rationalist movement are a lot like the problems with frequentist statistics, and what would really help them is if they investigated Bayesianism? I swear I am not joking. I swear this is a thing that happened.

    Yeah. This is a thing that happened. Our opponent has attacked us by saying we’re not sufficiently us.

    In terms of one group’s social status as the normative clique of the bunch… this is called winning.

  21. Paul says:

    Love the article, but the link to ‘the best textbooks on every subject’ is broken.

  22. Anonymous says:

    More people who took the 2013 LW survey identified as socialists than libertarians. Which makes me wonder, where does the “rationalists are libertarians” meme come from?

    • Scott Alexander says:

      If the general public is 5% libertarians, and we’re 25% libertarians, then we are disproportionately libertarian even though we might be 75% non-libertarian. This criticism seems fair to me.

      (saying “THEY’RE REACTIONARIES!” when 2% are reactionaries seems a little more deliberately malicious, though I’m not sure why)

      • drethelin says:

        Reminds me of the problem with women speaking up. If more than 30 percent of the talking is done by women, people feel like WOMEN ARE TALKING TOO MUCH

        • Jiro says:

          Perhaps women communicate more poorly than men, and when people say that women are talking too much they are basing their impression on the proportion of uncommunicative female speech, which is higher than the proportion of all female speech.

      • Steve Sailer says:

        The Less Wrong movement always reminds me of Heinlein’s “The Moon Is a Harsh Mistress” — polyamory, libertarianism, AI, etc.

    • Alex Godofsky says:

      More people who took the 2013 LW survey identified as socialists than libertarians.

      This is another one of those things that just makes me deeply skeptical of LW.

      • David Hunt says:

        Elaborate?

        • Alex Godofsky says:

          My prior against socialism is very, very strong, and the fact that people who have ostensibly worked really hard at eliminating error still believe it leads me to doubt the effectiveness of the error-elimination.

          • David Hunt says:

            I identify with “social democrat” for lack of a better term, because I support liberal social policies and because I haven’t been given reason to believe that libertarian policies will fix wealth issues or unhealthy markets and solve coordination problems.

            There’s not a ton of discussion that I’ve run into on libertarianism on LW; it may simply be the case that I am ignorant of the important arguments and they will blow my mind; it’s hard for me to know that since I haven’t seen them.

            I’m not sure how typical that is, and I’m also not sure that that fits within someone’s “socialist” centroid or not.

          • Alex Godofsky says:

            My priors against the policies typically supported by self-styled social democrats* are not nearly as strong as against socialism, but are still strong. To the degree that the LW-libertarians are more ‘ideological’ than ‘pragmatic’ libertarians, this would also tend to weaken my faith in LW’s error-correcting techniques.

            I could provide a theory of why I think LWers have erroneous beliefs, but I think the genre “explain through pop sociology why your opponents disagree with you” is patronizing and not useful. I’ll merely say that I think I have very good reasons for believing what I do, and in this case I think disagreement is more likely to reflect error than surpassing insight.

            *I’m not going to try to guess which policies you, individually, support or do not support and preemptively declare whether I agree or disagree with them

          • Anonymous says:

            Alex, you may be interested to know that the definition of “socialist” for the purpose of the survey is “social democrat.” But probably not.

          • David Hunt says:

            Alex and I were just discussing that off-thread and came to that conclusion.

            Also, he shared with me some useful links, huzzah!

          • Samuel Skinner says:

            David Hunt
            “I identify with “social democrat” for lack of a better term, because I support liberal social policies and because I haven’t been given reason to believe that libertarian policies will fix wealth issues or unhealthy markets and solve coordination problems.”

            That isn’t social democracy. That is a mainstream liberal/conservative position. Social democracy generally refers to the establishment of a social welfare state.

            I have a feeling more people identify with mainstream opinions than are listed, but pick the more extreme labels because they don’t want to identify with the parties that embody those labels because they tend to come with additional baggage.

    • Illuminati Initiate says:

      This comment was from me by the way, don’t know why it made me anonymous. Also apparently I fail at block quoting.

      In an alternate universe I could easily see rationalist-types being associated with socialism, for the reasons Scott mentioned in his post on Red Plenty. I do not intend that as a criticism of either rationalism or socialism.

      • Jaskologist says:

        I’m not sure that’s an alternate universe so much as it is the situation 100 years ago.

        • Scott Alexander says:

          I agree that it seems that the most mathy and sciency people were socialist a century ago and libertarian today. Why do you think there was such a dramatic change?

          (also of note, there was a lot of overlap between socialists and eugenicists a hundred years ago, but now I feel like socialists are even more likely to run away screaming from the idea than anyone else, and I’ve at least seen hit pieces that say libertarians are more likely to be favorable to it. This suggests there’s a single demographic that switched from socialism to libertarianism sometime)

          • Alex Godofsky says:

            Why do you think there was such a dramatic change?

            We tried socialism and it didn’t work.

          • Protagoras says:

            In the early 20th century, there seem to have been a number of economists who were impressed with the productivity of wartime economies, which generally centralized economic planning. They seem to have thought that if central planning can do such a good job of producing weapons in wartime, despite the best efforts of enemies to disrupt production, and the loss of workers to the military, it ought to be able to do even better at producing what people need in peacetime. Subsequent evidence suggested otherwise. While people sadly change their beliefs for lots of reasons having nothing to do with evidence, I suspect that the changing evidence played some role here.

          • Jaskologist says:

            I tend to agree with Alex on why the mathematically-minded abandoned socialism/marxism; they tried and died.

            More interesting to me is why they turned to it in the first place, because that probably illuminates some ways that people like me are liable to go wrong. One suspicion: engineers liked socialism for the same reason philosophers like Plato’s Republic; the central thesis is that they should be the ones in charge.

          • Matthew says:

            Wow. I’ve never seen that before. I owe you one hilarious link, Jask.

  23. Princess Stargirl says:

    As of now, 9/40 of the posts in “Discussion” on Less Wrong are about AI risks. To most people this is going to sound like fanciful speculation at best. And at worst it looks like a doomsday cult.

    It’s almost impossible for a sub-150-IQ person to seriously evaluate the risk of imminent AI themselves, unless they are willing to spend a lot of time and effort. Surely one would need to learn quite a lot of machine learning before they could evaluate the field. Less Wrong’s arguments that a human-level AI cannot be expected to stay near human level for long are strong. But how is a person to evaluate when human-level AI is actually going to be developed? (There are studies of AI researchers, but can those be trusted?)

    • Princess Stargirl says:

      (arguably its 10/40)

    • Anonymous says:

      >It’s almost impossible for a sub-150-IQ person to seriously evaluate the risk of imminent AI themselves, unless they are willing to spend a lot of time and effort.

      what

      • Princess Stargirl says:

        How are you supposed to evaluate the progress of machine learning research without learning the field? If you can’t even implement basic algorithms, I question how well you can judge which experts are worth listening to. (And even if you do understand the field, prediction is hard.) Do you think most people find machine learning easy to understand?

        (I didn’t say it was impossible for most people to get. Just a lot of work.)

        • call_me_aka says:

          150 is a very high number. 99.9th-percentile high, to be precise. That’s ridiculous. Anyone who’s decently adept at symbolic reasoning can wrap their minds around basic machine learning algorithms, which is a lot of people. Cf. the fact that calculus is taught in high schools outside the US.

          • Matthew says:

            Calculus I and II are taught in high schools inside the US, too, at least in middle-class public schools and in private schools. Possibly not in underfunded schools in poor districts.

          • call_me_aka says:

            That surprises me. Is it elective? I remember being confused my freshman year by how many people at my (very good) college were lining up to take Calc I. Somehow it never occurred to me to ask.

          • suntzuanime says:

            In most US high schools, there are multiple math tracks that you can take depending on ability. Schools will offer calculus in one of their math tracks if they expect to have a reasonable number of students with the ability to handle it.

            I have had personal experience with three different high schools in the US – two of them offered calculus, one partnered with a local college to provide calculus education for top students because they didn’t have enough to justify having their own class.

          • Anonymous says:

            In most US high schools, there are multiple math tracks that you can take depending on ability

            Really? I had this notion (based exclusively on the TV show The Wire) that such a thing is called “tracking” and considered horrible and evil.

          • Matthew says:

            It is called “tracking.” When I was in middle school in the early 1990s, my (middle to upper-middle class, 95% white) town considered eliminating it, and my parents actually had me take the SSAT and the entrance exams for a couple of private schools, because they were going to pull me out of the public schools if it had happened. But it did not happen, and I took math through Calculus II (i.e. Calculus BC for the Advanced Placement system) in public high school.

          • suntzuanime says:

            Just because something’s considered horrible and evil doesn’t mean people don’t do it. (Even on The Wire, their tracking project happened!) Reality does sometimes trump moral outrage, and the reality is that teaching IQ 80 students math the same way as IQ 120 students is blatantly farcical. So it doesn’t happen. Although they have the politeness not to mention the IQ-word when they’re dividing students into the different classes.

          • Matthew says:

            STA:

            The presumptive evil of tracking, at least from what I’ve seen, isn’t about “all students should be taught the same.” It’s about “racist teachers are presumptively shunting all the black students into remedial tracks.” I expect the extreme blue-tribe version of that would find disparate impact unacceptable even if you gave the kids an IQ test first, which is why they’d rather get rid of tracking than insist on testing for everyone.

            The other objection is that tracking tends to be permanent in practice even if in theory one could excel in a lower track and be moved to a higher track.

          • suntzuanime says:

            Right, but the effect of getting rid of tracking would be that all students would be taught the same. Reality doesn’t care what your motives are. This effect, however unwished-for, would be extremely unpalatable, and so tracking remains, racism or no.

          • Hainish says:

            OTOH, there is full inclusion.

          • Princess Stargirl says:

            I am aware 150 is extremely rare. I just think that the vast majority of people would have to spend a lot of effort to get a decent knowledge of machine learning.

            I am thinking of a person with a decent grasp of linear algebra and calculus and some programming experience, but who has not studied optimization (e.g. they don’t know what BFGS stands for). I would guess understanding machine learning is about a month of work if they are putting in a couple hours a day.

            This sure sounds like a ton of work to me for a normal person.

            Maybe we just have a different definition of decent?

        • peterdjones says:

          “How are you supposed to evaluate the progress of machine learning research without learning the field?”

          So who did Yudkowsky do his PhD under?

          • Rowan says:

            Is that your true objection?

          • Earnest Peer says:

            Rowan’s referring to the fact that “Yudkowsky doesn’t have credentials” is a common criticism, but responses of the kind “we took person X with impressive credentials Y onto the team” seem to have no effect on those critics.
            In fact, this problem was *in* the True Rejection post.

          • peterdjones says:

            What I wrote was a response to the quote above it.

          • Jiro says:

            Having a PhD on your side is a necessary condition, not a sufficient condition. So the fact that someone doesn’t immediately agree when presented with a PhD doesn’t mean it’s not their true objection.

          • Ilya Shpitser says:

            Publishing good papers (as first author, i.e. you did the bulk of the work) “screens off” credentials.

            I know a famous, well-respected academic mathematician at UCLA without a PhD.

    • Anonymous says:

      >Surely one would need to learn quite a lot of machine learning before they could evaluate the field

      Are you saying this as someone who knows about machine learning?

      Because AFAICT, it’s almost impossible for *any* person to seriously evaluate AI risk, at this point…irrespective of knowledge of machine learning. (As for IQ, that makes it easier to understand exactly what the supposed threat actually is, but I’m not sure it makes estimating it much easier.)

      • Princess Stargirl says:

        I am not an expert. I did, however, take CS 511 at Princeton. I did not find this stuff very simple! And I don’t think that even taking CS 511 is that thorough of an introduction.

        I also basically agree with you that it’s not really possible to predict when human-level AI is coming. But I don’t think knowledge of machine learning is useless.

        This is a webpage for the course (not the year I took it).

        https://www.cs.princeton.edu/courses/archive/spring03/cs511/index.html

    • vV_Vv says:

      LessWrong’s arguments that a human-level AI cannot be expected to stay near human level for long are strong.

      Not very strong. They are based on the assumption that intelligence far above human level is physically possible within practical resource constraints, and that it can be obtained by self-improvement without quickly running into diminishing returns.
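      The diminishing-returns question can be made concrete with a toy recurrence (my own illustration with arbitrary constants, not a LessWrong model): suppose each round of self-improvement adds c·I^k to capability I. Everything hinges on the assumed exponent k:

```python
# Toy self-improvement model: each round adds c * I**k to capability I.
# Constants are arbitrary; only the qualitative contrast matters.

def improve(I0=1.0, c=0.1, k=1.0, steps=50):
    I = I0
    for _ in range(steps):
        I += c * I ** k
    return I

explosive = improve(k=1.1)  # increasing returns: growth keeps accelerating
stalling = improve(k=0.5)   # diminishing returns: growth slows to a crawl
```

      With k slightly above 1 the series runs away; with k below 1 it limps along. The takeoff debate is, in this framing, a debate about k, and nobody has measured it.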

      • Samuel Skinner says:

        How much damage do you think a person with an IQ of 150, the ability to copy themselves and exchange thoughts and able to think faster by adding more processing power is capable of?

        • vV_Vv says:

          Not much, and I don’t buy these “Hollywood hacking” type of claims without support.

        • Samuel Skinner says:

          http://www.sitepoint.com/passwords-most-people-do-it-wrong/

          Passwords are very easy to break. If you are an AI, you can try a lot of them very quickly and, through sheer numbers, break into a significant number of accounts.

          Then you use that to construct an identity and get money by doing online or call center type jobs. Then you use the money to finance your doomsday weapon.

          If you want more details, you should consult an incompetent mad scientist; a competent one won’t reveal your weaknesses until they can exploit them.
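          The “sheer numbers” step is plain expectation arithmetic. With made-up figures (neither number is a real measurement):

```python
# Expected compromises from trying a few guesses against many accounts.
# Both numbers below are illustrative assumptions, not measured rates.

p = 0.01        # assumed chance a handful of guesses crack one account
n = 1_000_000   # accounts attempted

expected_hits = p * n              # grows linearly with accounts tried
p_at_least_one = 1 - (1 - p) ** n  # effectively certain for large n
```

          Linearity of expectation does all the work: a tiny per-account rate times enough accounts still yields thousands of hits, which is the logic behind real credential-stuffing attacks.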

          • vV_Vv says:

            If you are an AI, you can try a lot of them very quickly and, through sheer numbers, break into a significant number of accounts.

            Properly designed systems (that is, not iCloud) will lock you out after a number of attempts. Well, I suppose that an AI could steal celebrity nude photos from iCloud and sell them for bitcoins, which it could then use to…buy drugs? Converting bitcoins to cash, or performing pretty much any serious financial transaction, requires a bank account, which in turn requires you to show up at the bank. You could meddle with hacked accounts belonging to somebody else, but then it is only a matter of time before you are caught.

            I can imagine an IQ-150 rogue AI causing some disruption, but not much more damage than a small (~10) group of IQ-150 rogue people.

            EDIT:

            If we were talking about true superintelligence, something way smarter than any human that ever lived and any organization of humans that ever existed, then I would agree that it could screw us in ways that we couldn’t even imagine, much like we can screw mosquitoes. But it is not obvious that this kind of superintelligence is physically possible and economically viable, or that it will appear shortly after human-level AI.

          • Samuel Skinner says:

            It is my understanding that it only locks you out from specific IPs, not all attempts to access, so you can still brute-force it. Plus, you still have the accounts you managed to get with your first 5 guesses.

            I can’t speculate more because I don’t have grounding in electronic security. I assume things that deal with money are rather tight (otherwise people could just make money appear out of thin air by spoofing the system), but the weakness is access, because people use similar passwords.

            “I can imagine an IQ-150 rogue AI causing some disruption, but not much more damage than a small (~10) group of IQ-150 rogue people.”

            Set up a bioweapons lab and start producing tailored plagues? Don’t bother targeting people: if you wipe out the crops, you can destroy civilization.

            “But it is not obvious that this kind of superintelligence is physically possible and economically viable, or that it will appear shortly after human-level AI.”

            I don’t see why it isn’t physically possible: you can just keep adding simulation capability and more processing power. I don’t know what you mean by economically viable; being able to simulate anything perfectly has a ton of applications. If you mean viable for an AI to run while piggybacking on everything, then the answer is “I don’t know”.

            Takeoff, though, depends entirely on how easy it is to scale up intelligence. I’m not sure what parts of superintelligence are more than just high processing capability and being able to simulate different scenarios.

            I suspect that, since the word isn’t well defined, we are dealing with different conceptions.

          • peterdjones says:

            It doesn’t require intelligence to mount brute force attacks, and current systems are resistant to brute force attacks that could be mounted with current hardware.
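            That resistance claim is easy to sanity-check with a back-of-the-envelope calculation (the guess rate is an assumed round number, not a benchmark of any real rig):

```python
# Time to exhaust the keyspace of a 10-character random password.
# Guess rate is an assumed figure for a fast offline attack.

keyspace = 62 ** 10           # 10 random characters from [a-zA-Z0-9]
guesses_per_second = 10 ** 9  # assumed offline guessing rate
seconds_per_year = 365 * 24 * 3600

years_to_exhaust = keyspace / (guesses_per_second * seconds_per_year)
```

            Roughly 27 years even at a billion guesses per second; online attacks against a remote service run many orders of magnitude slower than that before rate limiting even enters the picture.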

          • Samuel Skinner says:

            Apologies then; I don’t know much about hacking. The intelligence part lies in not getting caught and being able to exploit the information gained to peel open other accounts.

          • peterdjones says:

            On second thought, I can conceive of a situation where a fast, intelligent AI scrapes social media for the sort of information that bad passwords are based on: spouse’s name, pet’s name, etc.

    • zn289 says:

      The arguments for AI risk seem mostly philosophical… it doesn’t seem that they rely on the details of specific machine learning algorithms. Also, machine learning may not be as abstruse a topic as you think… Andrew Ng’s coursera course is plenty accessible to anyone who knows programming and calculus (hardly 150 IQ topics).

      • Princess Stargirl says:

        I don’t know this course, but you don’t consider taking a reasonably high-level Coursera course a lot of work?

        • vV_Vv says:

          Do you think that Coursera would make its “reasonably” high level courses accessible only to less than one person in a thousand?

    • syllogism says:

      ML/narrow AI doesn’t have much at all to do with what the MIRI people are working on. Michael Jordan (probably one of the 20 most well-known ML people) was asked about this in an interview:

      Spectrum: But if you did encounter someone like that, what would you do?

      Michael Jordan: I would take off my academic hat, and I would just act like a human being thinking about what’s going to happen in a few decades, and I would be entertained just like when I read science fiction. It doesn’t inform anything I do academically.

      Spectrum: Okay, but knowing what you do academically, what do you think about it?

      Michael Jordan: My understanding is that it’s not an academic discipline. Rather, it’s partly philosophy about how society changes, how individuals change, and it’s partly literature, like science fiction, thinking through the consequences of a technology change. But they don’t produce algorithmic ideas as far as I can tell, because I don’t ever see them, that inform us about how to make technological progress.

      An ML researcher is going to understand probabilistic graphical models, and MaxEnt — but EY’s discussion of that stuff is fine anyway. It tells you what you’d want to know about it to understand what MIRI are thinking.

  24. Patrick L says:

    There’s a reason on the internet we have funny meme pictures of cats and stuff to solve these problems.

    “Says Rationalists reject empiricism, lack scholarship, and humility.”
    Picture of Scumbag
    “Provides no facts, goes on feels, publishes findings confidently.”

    Perhaps something with a dog in a labcoat?

    “I have no idea what I’m doing, but I’m pretty sure real rationalists don’t either.”

  25. Shmi Nux says:

    Like any scriptures, the Sequences contain contradictions, and a determined reader can find a passage confirming whatever views they already hold. In http://lesswrong.com/lw/qg/changing_the_definition_of_science/ Eliezer seems to state that a perfect Bayesian would be able to derive all of physics/the world from observing very little:

    > In the extreme case, a Bayesian superintelligence could use enormously less sensory information than a human scientist to come to correct conclusions. First time you ever see an apple fall down, you observe the position goes as the square of time, invent calculus, generalize Newton’s Laws… and see that Newton’s Laws involve action at a distance, look for alternative explanations with increased locality, invent relativistic covariance around a hypothetical speed limit, and consider that General Relativity might be worth testing.

    Taken out of context, this readily confirms Almost Diamonds’ skewed view of LW-rationality.

    • vV_Vv says:

      In fairness, here he’s talking about a superintelligent AI. The QM-MWI sequence, however, with pearls like this, is much more damning.

      • Luke Somers says:

        Quick question – what about that ought to be damning, again?

        • vV_Vv says:

          He conflates the Copenhagen interpretation with the Objective collapse interpretation, and fails to acknowledge the existence of additional interpretations besides MWI.

          He falsely claims that Bayesian reasoning provides a definitive argument for MWI.

          He falsely claims that Solomonoff induction is a standard part of Bayesian reasoning, and that it provides an argument for MWI.

          He attempts to set up Bayesian reasoning as an alternative to the scientific method, while in fact Bayesian reasoning is a tool in the toolbox used by science. That’s a category error.

          The tone is extremely arrogant and condescending, as he implies that his armchair speculations can beat the scientific method applied by professional scientists.

        • Luke Somers says:

          P1: On Copenhagen, making THAT error – if an error it is, of which I am gravely uncertain – is very understandable. Feel free to fix Wikipedia’s misconception on this, for example.

          It is true that he did not specifically address every single other alternative in a zoo. Copenhagen is, however, apparently the most popular by a fairly wide margin, so a pairwise matchup seems not unreasonable. Also, the same basic argument applies to all of them that don’t end up being MWI in the end, though with less force because they tend to avoid faster-than-light messaging and relativity breaking and all of that.

          P3: He’s promoting Solomonoff Induction, which uses Bayes’ rule. It does provide an argument for trimming away unnecessary complexity and nigh-contradictory elements in theories.

          P4: That’s metonymy, not a category error.

          • vV_Vv says:

            On Copenhagen, making THAT error – if an error it is, of which I am gravely uncertain – is very understandable. Feel free to fix Wikipedia’s misconception on this, for example.

            If you are writing a long rant about how much you know better than quantum physicists about their own field of expertise, you should at the very least have a basic understanding of the terms of the discourse. Otherwise you are being bad at scholarship.

            It is true that he did not specifically address every single other alternative in a zoo. Copenhagen is, however, apparently the most popular by a fairly wide margin, so a pairwise matchup seems not unreasonable.

            But instead of Copenhagen he ended up attacking some strawman mix of Copenhagen and Objective collapse.

            Also, the same basic argument applies to all of them that don’t end up being MWI in the end, though with less force because they tend to avoid faster-than-light messaging and relativity breaking and all of that.

            Except that they don’t. If they did, they would make different predictions than standard QM, which is known to be consistent with Special Relativity.

            Sorry, but I have to ask, did you learn quantum mechanics from Yudkowsky?

            He’s promoting Solomonoff Induction, which uses Bayes’ rule. It does provide an argument for trimming away unnecessary complexity and nigh-contradictory elements in theories.

            Occam’s razor provides an argument for trimming away unnecessary complexity. Solomonoff induction is an attempt to formalize Occam’s razor for the purposes of machine learning, but it is in fact useless as a way to settle QM interpretation disputes. Not only do you not have a hypercomputer to run it, but even if you did, SI only predicts the probability of new observations based on previous observations. It can’t be used to distinguish between theories that make the same predictions. Bringing it up in the debate is a category error.
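            For reference, the standard statement of Solomonoff’s prior (textbook material, not specific to this thread) shows why it cannot separate predictively identical theories:

```latex
% Solomonoff's universal prior over observation strings x, for a
% universal prefix machine U with program length \ell(p):
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
% The sum ranges over programs whose output begins with x, so any two
% "interpretations" that generate the same observation stream receive
% exactly the same weight: M ranks predictions, not ontologies.
```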

            That’s metonymy, not a category error.

            Nope, he clearly states that he thinks that his brand of Bayesianism is superior to the scientific method.

    • Scott Alexander says:

      I feel like this is my view which I explicitly defend in part (V). See for example “A superintelligence can take a grain of sand and envision a whole universe…”

      Actually, I was looking for that post so I could link it – your Google fu is better than mine.

  26. JB says:

    Hi Scott, good post! Your link to Luke’s list of best textbooks on every subject is malformed and doesn’t work. =)

  27. Totient says:

    I may, unconsciously, consider myself more rationalist (Yudkowsky) than I realized – I had this vague feeling of “Hey, they’re attacking my tribe!” while reading the Almost Diamonds piece.

    I wonder if this sort of “Why I Am Not a X” piece is an inevitable consequence of movement X starting to gain influence; a necessary (clearly not sufficient) condition of actually trying to change the world.

    In that sense, maybe it’s a good sign. “The movement has become important enough for people to write poorly researched articles decrying it!” Optimizing for this kind of reaction would be disastrous, but it’s more entertaining than thinking “Someone is attacking us!”

    • David Hunt says:

      I think that may be an overly optimistic assessment, given the retort was against rationalism, not LW!rationality.

      Is there some way to fix the naming problem?

  28. toto says:

    That person didn’t use a single source, citation, or even an example of who exactly she was criticizing. IMO it was an extended brain fart based on random cliches (if even that).

    If we start providing long, thorough replies every time somebody pulls something out of their rectum on the internet, aren’t we legitimizing / rewarding / encouraging their lack of engagement with actual facts? Maybe causing more confusion and time-wasting in the future?

    Sometimes there’s a use for curt, dismissive replies. “You didn’t do your homework, rationalists are card-carrying empiricists, rationalism is what you need to act correctly on experimental data”, with maybe a couple links. Life’s short enough already!

  29. 27chaos says:

    Perhaps the people who you see as “rationalist” are different than the people that author sees as “rationalist”. Yudkowsky and those who know him personally might be better at the virtue of empiricism than his internet disciples.

    • David Hunt says:

      She’s genuinely conflating Descartes-style rationalism with LW-rationality. I run into this fairly often, which is why I taboo the word “rationality” when I raise the topic with someone, until after I’ve explained what I’m talking about.

      For a while I wasn’t even aware there was a difference between LW-rationality and rationalism.

      The first things I ever read on rationalism and empiricism were critiques of each other, and my initial searches for a less INSERT_NASTY_WORD position brought me to articles asserting that it is impossible to merge the two philosophies in any appreciable sense. I had the good sense to make a face when I read that.

      Found LW a month or so later. Not immediately.

      I suspect my story is not as uncommon as we would like. But who knows?

      • Anonymous says:

        Could you point to other examples of people making that confusion? Occasionally people on LW say that it’s a bad choice of word because people will make this mistake, but I’ve never seen it before this example. (I think it’s a bad choice for other reasons.)

        • David Hunt says:

          The most available example I’ve got is from my daily life. A female friend of mine, upon my bringing up the topic, said: “What? Rationalism is anti-woman, how can you support that?” – at which point I’m reeling, because I don’t even know where this misogyny thing came from, and we aren’t even talking about the same thing.

          This happened at least one other time, at which time I started tabooing the word until AFTER the explanation, just in case.

          I haven’t run into this in a forum discussion situation.

          EDIT: “fairly often” is an overstatement. A mere handful of times. One time memorably horrible. At least once more that was not particularly memorable.

          • Irenist says:

            “Rationalism is anti-woman, how can you support that?”

            Was your friend conflating rationalism (LW) with something she read about, say, Elevatorgate, or …?

          • Anonymous says:

            She isn’t a skeptic and doesn’t know about Elevatorgate. Somewhere along the way she picked up that rationalism, and/or rationality, is anti-woman, and she couldn’t give me any leads on why she thought that.

            The Wikipedia article on Max Weber’s concept of rationality references a feminist critique, but I’m unfamiliar with it and have no particular reason to think that’s what she was referring to. Most likely not.

          • Anonymous says:

            What? Rationalism is anti-woman, how can you support that

            […]

            Somewhere along the way she picked up on rationalism, and or rationality, are anti woman, and couldn’t give me any leads on why she thought that.

            Having strong opinions about a thing you have barely even heard about is not a good thing in general, even if we are not talking about rationalism specifically. You should tell your friend not to do this.

          • David Hunt says:

            Yeah, that’s not lost on me. Our last conversation she decided I’d gone Objectivist or something, and it had a weirdly confrontational tone I really wasn’t comfortable with.

            I did a poor job of explaining the value of the LW!rationality cluster of ideas with that friend. But I’m not sure I’m willing to correct it.

          • Protagoras says:

            According to stereotype, men are supposed to be more rational, women are supposed to be more emotional. Some feminists seem to think (perhaps correctly) that because this stereotype conflates “rational” with “masculine,” the word “rational” is at least sometimes used in a way that actually just means stereotypically male thinking. It’s easy enough to see how maintaining that thinking in stereotypically male ways is a better way of thinking could be seen as sexist.

            It’s a very old debate. Descartes thought rationality was by far the most important thing about a person, and contemporaries of Descartes of proto-feminist inclination were divided between one camp which saw this emphasis on a trait stereotypically associated with men as anti-women and a second camp which noted that you don’t think with your genitals, so saying mind matters more than body undermines reasons for discrimination. The second group seemed to have Descartes himself on their side, if that matters, but while that’s also where my sympathies lie, I don’t think the issue is crystal clear. Certainly the word “rational” has been used to describe many quite problematic things; it’s easy enough to say it was being misused, but one should certainly be alert to the risk that it is still being misused.

          • Anonymous says:

            Most things that are called “anti-women” aren’t anti-women at all. Somehow it became very popular to indiscriminately label everything as “anti-women”. I can’t take it seriously anymore.

          • peterdjones says:

            I think this is a case of some feminists saying rationality is anti feminine.

  30. Boromir says:

    I work with the LessWrong community on a daily basis as part of my job. I have even attended a camp or two, and I can tell you the virtues were, at best, under-emphasized in the training I received. I don’t remember them being mentioned at all, and going over my notes and what recordings I took, I have not found them. This may be because it was an early camp, but still. As for the overwhelming arrogance and the re-creation of naive errors due to lack of scholarship, dealing with those problems is basically why I was hired (and your response has a ring of No True Scotsman about it).

    I have seen rationalists try to deal with banks and lose hundreds of thousands of dollars by refusing to look at the way companies deal with banks versus individuals. I have seen tens of thousands blown on marketing campaigns without understanding the first thing about a competitive analysis, on the innate presumption that all marketers are scum because they aren’t promoting rational thought. I have seen multi-million dollar deals flushed down the toilet in a moment of insane arrogance when someone in the community refused to take the money of investors because they were “superstitious crystal wavers”. This money came with no voting stock and no strings, just a wish on the part of the “crystal wavers” to do some good that was backed by science. I have seen good people driven to nervous breakdowns, and wicked, venal, callous idiots dressed in scholarly trappings blame them for being insufficiently rational with one side of their mouths while proclaiming special privilege for their own emotional needs with the other.
    To illustrate my point, here is a re-creation of a conversation I had with a person highly placed in the LessWrong community.
    Them: All marketers are evil, marketing doesn’t even work and advertising is all lies. We shouldn’t have to advertise.
    Me: Well that’s as may be but even if you have something good you need to let people know. Plus not all advertising is inaccurate or even misleading.
    Them: Bullshit.
    Me: What is the last thing you bought?
    Them: Meatball sandwich.
    Me: Was it made of meat, between two slices of bread, and the meat vaguely ball shaped?
    Them: Yes, but that doesn’t count, it wasn’t advertised.
    Me: Oh, how did you find out where to get it?
    Them: I went on Seamless.
    Me: How did you know that Seamless has meatball sandwiches and isn’t, say, a clothing store?
    Them: There’s a big billboard outside my window.
    Me: What does it say?
    Them: Seamless, >mumble< order food something something.
    Me: Also how did the sub shop know to put their sandwiches on Seamless? How did Seamless contact the sub shop?
    Them: I don't know, and it's not like it says on the app.
    Me: Have you checked? Also they are by definition within delivery range, have you asked at the store? Would it be worth the success or failure of your company to get off your ass and walk down the street and ask some questions of someone who makes sandwiches, successfully?
    Them: But they don't *know* anything.
    Me: How long has the sub shop been there?
    Them: According to Google, 8 years.
    Me: You know, most restaurants fold in their first 3-5 years. They at least know more than most restaurateurs.
    Them: But I can't ask them, they won't tell me.
    Me: How do you know? Have you checked? Even if they don't, can you then look for other restaurants of similar age with similar incomes, if you have a desire to know how this works?

    In the end they (I use "they" as gender-neutral) never conceded that advertising isn't all evil, misleading, the work of the devil and (their words) BadThought, and the company continued its death spiral until they left. Since hiring someone who has actually taken a business class, bothered to research common practices, and was willing to spend time and money to test which of those practices are relevant to the company and true, it has miraculously done a lot more good.
    Saying the LessWrong community values those 12 Virtues is like saying the white Christian community of apartheid-era South Africa valued the Ten Commandments. It is maybe theoretically correct among individuals, for short periods of time, in their own in-group, among problems they consider real problems and people they consider real people, when they have the time, sort of. And there might be 1% that actually mean it.

  31. Harriett says:

    Arachnophobia is learned, according to this article. Check it out? It doesn’t have sources for the “animal phobias are cultural” thing, but it has a bunch for how little harm spiders do.
    http://bogleech.com/arachnophobia.html

    • Anonymous says:

      The usual claim is that fear of snakes and spiders is learned, but a lot easier to learn than fear of other things. E.g., here

  32. aretae says:

    “Islam is a religion of peace” — said the suicide bomber.

    There’s a very big difference between claiming that these are foundational virtues, and living them.

    Empiricism?

    Perhaps…but when I see the rationalist community fail, it’s usually in the direction of trusting rationalism too much, and empiricism too little — present company excluded, Scott…you’re marvelously careful.

    Consider Yudkowsky on the Many Worlds hypothesis. My claim is that this isn’t an aberration, but the norm from which aberrations diverge.

    Humility?

    I’ve not met a more arrogant “we’re right, you’re wrong” group of people since I left the Objectivists 15 years ago. As a practical matter, as a Gestalt, this fails magnificently.

    Scholarship, I’ll give you, and it’s a real win as compared to prior rationalist movements.

    As I’ve noted before, I’ve sat through Objectivists, Extropians, Neo-Popperian TCSers, and now the Rationalists…and what I call the 4th church of rationality is a smidge better (see: scholarship), while making most of the same mistakes. My history of philosophy puts the Logical Positivists as an earlier incarnation of the same greater intellectual thread.

    For all these groups, they lean rationalist over empiricist, and they are thoroughly arrogant rather than humble.

    Note, I am not critiquing Scott Alexander…who I find to avoid most of these problems, but rather of the movement as a whole…I’ve been annoyed by Yudkowsky specifically, and others in the movement on these points for >5 years.

    This is a real part of why I count myself as a friendly critic rather than a co-traveler.

  33. The_Duck says:

    “Why I Am Not a Rationalist” reads to me like a reaction to an overconfident tone. I could imagine the author writing the essay after reading some comment or post by a self-identified rationalist along the lines of “Obviously we should institute policy X; only irrationality has held us back from doing this.”

  34. Matthew says:

    I think arrogance is a rather flexible concept and people are using it differently. Consider two hypothetical individuals

    Biased Bob: Holds correct beliefs 60% of the time, but thinks he holds correct beliefs 75% of the time. He also thinks Ralph holds correct beliefs 75% of the time (this falls in the 25% where Bob is wrong).

    Rationalist Ralph: Holds correct beliefs 80% of the time, and is well calibrated about this, thinking he holds correct beliefs 80% of the time. He also correctly thinks that Bob holds correct beliefs 60% of the time.

    Bob is wrongly overconfident about his own accuracy, wrongly underconfident about Ralph’s accuracy, and leans toward “everybody’s entitled to their opinion/nobody is better than anybody else.” (The causal arrow of beliefs probably goes the other way.)

    Ralph is correct about his own accuracy, correct about Bob’s accuracy, and acts like he is more rational than Bob even though he correctly realizes that he is wrong 20% of the time.

    Who’s arrogant?

    I run into something like this situation a lot, because I like the Bryan Caplan strategy of offering bets to call people out when they are bullshitting. People tell me I’m being arrogant when I do this, because I don’t defer to the prevailing “everyone’s opinion is equally valid and it’s all about who can shout louder” ethos, and I’m like, “I offered 4-1 odds to the bullshitter. I’m implicitly claiming 80+% confidence in my own belief, while he’s using his failure to put his money where his mouth is to claim 100% confidence in his belief. Who’s being arrogant?”
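    The odds arithmetic is worth spelling out. A minimal sketch (the 4-1 figure comes from the example above; the rest is just illustration):

```python
def implied_confidence(stake, payout):
    """Minimum probability of being right at which offering
    stake:payout odds breaks even in expectation.

    Risking `stake` units to win `payout` units has non-negative
    expected value only if P(right) >= stake / (stake + payout).
    """
    return stake / (stake + payout)

# Offering 4-1 odds (risk 4 units to win 1) implicitly claims
# at least 80% confidence:
print(implied_confidence(4, 1))  # 0.8
```

    So taking the bet should be easy for anyone who genuinely holds the near-certainty they claim; declining it is evidence they don’t.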

    ETA: Perhaps there is a distinction to be made between epistemic arrogance and interpersonal arrogance.

  35. thisspaceavailable says:

    All you’re saying is that Saddam called the USA’s bluff and was wrong and it was disastrous. That could EASILY have happened with an attempt by the US to demand inspections from Russia.

    Um, no, because the USSR had no reason to think and be correct in thinking it served a useful role for the USA which meant the threats were bluffs that were best ridden out lest it damage both allies’ long-term goals.

    And then he called me a “troll” who was deliberately pretending to not understand him, when I asked

    So, just to be clear: you believe that in the hypothetical world in which the US threatens to attack the USSR if it does not allow inspections, the USSR would have no reason to think this serves a useful purpose, and would be therefore justified in concluding it was a bluff?

    How can you call gwern a rationalist, when he absolutely REFUSED to even CONSIDER the possibility that he was at fault for the miscommunication?

    • Nornagest says:

      Without using any -ist words: Gwern is careful, original, very thorough, and has a fantastic nose for scholarship. He is not, however, very good at adopting the principle of charity in interpersonal communication.

  36. Anonymous says:

    Humanities person here. A couple observations:

    (1) Any time someone talks about “the rationalist movement”, I cringe a little, because I generally try to replace “rationalist” with “person who cares about the truth, in that particular cult-y internet way that I shall resolve to either ignore or find endearing”, and that dun need to move anywhere.

    What are the goals of the rationalist “movement”? To cultivate intellectual virtue in its own members? That’s monasticism, not a movement. To “raise the sanity baseline” more generally? How? By persuading more people to try and be intellectually virtuous? By introducing novel intellectual discoveries to replace the way that people currently form and negotiate beliefs?

    Insofar as the movement has social goals, they’re patently silly. People aren’t going to wise up to Bayesian rationalism any more than they were ever going to wise up to “the truth thing”, no matter what name is fashionable for it at any given point in time. Time, working memory, and normal human feelings are too valuable to expend on an activity with highly uncertain returns, if any. In my experience, rationalists tend to overestimate the utility that most people will get from believing true things, by generalizing from their own sense of accomplishment and/or neuroticism temporarily appeased.

    Which brings me to

    (2) this “truth thing” is a thing, and has been a thing long before Our Prophet Yudkowsky started writing fanfiction on the internet. Internet rationalism vastly overestimates its own originality and, crucially, its ability to re-derive all philosophy (that matters) from a few basic building blocks within the lifespan of the internet. Remember when Scott reinvented classical liberalism? This comment captured my feelings exactly. It’s extremely off-putting to observe that kind of naïveté in conjunction with claims of intellectual superiority, however well-substantiated by evidence of conscientiousness on the part of (some) rationalists.

    I appreciate LW (and this blog) because sometimes I find good, concise statements of useful concepts. I also just really enjoy watching things I already understand get spelled out explicitly. But nothing I’ve encountered in the rationalist community has significantly changed the way I think about things, because I learned critical thinking in college already. And I learned it reading poetry, ferchrissake. If you’re doing it with smart people who call you out when you make a lazy assumption or have confused meta-beliefs, you will learn pretty much everything on LW that’s worthwhile. The rest is rarefied geekery–the obsession with AI, transhumanism, even polyamory.

    (3) Speaking of polyamory, I can’t be the only one who’s observed rationalists claiming that polyamory just makes so much sense and the only reason everyone else isn’t doing it is that they’re sheeple. The burden of argument seems to be particularly low where the potential for prudery is high. This is where it starts to look like self-congratulation is a central part of the movement. Those of us who spent some time thinking about relationships and family structure before throwing away common sense (which is a fair thing to do, with care) aren’t quite as willing to jump that particular cliff just because it looks like the water is fine below. Lefty Blue types at least offer polemics to go with the radicalism.

    Basically, rationalism has a huge Icarus problem. And it’s been that way for a while now–W.H. Auden wrote about techno-optimist myopia before we even had startups:

    It’s natural the Boys should whoop it up for
    so huge a phallic triumph, an adventure
    it would not have occurred to women
    to think worth while, made possible only

    because we like huddling in gangs and knowing
    the exact time: yes, our sex may in fairness
    hurrah the deed, although the motives
    that primed it were somewhat less than menschlich.

    more.

    ETA: This is an elaboration of this comment.

    • Jaskologist says:

      Most rationalists could benefit from a solid grounding in the classics. But then, that’s true of most everyone, as is the lack of said grounding. I had to take mine on my own time, in a haphazard, self-taught (and therefore with a fool for a teacher) manner, and it was still more enlightening than the sequences I’ve seen so far.

      The whole “let’s just toss out all the bath water; surely we can rederive the babies later” attitude probably should be blamed on Descartes. It’s a thoughtstream we all swim in, even if Yudkowsky Rationalists manifest it more strongly.

    • The Do-Operator says:

      There are certain elements of truth to this, but I think you are missing some essential points:

      (1)

      Yes, sometimes rationalist bloggers will “rediscover” old ideas. Newsflash: If you work full time as a psychiatrist and post a blog entry every other day, not all of the entries are going to be original contributions to the post-enlightenment Western canon of thought.

      It is not uncommon that the ideas of old thinkers float around in a weakened form, such that most people have been infected by the meme but find it hard to state the idea explicitly. It is always possible to go back to the original thinker, but it will be written in a boring format and in an outdated language. When a blogger “rediscovers” such an idea, readers will feel like the memes become “crystalized” in their minds. This can lead to a very pleasant feeling of understanding something new. Scott makes the process much more fun by writing in a clear and engaging style.

      That said, I am convinced that there are original ideas both in SSC and in Eliezer’s sequences.

      (2)

      Humans spend much of their lives discussing and arguing with each other. Academia (including the humanities and mainstream philosophy) is just a formalized, high-status version of this game. There are two interpretations of what is going on: The charitable interpretation is that they are engaging in Aumann updating, trying to help each other get closer to the truth. The uncharitable interpretation is that they are monkeys playing a status game about who gets to decide what the “truth” is. If the uncharitable interpretation is correct, all the effort is wasted on useless signaling. For good or bad, many rationalists adopted the charitable interpretation.

      Rationalists then notice that in every corner of human endeavor, people appear to be arguing about truth, but are doing a spectacularly bad job. To use rationalist lingo, truth is an idea that most people fail to ”take seriously”. If people took the idea of truth seriously, the humanities would look spectacularly different – maybe it wouldn’t look completely like Less Wrong, but it would at least look like something that makes sense to rationalists.

      Eliezer did not claim to have invented “truth”, but he did however start a movement that takes the idea seriously. For members of such a movement, it will seem obvious that one should “reboot” mainstream philosophy and the humanities, and make sure all the arguments are held to the proper epistemic standards, without regards to the status of the person making the argument (“nullius in verba”). As in any reboot, there will be many elements of the original continuity that should be preserved, and one should certainly acknowledge the contributions of the original writers. However, rewriting the material in the style of the new continuity is a worthwhile project, which will make it much easier to evaluate whether the idea still makes sense after the foundations have shifted.

      • Anonymous says:

        I didn’t fault Scott or EY for being unoriginal. I enjoy a fresh, lucid restatement of an important concept as much as anyone; that’s why I’m here. It’s when you get to the started-a-movement part that I start to get uncomfortable.

        Here, I’ll ask you the same question that I ask people who come to me with their startup ideas: if this is such a good idea, why hasn’t anyone else thought of it before? There are good answers to this question–maybe it’s a new problem, or you’re an early mover capitalizing on a recent technological disruption, or maybe other people have thought of it but made well-known business mistakes x, y, and z because they didn’t consult with successful entrepreneurs and learn that there are actually well-established ways to address these problems. But “I’m just smarter than everyone else” or “I’m the first one to take this seriously” is not one of them.

        Talking about starting a movement is appropriate when you’ve identified a widespread problem and you think that problem can be solved through better coordination and perhaps some reallocation of resources. Maybe the internet made it suddenly possible to form a community of people who take truth seriously, but I’m skeptical that anything has changed in the human condition such that people in general will somehow become more dedicated to the truth. And absent some evidence that this is a social goal that (1) LW has and (2) is not stupid, LW can’t properly be said to be a movement. It’s an internet community, serving the needs of the people in it. This is much more obvious to an outsider, particularly a sympathetic outsider, looking in on LWers being (sometimes hilarious) parodies of themselves. There is no real reason that caring about truth and being really into wizards and AI should go together.

    • Anatoly says:

      I think your criticisms are broadly correct and useful, but some of them go too far.

      >Insofar as the movement has social goals, they’re patently silly.

      Things aren’t as bleak as all that. “Teach Bayesian thinking to everyone in the world” is not a useful goal. But I think you can steelman “raising the sanity waterline” into something very useful. For example, how about “spreading the explicit awareness of cognitive biases among people who do research or work with data?”

      >Speaking of polyamory

      An embarrassing failure of LW-style rationalism, but its canonicity shouldn’t be overestimated. Not that many members of the LW tribe are poly, and even if many of the central figures in the movement are, that doesn’t seem to encroach on anything else. To put it plainly, I don’t care who EY sleeps with, and it doesn’t matter to truth or usefulness of any of his other writings.

      >this “truth thing” is a thing, and has been a thing long before Our Prophet Yudkowsky started writing fanfiction on the internet. Internet rationalism vastly overestimates its own originality and, crucially, its ability to re-derive all philosophy (that matters) from a few basic building blocks within the lifespan of the internet.

      I think this is both true and worth reiterating again and again. Preferably with detailed examples. Which LWers, in my experience, often proceed to genuinely learn from and take into account.

      >But nothing I’ve encountered in the rationalist community has significantly changed the way I think about things, because I learned critical thinking in college already. And I learned it reading poetry, ferchrissake. If you’re doing it with smart people who call you out when you make a lazy assumption or have confused meta-beliefs, you will learn pretty much everything on LW that’s worthwhile. The rest is rarefied geekery–the obsession with AI, transhumanism, even polyamory.

      I really don’t think that’s true. My best example is, again, cognitive biases and learning to explicitly recognize them and (when usefully possible) to correct for them. This isn’t something you get when learning “critical thinking” in college, or even during years of following the skeptical movement, at least in my experience.

      Explicitly Bayesian thinking is also very useful as a tool, even (or especially?) to those who don’t take it as gospel.
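      For concreteness, a toy sketch of the explicit-Bayes tool being described (the hypothesis and all numbers are made up for illustration):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H | E) from Bayes' theorem."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# A claim you give 30% credence, plus a piece of evidence twice as
# likely to show up if the claim is true than if it is false:
posterior = bayes_update(0.30, 0.8, 0.4)
print(round(posterior, 3))  # 0.462
```

      The useful part isn’t the arithmetic itself but the habit it forces: stating a prior, and asking how much more likely the evidence is under one hypothesis than the other.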

      What I would give you credit for here is the fact that LW-style rationality builds on critical thinking skills that are usually acquired before you encounter it. I would be surprised if many people learned to be rational (learned to question their assumptions and worldview, to attempt an outside view, to weigh evidence, etc.) from LW/EY.

      But in the end, when all this is said and done, what other movement/large forum/etc. is out there for people who want to improve their critical thinking, find like-minded smart people and learn from them, and not compartmentalize to a particular science? The closest I can think of are the skeptics and the atheists, but they’re both more about debunking than learning. “Refining the art of human rationality” may sound a little pompous when you take an outside view, but in the end, this is really worth doing, and if I want to do this to myself and learn from others with the same goal, I don’t know a better place on the Internet or off it.

      • Scott Alexander says:

        I don’t even care if people know about cognitive biases. There are too many and most of them rarely come up in real life. I’d be thrilled if people just realize that speculative statements should have degrees of credence (ie “60% sure”) and that people should check them later to see if they’re right or if they’re overconfident.
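        That check needs nothing fancier than bucketing predictions by stated credence and comparing against hit rates; a minimal sketch with made-up predictions:

```python
from collections import defaultdict

# (stated confidence, whether the prediction came true) -- toy data
predictions = [
    (0.6, True), (0.6, False), (0.6, True),
    (0.9, True), (0.9, True), (0.9, False),
]

# Group outcomes by stated confidence; a well-calibrated predictor's
# "60% sure" statements should come true about 60% of the time.
buckets = defaultdict(list)
for confidence, outcome in predictions:
    buckets[confidence].append(outcome)

for confidence, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"claimed {confidence:.0%}, observed {hit_rate:.0%}")
```

        In this toy data the 90% claims only came true 67% of the time, which is exactly the overconfidence the check is meant to surface.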

    • MicaiahC says:

      I find comments like these unusually frustrating to read.

      I also have some misgivings with the community, although I tend to identify more with it than not, so I tend to be very excited when I see substantive criticisms.

      The above comment gives me the impression of the snooty old guard derisively snorting at the nouveau riche for not behaving in the exact same way as a respected in-group. Complaints about the lack of originality in thought are a thinly veiled way to put down those who did not learn the same social norms, who did not read the same classics, who did not give the exact same levels of respect to the same figures that you do, without having to address specific object-level concerns.

      Yet, the INSTANT the ‘rationalist’ community starts saying things that are original, that are outside of the mainstream (transhumanism, cryonics, polyamory, in this case) there’s an immediate response dismissing them as silly, with no followup on why.

      It seems to me that there’s a subtle shift in criteria, which allows no good impression. No points are awarded for being correct albeit unoriginal, but any original, out-of-the-box thinking gets castigated for being different without reference to correctness.

      I have a much more positive opinion of these comments when criticisms are brought forth or when ways to look up existing literature are provided (a comment earlier in this thread about compatibilism made me very happy).

      I must apologize for the ranty nature of this comment. Part of it is frustration at reading many curt dismissals of the community with much less followup than this one.

      (I fear the only reason why I might like the community is that they are WILLING to engage, and not necessarily that the engagement is high quality. Please don’t give my lizard brain more excuses. )

      • peterdjones says:

        Being both original and correct is extremely difficult, and rationalists should therefore put a low prior on having achieved it.

        And, yes, a specific reference to a known problem is much better than a sweeping New Therefore Wrong.

      • Anonymous says:

        I will cop to having a major anti-nouveau-riche bias. I’m working on it.

        That said, the point wasn’t that rationalists are insufficiently original. The point was that intellectual naïveté ought not to be paired with claims to intellectual superiority. See for example The Do-Operator’s point above about the humanities needing a “reboot” and then they’ll start to make sense to rationalists. If you’re familiar with e.g. political philosophy, watching someone dabble in the field while seemingly unaware of foundational ideas in it is a bit like watching this guy try to call the president of physics. It’s embarrassing.

        I’m not trying to bash rationalists generally, nor am I trying to correct them on particular points. I’m just trying to point out that getting too clever is a real danger. You don’t know what you’re losing in that reboot.

        • Anonymous says:

          This might be a better statement of the same.

        • MicaiahC says:

          Thank you for being patient with me.

          I agree that some aspects of the community should be more aware of alternative perspectives; it’s hard for it to understand that reasonable people can have ethics other than consequentialist ones, and that metaphysics other than reductive materialism exist. I agree that not having the background is non-ideal.

          However, I think we diverge when it comes to the relative benefits afforded by Eliezer’s writings.

          As a person who did my undergraduate in a hard science (physics), seeing a derivation from first principles is very appealing. On top of that, my only exposure to the humanities is through very watered down general education classes (come to think of it, intro to physics is also similarly terrible DAMN YOU COLLEGE WHAT HAVE YOU EVER DONE FOR ME). So, from my perspective, I have to sift through *a lot* of literature before I feel like I have a complete survey. Which is fine, learning the basics of a field is not always going to be easy. But Eliezer’s writing style and his foundational assumptions are a lot closer to mine.

          So from my perspective, I make a choice between an easy to indulge in, if naive “source of philosophy” and a blur of “dead european Xtreme 2000: the philosophoning”. So, I’d much rather read the imperfect, naive version first, then try to approach correctness.

          As a person who is familiar with the classics, do you suppose this is a decent attitude to adopt, or should I be far more selective and skeptical, because of fundamental mistakes in the sequences?

          • Jaskologist says:

            I’m not sure there is a shortcut to good philosophical thinking.

            I think a lot of the benefit of the classics comes from exposure to people whose foundational assumptions are wildly different from yours. Trying to wrap your mind around somebody like Aristotle arguing logically (and convincingly to his contemporaries) for slavery will make you a lot more humble about whatever you’re so confident about now. You’ll end up encountering a whole lot of people who were demonstrably far smarter than you, and they’ll be questioning things nobody you ever met would have dreamed of questioning. Why, some days I’ve thought as many as six unthinkable things before breakfast.

            This is important when trying to approach correctness, because we are generally dealing with questions that have not been solved yet. Step 1 is to stop assuming that Western Modernism is essentially correct. This is a very hard step.

            Too short, didn’t read; basically everything C.S. Lewis says in this introduction.

          • MicaiahC says:

            I mean, in theory I agree that I *should* be inoculated against a narrow-minded ‘modernism is correct’ but I don’t know if I do in practice.

            Growing up in a drastically different culture (Traditional Chinese), I would like to think I’ve already taken step one. Which leads to the question: How on earth do you differentiate good philosophical thinking from bad? Even if there are no shortcuts to good philosophical thinking, I can still not want to take the main path if it’s filled with potholes, precarious falls from cliffsides, or, *shudder*, the collected essays of Theodor Adorno.

          • Jaskologist says:

            I don’t have a happy or easy answer for you. The search for truth is much more hazardous than you’d expect. Nietzsche wasn’t kidding; there really is an abyss that stares back into you (he later went insane). It takes a lot to cross that abyss to the other side. If you aren’t already firmly grounded, you’re as liable as not to get lost.

            There are no widely agreed upon rules to distinguish the good from the bad. Our species and its understanding is still young.

            Seriously, this stuff is hard. You may be better off sticking with whatever widely accepted tradition you were raised in. Becoming only a little knowledgeable is more likely to just make you overconfident in your foolishness, which is worse than plain ignorance.

          • Protagoras says:

            Best current retro-diagnosis attempts suspect that Nietzsche suffered from a slow-growing brain tumor (he may have had a hereditary susceptibility; his father also died young of possibly related causes). “Driven mad by his own philosophy” is definitely very low on the list of theories of his collapse ranked by plausibility.

          • MicaiahC says:

            Jaskologist: Thankfully, the chief disadvantage of philosophical musings, the difficulty of telling what is good or bad philosophy, also implies that having marginally better philosophy will not have a noticeable impact on my life; otherwise it would have been decided earlier.

            Of course, merely because any effect is not noticed doesn’t mean that the effect is small, but so far even fans of the classics do not say that it’s big enough to have effects X, Y, Z.

          • Jaskologist says:

            @Protagoras,

            Sure, go ahead and ruin my fun story with your retro-medical facts. This is why Epicurus never invites you to any of his parties.

            @MicaiahC,

            It matters immensely, and should affect how you live your life if you do it right (but humans are only slowly logical; more likely it will affect how your children live). Divide the world’s land up by major religion. See how different those territories are? That’s the difference philosophy makes.

            “How Should We Then Live?” is the great question, and it is philosophical. There is a real philosophical case to be made for drinking yourself into an early grave, liquidating millions, or abandoning everything to live a life of contemplation in the desert.

    • David Hunt says:

      > Insofar as the movement has social goals, they’re patently silly. People aren’t going to wise up to Bayesian rationalism any more than they were ever going to wise up to “the truth thing”, no matter what name is fashionable for it at any given point in time. (etcetcetc).

      People say “X is not gonna happen” with alarming certainty.

    • Scott Alexander says:

      I have a lot of objections here, but the part that really stood out to me was the polyamory. I think I have hardly ever (once a year or so?) seen statements that polyamory is much better and more rational than nonpolyamory and everyone who doesn’t do it is wrong, and never from important central community members.

      I’ve seen a few more “we enjoy polyamory and you should tolerate it” claims.

      And I’ve seen a lot more of people not politicizing polyamory or making demands about it, but just doing it and not apologizing for it.

      I worry that when people do something weird and don’t apologize for it, people who don’t like the weird thing fish for some reason to say they are wrong, and turn to “they’re saying their weird way is better than the normal way and being evangelical about it!” as a smokescreen for their own evangelism in favor of normal.

      You can change my mind if you have some good examples of what you’re talking about.

      • Anonymous says:

        I’m a little confused by your fourth paragraph. Did I imply that there would be something wrong with it if rationalists actively advocated polyamory? Mind you, I don’t think that polyamory is a good idea and I’m not shy about saying that, but people are perfectly within their rights to be wrong. What I was objecting to was the attitude that polyamorous people need not offer any sort of reasoning for denormalizing monogamy. Which is kind of what you’re saying…?

        Polyamory isn’t like liking chocolate ice cream–you can’t “just do it and not apologize for it”. Relationships are public and heavily scripted affairs (and for good reason). If you’re in a normal one-on-one relationship, but you’re fine with your partner having discreet trysts on the side, that’s all well and good, but if you are socially polyamorous, you are asking society to change.

        Rationalists as a group are maddeningly blithe about this. You yourself said that you became poly when you just found yourself dating three people. The most egregious examples I can think of are from real life, but I’ve personally heard rationalists say that the only reason most people aren’t polyamorous is they’re either sexual conservatives and therefore crazy, or just conformists who haven’t really thought about it. That’s not evangelizing, exactly, but it does imply superiority.

        ETA: And again, if you think polyamory is superior, please do come out with the arguments. I just don’t want to see complacent self-congratulation about throwing that pesky baby away with the bathwater.

        • ADifferentAnonymous says:

          This addresses your second paragraph only, specifically the claim that being socially polyamorous comes with a burden of public proof.

          I assume you would not assert those arguments with respect to homosexuality rather than to polyamory. On what basis do you make the distinction?

          • Anonymous says:

            Can you narrow the question down a little bit? I can sort of see the analogy but I’m having trouble understanding the question. Gay rights advocates have always been clear that the point of coming out is to normalize gay relationships. The assumption being that normalizing gay relationships is good. That leaves you with plenty of room to say that things that shouldn’t be normalized ought not to be normalized. Or?

          • Matthew says:

            If gays can be openly gay and that doesn’t count as making demands on the rest of society to change, why can’t polyamorous people be openly polyamorous without that being taken as an additional demand on the rest of society?

            Normalizing gayness != insisting everyone should be gay. Similarly, normalizing polyamory != insisting everyone should be polyamorous.

          • Jadagul says:

            There’s a large inferential gap here between people who think of a romantic relationship as being a private matter between two (or more) people, and people who think a romantic relationship is fundamentally a public affair involving the entire community. Group B tends to be thinking more of the family-and-children angle.

            My position honestly is that relationships are private but marriages are public. But I don’t want to argue for my position on this issue so much as point out that it is an issue, on which people in this thread are disagreeing.

          • ADifferentAnonymous says:

            Matthew got it exactly. Being understood by the people who agree with you is probably communication skills level one, I guess.

            Jadagul: that does help me see what anon’s position could be, but since the “relationships are public” concept probably is at the heart of most opposition to homosexuality, the question still stands.

          • Jadagul says:

            Reasonable.

            As I said elsewhere in the On the Road thread, the “relationships are public” model tends to think of relationships as the way society channels the sex drive into requiring people to be socially responsible and productive. Opinions on gay marriage tend to split roughly into the “Andrew Sullivan” camp of “legalizing gay marriage is a way of incorporating gay people into this system of social capital formation” and the “Ross Douthat” model of “The push for gay marriage is a consequence of people forgetting that relationships are a way of channeling the sex drive and treating them as something purely for the benefit of the people in the relationship.” Both would typically agree that polyamory, like casual sex, is problematic because it’s an attempt to increase sexual satisfaction while avoiding being tied into normal family units.

          • Grumpus says:

            But gay people being openly gay are making demands on society to change. They are demanding that society adjust its concept of relationships to include same-sex relationships, hence “marriage equality” etc. I could probably dig up a Dan Savage quote to that exact effect.

            >”I assume you would not assert those arguments with respect to homosexuality rather than to polyamory. On what basis do you make the distinction?”

            I honestly don’t understand what you’re assuming here. I don’t think I’m making a distinction…? Gay relationships strain cultural norms just as much as polyamorous ones, and it seems to me that we can discuss the relative merits of such strain separately in each case, seeing as gayness and polyamory are, you know, pretty different.

          • Grumpus says:

            Oooooh I think I understand what’s going on. I think you think that my objection to polyamory is that it does make demands on others. Which couldn’t be further from the truth! I’m like the opposite of a person who thinks that people shouldn’t make demands on others. I have more respect for the person who thinks that monogamy is inherently abusive and actively advocates polyamory than the person who thinks, eh, this doesn’t make sense, I’m just gonna throw it out.

            P.S. I gave myself a name for ease of reference.

          • Grumpus says:

            Oh, I just noticed Matthew’s second paragraph. That simplifies things.

            Normalizing x != insisting everyone should be x.

            Sorry for the spam, y’all.

          • Grumpus says:

            This probably bears elaboration.

            So, an important mechanism by which norms work is the foreclosing of possibilities, or at the very least the withholding of social approval for those possibilities. For example, I grew up assuming I would go to college because everyone went to college. (I was pretty shocked to find out the actual number.) I would not have gone to college if I’d met some of the people I’ve since met, who are smart and curious and claim to have educated themselves to a satisfactory level without college, because that would have made it seem like going to college was an option, and that would have been a bad thing, because now I’ve been through college and I can see the gaps in these people’s educations and I know how much I would have missed trying to do it on my own. So that’s one strike in favor of a strong college-going norm, at least in my demographic. (Obviously you need more strikes before you make a final judgment.) Similarly, people practicing polyamory and not apologizing for it hurt the monogamy norm because they make polyamory an available/okay option. That’s pretty much the entire case for coming out as poly/gay/whatever. Then it’s down to the specific merits of the norm in question.

            What I was objecting to in the original post is the attitude, “Oh, lookie, a norm! How irrational it would be to follow it. Come, let’s jump!” which is only a mild parody of what I think I’ve observed in rationalist circles w/r/t polyamory in particular.

          • ozymandias says:

            I note the goalpost seems to have moved from “rationalists are continually going on about how polyamory is Great and the only people who aren’t into it are sheeple” to “rationalists are poly and don’t apologize for it.” The latter seems obviously true, the former far more controversial.

          • Grumpus says:

            I don’t think so? From the OP:

            “The burden of argument seems to be particularly low where the potential for prudery is high.”

            and

            “Lefty Blue types at least offer polemics to go with the rationalism.”

            Perhaps I was unclear, but the point all along has been that rationalists are poly in a not-very-well-considered way, and that that is symptomatic of the “Icarus problem” I was pointing out more generally. The reason I zeroed in on the “doing your own thing and not apologizing for it is a statement” angle is that that particular point was challenged.

            Edit: I guess, from my perspective, the difference doesn’t really matter, so if you’re only willing to grant the latter, we’ll talk about the latter.

            Edit 2: Maybe it sounded like I was objecting to the obnoxiousness of saying, “I know better than you”? I was not. I said throwing out common sense was a fair thing to do; what I was objecting to was the shallowness of the argument. I didn’t mean to imply that the median poly rationalist thought monogamous people were sheeple, only that that was representative of the level of argumentation that was considered acceptable in that domain. I guess that was really unclear. My bad.

          • peterdjones says:

            People who talk about norms need to be clear about what they mean by the term…permissible? Preferable? Mandatory?

            Heterosexual marriage+children was never mandatory, and was never seen as being threatened by celibate priests or old spinsters.

        • blacktrance says:

          >“Relationships are public and heavily scripted affairs (and for good reason). If you’re in a normal one-on-one relationship, but you’re fine with your partner having discreet trysts on the side, that’s all well and good, but if you are socially polyamorous, you are asking society to change.”

          They’re often somewhat public and scripted, but they don’t have to be. How am I asking society to change by being polyamorous? You could say that I want my relationships to be treated equally, which I do, but that’s not a necessary part of being polyamorous – someone could present themselves as polyamorous and not demand anything.

        • ozymandias says:

          I don’t quite see what’s wrong with the “start dating multiple people and see if you like it” strategy…? Empiricism > uninformed speculation any day of the week.

          • Grumpus says:

            Seriously?

            There are roughly two gazillion ways that “try it and see if you like it” could go wrong. Just off the top of my head: heroin (you might like it so much you’ll lose your mind), not saving for retirement (you’ll sure like that for a long time), jumping off cliffs (you might not be around to like anything any more), rape (other people might not like it), pretty much everything in Meditations on Moloch (you and other people are either ok or miserable, but don’t even know you’re trapped in a shitty system).

            I’ve been careful to withhold my actual position on polyamory (and gay acceptance for that matter), because it’s (1) a bother to explain, (2) probably inflammatory, and (3) irrelevant to the point I was trying to make. But said position was formed with access to the following evidence:

            Exhibit A: I tried polyamory a few years ago, mostly by accident. (I was rather disaffected at the time.) It worked okay. I really appreciated some aspects of it, e.g. spreading out the emotional load. I came away from it with the conclusion that I needed to cultivate more close relationships, which (along with other evidence) basically meant that I had to stop sleeping with all my friends.

            Exhibit B: “Anna” is a 20-year-old kid who became poly after escaping an “abusive” relationship with an older guy, who appears to have been an asshole in general, and made her feel bad in particular after she cheated on him. (I put “abusive” in scare quotes because assholery is not tantamount to abuse. She may have been withholding information, but she seemed to think that what she gave was sufficient.) She decided she would no longer let anyone make her feel bad, or tell her how to feel, for that matter. Because she now only has relationships “on her own terms”, she refuses to be monogamous. Not a straw man, just a dumb college kid.

            Exhibit C: “Gene” is a twentysomething rationalist. He decided to become poly when he saw other rationalists doing it, read a bit about how jealousy has not been a problem for them, and thought, “Gee, it really doesn’t seem like there’s any reason why I shouldn’t. Why, society, why have you shackled me with these irrational inhibitions?” It’s working out okay so far, in that his overall volume of complaints about love and/or sex has remained constant.

            Exhibit D: “Harry” and “Sally” are in their forties, and have been polyamorous since before it was cool. They are basically hippies in every way. They seem to be happy, but no one takes them seriously.

            None of this is good evidence for any position. It ain’t anti-empiricism to try and think about it some more.

      • Jaskologist says:

        I think advocacy of polyamory is nearly irrelevant to most people viewing the movement from outside. Convincing members that the women should be shared (particularly with the leader) has been one of the classic hallmarks of a cult since time immemorial. For those who are already getting a cultish vibe from LW, seeing that is pretty much going to seal the deal for them, whether it is advocated for or not.

        • ozymandias says:

          I find it sort of odd that you specify that “the women” are shared. I mean… the men are shared too in polyamory. It is not like women get four boyfriends all to themselves.

          • no one special says:

            Almost certainly this is meant to be a description of the modal cult, not the modal polyamorous society. The skeptics here either assume that polyamory is a cover story for “leader gets to sleep with all the women”, or see non-monogamy and round it off to cult behavior without looking closely.

            Wanted: cult with female leader to see how this generalizes across sex. Rand, perhaps?

      • jaimeastorga2000 says:

        I think I have hardly ever (once a year or so?) seen statements that polyamory is much better and more rational than nonpolyamory and everyone who doesn’t do it is wrong, and never from important central community members.

        From HPMOR’s author’s notes:

        Before anyone asks, yes, we’re polyamorous – I am in long-term relationships with three women, all of whom are involved with more than one guy. Apologies in advance to any 19th-century old fogies who are offended by our more advanced culture.

  37. John Maxwell IV says:

    It’s true that this criticism of LW is relatively poorly argued, but I think the existence of poorly argued complaints can be an indicator that there are better argued complaints to be made. Possible examples: Luke here; me here.

  38. Richard Metzler says:

    Hi everyone, first time commenter (I stumbled across SSC a few weeks ago, and had a feeling of “ah, so this is where all the smart, levelheaded people on the internet hang out”. Awesome place, keep up the good work).

    As someone who’s been following a lot of internet drama from the sidelines over the last couple of years (it’s an unhealthy addiction, I know), may I suggest that you’re parsing this blog post from the wrong angle? It’s laudable to take any criticism seriously and to evaluate how much of it is justified… but I think Almost Diamonds’ post is less about substantive criticism than about signaling. The key sentence is the one about the movement attracting white male libertarians. As a card-carrying feminist, you can’t join a club that has any of those, now can you? For additional context, consider that Stephanie Zvan’s buddy PZ Myers has written a similarly content-free post titled “I officially divorce myself from the skeptic movement”, and has been very critical of “dictionary atheists” (who consider “there’s no god” to be the core of “atheism” and feel uncomfortable taking any left-wing politics on board in their movement)… and that’s really just the tip of the iceberg as far as “us vs. them” is concerned.
    So what I think Zvan is really saying is “we have our own cool kids club, so I’m not going to hang out with these nerds.”

    • suntzuanime says:

      To be fair, I think a lot of people in the atheist movement would agree that there is no god, and many of them would even consider it something like the core of atheism. It seems like a pretty fair criticism of atheism to me.

      • Richard Metzler says:

        Oh, totally. The problem is that it’s kind of hard to build a “movement” around that core. To get people off their butts and on the streets, you need a more detailed vision of what follows – socially, policy-wise, ethically – from “there’s no god”. Separation of church and state is an easy one, tolerance for nonbelievers, obviously… and then it gets kind of hazy. Some people seem to think they can start with “there’s no god” and derive substantial parts of current feminism/ social justice philosophy, others are not really on board with that, and voila: “deep rifts”, endless rounds of witch hunts, slanders, rape threats, rape accusations, you name it.
        (I am not an activist – again, I just followed this stuff from the sidelines, but here’s my armchair position on this, just to be clear: “There’s no god” should be, and stay, the core of any atheist movement, and it should concern itself with pointing out and reducing the harm done by religion. If someone wants to add their personal convictions regarding ethics, politics etc to that mix, that should run under a different label – “humanism”, “secular leftism”, whatever.)
        I haven’t looked into the history of the rationalist community enough to know if it has encountered similar problems. One should think that if it follows its own ideals, it should be fairly immune to that kind of thing, but you could have said the same of the skeptic movement. As Scott likes to say: CONSTANT VIGILANCE!

        • Peter says:

          I think there’s a problem of movement naming. I’m an atheist but not a “movement atheist”. If people want to go and have a movement based largely around the idea of atheism they should feel free to do so. However, they should not name the movement in a way that implicates me in it.

          If there was a “motherhood and apple pie” movement you’d end up finding they were in favour of something ghastly like bans on contraception and “ethnic food”. If there was a “truth, goodness and beauty” movement then all sorts of horrors could come from that. If you had a Scott Alexanderist movement then I’m sure that after a while our gracious host would be saying in exasperation “I am not a Scott Alexanderist!!!”

          • Richard Metzler says:

            I agree that movements generally have the potential to take on horrible aspects, or experience mission creep. These are things to watch out for and avoid.
            However, I am a friend of truth in advertising, and I don’t see why a catchy, informative label should be avoided just because not everyone to whom the label applies is 100% behind the movement. What would be the alternative?
            A) Invent a new label that no one knows, explain what you mean, and everyone says, “Okay, no gods, separation of church and state… you mean ATHEISM, right? Why didn’t you say so in the first place?”
            B) Adopt a label that people are familiar with, because it is already in use. The people who used it before will complain that you’re appropriating it, and the people you’re actually targeting will still say it’s misleading.

          • Peter says:

            I wouldn’t even consider separation of church and state to be a part of the definition. Sure, I’d expect a high and disproportionate proportion of atheists to be keen on various facets of secularism, but one can, for example, see the current situation with the established church as a Good Thing, and still be very much an atheist. One could not even mind there being state church schools.

            I mean, on my side of the pond, we have an established church, and more atheists, much less of a hostile climate (OK, there have been times when I’ve been a closeted atheist (see state church schools above) but actually I needn’t have been), less religious influence on public policy, lots of good stuff like that.

            With option A, I imagine my future:

            Someone: Hi! Nice to meet you. What religion are you?
            Me: I’m a nontheist.
            S: What? Never heard of it.
            Me: Well, I used to call myself an atheist, until _that movement_ appropriated the name.

            On the subject of atheist stuff: I’ve always thought there’s a tension within atheist activism. On the one hand there’s the promotion of atheist beliefs, on the other hand there’s the sort of atheists-rights and anti-prejudice stuff similar to LGBT campaigning etc. Plus there’s a need for support and quite possibly social spaces, and it’s important not to let the activists misappropriate “atheism” in a way that upsets that.

    • Anonymous says:

      Even the name “Atheism+” is basically false advertising, since if you go to their subreddit, it is not atheism or religion they mostly talk about. I guess they might have called themselves an intentionally misleading name in order to be able to exploit motte-and-bailey to an even greater extent.

  39. Boromir says:

    >And in accordance with this, I will put the rationalist movement up, mano a mano, against any other movement on the entire Internet in terms of the quality of scholarship and empiricism.

    There is another movement with one virtue, broken into five. The one virtue is called mindfulness, defined as that which causes one to work well in unexpected and unprecedented situations. The five virtues are:
    1. Preoccupation with failure
    2. Reluctance to simplify interpretations
    3. Sensitivity to operations
    4. Commitment to resilience
    5. Deference to expertise

    I would put the High Reliability Organization community and its planners and devotees (HRO as an intellectual and social movement among scholars, students, and boots-on-the-ground workers) and their ability to do real science up against LW any day of the week, especially in the areas of optimizing rational thought for things the average human brain can actually do. http://govleaders.org/reliability.htm

    I think I am up for a more audacious claim. I would put up anyone in the firefighting community who attends conferences and is/was involved in the NIMS ICS movement. NIMS ICS was an intellectual movement and social re-ordering of priorities that started (much like LW) in California, except among wildfire fighters. The movement disbanded shortly after all of its goals became law and policy with the passing of PPD-8 in 2011. I would judge that having the good sense to mostly disband the social and political arms of your movement when your goals have been achieved, and to get down to operational action, is also a sign of some stronger rational thinking.

    For an intro this helps
    http://www.depts.ttu.edu/cehrop/Weick.php
    http://high-reliability.org/pages/Weick-Sutcliffe

    http://www.blm.gov/pgdata/etc/medialib/blm/wy/programs/fire/hros.Par.99704.File.dat/MovingTowardsHRO.pdf

    or http://en.wikipedia.org/wiki/High_reliability_organization

    There are plenty of people who calculate odds, update based on real evidence, and stay humble in the face of difficult tasks better than any LW-er; people who publish papers, are peer reviewed, and understand scholarship, because they meet Darwinian selection head on. Their stakes on a day-to-day level are high, and gosh darn it, some of them are smart enough to realize it and have things like doctorates in areas where status games are less important than whether or not you will be on fire tomorrow. Attempts to get the LW community to borrow some of the risk analysis tools that are used to make split-second judgments in such communities effectively have been met with a crushing wall of failure and arrogance. The suggestion that LW-ers should take a simple training course at their local volunteer fire department, so they can understand low-probability high-cost risk on an emotional level, has been met with outright derision.

    I’d put the disciples of Tom Mercer, Karlene Robers and Ben Aguirre over Yudkowsky and crew any day of the week.

    Yes, this is arrogant, and not just a little arrogant either. It is born of frustration with the provincial “my subculture may have flaws but it is better at ____ than anyone else” attitude commonly found in people who haven’t bothered to have a look around at other cultures. It appears never to have occurred to people who manage existential risks to look towards the literature and culture of those who do it every day.

    There are people who have designed nuclear-powered floating cities on the ocean, and they have worked continuously without major malfunction for years. Perhaps those people know something about rapidly estimating risks in complex systems, and about getting an entire city’s worth of people to raise their baseline of map/territory correlation, because if even a small number of them don’t, everyone dies. Jumping up and shouting “You are digging in the wrong place! This problem has already been solved or worked on!” hasn’t helped.

    Yes, this is a certain amount of playing “my subculture is better at xxx than yours”, but I have been to the houses and rationalist camps, and I have been to the triage exercises, full-scale tabletops, and drills of the HROs, and they cannot compare.

    • Samuel Skinner says:

      Those communities aren’t remotely like the rationalist community, though. They have selection measures, so you only get people with a certain level of competence and skill; they have clear goals to achieve; they are paid, so they can devote their time to those tasks; and they have a large amount of institutional experience to draw upon. Less Wrong and MIRI might have some of those characteristics (I have no clue), but the “rationality movement” does not.

      And those constraints are like the difference between building a skyscraper in New York and one in Venice.

      “It appears never to have occurred to people who manage existential risks to look towards the literature and culture of those who do it every day. ”

      Existential risk is the end of the species.

      “Attempts to get the LW community to borrow some of the risk analysis tools that are used to make split second judgments in such communities effectively has been met with a crushing wall of failure and arrogance. Suggestion that LW-ers should take a simple training course at their local volunteer fire department so they can understand low probability high cost risk on an emotional level has been met with outright derision. ”

      Link?

      Also, I should point out those are examples of rational planning, not rationality. Rationality is a tool, and rational planning is its application to, well, planning. Do the listed groups generalize rationality, or is rationality compartmentalized?

      • Boromir says:

        Professionals (called practitioners) have these traits (pre-selected, paid), as opposed to researchers, who do the purely academic work (sometimes paid, but no more or less so than any LW-er who is paid to write code), and volunteers, who are just that: not at all paid to think of such things, but they do.

        I know existential risk means the end of the species, and I think the examples listed of very bad things happening to nuclear submarines and power plants count. Maybe I am overestimating how bad a nuclear launch is.

        Suggestions were made to Ms Salamon, Mr Yudkowsky et al in person.

        In research mode, and in what is referred to as tabletopping mode, these are generalized to all fields of thought. Texas Tech has some good bits on mindfulness.

        Weick and Aronson
        Weick:
        http://high-reliability.org/pages/Weick-Sutcliffe#weick2007

        http://high-reliability.org/pages/Weick-Sutcliffe#weick1988

        Aronson:
        http://high-reliability.org/pages/allied-knowledge-impediments

        Taken into an all-fields-of-cognition approach, Mistakes Were Made (But Not by Me) and Weick and Sutcliffe’s work are considered foundational documents of the field in how to view the world, not just in planning. I strongly suggest looking at “the morality of error” and the various other “Knowledge Impediments” sections of the above link. Still in progress but useful.

        The listed groups generalize this process of thought mostly because they self-identify as responders, in the way that people conflate their identities with their jobs. I suggest the studies of role taking in disasters by Trainor and Barsky, and further the work by Aguirre and Trainor on emergent pro-social behavior and how to engender it in everyday life.

        also this for a general life approach
        Aguirre, B. E. “The Sociology of Collective Behavior.” Pp. 528-539
        in Clifton D. Bryant and Dennis L. Peck, editors, The Handbook of
        21st Century Sociology. Berkeley: Sage.

        • Eli says:

          Hmm…. I don’t think I could learn the right behaviors just by reading scholarly papers on these matters. How does one get trained in this kind of thing, explicitly?

          • Boromir says:

            The Disaster Research Center has good courses, as do Texas Tech and the Red Cross. FEMA.gov has some online courses that, once you get through the bureaucrat-ese, are surprisingly useful; I suggest the NIMS/ICS 100b and 700 series. For in-person training, ask to volunteer at your local fire station or help out with the local suicide hotline.

        • Samuel Skinner says:

          “I know existential risk means species end and the examples listed of very bad things happening to nuclear submarines and power plants I think count. Maybe I am overestimating how bad a nuclear launch is. ”

          The issue is we have had previous failures in those things to learn from. We had a history of previous submarines sinking, bombs misfiring, and power plants exploding to see what the most important failure states were.

          We sort of have that with software code not doing what is intended, but we don’t with software code that is deliberately trying to hide what is intended.

          “Suggestions were made to Ms Salamon, Mr Yudkowsky et al in person. ”

          Post it on the Less Wrong forums.

          • Deiseach says:

            I suppose one reason I’m sceptical of “We will, sooner or later, wittingly or unwittingly, create a true AI which will then blossom and progress so much it will take over the world!!! and so we need to be careful our monster does not devour us!!!!” is because in my work I struggle with government-procured and designed-by-tender software systems.

            Believe me, I’d gladly take the chance of a god- or demon-like AI if it meant I could faffin’ well communicate with the people in the offices twenty-eight miles down the road, and if the nationally rolled-out database system for inputting details of applicants, tenants, etc. for social housing could handle apostrophes in names, in a country where people tend to have names like O’Brien, O’Keeffe, O’Connor, O’Neill, and so forth (which means every time you need to look up a record, you get to play the “How did the person inputting it get around the apostrophe problem? Do I search for OKeeffe, O Keeffe, or daringly strike out and see if they got away with O’Keeffe? Or will that cause the entire system to crash and everyone else in the department to curse my name and have to wait two days while the IT guys in the firm in the capital who got the contract to set this up, and are the only ones authorised to do anything with it, get around to sorting it out?” game).

    • Thanks for these examples, Boromir! This is very well put.

    • John Maxwell IV says:

      Thanks. This entire discussion reminds me of Scott’s own writing on ethnic tension and the trouble with “good”. The post he’s responding to is a “boo rationalists” post and his response is a “rah rationalists” response.

      I’m not sure the line you quote about how rationalists are cooler than anyone else adds much. From a communications consequentialist perspective, I suspect statements like this one repel more than they attract. I was about to start writing that there are much more important ideas from the aspiring rationalist movement that we should prioritize sharing above the idea that the aspiring rationalist movement is super cool. But on second thought, even if “the aspiring rationalist movement is super cool” has a relatively low “hit rate”, it might cause a certain fraction of people to read about all the valuable ideas produced by the movement in a way that sharing other stuff wouldn’t. I doubt it, but it seems like a possibility worth considering.

    • Scott Alexander says:

      That is part of why I specified “Internet movement”. There are a lot of professional organizations with all sorts of interesting hidden wisdom nobody has mined yet.

    • fubarobfusco says:

      Attempts to get the LW community to borrow some of the risk analysis tools that are used to make split second judgments in such communities effectively has been met with a crushing wall of failure and arrogance. Suggestion that LW-ers should take a simple training course at their local volunteer fire department so they can understand low probability high cost risk on an emotional level has been met with outright derision.

      I don’t think you have posted this proposal on LW, and it sounds like you might have some really good suggestions there.

      Please do write it up and post it there!

      (And please make sure not to sabotage the effort with snark or expectations of failure. I’m utterly fucking serious: bringing in risk-analysis tools from firefighting or other well-tested practical disciplines seems like a prima facie awesome idea, but I don’t know enough about it to propose it. You do. But poisoning that proposal with a lot of insults would be really unfortunate.)

  40. Anonymous says:

    Why *I* am not a rationalist?

    In great part because the comments section in this very blog has shown me how a lot of the so-called rationalists are a bunch of homophobic/transphobic gender essentialists. What I’ve seen here is that “rationality” turns people into sociopaths who are more than willing to make the world a less welcoming place to trans*, genderqueer and homosexual individuals, just because they fail to conform to some fucked up Hegelian bullshit (which is given a scientific veneer by evo-psych and similar crap) about how “true” men and women should be.

    • suntzuanime says:

      You seem to be confusing turning people evil with failing to turn people entirely good, and you’re also assuming that the commenters on this blog are all rationalists (which seems a little weird, since you state in a comment on this blog that you’re not a rationalist).

      If you want to compare attitude towards transfolk among the rationalist community vs. attitude towards transfolk among the population of the world at large, I guarantee you the rationalist community is more accepting. This isn’t a fully fair test, but it’s much much fairer than the test you’ve given it.

      • Anonymous says:

        What would be a fair comparison? Against the internet population at large, against the internet population excluding Facebook and Twitter, against other internet communities that share a certain set of assumptions or qualities?

        • suntzuanime says:

          A fair comparison would be a double blind trial exposing one group to Rationality and one group to a placebo ideology. Otherwise you’re going to have to wade through a sea of confounders.

          • Anonymous says:

            OK, what would be a fair-enough-yet-reasonably-workable comparison, then?

          • suntzuanime says:

            Well, think of all the possible confounders you can think of, think of some more, write them all down in an unchangeable list, and then control for them all at once. Hopefully at least that wouldn’t be predictably biased?

            These sorts of sociological problems are really hard and awful to work with. You basically just have to examine the data from a bunch of different angles and then use your honest judgment.

          • Anonymous says:

            Seems like the timeless question of whether Rationalism turns people into trans-hating sociopaths will go unanswered still.

          • I am deeply amused by the question of what we are going to use as our placebo ideology.

    • Earnest Peer says:

      ETA: Taken out the “argument soldier”-y intro.
      This blog has a small number of reactionary commenters, who probably stick out a lot, because this bunch of people isn’t negatively selected for inclusiveness the way most nominally inclusive places are.

      That raises an interesting question: Would it be better for the stated community goals of Social Justice to select negatively or positively? I feel that the intuitive answer is “negatively”, but I learned in a recent discussion on tumblr that just using the correct pronouns etc. often leads to people changing their words but not really their attitude towards genderqueer people (i.e. calling Ozy “they” but still thinking of them as a girl).

      • ozymandias says:

        I’d like to clarify that while I find that personally upsetting for obvious reasons I don’t think there’s anything morally wrong with seeing me as a girl. People can’t easily control what gender they see others as.

    • Scott Alexander says:

      Uh, the rationalist movement has something like six times the number of LGB people and twenty times the number of transgender people as the general population. 89% of rationalists have net positive opinions of gay marriage, compared to something like 48% of the US. I just got done writing a long argument for increased toleration of transgender people on this very blog, and Ozy practically writes about nothing else.

      Your accusations are completely divorced from reality, and I am pretty sure you are doing the thing where any time anyone is even slightly weird or nerdy you throw “homophobe” as an insult at them to see if it sticks.

      Give me a defense of your statement as having even the slightest basis in reality or else I’m banning you.

      • Elissa says:

        I am pretty sure you are doing the thing where any time anyone is even slightly weird or nerdy you throw “homophobe” as an insult at them to see if it sticks.

        Remember that your nrxy commenters legitimately make some people super uncomfortable, in much the same way that you are made uncomfortable by people harshly attacking the “weird/nerdy” ingroup.

        • Tom Hunt says:

          Even granting this is true, it doesn’t lend any kind of point whatsoever to the criticism that all rationalists are $BADTHINK. Tolerating $X does not mean that you yourself are $X.

        • Scott Alexander says:

          …who are mostly not rationalists and who mostly hate the rationalists for exactly their tolerance of this sort of thing.

          Didn’t hear a defense, so banned.

          • Elissa says:

            Clarifying my objection, although I fear this will only annoy you more: Purple-anon is wrong but probably not being dishonest (as it seemed to me you were implying). They likely perceive SSC commenters as an undifferentiated mass of anti-LGBT rationalists/neoreactionaries because a credible-feeling attack on their ingroup is making them not think so good, rather than because they hate and wish to discredit all things nerdy or weird-seeming. I suspect you are imputing malice to purple-anon for roughly analogous reasons.

            (On balance I regret making the original comment that I am now clarifying, and I promise to look at my life and my choices)

        • nydwracu says:

          And what do you think the odds are that someone who both has severe memetic allergic reactions and can’t independently figure out that they think 14% is a very large percentage because of some sort of salience bias that probably has a name and that arises from the fact that they’ve self-selected into a bubble where the number is essentially 0% due to those memetic allergic reactions will say anything worthwhile instead of stupidifying everything they touch?

    • Anonymous says:

      You clearly don’t know what you are talking about.

      homophobic/transphobic

      really??

      more than willing to make the world a less welcoming place to trans*, genderqueer and homosexual individuals

      Then why are there so many of them on LessWrong if they supposedly aren’t welcome in the rationalist community?

      You are simply making stuff up.

    • Luke Somers says:

      LW-rationality attracts some weirdos, some of whom are very transphobic. On account of this, LW isn’t a particularly safe space. However, the philosophy itself tends to push towards transhumanism, which is approximately maximally trans-everything-friendly.

      Finding and engaging the worst parts of people and making them better seems like a good thing.

  41. NRK says:

    Well, to be fair, you (the self-proclaimed rationalist community) stole the name of a philosophical movement that had existed for 500 years; what did you expect?
    After all, I’m not going to name any position I defend ‘Christianity’ on account of it being largely developed by my buddy of the name Christian.
    Actually I’d expect more people to assume you’re all a bunch of neo-Cartesians, because more people are acquainted with one of the foundational philosophers of the Western tradition than with a bunch of charmingly smart fellows on the interwebs.

  42. Irenist says:

    LTP mentions the Dunning-Kruger effect above.

    I’m not fit to evaluate the defensibility of EY’s iconoclastic pronouncements on quantum physics, cryonics, and AI.

    However, I do consider myself reasonably competent to evaluate the level of expertise I’ve seen presumably rationalist(LW)-affiliated commenters here demonstrate in their opinions on philosophy (Eli’s assertion that innate ideas are the “usual” ground of philosophical knowledge, so philosophy can be rejected) and the relationship between Christianity and the Bible (in which rather stereotypically New Atheist failures of understanding have been displayed by a few commenters).

    I love SSC, I like much of what I’ve read on LW, and I consider myself a sympathetic outsider. But when rationalist types give off a strong Dunning-Kruger vibe in discussions of the humanities stuff I do know, it makes me warier than I might otherwise be about accepting the community’s competence when it makes iconoclastic pronouncements in math/science areas I’m not fit to evaluate.

    Now, appealing to us sub-Triple Nine Society IQ-level humanities types might not be of any great interest to the rationalist(LW) community–CFAR, LW, etc. Maybe we’re not seen as being worth your attention. But to the extent that we are, I think it bears keeping in mind that “judge this guy’s competence in what I don’t know by his competence in what I do” is a common heuristic. There are exceptions: I’m unimpressed with Lawrence Krauss’ grasp of metaphysics, but I don’t doubt his competence on physics, because he has establishment credentials. But if people who scorn such credentials give me a Dunning-Kruger vibe when they talk about what I think I grok, it leads me to discount their opinions a bit.

    (This should go without saying, but the D-K vibe is not given off by Scott, or by ALL the commenters here. Not at all.)

    • Samuel Skinner says:

      “and the relationship between Christianity and the Bible (in which rather stereotypically New Atheist failures of understanding have been displayed by a few commenters). ”

      Because if tradition is a grounding, the fundamentalists have been laying that down over the last century. Unless you are arguing only certain traditions count; that generally ends with “everything I don’t like is a false tradition”.

      • Irenist says:

        The discussion upthread was about whether ways of reading (and trying to follow) the Bible that differ from U.S. fundamentalist and creationist ways can properly count as “taking the Bible seriously.” I don’t think anyone was arguing that U.S. fundamentalists and creationists don’t “take the Bible seriously”; rather, the argument was about whether they are the ONLY Christians who can properly be said to take it seriously. So “arguing only certain traditions count” is precisely the opposite of what I was doing.

        ETA: There’s a common shape to debates between New Atheists and traditionalists like the Catholics and Orthodox:
        1. The New Atheist convincingly demolishes some claim typical of a subset of conservative U.S. Protestantism (like creationism, e.g.).
        2. The traditionalist protests that creationism (or whatever) isn’t even something her tradition holds.
        3. The New Atheist accuses the traditionalist of “moving the goalposts” by practicing “sophisticated theology TM” that few people have ever really believed in.
        4. The traditionalist cites some ancient worthy (e.g., Augustine or Aquinas or one of the Eastern Fathers) as holding the same position she does, to demonstrate that the goalposts have been firmly planted for many centuries, and she’s not moving them.
        5. Some of the inferential distance having hopefully been bridged, the rest of the debate MAY get more interesting at this point.
        *
        In short, the typical New Atheist move is to say that traditionalists aren’t “really Christians” because they’re not Young Earth Creationists, or don’t follow Old Testament dietary laws, or whatever.

        This seems to happen because Anglophone New Atheists have often absorbed the Protestant assumptions of Anglophone culture, which include the doctrine of the “perspicuity of Scripture,” so it often seems natural to them that anyone with no special training can pick up an English language Bible and figure out whether a given Christian sect is forthrightly following it or obfuscating the unattractive parts. The assumptions that (a) the meaning of the Bible is obvious to the non-expert reader, and (b) that obviousness is not in itself open to dispute save by people trying to obfuscate the bad bits of the Bible, appear remarkably naive to traditionalists. Traditionalists tend to hold with Augustine’s dictum that the Christian “should not believe the Gospel except as moved by the authority of the Catholic Church” (however a given traditionalist wants to define “catholic” here). The traditionalist is appalled that the New Atheist is (or at least appears) ignorant that something like this dictum has been normative Christianity for the majority of Christians ever.

        There’s a strong Dunning-Kruger vibe when rationalists exhibit this common New Atheist failure mode. That’s a shame because LW and its ecosystem do a lot of really cool stuff, and giving off a D-K vibe detracts from potential interest in that stuff.

        • Deiseach says:

          I mean, my problem with the whole “whales are not fish” thing is that it’s not an effective strategy against anyone who doesn’t hold that one particular literalist inerrantist view.

          It’s like someone waving around a can of (I don’t know if this exists) anti-skunk spray and saying how marvellously it gets rid of skunks, and when I say “Yeah, but this is Ireland, we don’t have skunks in Europe, your anti-skunk spray is doing nothing about the mouse infestation in your house”, they continue to insist that the real problem is skunks and mice are only a petty nuisance anyway and what is needed here is an even more effective anti-skunk spray.

          • Nick says:

            Actually, it’s even worse than that. It’s as if the person is telling you that mice can’t really be vermin anyway, because of course they don’t have the big bushy striped tail that skunks have!

          • Irenist says:

            Agreed, Nick.

            The weird thing is, definitional disputes like whether Catholicism is Christianity, or mice are vermin, are warned against persuasively and insightfully in the “Human’s Guide to Words” LW sequence: they’re like arguing about whether a tree makes a sound instead of just politely figuring out whether you’re curious about acoustic vibrations or auditory experiences. Accordingly, it’s not an error that ought to characterize a rationalist(LW), and yet these definitional arguments keep happening around here.

            ETA: Of course, definitional disputes also get us into Worst Argument in the World/Non-central fallacy territory. So again, why do they keep happening in SSC comments, of all places?

          • Nick says:

            Yeah, Irenist, I read that Sequence again earlier this year and everything that it says about definitions is illuminating (and relevant to the argument upthread). But be careful insisting that we should know better here: the last time I did that, I got yelled at for arrogance.

          • Samuel Skinner says:

            Because what is Christianity isn’t just a definitional dispute. Being THE Christian group means you know the way to salvation, which makes it important.

          • The thesis of A Human’s Guide to Words is ‘it matters a ton how we define our terms, so it’s extra important not to misuse them’. Not ‘it doesn’t matter how we define our terms’. Either perspective can justify ‘taboo your words’, of course.

        • Samuel Skinner says:

          “1. The New Atheist convincingly demolishes some claim typical of a subset of conservative U.S. Protestantism (like creationism, e.g.).”

          Creationism was the position of the church (and almost everyone) prior to Darwin.

          “4. The traditionalist cites some ancient worthy (e.g., Augustine or Aquinas or one of the Eastern Fathers) as holding the same position she does, to demonstrate that the goalposts have been firmly planted for many centuries, and she’s not moving them.”

          That doesn’t answer “sophisticated theology no one believes in”. It answers “sophisticated theology is new”.

          ” Traditionalists tend to hold with Augustine’s dictum that the Christian “should not believe the Gospel except as moved by the authority of the Catholic Church” (however a given traditionalist wants to define “catholic” here). ”

          Why? Do they not have a criterion they use that can be applied? And if they do, what happens when we apply it to other religions?

          • Troy says:

            Creationism was the position of the church (and almost everyone) prior to Darwin.

            This is true, but not that interesting. Creationism was the position of almost everyone because prior to Darwin there was no other credible scientific theory for the origin of life.

            In addition, young earth creationism is a largely 20th century phenomenon, at least among educated Christians. Prior to Whitcomb and Morris’s The Genesis Flood, most creationists were old earth creationists.

          • Samuel Skinner says:

            Darwinism doesn’t posit a mechanism for how life began. Darwinism does not contradict divine creation; the theory is silent on what the starting block was.

            There was a theory prior to Darwinism about how life began (spontaneous generation). I’m not sure how you can say it wasn’t a credible scientific theory compared to its contemporaries.

            “In addition, young earth creationism is a largely 20th century phenomenon, at least among educated Christians. ”

            I’m pretty sure old Earth Creationism dates to the 18th century when discoveries indicated the world was older than everyone thought.

          • Troy says:

            I was sloppy in my above post. By “origin of life” I meant the origins of the particular forms and distributions of life on earth.

            I think you are also right that old earth creationism only became the dominant viewpoint in the 18th century — although the fact that it became the dominant viewpoint suggests that Christians at the time did not see a young earth as an important theological (as opposed to scientific) commitment. I’m not as familiar with opinions before that time, but I think that some people defended an old earth. Certainly many theologians argued against a six-day creation, such as Augustine, who (as I mentioned upthread) maintained that the world was created in an instant.

          • Deiseach says:

            Actually, this is a point which I would love to see defined:

            What is meant by “creationism”? What do you (random person) mean when you use the term?

            If, by “creationist”, is meant “person who believes God is the Creator of the universe and all that is in it; Credo in unum Deum, Patrem omnipoténtem, factorem cæli et terræ, visibílium ómnium et invisibílium“, then yes, I’m a creationist and creationism is and was the position of the churches pre- and post-Darwin.

            If, however, what is meant by “creationism” is “person who is a Young Earth, Six-Days-of-Twenty-Four-Hours-each Creation literalist and a Biblical inerrantist holding the verbal-inspiration, inerrant-in-the-autographs view of Scripture, possibly also KJV-Only”, then I’m not, and there was a lot of debate on exactly this point in the historic churches, and the majority of current Christianity is not.

            And I think the second meaning is the one used most (often with a sprinkling of “they believe humans and dinosaurs co-existed” and “whales are not fish” on top), which means I have problems answering quizzes that go “are you a creationist?” because if you mean (a) I am but if you mean (b) I’m not, and how do I know which definition you’re using?

            And the really, REALLY big problem with definition (b) is that it is assumed if you answer “no” to such a question, then you’re not a Christian/believer/theist/you’re atheist, which – as I’ve said – is not the case.

            I want a little more clarity, if people are going to be dropping religion questions into quizzes (like the one about political views that Scott linked on here); don’t assume that a ‘one size fits all’ “Are you a creationist?” means what you think it means for the audience. I’d be guessing, in an American context, that such a question presumes definition (b) so I’d say “no”, and then the way the quiz is set up, this answer would mark me as “not religious” which would be false.

          • Troy says:

            Right, I was sloppy in that terminology too (at least I can blame that on inheriting the term from others upthread!). I think the term “special creationism” is clearer, as referring to the thesis that God created humans and other “kinds” directly in their modern forms. This was the position that Darwin was arguing against in his own time.

          • Nick says:

            I think it’s worth clarifying that the original problem with Samuel Skinner’s post was just that the line “Creationism was the position of the church (and almost everyone) prior to Darwin.” is misleading. The Church’s position on matters of science is provisional, and it has long been policy to accept scientific insight when it is beyond reasonable doubt and reinterpret accordingly. It is, after all, a prudent approach to interpretation: we shouldn’t conclude that a passage is only figurative until we can show it cannot be literal. This was much the same point made to Galileo by Cardinal Bellarmine: the Church can and readily will reinterpret Scripture if it is given reason to, but Galileo hadn’t given it reason to, because the scientific difficulties (stellar parallax, etc.) weren’t yet resolved.

      • Deiseach says:

        What interested me was the familiarity of the way Scott reacted to Almost Diamonds’ post, because as I said, I’ve been through this on the religion side.

        Almost Diamonds: My understanding of a rationalist is this, this and this, and that, that and that are my refutation of rationalism and why empiricism is better.

        (Even the “rationalism is the equivalent of theology” line made me laugh here).

        Scott: What the FUDGE is this woman on about? She’s constructed her own version of what rationalism and rationalists are, then refuted that! When I, as a rationalist, would like to give you this long line of evidence as to why RATIONALISTS ARE NOT LIKE WHAT SHE SAYS.

        (I would not be one iota surprised if Almost Diamonds or a defender came back with the “but you’re not representative of rationalism/your version of rationalism is not the one I’m seeing commonly and encountering amongst the majority of self-described rationalists, ergo it is not REAL rationalism”).

        I pointed out that this was very much like the reaction I get:

        Person: Boy, those silly Bible-bashers and the silly things they believe! Can you credit it, they are so absolutely wedded to their notions that, if you point out to them that whales are not fish, that completely collapses their house of cards?

        Me: Er, you know, that’s not what the majority of global historic Christianity has believed?

        Them: Yeah, but your brand of Christianity is not representative/not the version I am most commonly seeing or encountering amongst self-described Christians, so your version is not REAL Christianity.

        And then RCF hopped in with the exact line I’ve encountered amongst several varieties of (mainly though not exclusively) American non-mainline Protestantism which denies that Catholicism (for one) is truly Christian. No, only the particular strand of American literalism is truly true representative Christianity!

        I’m not sure if RCF is (ex-) Protestant or one of the “I know what Christianity really is better than you, Christian” types who hover around being atheist or agnostic or ‘reality-based community’ or whatever definition of “I don’t believe that junk” they find most fitting.

        • Irenist says:

          Deiseach,

          The parallels between the kinds of issues LW folks encounter and those religious groups encounter are indeed fun to think about, and might even be instructive.

          Here’s another example:
          I’ve been saying in this thread that it’s a shame that some rationalists give off a Dunning-Kruger vibe when they talk about religion (or philosophy), because for us humanities types, it kind of discredits their musings on math a bit.

          Well, to my delight, EY roughly parallels my point from the other side in his old “Outside the Laboratory” LW post:

          what are we to think of a scientist who seems competent inside the laboratory, but who, outside the laboratory, believes in a spirit world? We ask why, and the scientist says something along the lines of: “Well, no one really knows, and I admit that I don’t have any evidence – it’s a religious belief, it can’t be disproven one way or another by observation.” I cannot but conclude that this person literally doesn’t know why you have to look at things. They may have been taught a certain ritual of experimentation, but they don’t understand the reason for it – that to map a territory, you have to look at it – that to gain information about the environment, you have to undergo a causal process whereby you interact with the environment and end up correlated to it….

          If, outside of their specialist field, some particular scientist is just as susceptible as anyone else to wacky ideas, then they probably never did understand why the scientific rules work. Maybe they can parrot back a bit of Popperian falsificationism; but they don’t understand on a deep level, the algebraic level of probability theory, the causal level of cognition-as-machinery. They’ve been trained to behave a certain way in the laboratory, but they don’t like to be constrained by evidence; when they go home, they take off the lab coat and relax with some comfortable nonsense. And yes, that does make me wonder if I can trust that scientist’s opinions even in their own field – especially when it comes to any controversial issue, any open question, anything that isn’t already nailed down by massive evidence and social convention.

  43. Anonymous says:

    Are there any examples of Yudkowsky calling his system of rationality “rationalism”? Searching LessWrong produces only 116 (or 114, oddly) posts where his name and “rationalism” occur together, and most of those appear to be other people using the term carelessly to mean something like “believing rationality is important”, which remains historically incorrect. I’m not going to go through all the posts to be certain, but it appears that Yudkowsky is rather consistent in calling what he does “rationality” and not “rationalism”, presumably because he knows the difference.

    • Eliezer uses “rationalism” a very very small number of times, e.g.: “Behaviorism probably deserves its own post at some point, as it was a perversion of rationalism; but this is not that post.” You’re right that he generally says “rationality”; “rationalism” on LW is mostly a back-formation from “rationalists”, his go-to name for people interested in (applied/technical) rationality. I think Luke dislikes the word “rationalism” too.

      • Nornagest says:

        There was a push to use “aspiring rationalist” for a few years, too. I personally think that’s actually worse than an unadorned “rationalist”: it retains all the ugly implications of an -ism and also tacks on a fairly obvious allusion to the Perfect Idealized Rationalist, which as a touchstone is only minimally useful from a self-cultivation perspective and fantastically, horribly bad from an outreach/PR perspective.

        • “Aspiring rationalist” seems fine to me. It would dovetail nicely with “aspiring effective altruist,” a term a lot of EAs like because it doesn’t imply you think your approach to philanthropy is highly effective. Obviously our goal is to be rational, effectively altruistic, etc., but as long as we don’t confuse the map with the territory we can take a sober look at how likely we are to be meeting our ideals.

  44. Anonymous says:

    Just read most of these comments and I have to ask: Are SSC comments always this terrible? At least on LW they’d be downvoted into oblivion and you can just focus on reading the good.

  45. Wulfrickson says:

    EDIT: this was meant to be @Anonymous above

    I’ve lurked here for a year and commented occasionally for a few months. I think comment quality has gone downhill markedly – the usual scourge of Internet popularity, helped along by an influx of mindkilled Reddit or Instapundit readers whenever Scott writes something critical of feminism. With a couple exceptions, the best commenters were all here a year ago when I started lurking and typical posts got ~50 comments instead of ~300.

    We’ve considered and rejected a voting system a couple times, so just memorize the names of the best commenters, or stick to the threads on statistics and medicine and such.

  46. Steve Sailer says:

    Robert A. Heinlein’s 1948 novel “Space Cadet” is first on Mr. Yudkowsky’s reading list of Books of My Youth. Back in 2011 I reviewed the first volume of the giant Heinlein biography, and what I was most struck by was something that never quite happened. While Heinlein’s buddy L. Ron Hubbard founded a cult and Ayn Rand founded a cult, Heinlein never founded a cult, even though some of his readers would have joined one.

    What it sounds like to me is that Mr. Yudkowsky has started the Heinlein Cult (cf. “The Moon Is a Harsh Mistress”): polyamory, AI, etc.

    That strikes me as a pretty cool thing to do.

    • Anonymous says:

      Although Heinlein didn’t form it himself, the Church of All Worlds was inspired by Stranger in a Strange Land. The same people also coined “polyamory”.

    • Protagoras says:

      Heinlein was a better writer than either Rand or Hubbard, and so didn’t need to start a cult to get people to buy his books.

    • Nornagest says:

      I think the causality goes the other way round. Eliezer didn’t start the Cult of Heinlein; the people who thirty years ago would have joined the real Cult of Heinlein (i.e. literary SF fandom) found Eliezer instead after that fandom lost most of its former nerd cred.

      The cluster of beliefs you’re pointing to is surprisingly old. I have some friends who were born into it. As regards its contemporary form, note also that its social aspects aren’t strongly emphasized in Eliezer’s writing; it’s mainly a phenomenon of the culture around him.

  47. Steve Sailer says:

    So here’s a question: Since Yudkowskyism is a sort of Golden Age of Sci-Fi-based movement (e.g., Friendly AI is an updating of Asimov’s pre-War rules for robots), why do things like polyamory, libertarianism, and transgenderism show up disproportionately both in Golden Age Sci-Fi and among Yudkowskyites? What are the common denominators? Is it just Heinlein’s influence? Or did Heinlein stumble upon a grouping of obsessions that don’t look like they have much in common, but actually do? And if the latter, what do they have in common?

    • blacktrance says:

      I think a large part of it is that nerds are disproportionately likely to be drawn both to sci-fi and libertarianism, because of their personality and interests. Low religiosity, low concern with purity, authority, and loyalty, an analytical cognitive style, introversion, at least some interest in science, a higher-than-average probability of enjoying reading (and arguing on) the Internet – there’s definitely a cluster of people like that. People like that are likely to actively speculate about ways in which the world could be better. Those traits are conducive to an interest in sci-fi, support for libertarianism, practicing polyamory, and probably to identifying as trans.

  48. Jos says:

    A number of people have almost certainly already said this, but IMHO, as an empiricist, I define rationalists by observing how the people who identify as rationalist act and then generalizing. AFAICT, you guys have at least as much Hume as Descartes in your makeup.

  49. Anonymous says:

    People, don’t be empiricists or rationalists (in Descartes’s sense); try to apply the best methods available for a given situation or question you are trying to answer. Sometimes data isn’t available; that doesn’t mean doing nothing is the best we can do. For example, we can try to predict the future even if we can’t obtain data about it. Sometimes we will be wrong. The most important thing is not avoiding ever being wrong; it is to maximize our correctness in general. Don’t be loyal to your methods. Instead, try to arrive at correct beliefs using the methods that are most suitable for the situation.

  50. Anon256 says:

    I think for example the typical math grad student is much better at the virtue of humility than the typical LWer.

    The lack of humility seems particularly clear in the intersection between the LW community and the startup community; I’ve heard several prominent LWers explicitly reject the outside view with respect to their own startups and plans while admitting they have little epistemic basis to do so. (I think irrational optimism may be almost inherently required to seriously pursue a startup.)

    Also recall that Scott’s own survey showed systematic overconfidence on the part of LWers.

    • ozymandias says:

      It seems like that sort of confidence might be instrumentally rational even if it is not epistemically rational. Once you’ve decided that a startup is a good thing to do with your life (perhaps you’re pursuing a high-risk/high-reward strategy), fretting about how unlikely you are to succeed seems like a waste of energy.

      • Anon256 says:

        As noted elsewhere in this thread, LW often sweeps the distinction between epistemic and instrumental rationality under the rug, but the references to the Litany of Tarski and the text of the Twelve Virtues suggest prioritizing epistemic rationality when forced to choose. It’s therefore jarring to see prominent LWers appearing to throw epistemic rationality out the window for the sake of unclear and questionable instrumental benefits.

        Even if the instrumental benefits of overconfidence were on much firmer ground, I suspect Almost Diamonds (or a steelman thereof) cares more about epistemic correctness than instrumental success. So if LWers’ lack of humility is connected to prioritizing the latter over the former then this is a legitimate criticism from that point of view.

    • David Hunt says:

      I could be wrong (don’t think so) but humans in general do pretty badly on credence calibration. The question to ask is how they’re performing with respect to the general populace, and I don’t know the answer to that.

    • Frog Doe says:

      As a math grad student, I’d argue we’re better at humility with respect to math, but show no demonstrable difference in humility outside our domain (when compared to populations with similar IQs). I’m getting the distinct feeling that this is going to turn into some sort of timeless “fully-general humility counterargument,” under which all future claims will be rejected for insufficient humility.

      • Anonymous says:

        Indeed. Arguing about humility or arrogance is a distraction from the more important questions. It is better to be correct and arrogant than incorrect and humble. However, you must find a way to test whether you are actually correct.

        • Anonymous says:

          >It is better to be correct and arrogant than incorrect and humble.

          Is it?

          • David Hunt says:

            If you’re correct on some point, and someone is incorrect on it, and you decide to trust their conclusions over your own? You’ve failed.

            (Ed: and, well, caveats. You could be correct by chance and a terrible argument.)

      • Ilya Shpitser says:

        People who prove theorems for a living have to have a working understanding of a lot of the things LW talks about; e.g., a good skill of “noticing I am confused” is basically a prerequisite for doing serious math.

        • Anonymous says:

          Yes, I agree that this skill is necessary in mathematics. But you should do that even outside your field.

          Speaking generally, most other areas of life would benefit from being more like mathematics.

          However, being a good mathematician doesn’t guarantee (although it helps) that one is a good thinker when it comes to other fields.

      • Anon256 says:

        Part of my claim was that math grad students were better about humility outside their domain than LWers; math grad students were my example of a “population with similar IQ” I felt comfortable comparing to. Do you disagree with this?

        I’m not bringing up humility as a counterargument to some object-level claim. This is a meta-level discussion of the “rationalist movement” and humility is a significant part of how the “rationalist movement” defines itself (c.f. virtue #8), so it seems reasonable to ask whether they are worse at it than a similar reference class.

        • David Hunt says:

          I think this sounds like a great experiment to run.

          Though I don’t think credence calibration is all that’s included in virtue ‘humility’.

        • Steve Sailer says:

          Mathematicians tend to be quite aware that there are other mathematicians who are more talented than they are. Math talent is quite objective. Similarly, professional musicians ranked as the least narcissistic celebrities appearing on the Carolla-Pinsky radio show, according to a standard psychological questionnaire Dr. Drew gave them. The most narcissistic celebrities were female reality TV stars.

          • Anonymous says:

            Narcissism and arrogance are quite different, though. I’m extremely narcissistic and self-absorbed, but I readily admit that there are tons and tons of people more talented than me in the world. They just don’t matter as much as I do.

          • another says:

            source

            I am surprised. Competence seems so unimportant for the success of popular musicians. Mainly their job is to be celebrities. It is much more important for comedians, who scored highly in the study.

          • Ilya Shpitser says:

            I agree that “math talent” is objective, but I disagree that there is a way to order mathematicians from most to least talented. There are definitely people (e.g. Terry Tao) who are better than others on almost every dimension, but generally math is quite specialized. You can be a luminary in one area but have nothing to contribute to another. One common divide mathematicians talk about is “analysts” (?driven by visual processing?) vs “algebraists” (?symbol manipulators?).

            This is another instance of an implicit “single parameter model of intelligence” a lot of people seem to have inherited from early psychometrics.

          • Anonymous says:

            Ilya, Steve’s point does not require or imply a single dimension. Virtually all mathematicians are humbled by someone dramatically better in their own area.

            What leads to your historical claim? Mathematicians’ view of talent has been stable through all recorded discussion I have seen, at least a century. That is young enough to be influenced by psychometrics, but I think the reverse is more likely.

          • Ilya Shpitser says:

            I think Steve can speak for himself.

            I am not talking about the mathematician’s view of talent (which would consider e.g. Ramanujan as extremely gifted, but only in a particular area), but Steve’s view.

          • Anonymous says:

            You put words in Steve’s mouth just to condemn them?

          • Anon256 says:

            Many LWers/rationalists are humbled by others’ obviously greater abilities as well, but this doesn’t seem to decrease their overconfidence with respect to practical problems or the rest of society, so I don’t think this explains much of the difference in humility between math grad students and LWers that I remarked upon.

          • nydwracu says:

            I am surprised. Competence seems so unimportant for the success of popular musicians. Mainly their job is to be celebrities. It is much more important for comedians, who scored highly in the study.

            I’m not surprised about comedians. A lot of comedy these days is about taking on some sort of posture of superiority. And they generally seem not to be right in the head.

            Note that a lot of the Something Awful diaspora went into standup.

  52. MicaiahC says:

    Scott, I don’t know if you’re still reading the comments, but considering that Almost Diamonds has posted a response explicitly denying that she meant Less Wrong in particular, I think it would be fair to add an edit somewhere saying as much.

    The post is still terribly unclear about whom it is referring to, as well as being… well, very equivocate-y, but I think section III changes substantially in meaning with and without the context of the response.

  53. Pingback: You can’t optimize anything, literally | Rival Voices

  54. Pingback: Link Archive 11/8/14 – 12/16/14 » Death Is Bad

  55. Pingback: On examining evidences for points of view, etc | The Daily Pochemuchka