Five Years and One Week of Less Wrong

[content warning: will be boring for non-LWers]

I.

Last week was the fifth birthday of Less Wrong. I thought I remembered it being started on March 11, but it seems to have been more like March 5. I was going to use that occasion to talk about it, but now I will just have to talk about it awkwardly five years and one week after it was started.

I wrote a post a while ago called Read History Of Philosophy Backwards. I theorized that as old ways of thinking got replaced by newer ways, eventually people forgot the old ways even existed or were even coherent positions people could hold. So instead of reading Hobbes to tell you that people can form governments for their common advantage – which you already know – read him to tell you that there was a time when no one believed this was true and governments were natural structures ordained by God.

It makes sense that over five hundred years, with births and deaths and so on, people would forget they ever held strange and incomprehensible positions. It’s more surprising that it would happen within the course of a single person’s philosophical development. But this is what I keep hearing from people in the Less Wrong community.

“I re-read the Sequences”, they tell me, “and everything in them seems so obvious. But I have this intense memory of considering them revelatory at the time.”

This is my memory as well. They look like an extremely well-written, cleverly presented version of Philosophy 101. And yet I distinctly remember reading them after I had gotten a bachelor’s degree magna cum laude in Philosophy and being shocked and excited by them.

So I thought it would be an interesting project, suitable for the lofty milestone of five years plus one week, to go back and try to figure out how far we have progressed without noticing that we were progressing.

A while ago I wrote about how the idea that beliefs are probabilistic is deeply unintuitive and something most people never grasp. I don’t mean complicated, controversial ideas about how you should be willing to bet on every belief or anything like that. I mean the drop-dead basic “You are not 100% certain of everything you believe”.

Were we ever this stupid? Certainly I got in fights about “can you still be an atheist rather than an agnostic if you’re not sure that God doesn’t exist,” and although I took the correct side (yes, you can), it didn’t seem like oh my god you are such an idiot for even considering this an open question HOW DO YOU BELIEVE ANYTHING AT ALL WITH THAT MINDSET. I remember being very impressed by Robert Anton Wilson’s universal doubt, and not as disgusted as I should have been by people making arguments like “If there’s any chance at all a criminal might re-offend, we shouldn’t let them out of jail”. In all of these cases I was sort of groping at the right idea, but I didn’t have a framework for it, couldn’t put exactly what I meant into obviously-correct words.

But that’s Overcoming Bias stuff, Sequence stuff. What have we done on Less Wrong, in the past five years and one week?

II.

It was around the switch to Less Wrong that someone first brought up the word “akrasia” (I think it was me, but I’m not sure). I remember there being a time when I was very confused and scandalized by the idea that people might engage in actions other than those rationally entailed by their beliefs. This seems really silly now, but at the time I remember the response was mostly positive and people upvoted me a lot and said things like “Huh, yeah, I guess people might engage in actions other than those rationally entailed by their beliefs! Weird! We should worry about this more!” For a while, we were really confused about this, and a really popular solution (WHICH I ALWAYS HATED) was to try to imagine the mind as being made up of multiple agents trying to strike a bargain. Like, your conscious mind was an agent, your unconscious mind was an agent, your sex drive was an agent, and so on. Ciphergoth was the first person to help us get out of this by bringing up hyperbolic discounting (there was a time Less Wrong didn’t know about hyperbolic discounting!). After that we started thinking more in terms of non-goal-directed systems. I remember when someone (Richard Kennaway?) first posted a lot of oversold claims about Perceptual Control Theory, one of which was that it was a system that acted purposefully without modeling a goal, and I – and a lot of other people – commented that that was poppycock and not even possible. Later Anna posted Humans Are Not Automatically Strategic and I posted The Blue-Minimizing Robot, where we both started realizing the importance of non-goal-directed behavior in human affairs. This freed us from having to talk about “bargaining with unconscious agents” and allowed us to become more interested in things like behaviorism, even though I think later we became too willing to accept behaviorist just-so stories.
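
(To make the hyperbolic discounting point concrete, here is a minimal sketch – invented numbers, not anything from the original LW threads – of why it dissolves the need for bargaining sub-agents: hyperbolic discount curves cross as a reward gets close, so a single agent with one fixed value function still exhibits akrasia-like preference reversals.)

```python
# A minimal sketch with invented numbers: choosing between $50 at delay t
# and $100 at delay t+5, viewed from far away (t=10) and up close (t=0).

def exponential(reward, t, delta=0.9):
    # Exponential discounting is time-consistent: preferences never reverse.
    return reward * delta ** t

def hyperbolic(reward, t, k=1.0):
    # Hyperbolic discounting overweights the near term, so the two
    # curves can cross as a reward approaches.
    return reward / (1 + k * t)

for t in (10, 0):
    hyp = "small" if hyperbolic(50, t) > hyperbolic(100, t + 5) else "large"
    exp = "small" if exponential(50, t) > exponential(100, t + 5) else "large"
    print(f"delay {t}: hyperbolic prefers {hyp}, exponential prefers {exp}")

# delay 10: hyperbolic prefers large, exponential prefers large
# delay 0: hyperbolic prefers small, exponential prefers large
#
# One hyperbolic discounter "plans" to take the larger, later reward and
# then reverses itself as the smaller reward gets close: akrasia without
# any bargaining between sub-agents.
```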

I remember it was around the beginning of Less Wrong when I first realized decision theory was a thing that existed. This was mostly my personal ignorance, but given that I read nearly every comment on Overcoming Bias and Less Wrong, it must have been pretty obscure to the rest of the community as well to escape my attention for so long. I remember making a post that was very much about decision theory even though I didn’t know it, and then getting in a long argument with Eliezer that either of us would now be able to resolve in three seconds by saying “Scott, you’re rounding this off to Counterfactual Mugging; Eliezer, you’re rounding this off to Smoking Lesion; agree on where you’re sticking the locus of decision and the apparent difference between your viewpoints will disappear.” But Counterfactual Mugging didn’t even appear as a concept until after this time! Eliezer didn’t discuss Timeless Decision Theory until six months after Less Wrong started, and I’m trying to imagine how our moral discussions must have taken place without it (“imagine Kant, only rigorous”). We didn’t get a really good explanation of different decision theories presented to everyone until three years after LW started.
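
(For anyone who, like 2009-era Less Wrong, has never seen decision theories laid side by side, here is a minimal sketch of the classic fault line, using a toy Newcomb’s problem with invented payoffs and predictor accuracy – the standard EDT-versus-CDT contrast, not the timeless/updateless machinery alluded to above.)

```python
# A toy Newcomb's problem (invented payoffs and accuracy; the textbook
# EDT-vs-CDT contrast, not MIRI's actual formalism). A predictor fills an
# opaque box with $1,000,000 iff it predicted you would take only that box;
# a transparent box always holds $1,000.

P_CORRECT = 0.99  # assumed predictor accuracy

def edt_value(action):
    # Evidential decision theory: treat the action as evidence about
    # what the predictor already did.
    p_filled = P_CORRECT if action == "one-box" else 1 - P_CORRECT
    return p_filled * 1_000_000 + (1_000 if action == "two-box" else 0)

def cdt_value(action, p_filled):
    # Causal decision theory: the box's contents are fixed, so the
    # action cannot change p_filled.
    return p_filled * 1_000_000 + (1_000 if action == "two-box" else 0)

actions = ["one-box", "two-box"]
print("EDT picks:", max(actions, key=edt_value))  # one-box

# Under CDT, two-boxing dominates for any fixed fill probability:
print("CDT picks:", max(actions, key=lambda a: cdt_value(a, p_filled=0.5)))
```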

There was a time before we had Eliezer Yudkowsky Facts, and that time must have been very sad to live in.

There are a couple of terms I coined – not because I’m especially smart, but because I talk to a lot of Less Wrongers and am especially sensitive to when they are using interesting ideas but haven’t yet named them or discussed them with anyone else – that I now can’t imagine living without. The idea of meta-contrarianism – or at least of signaling hierarchies – is one of them, and I find it especially useful in understanding some aspects of social class; more awareness of the nature of class seems to be something else the community has gotten better at.

Likewise, this idea of trivial inconveniences having outsized effects has been very helpful to me, and got expanded into discussion of trivial fears, the possibly-grammatical-or-not trivial impetuses, the sort of related concept of an ugh field, and most important, this idea of fearing the twinge of starting. All of these make the internal experience of motivation make a lot more sense.

It wasn’t until well into the Less Wrong era that our community started to become aware of the problems with the scientific process. This wasn’t because we were behind the times but because the field was quite new; Ioannidis didn’t publish his landmark paper until 2005, and it languished in specialized circles until the Atlantic picked it up in 2010. But as early as December 2009, Allan Crossman, working off a comment of Eliezer’s, wrote Parapsychology: The Control Group For Science. This pointed out that the frequent ability of parapsychology experiments to discover “telepathy” or “psionics” may not just be about them being especially dishonest, but may also demonstrate a fundamental bias of the scientific system towards spurious positive results. The first I heard about Ioannidis and medical replicability was a Less Wrong article from Nancy Lebovitz, which then got further explored by gwern and Luke. I have since come out against the strongest interpretations of their claims, but it’s hard to imagine that as recently as five to ten years ago no one was really talking about the problems with medical studies or had good evidence anything was wrong with them.

I don’t know if there’s a good history of the efficient charity movement, but it seems to be pretty recent. GiveWell was founded in 2007. Giving What We Can was founded in 2009 by Toby Ord, a frequent commenter on the Less Wrong Sequences. 80000 Hours was founded in 2011 by William MacAskill and Benjamin Todd, both of whom are frequent Less Wrong commenters. I don’t want to overstate the effect we’ve had on the efficient charity community – all of these people got into LW from EC rather than vice versa, and we were not super important influences on any of them. And I don’t think we’ve ever been the super-cutting-edge. But I think we helped spread some ideas and give better philosophical grounding to others. I remember Money – The Unit of Caring and Purchase Fuzzies and Utilons Separately as having a big impact on me. A lot of people have told me that my 2010 LW post explaining efficient charity was important to them, and I’ve gotten requests to republish it in efficient charity books and manifestos. And in terms of just really neat philosophical framework-grounding, my most recent discovery is Katja Grace’s In Praise Of Pretending To Really Try, which neatly solves what was previously an irritating conceptual gap. On a purely practical level, after the past five years of exploring the concept, 30% of Less Wrongers now consider themselves effective altruists, and between them they donate over a million dollars to charity per year.

It continues to puzzle me that there was a time when I didn’t know what a Schelling point was. I imagine myself just sort of wandering through life, not having any idea what was going on or why. I’m pretty sure I corrected this well before I joined Less Wrong, but it wasn’t until I pieced together some information from a couple of Vladimir_M comments that I coined the term “Schelling fence”, which was in itself a miniature version of this kind of revelation. For a while I thought people were insufficiently impressed with how important this was, so it makes me happy that people are starting to use it more. In terms of barrier-related metaphors, there’s also the Chesterton Fence, which has been around for eighty years but which I almost never hear mentioned outside the LWosphere. I feel like knowing about these two things has dramatically improved my ability to discuss politics intelligently.

A while ago I got an unusually weird complaint about my blog – someone didn’t like that I used the word “meme” a lot when memetics was an “unproven theory”. I objected that it was less of a theory than an extended metaphor. Some of the work done at Less Wrong has either tried to make it more of a theory, or at least to extend and flesh out the metaphor. My favorite of these posts is Phil Goetz’s Reason As Memetic Immune Disorder. This seems to have permeated the culture enough that when I refer to my tendency to get freaked out and angry about ideas I myself support as “a memetic autoimmune disorder”, everyone seems to understand what I’m talking about.

We’ve also moved in some interesting directions on friendships and relationships. My mind boggles to remember that for several years Less Wrong was not associated with polyamory, and that when I first met Alicorn she was mildly against it. Aside from a few very early adopters like Patri and Tilia, I don’t think it was really talked about in the community until Alicorn wrote Polyhacking in 2011. Two and a half years later, 15% of Less Wrongers consider themselves poly – a small minority, but still way more than the general population – and I think we’ve become pretty good at developing social norms for dealing with that. For example, just two months ago, Brienne discussed her idea of Tell Culture, and while I have serious reservations about it that I’ve been meaning to get around to discussing, I agree that this is the sort of direction we should be thinking in, and that this kind of “invent new and better ways of interacting with people” project is what makes me excited to be part of a community trying them.

III.

I’ll end with something that recently encouraged me a lot. Sometimes I talk to Will Newsome, or Steve Rayhawk, or Jennifer RM, or people like that in the general category of “we all know they are very smart but they have no ability to communicate their insights to others”. They say inscrutable things, and I nod and pretend to understand because it’s less painful than asking them to explain and sitting through an equally inscrutable explanation. And recently, the things that Will and Steve and Jennifer were saying a couple of years ago have started making perfect sense to me. The things they’re saying now still sound like nonsense, but now I can be optimistic that in a few years I’ll pick up those too.

I find this really exciting. It suggests there’s this path to be progressed down, that intellectual change isn’t just a random walk. Some people are further down the path than I am, and report there are actual places to get to that sound very exciting. And other people are around the same place I am, and still other people are lagging behind me. But when I look back at where we were five years ago, it’s so far back that none of us can even see it anymore, so far back that it’s not until I trawl the archives that I realize how many things there used to be that we didn’t know.

I don’t think Less Wrong ever reached the insight-a-minute pace of the Sequences. But it’s been pretty enlightening. And over the course of five years and one week, that has really, really added up.

180 Responses to Five Years and One Week of Less Wrong

  1. Sniffnoy says:

    “Parapsychology: the control group for science” link is broken (missing http://).

  2. Vaniver says:

    I remember when someone (Richard Kenneway?) first posted a lot of oversold stuff about Perceptual Control Theory, one of which was that it was a system that acted purposefully without modeling a goal, and I – and a lot of other people – commented that that was poppycock and not even possible.

    Intriguing. I’m reading through Behavior: The Control of Perception now because I saw pjeby comment positively about it, and I have learned that if pjeby likes it, I probably will like it (for ~three things I discovered and then excitedly searched LW for, the only discussion I saw was by pjeby, years before I found it).

    But if the description of it that you got is “acted purposefully without modeling a goal,” then I think Kennaway probably did a terrible job of explaining it. I’ll increase my priority to finish the book and write a LW post about it.

    • Scott Alexander says:

      It’s more likely that I’m misremembering things than that he’s getting them wrong. I’ve been informed it’s correctly spelled “Kennaway” – try searching for that.

      • Vaniver says:

        Yeah, I came across those posts easily enough, and am not terribly impressed by their explanations. The primary criticism I’m seeing on LW is from SilasBarta and orthonormal, and their comments seem to be “I don’t get why you’re excited” rather than “I think this is incorrect.” The only comment I’m seeing from you on a PCT post is this one, though; am I missing another criticism post?

    • Since I always enjoyed pjeby’s comments, I’m curious as to what the other things were.

  3. Andrei says:

    “for several years Less Wrong was not associated with polyamory”

    Does anyone know when EY started being poly? (The first public mention that I know of is Nov 2011 – http://hpmor.com/notes/chapter-77/ )

    “Sometimes I talk to Will Newsome”

    Yeah, I have recently discovered his old posts on LW and Computational Theology and they seemed really promising and fascinating. Pity they weren’t developed further.

    • Steve says:

      Will had a computational theology blog up for about five posts; but that was over a year ago, and he generally seems embarrassed about ideas he had more than a year ago.

      I haven’t seen any [Rayhawk, Jennifer, Newsome] long-form writing anywhere for a few years, and Newsome only on Twitter. Anybody else seen more?

      • Will Newsome says:

        he generally seems embarrassed about ideas he had more than a year ago

        Nowadays I tend to be more embarrassed by how I put things than by the ideas themselves. This is sort of worrying, ‘cuz the last year marks the first time that I don’t look on my past self with seething contempt. Previously I’d taken that contempt as a sign of progress.

        For what it’s worth I never got to the point of Computational Theology where I was explaining ideas that actually interested me; instead I got stuck on (and eventually demotivated by) having to explain prerequisite ideas and background assumptions.

        I haven’t seen any [Rayhawk, Jennifer, Newsome] long-form writing anywhere for a few years, and Newsome only on Twitter. Anybody else seen more?

        Private IRC channels have become the primary medium of communication, or at least a primary medium. Talking to a large audience is slow and tedious. This is especially true since Less Wrong entered Eternal September a year or three ago.

        • Steve says:

          I agree. I should come back to good intentions, if you’re still around there and don’t mind a lurker.

  4. a person says:

    This is my memory as well. They look like an extremely well-written, cleverly presented version of Philosophy 101. And yet I distinctly remember reading them after I had gotten a bachelor’s degree magna cum laude in Philosophy and being shocked and excited by them.

    This is what’s crazy to me. If the sequences are simple and true and essentially summing up the work of outside authors in a convincing way, then why are they not Philosophy 101? Why are there so many deontologists and “there is no such thing as truth”-ists among philosophy professors? Are the sequences wrong? Are the sequences right, but overrated somehow? Or is Yudkowsky truly a prophet? Is society truly insane?

    I don’t know if there’s a good history of the efficient charity movement, but it seems to be pretty recent. GiveWell was founded in 2007.

    This is some crazy shit to me. I remember when I was fourteen or so I came up with efficient charity on my own: “Isn’t it kind of messed up that anybody donates money to local ‘give the football team a nicer gym’ charities, when that money could go to the best charity for starving African children instead?” I’m not trying to argue that I’m particularly smart, just that the idea is obvious and very important. How did it not exist until 2007? Is society truly insane?

    EDIT: When I first discovered LW a year ago and read the sequences, my reaction was: “This is the most amazing thing I have ever read, this has changed my life, Eliezer Yudkowsky must be the smartest man on the planet.” Then time passed and my stance changed to “It’s just a website where we talk about basic philosophy and findings in cognitive science, Eliezer is smart but it’s not like he solved all the problems of philosophy forever himself.” Which of these positions do you think is closer to the truth?

    • Brian says:

      Most of the cog-sci stuff in the Sequences isn’t original to the Sequences — Daniel Kahneman can get you about 70% of the way there (though Thinking Fast and Slow, his best summary work, wasn’t published until well into the LW era), and a working knowledge of the famous bits of social psychology, plus Dawkins on memetics (and the gene-centric evolution that begat them) and maybe Cialdini and Eric Hoffer for politics stuff, can fill in most of the gaps. I don’t know as much about the decision theory side, but I get the impression that the situation’s much the same there.

      The Sequences are, however, a really good popularization, especially for the kind of people that read Goedel, Escher, Bach and ordinarily wouldn’t get within fifty feet of a social psychology textbook without warding themselves against the evil eye. And there’s something to be said for that, definitely.

      • I don’t know as much about the decision theory side, but I get the impression that the situation’s much the same there.

        Oh, this is definitely not the case. The ‘logical decision theory’ stuff Eliezer and MIRI work on has a few interesting parallels in the literature, but on the whole it’s extremely heterodox.

      • Eliezer Yudkowsky says:

        “and essentially summing up the work of outside authors in a convincing way” – No. This is something that people say to make a Display of Independence and show off how much they don’t believe in Yudkowsky. That said…

        “Most of the cog-sci stuff in the Sequences isn’t original to the Sequences” – Yes, of course. In cognitive science especially, 98% of the experiments and 95% of the hypotheses I use are standard in the literature, and >50% of the interpretation I write afterward.

        I think it’s safe to say that something like 60% of the Sequences are ideas that are standard or considered a standard position in at least one field of academia, 25% would be ideas you could find well-developed elsewhere with some hard looking, and 15% would be original. But that is also a description of an original academic paper on anything! So people who focus on the 85% that isn’t original and say, “Oh, Yudkowsky is just explaining other people’s ideas” really are, as said, making a great showy Display of Independence which isn’t actually true.

        I could have done a better job of documenting that 85% – I’d say maybe two-thirds has decent attribution, though people may not notice the attribution on the first read-through which is also a thing that happens, or I might not have been clear on how much is being attributed in a reference (illusion of transparency). But that’s because the Sequences are a massive braindump written at one post per day where my number one fear, starting out, was that I wouldn’t be able to keep up the pace and get a substantial amount written at all. I didn’t have time to make everything exactly right and journal quality, and even in retrospect that still seems like the right decision because that kind of perfectionism would have resulted in nothing being written.

        Anyway, I’d say that the mix of original vs. re-explained material in the Sequences is probably 15/85, which in turn seems typical for a standard academic paper. But the fact that I wrote it in a non-boring tone might be fooling initial readers’ intuitive ‘sense’ of how much was supposed to be original, and then when they read Judgment Under Uncertainty and discover that it wasn’t all me (which their brain failed to note before despite references, because their brain didn’t actually read the references so it didn’t sink in emotionally) they conclude that it was all unoriginal, and seize the chance to say so in a Display of Independence, which is also wrong.

        Obviously, someone who was originally excited about 100% of the Sequences and whose brain attributed that all to the genius of Eliezer Yudkowsky is, indeed, normatively overexcited compared to someone who later correctly works out which 15% was the original part.

        But you also shouldn’t overlook the curation effect. There’s a reason why you can get a bachelor’s in philosophy and still have your mind blown by the Sequences. It’s that I have unusual taste, and some would say unusually good taste, in what to write up and what fits together in a package. That, too, is originality.

        • Paul Crowley says:

          I’m one of the people who said this, and I now agree with what you say here. It is to some extent a failing of the blog posts that this isn’t clearer, but it seems plausible that the realistic alternative was not writing them at all, and I’m glad you did!

        • Brian says:

          “The Sequences are a massive braindump written at one post per day where my number one fear, starting out, was that I wouldn’t be able to keep up the pace and get a substantial amount written at all. I didn’t have time to make everything exactly right and journal quality, and even in retrospect that still seems like the right decision because that kind of perfectionism would have resulted in nothing being written.”

          Man, this is exactly the problem I’m trying to overcome now. I’d like to dump my brain out onto paper, but I keep getting caught up in how it wouldn’t be perfectly researched journal quality, and this anxiety makes it hard to write.

          How did you deal with this? What motivated you to keep pumping out a post a day? Particularly, how did you find a good way to divide up a massive idea into manageable pieces?

    • Anonymous says:

      when i first discovered lw, i quickly became enamored with it. three years later, i find myself disagreeing with much of what i used to hold as revelatory truth. lw is still an interesting read, but now seems to me no more than a fun site to discuss intellectual curiosities. i am of the opinion that although eliezer is intelligent, his popularity stems more from his ability to write well and successfully lead readers along a chain of conclusions than from any genuine accomplishments in cognitive science; indeed, as i read more philosophy, i slowly found out much of what i found most profound and intriguing in eliezer’s writing had been said before by others (ex. daniel dennett).

    • nydwracu says:

      I’m sure there’s some LW term I don’t know yet for this, but why are you so sure that philosophy departments are for finding and propagating simple and true philosophy?

      • Steve says:

        > I’m sure there’s some LW term I don’t know yet for this, but why are you so sure that philosophy departments are for finding and propagating simple and true philosophy?

        “X isn’t about Y” is an OvercomingBias/Robin Hanson thing which predates Lesswrong.

    • If the sequences are simple and true and essentially summing up the work of outside authors in a convincing way, then why are they not Philosophy 101?

      Because philosophy presently exists to perpetuate philosophy. It doesn’t have something to protect, or any particular goal it feels it needs to achieve, beyond perpetuating the memes that philosophers find appealing. A lot of those are very intellectually healthy memes, but they have not brought the truth to fixation, chiefly because philosophers don’t think it’s a matter of life and death that they get Exactly The Right Answer.

      Even that can be a healthy meme to some extent, since it mitigates dogmatism; it’s partly because they lack a sense of moral urgency that they don’t run into problems of tribalism to the same extent as, e.g., politics. Analytic philosophers are often very good at considering alternative viewpoints and carrying on civil, honest, to-the-point discussions. But they perhaps have taken that useful heuristic a bit too far.

      Why are there so many deontologists and “there is no such thing as truth”-ists among philosophy professors?

      I think a better question (and one that reflects a more complete internalization of the lessons of the Sequences, perhaps more complete than even EY has at this stage) would be: ‘Why are there so many people interested in and working on things like primitive duties and epistemic models of truth?’ If you spent enough time talking to a deontologist or a non-correspondence-theorist, you’d probably find that the main disagreements here are (a) linguistic or (b) a matter of taste.

      Philosophers are far more guilty of wasting time thinking rigorously about unimportant topics, than they are guilty of deducing demonstrable falsehoods. Which is part of why it’s hard to fix philosophy. It’s one thing to offer a group a proof that their beliefs are false; it’s another thing to try to persuade them that the questions they find most interesting are the wrong questions.

      I remember when I was fourteen or so I came up with efficient charity on my own

      Almost everyone has that intuition, and there are lots of historical examples of people trying to come up with innovative technologies or social policies to save the world. The ‘individuals giving to charity’ approach is more unusual, though that’s not necessarily a good thing; it’s not unlikely that we’ll find a more indirect approach (through technology, swaying public policy, or something else) that empirically works better in the next decade, and then we may cease to look so much like outliers.

      My guess is that the reason EA as such didn’t happen before was that there was no prominent shared community of thinkers with the right mix of expansive ambition, quantitative empiricism, and social and psychological incentives to be altruistic. The mix of idealistic intellectuals, programmers, and entrepreneurs that became concentrated in specific locations and around specific figures (Singer, Yudkowsky, etc.) provided a culture that could sustain an EA-like approach on a visible scale.

      We’ve had plenty of people in the past with expansive ambition, but they usually stop at the first Good Idea they come up with and fall into a death spiral, without testing the idea. We’ve had plenty of rigorous empiricists, but they usually equate ‘pragmatic’ with ‘known, safe, normal’ and don’t rock the boat much. And when we did happen upon ambitious empiricists, there was a lack of social incentives to move them toward a career in charitable giving, as opposed to one in public policy, science, technology, finance, etc.

      So I’d say the idea of EA has existed for nearly as long as values monism, moral egalitarianism, and scientific empiricism have; but the culture was missing, so people lacked an environment that would inspire them to visibly put those ideas into practice.

    • gattsuru says:

      I remember when I was fourteen or so I came up with efficient charity on my own: “Isn’t it kind of messed up that anybody donates money to local ‘give the football team a nicer gym’ charities, when that money could go to the best charity for starving African children instead?” I’m not trying to argue that I’m particularly smart, just that the idea is obvious and very important. How did it not exist until 2007? Is society truly insane?

      In addition to the matters Bensinger brings up above, Effective Altruism as applied by GiveWell is also dependent on certain information infrastructure technologies and certain reliable regulatory or socially expected declarations, neither of which existed until fairly recently. Individual exposes on poorly performing charities have a long pedigree in journalism, and the “shut up and multiply” concept of ethics was one of the less controversial bits of Singer’s papers in the 70s, so at the very least the components have been around for quite some time.

      It’s also worth noting that a lot of charity exists on an axis orthogonal to GiveWell’s purpose. Donating to a local sports team likely reflects matters like desired social status, helping a local community, ‘paying back’ a group they belong to or once belonged to, getting rid of less-fungible goods, or even tax purposes.

      • misha says:

        Let’s not completely dismiss local charity: It’s an avenue of altruism where you have vastly disproportionate impact compared to most people in the world. The issues are smaller, but you’re much more capable of swaying them. You’re also likely to be MUCH better informed.

        Local charity also exists at the intersection between altruism and selfishness: Giving to a local sports team that you watch ensures that it’s around next year for you to watch again.

    • peterdjones says:

      There are deontologists and sceptics among professors because there are good arguments for those positions. LWers disregard or are unaware of those arguments. Professors can do neither. Simplification by disregard is not big or clever; it is like condensing a long book by tearing out pages.

      I think this is partly right. One thing it’s missing is that there are LWers who not only are familiar with some of those arguments, but actually endorse deontology or an unorthodox theory of truth. I myself don’t endorse correspondence theories in a lot of interesting domains, and, relatedly, I don’t think deontology is ‘false’ in any simple way. Your hypothesis doesn’t explain us, nor does it explain the LWers who strongly reject deontology etc. even after hearing non-straw arguments.

        My suggestion is that this can again be explained primarily in terms of divergent goals. LW wants to save the world. Skepticism and deontology, no matter how intellectually motivated, don’t help with doing that. As Eliezer pithily put it, “To those who say ‘Nothing is real,’ I once replied, ‘That’s great, but how does the nothing work?'”

        When your house is on fire, intellectual achievement isn’t irrelevant; you need to understand how fires work and the specific facts of your predicament in order to reliably make the right decisions. But you can be excused for prioritizing fire-related knowledge, and points of view that can at least in principle be useful for fire-fighting, over less practical topics. Going through the motions of trying to make the world a better place, even though you can’t perfectly philosophically justify ideas like ‘consequentialism’, at least ensures that you’ll lose fewer lives to the fire. Likewise for going through the motions of trying to understand fires, even though you can’t perfectly philosophically justify ideas like ‘scientific inquiry’.

        • Douglas Knight says:

          Once you claim that it matters how many lives you save from the fire, you’ve already smuggled in consequentialism.

        Douglas: Every actual human being uses both consequentialism and deontology. Because no one can get by with a hierarchy of preferences simple enough to reduce cleanly to one category or the other. And because in practice people’s values sometimes allow a consistent ordering over world-states, and sometimes don’t.

          They’re both abstractions that don’t match up to the psychology of human motivation in any detail; our actual moralities are pre-theoretic. So it’s up to us to come up with some new standard for what it means for the theories to be ‘true’, or to reject the truth of all the theories at once.

          It sounds like the standard you’re proposing is ‘anyone who cares about any worldly circumstance is a consequentialist and not a deontologist’. I don’t see the advantage of this approach.

        • ozymandias says:

          What if I’m a deontologist with the rule “save as many lives as possible from fires”?

    • MugaSofer says:

      “Is society truly insane?”

      That’s the one I settled on in this situation, yeah.

    • hf says:

      We should stop saying “insane” when we do not mean clinically insane. Here in particular, you should look at people who make decisions for universities and ask if using the Sequences for Philosophy 101 would serve their interests or desires.

      • NIH says:

        I’m not sure if you find this to be self-evident, but you didn’t actually provide a reason why we shouldn’t.

  5. Alexander Stanislaw says:

    Although I only really started reading LW about 2 years ago, this was very enjoyable to read.

    Also, just wanted to say that you are one of my favorite writers and I’m glad that you are able to sustain this blog even with substantial work obligations.

  6. Aaron Brown says:

    Great summary. Almost all of this stuff is part of my mental furniture now too.

    I think this might be the Perceptual Control Theory post you’re thinking of.

    (Also, s/Kenneway/Kennaway/.)

  7. Cyan says:

    Link to Counterfactual Mugging is broken. (The link text is “after this time!”)

  8. Chris says:

    Sometimes I talk to Will Newsome, or Steve Rayhawk, or Jennifer RM, or people like that in the general category of “we all know they are very smart but they have no ability to communicate their insights to others”.

    I’m curious to see an example of this for Will Newsome (I don’t know about the others). There are other people like that in the LW orbit, at least one of whom comments here, that it seems like everyone in the community “knows” is really really smart, and yet their writings don’t seem to reflect that. And when they talk about an area that I’m familiar with, well, their lack of knowledge and clarity is quite quickly apparent.

    • Anonymous says:

      >And when they talk about an area that I’m familiar with, well, their lack of knowledge and clarity is quite quickly apparent.

      I’ve also experienced this.

    • Donald Q says:

      Myself as well

    • peterdjones says:

      The idea that there is some kind of smartness that makes you equally good at everything is a central myth of LW.

    • Anonymous says:

      Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them.
      In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.

      ― Michael Crichton

    • Will Newsome says:

      And when they talk about an area that I’m familiar with, well, their lack of knowledge and clarity is quite quickly apparent.

      This agrees with my experience. *nods somberly*

  9. Nisan says:

    The first time I saw polyamory mentioned on Less Wrong (or anywhere) was Unknown knowns: Why did you choose to be monogamous? by User:Wrongbot.

  10. Cyan says:

    I find myself wishing that I could point the non-LWers in my life to this post so that they’d find out how awesome LW is — but of course they’d have to have already read LW to understand the post.

    • Alexander Stanislaw says:

      Generalizing from my own experience, I think as an outsider I would have been more compelled by this post, about Scott’s experience in California (although it’s not directly related to LW). It struck more of an emotional chord with me. I felt like I was seriously missing out on something amazing.

  11. Mai La Dreapta says:

    “I re-read the Sequences”, they tell me, “and everything in them seems so obvious. But I have this intense memory of considering them revelatory at the time.”

    This is how I feel when I reread Mencius Moldbug.

    That said, I always considered LW to be somehow hostile to me, since I’m an Orthodox Christian and don’t have any particular interest in being attacked by atheists for being contemptible and irrational. (This has been my experience with internet atheists in several places.) But I followed most of your links and found them interesting, well thought-out, and not at all hostile, so perhaps I’ve been wrong about LW all along.

    • misha says:

      In general: LW will “attack” you for being christian if you bring it up as the reason you hold a factual belief, but will more likely leave you alone if it’s an explanation for your values. Here’s a not unusual example from a recent open thread: http://lesswrong.com/r/discussion/lw/jtb/open_thread_march_4_10/ao08

      • Pawel Aleksander Fedorynski says:

        My perception is closer to Mai La Dreapta’s. Reading LW I often see spurious, off-handed references to religion as a go-to example of something stupid. This is pretty off-putting despite the fact that I’m an atheist myself.

        • nydwracu says:

          Yeah, the norm that epistemic rationality should always take precedence over instrumental rationality [when I first started reading LW I thought the two were defined *in opposition to* each other] is… in addition to sacrificing instrumental rationality where the two conflict (as they very frequently do), it opens up the risk of letting [instrumentally rational] beliefs that increase thede cohesion by defining and signaling against elthedes slip in through the back door while neutralizing the memes that could make that less harmful — “ah, but it’s true!” — which is necessarily epistemically irrational since the process of understanding [anything to do with?] a thede becomes impossible once it’s defined as an elthede.

          I haven’t noticed this in LW-connected circles anywhere near as much as in most places I’ve seen, but it’s still a very common failure mode and one that really ought to be more carefully guarded against.

          (The better answer to the epistemic-instrumental conflict is esotericism, which seems to be quite a common solution. The problem is that, to do this well, you have to be able to pull off ketman to a degree that requires either very good acting ability or total compartmentalization of totally fractured personalities, which creates all sorts of problems and which I really wouldn’t recommend, since given certain thede dynamics it requires either total anonymity or excellent double-agent skills.)

          Inside accounts of thede-cybernetics would be very valuable things to have, especially for remarkably functional/useful/value-preserving thedes — and given how many of such thedes are religious, LW’s atheism-signaling seems like a net negative, and the best solution might be to spin off the toolkit into a different site/thede altogether…

    I don’t know what your experiences have been like, but unlike many internet atheists, LWers tend to take atheism for granted, so it doesn’t come up as much, and they don’t focus on religion much.

    • Scott Alexander says:

      I think we’re pretty good at not being explicitly hostile to religious people, and atheism isn’t as important to our identity as it is to a lot of more explicitly atheist forums. That said, I probably wouldn’t notice if there was low-grade background hostility to religious people going on.

      I can say that 7% of users are theist, including a lot of well-respected people like Leah – who’s a very public Catholic and who got hired by CFAR (sorta like the real-life money-making arm of LW) and taught courses for them for a while.

    • Will Newsome says:

      You won’t be able to reason with them about your beliefs—politics is the mind killer, and opinions about religion are extremely politicized to the point that you won’t find any group anywhere that reaches opinions about distant things without regard to where the wind is blowing—but you can always just not talk about them. If the religious perspective you have is what you think you’d mostly be bringing to the table, then I suspect there are other, better fora.

  12. Elizabeth says:

    I think this is the post I’m going to link people to when they ask “So, what’s all this Less Wrong stuff about, anyway?”

    Can you give examples of the things Will, Steve, Jenn etc. talked about two years ago which make sense to you now? Or better yet, a comprehensive list?

    Also, I am very excited to read your Tell Culture post.

    • Anonymous says:

      Can you give examples of the things Will, Steve, Jenn etc. talked about two years ago which make sense to you now? Or better yet, a comprehensive list?

      I very second this. (So my post is True and Kind, but not Necessary?)

    • Scott Alexander says:

      At the risk of capturing only the very simplest things because they’re the ones I am most likely to be able to explain:

      – They were some of the first people I knew to try to apply decision theory to things like politics and much more speculatively to metaphysics.

      – Will’s obsession with “going meta”, which originally annoyed me, now makes a lot more sense to me, in a way that is sort of related to this quiz, only taken much, much further.

      – I remember Jennifer taking a while to study ecclesiology, which seemed to me about the most useless subject in the world. Now I can appreciate why that might be useful. The one-sentence version is that it’s about how to trade off memetic purity with community expansion/survival/strategic-alliance, and I’ve been recently thinking in terms of different communions and how they intersect.

      – The first time I met Steve he tried to convince me that studying law was useful and important. I didn’t think it was, since law seemed like just some collection of regulations that random legislators and lawyers threw together. I have since read David Friedman’s classic essay on property as Schelling point (I can’t remember if he was the one who referred me to that, but for some reason my mind associates it with him) and realized that law is basically the study of drawing Schelling fences that cleave reality at the joints, which is REALLY INTERESTING.

      As for the Tell Culture post, no guarantees about when it will have gelled in my head sufficiently to be written down, but I feel like you kind of lived through a bit of it which makes reading my essay about it kind of superfluous.

      • Will Newsome says:

        A monk asked Hofstadter, a North American master: “I have gone meta again and again and I am left with only vexation of concepts and abstractions devoid of living reality; what should I do?”

        Hofstadter answered: “Go meta.”

        Mumon’s Comment: Hofstadter sells dog meat under the sign of mutton.

      • Will Newsome says:

        These other LessWrong narcissists just can’t step to my deep understanding of Narcissistic Decision Theory.

  13. Aurini says:

    Only read the introduction thus far, but-

    YES! I remember having my mind blown by Eliezer’s writings, back in the Overcoming Bias days, and recently when I looked at them I found them mildly pleasant – but utterly obvious.

  14. pwyll says:

    [potentially private description of Steve edited out with original poster’s consent. Thanks everyone for handling this maturely – SA]

    • misha says:

      Let’s not base our opinions of people on a few impressions and rumors.

      Or if we do at least make those rumors INTERESTING. You don’t make him sound any more deficient in qualities than any number of gamer geeks I know, and plenty of them are smart and fun to be around.

      If Scott, whom you’ve read intensively and respect, praises someone you’ve almost never talked to, and your opinion of that person is based on approximately 2 superficial qualities, the person you reevaluate should probably not be Scott; it should be Steve. And maybe yourself.

      • Viliam Búr says:

        Many geeks are socially imperfect in various ways, and they still may be nice and interesting people. But there is also a known mistake based on what LW people would call “fallacy of grey” and “reversing stupidity”. It works like this:

        A former socially challenged person meets someone “weird” and thinks: “Well, a few years ago I had some problems, some people avoided me, but I am a nice person. Therefore, this person who has obvious problems also must be a nice person.” Thus they rationalize away all the feelings that warned them, and sometimes even convince their friends to also ignore their feelings about the person. Properly framed, people may even be made feel guilty for noticing the weirdness.

        Where is the mistake? First, not all imperfections are equal. Some problems are more serious than others. For example, pwyll writes: “he would just kinda come into their room and start playing [video games], without bothering with things like asking permission or greeting the occupants”, but I guess some people look at these words and only notice “playing video games”, and reply: well, this is exactly what I did and what my friends did, so I don’t see a problem. Please, read again; you may notice a red flag in the text which probably doesn’t apply to you or your friends. Second, even if some of the people with the red flags turn out to be okay, it doesn’t mean all of them will; it still may be very reasonable to avoid them.

        I don’t know details about the specific person being discussed; I’m just objecting to the logic of: if you admire a person X, and the person X respects a person Y, then if you have a bad feeling about the person Y, the problem must be in you. (This is how your reaction felt to me. I know I’m oversimplifying it.) I am writing this because in the past I had a bad experience with a person that felt very weird to me, and I initially decided to avoid them, but then all of our mutual friends, some of whom I respected a lot, told me that my feelings were wrong, that I should not judge the person from the first impression, and instead I should notice how intelligent they are. And they kept repeating this until I gave up. The person was very intelligent, but they also knowingly ignored social conventions and everyone’s feelings, unless they were manipulating someone for some specific goal. A few months later, the person stole some money from those who trusted them (including me), hurt someone (not me), and disappeared from our social circle. As a consequence of this, I decided to trust my gut feelings more, and the judgement of those admirable people less (I still admire them for things other than their judgement of people). But this is unrelated to the specific person being discussed here; I only object to the general advice of respecting people transitively.

        • Douglas Knight says:

          Isn’t the lesson that you should disentangle trust of a person’s ideas from trust of his actions? That you shouldn’t have a single concept of admiration? Maybe that isn’t possible and you should know your limits, but pwyll’s position seems to me to be simply the opposite of correct.

      • pwyll says:

        Misha, there’s a very good chance you’re right – and anyone reading this certainly shouldn’t judge anyone just based on hearsay from an anonymous commenter like me.

        I think the larger question is, in what circumstances should you ignore “vibes” you get from people, and when should you pay attention to them? I don’t know what the answer is, but as I get older I find myself trusting my gut instincts more often – for better or for worse.

        • pwyll says:

          One more thing… I would never want to criticize anyone merely for being a nerd and having played a lot of video games – I’m guilty on both counts. But I do suspect that being able to interact with others neurotypically is a valuable skill that, if missing, is a strong indicator that there may be other lacunae in a person’s way of thinking.

    • Elizabeth says:

      I am confused about “Fortunately, text-based interaction on the internet enables us to evaluate the statements and ideas of a person, decoupled from potentially irrelevant personal characteristics of that person. But this case reaaaaallllyy makes me wonder about whether I should weight “in-person weird vibes” more highly in my evaluation of a person’s writing.”

      What specifically made you think you should weight “in-person weird vibes” more highly?

      • pwyll says:

        In this particular case, it was the strong disconnect between the statements Scott made and my personal experiences, and my growing suspicion as I get older that my gut feelings are more likely to be what I should be paying attention to.

    • Doug S. says:

      Ouch. This could also describe me, although I’d deny having poor personal hygiene. I, too, did very little other than play video games and read the Internet during college. (I even skipped a midterm once to play Final Fantasy X. In hindsight, this was absolutely the correct decision.) On the other hand, I did eventually manage to graduate, so that’s that…

      • Will Newsome says:

        I even skipped a midterm once to play Final Fantasy X. In hindsight, this was absolutely the correct decision.

        Fuck yeah FFX! I skipped a history final to go to brunch with my friends. I didn’t even come close to graduating. High school.

        • Doug S. says:

          One reason the decision was correct was that, at the time, I had a much better excuse for missing the exam than I did a grasp of the material, and I was well aware of this. (The magic words are “psychiatrist” and “antidepressant”…)

        • Doug S. says:

          At one time, I tried to avoid graduating from high school. My parents sent me to psychiatrists until my mind was changed.

        • Randy M says:

          FFX in college, you kids are making me feel old, why, I was playing FF 8 in college.

          I hear you even have “talkie” video games these days.

        • nydwracu says:

          I once had a professor who told us a story, with a very obvious moral, which he even spelled out — a story about how he missed Woodstock to take a Chinese final.

          I went to all my midterms and finals since I went to college in the absolute middle of nowhere, but I did at least drop out of high school. Mostly because I was accused of running a credit card fraud ring and railroaded out by the administration until they saw my SAT scores and did a total and pathetically desperate 180, but I would’ve left anyway.

        • peterdjones says:

          Luxury! I had to write my own games!

        • nydwracu says:

          Ah, that’s what high school math class was for.

          (I really should port TI-89 Sokoban someday. A maximum board size of 16×9 or so is a hell of a constraint, but I managed to make some fairly clever levels.

          I would not be surprised if the obsolescence of graphing calculators results in a non-negligible hit to the popularity of programming…)

      • pwyll says:

        I also have played a lot of video games – though I mostly quit in college – and I wouldn’t want to criticize someone solely for that. (Nor do I believe that the things they teach in class in college are so valuable that going to class is always a better use of time than playing games – that certainly wasn’t true in my case either!)

    • Kaminiwa says:

      Your description makes me think of Richard Stallman. There seem to be a lot of smart people who either decide society is insane and not worth listening to, or are simply smart enough to build a little pocket where they don’t have any real reason to care 🙂

      I’d suggest that “weird vibes” like that are (probably) a very useful indicator if you’re looking for friends, roommates, etc. – but not as useful if you’re looking for “intelligent ideas”, “blogs to follow”, etc. 🙂

    • Steve says:

      I think you’re trying to honestly report your reaction, and not necessarily reflectively endorsing your reaction. That’s good, because I’m perfectly willing to believe that Steve Rayhawk is lacking in social graces (remember the whole “no ability to communicate their insights” part?). But I’m unable to believe that he’s not incredibly insightful:

      Check out the way Rayhawk explains a position he doesn’t agree with, better than the actual holder of the position can. Here, he explains why one of LW’s smart contrarians gets downvoted a lot. Here, he explains why AI experts may disagree with MIRI. Here, he tells someone not to take it as a compliment that he’s not being agreed with.

      This one is perhaps my favorite: Rayhawk examines opposition to AGW theories on grounds other than generalized science denialism.

      In general, Rayhawk obviously has a ton of brainpower, and is really really good at figuring out individually rational reasons that people disagree; but has no in-person social graces for whatever reason.

    • jrayhawk says:

      Due to emotional abuse, Steve did not develop a functioning ego to mediate his id, i.e. does not have a meaningful or permanent self-conception of identity, either ideal (to aspire to) or cynical (to work from), needed to direct action beyond naive fulfillment of basic needs. Because of his missing ego, the only value systems (and thus non-basic motivations) available to him are borrowed internalizations from others. The value systems of others are, in various ways, fickle, disjointed, incompatible, and subject to availability problems, and thus are an actively dangerous thing to internalize together, whether simultaneously or serially.

      Actively dangerous, that is, unless you can abstract and rectify them with each-other and reality. All of them. Constantly.

      Can you imagine a lifestyle where failing to do so is processed as an existential risk? Can you imagine the sort of tortuous analytical discipline required?

      Videogame wireheading catatonia, by comparison, seems like a damned wholesome option.

      Steve represents an idiot-savant tradeoff, and not one Steve voluntarily made. Every bit of brainpower you have dedicated to social propriety he has dedicated to value system analysis and integration.

      Social protocols are used to negotiate boundaries in the application of competing value systems and motivations. Steve has a skillset that serves as an adaptation to having no cognitive capacity for those boundaries. He is, at great expense to himself, uniquely useful to the rationality community.

      • Said Achmiz says:

        This is a perfect example of a comment that reads like inscrutable nonsense!

        Scott (or anyone else): care to comment on whether there is truth hidden in there? If so, what is it?

        (Or is it just me who doesn’t get it?)

        • ozymandias says:

          It makes sense to me. Possibly because I have a similar form of crazy?

          As far as I can tell, jrayhawk is saying that Steve didn’t develop a personal identity, a sense of self. A lot of people who don’t develop a personal identity end up grabbing off-the-shelf identities: I am A Rationalist or A Gamer or A Liberal or whatever. The problem is that a lot of those ideas are actively toxic when you don’t have a part of the brain that can put on the brakes, and also completely incompatible with each other. So as a coping mechanism Steve had to get really really good at figuring out how value systems work and making them work together, and the effort it took to do this traded off on his ability to master basic social skills and to not be in video-game-playing catatonia, as well as making him useful to listen to sometimes. (While I have not AFAIK interacted with Steve, I suspect that there are probably other reasons for his poor social skills &c; you don’t get your sense of personal identity that fucked up without fucking up other things about yourself too.)

        • Said Achmiz says:

          Thank you for taking the time to explain, ozy. That helped… a bit. But not much.

          Basically, it does not seem to me like what you (and, I guess, jrayhawk, assuming that your explanation is in fact a faithful rendition of his intended meaning) are describing is… well… a real thing. It’s like, you’re saying something that sounds coherent, and yet doesn’t seem (to me) to actually describe or connect to reality in any way.

          Like, Steve “didn’t develop a personal identity”? What does that mean? Is this actually a real thing? Is that diagnosable? How does it manifest? How does it happen? Is there a way to distinguish between people who developed a personal identity and ones who didn’t? (Is this phenomenon written about anywhere?)

          “The problem is that a lot of those ideas are actively toxic when you don’t have a part of the brain that can put on the brakes” — huh?? A part of the brain that can put on the brakes? How? What?

          “So as a coping mechanism Steve had to get really really good at figuring out how value systems work and making them work together” — What does this mean? Making them work together how? By what criterion do we (or does Steve) evaluate whether they’re working together? Can we get an example of this, rather than this very abstract description?

          “the effort it took to do this traded off on his ability to master basic social skills and to not be in video-game-playing catatonia” — is this… really a thing that happens? How does it work? Again: is this phenomenon written about anywhere? Citation needed, etc.

          Please don’t take this as me pressuring you, personally, for more explanation, or anything. If you feel like commenting further, that would be cool, but if your response is “eh, too much effort to explain more”, that’s fine too, I totally get that. I just wanted to give an example of the sort of reaction I have when I read stuff like this, and why I find it so difficult to understand; it’s the fact that none of it seems to me to connect to anything that I am at all familiar with (in this case, I say this as someone with an interest in, and some formal background in, psychology and cognitive science).

        • jrayhawk says:

          You are dissatisfied by a failure to intuit the plausibility of cluster-b personality disorder models.

          It is only in learning to model cluster-b personality disorders that you can begin to understand just how fortuitous your intuitive capacities are and just how wrong your dissatisfaction is.

        • Said Achmiz says:

          Well, jrayhawk, in lieu of the flat “wat” that I am very much tempted to respond with, I’d like to ask you this:

          What is your background in psychology or psychiatry? On what basis, in other words, are you making these comments?

          By the way, if Scott (or someone with comparable professional expertise on mental disorders) would care to chime in on whether jrayhawk’s comments make any sort of sense whatsoever, I would greatly appreciate it.

        • Alexander Stanislaw says:

          I didn’t think it was that unclear but:

          Every bit of brainpower you have dedicated to social propriety he has dedicated to value system analysis and integration.

          Social protocols are used to negotiate boundaries in the application of competing value systems and motivations.

          I’m confused as to why this tradeoff is necessary. I don’t see what social protocols have to do with value systems or why you couldn’t learn standard social skills and also figure out how to integrate and reconcile values.

        • ozymandias says:

          Yeah, I had to keep the explanation somewhat vague because… I haven’t met the guy, I don’t even know what he’s been diagnosed with or whether he’s been diagnosed with anything.

          I am analogizing from my experience with borderline personality disorder: basically, it is really hard for me to figure out what I value or enjoy or want or am like, which makes me abnormally susceptible to ideologies and other people telling me what I value or enjoy or want or am like. This is problematic, because most ideologies are not remotely designed for people who are taking their entire sense of self from them and tend to work really badly for the purpose, and most people who want to tell you what you’re like are abusive jerks. This is a documented symptom of BPD; I think it’s also a documented symptom of other personality disorders and some other mental illnesses. I am interpreting jrayhawk as saying that Steve has something similar.

          I don’t think that catanoia and poor social skills are *necessarily* related to at least my form of the thing that I interpret him as talking about. (Particularly since depression and poor social skills are symptoms of a lot of shit, including shit that gives you weird sense-of-self issues.) But I don’t know the guy or what he’s diagnosed with or anything so I am sort of uncomfortable being like “this person who looks knowledgeable is totally wrong.”

        • Said Achmiz says:

          Thanks, that was relatively comprehensible! (Man, I sure don’t mean that to be as damning-with-faint-praise as it sounds. I mean: very comprehensible relative to jrayhawk’s explanation, given that I am still unable to connect the subject matter to any examples I’m familiar with.)

          Btw, googling for “catanoia” yields nothing useful. Did you mean “catatonia” (in which case, your usage confuses me), or is this an obscure term I haven’t encountered before?

        • ozymandias says:

          I meant “catatonia” and I was just using it because jrayhawk used it.

        • Sniffnoy says:

          I’m a little confused here; on the one hand, people are saying that Steve has traded off social skills for being able to figure out value systems. On the other hand, people are giving examples of Steve comprehending other people’s positions — which, OK, is figuring out value systems, I guess. But my point is, shouldn’t one translate to the other? Social skills should be doable as an application of figuring out value systems and other people’s positions, right?

      • misha says:

        The part of the brain that puts on brakes is also known as compartmentalization. It’s whatever it is that keeps almost everyone who thinks abortion is murder from bombing abortion clinics.

        • Said Achmiz says:

          Ok. That’s progress.

          How does the notion of compartmentalization relate to the rest of the comment in question?

          (Also (not that this is strictly on-topic…): is compartmentalization really why people don’t bomb abortion clinics? Don’t you think a large part of it is that most people just don’t reason consequentially, as Scott has previously commented?)

    • Scott Alexander says:

      Being extremely eccentric seems par for the course for mathematicians.

      I’m a little concerned by having this comment on my blog since it reads like a character assassination of a non-public figure and could very easily get picked up by casual Google searches. Unless you are going to get very upset and try to Streisand Effect it for revenge, I’d like to delete it or edit it out. Is that okay?

      • pwyll says:

        Yes, 100% okay to delete it. (or edit it, or whatever.) Also, for future reference, don’t feel like you need to ask – just go ahead and delete. (although it usually helps thread comprehensibility if you replace the deleted comment with something like “Deleted/Edited by mod” or “Deleted/Edited by mod for $reason”)

        EDIT: The question of who is a “public figure” is a tricky one… e.g. A person can be a public figure in the rationality community, but not in the broader societal sense… but you can’t really restrict commentary on a “rationality community public figure” to only being accessible by that community. I sympathize with your concern. Perhaps it’s best to err on the side of caution.

  15. I’ll end with something that recently encouraged me a lot. Sometimes I talk to Will Newsome, or Steve Rayhawk, or Jennifer RM, or people like that in the general category of “we all know they are very smart but they have no ability to communicate their insights to others”.

    This reminded me of this quote from philosopher Eric Schwitzgebel:

    From our cultural distance, it is evident that Kant’s arguments against masturbation, for the return of wives to abusive husbands, etc., are gobbledy-gook. This should make us suspicious that there might be other parts of Kant, too, that are gobbledy-gook, for example, the stuff that transparently reads like gobbledy-gook, such as the transcendental deduction, and such as his claims that his various obviously non-equivalent formulations of the fundamental principle of morality are in fact “so many formulations of precisely the same law” (Groundwork, 4:436, Zweig trans.). I read Kant as a master at promising philosophers what they want and then effusing a haze of words with glimmers enough of hope that readers can convince themselves that there is something profound underneath.

    • Doug S. says:

      In the Second Scroll of Wen the Eternally Surprised a story is written concerning one day when the apprentice Clodpool, in a rebellious mood, approached Wen and spake thusly:
      “Master, what is the difference between a humanistic, monastic system of belief in which wisdom is sought by means of an apparently nonsensical system of questions and answers, and a lot of mystic gibberish made up on the spur of the moment?”
      Wen considered this for some time, and at last said: “A fish!”
      And Clodpool went away, satisfied.

      — Terry Pratchett, Thief of Time

    • Scott Alexander says:

      I’m not sure I’m understanding you right, but it looks like the quote is trying to say that if one part of someone’s output is really dumb, this is a strong sign that their entire output, including more widely believed things, is also really dumb.

      I note in contradiction that Newton spent decades working on alchemy and weird Bible prophecy codes, Godel developed an ontological proof of God’s existence and went crazy, John Nash thought he was God’s left foot and Emperor of Antarctica, Cantor spent his last years inventing conspiracy theories about Jesus, Frege spent his last years writing weird anti-Semitic rants (before this was even popular in Germany), almost everything Wittgenstein did was hopelessly bizarre, Euler wrote “Defense of the Divine Revelation against the Objections of the Freethinkers” defending Biblical inerrancy, et cetera.

      Certain types of great thinkers seem unusually prone to being extremely strange and having bizarre ideas outside of their areas of genius.

      • pwyll says:

        Would you accept Feynman and von Neumann as counter-anecdotes? I’ll grant you that the very intelligent are often eccentric, but there’s a difference between “eccentric” and “mentally ill”. Are you certain it’s not just the attractive-to-many-people idea that great ability *must* come with a dark side? What if the two variables are totally uncorrelated, and it’s merely that the genius-but-crazy examples are the most interesting, and so the most prominent?

        The bigger question to me is, in what circumstances should we ignore personal idiosyncrasies as noise, versus paying attention to them as valuable data?

        • St. Rev says:

          I suspect that the apparent correlation is misleading. The evidence is much stronger that mental illness correlates, not with intelligence, but with two other aspects of ‘genius’: creative ideation and obsessive productivity.

        • Scott Alexander says:

          I would accept them as counter-anecdotes to the claim “all brilliant mathematicians are crackpots”, but I’m not making that claim. I’m making the weaker claim “Some nontrivial number of brilliant mathematicians are crackpots” and flirting with the stronger claim “The chance that someone who produces math is brilliant is not decreased by learning they are a crackpot in non-math fields”

        • St. Rev says:

          How about we go through something like http://www.fabpedigree.com/james/greatmm.htm? Cantor comes in at #25, Godel at #35.

          Actually, that list puts Newton at #1, so it’s probably not a great source, but something like this.

      • Sniffnoy says:

        I’m going to have to disagree with you here, Scott; I don’t think you’ve really represented it fairly. Indeed, I think you’ve replaced it with a very different situation. It seems to me that the situation (as described, anyway) is as follows:

        1. Kant wrote a bunch of impressive-looking but incomprehensible text; rather than reject it as nonsense, people accept it (or claim to accept it) because they want to appear impressive and it yields (or rather, Kant claims it yields) conclusions they either agree with or would like to be true.
        2. Some of this, though, yields (or rather, Kant claims it yields) conclusions that people now disagree with, or don’t want to be true; in these cases, people generally accept that the reasoning used is essentially specious.
        3. Point #2 should make us very suspicious of Kant’s incomprehensible reasoning that people accept (or claim to accept), since it’s not essentially dissimilar from his incomprehensible reasoning that they reject.

        You seem to have replaced it with an entirely different situation, where the “bad” work may be wrong or patently nonsensical, but is not incomprehensible, and in some of your examples, the “bad” work is unrelated to, essentially different from, or in a different field from the “good” work. (You even make this explicit in the end, when you say “outside their areas of genius.”) So, I don’t think your analogies are good ones.

        • Alexander Stanislaw says:

          Huh? Godel using modal logic to prove God’s existence seems exactly parallel to Kant using deontology to justify keeping women in abusive marriages.

        • Sniffnoy says:

          I don’t think it is. (And even if it is, that makes one. 😛 ) Gödel’s argument is pretty damn well-specified and verifiable; as long as A. you accept modal logic, B. you accept the assumptions, and C. you accept the interpretation, AFAIK, it makes sense. Of course, plenty of people have pointed out the problems with B, and I’m pretty iffy about C as well. There are errors, but we can understand them and isolate them. It’s not gobbledygook, and it doesn’t give us any reason to doubt his other work.

          By contrast, if I heard that Shinichi Mochizuki had published a proof of the existence of God using Inter-universal Teichmüller theory, that would certainly raise my subjective probability of “Inter-universal Teichmüller theory is gobbledygook”. (Of course, people have not rushed to accept Mochizuki’s conclusions, though they’d very much like them to be true…)
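
          (For anyone who wants to check the “well-specified” claim against the source: below is the usual Dana Scott presentation of Gödel’s argument; the glosses in brackets are my paraphrase, not anything from this thread. P(φ) reads “φ is a positive property”.

              Ax 1.  P(¬φ) ↔ ¬P(φ)                                    [exactly one of a property and its negation is positive]
              Ax 2.  (P(φ) ∧ □∀x(φ(x) → ψ(x))) → P(ψ)                 [positivity is closed under necessary entailment]
              Th 1.  P(φ) → ◇∃x φ(x)                                  [positive properties are possibly instantiated]
              Df 1.  G(x) ↔ ∀φ(P(φ) → φ(x))                           [“God-like”: has every positive property]
              Ax 3.  P(G)
              Th 2.  ◇∃x G(x)
              Df 2.  φ ess x ↔ φ(x) ∧ ∀ψ(ψ(x) → □∀y(φ(y) → ψ(y)))    [essence]
              Ax 4.  P(φ) → □P(φ)
              Th 3.  G(x) → G ess x
              Df 3.  NE(x) ↔ ∀φ(φ ess x → □∃y φ(y))                   [necessary existence]
              Ax 5.  P(NE)
              Th 4.  □∃x G(x)

          Everything contentious is localized in the axioms and in the reading of P as “positive”, which is exactly Sniffnoy’s points B and C.)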

      • peterdjones says:

        Because of the Myth of Omnicompetent Sharpness?

      • St. Rev says:

        Been thinking more about this claim

        Certain types of great thinkers seem unusually prone to being extremely strange and having bizarre ideas outside of their areas of genius

        and I think it’s a bit misframed.

        In particular:

        I don’t think Newton really counts. He was certainly peculiar, but in his time and context his ideas don’t seem bizarre outside his area of genius to me. I don’t think alchemy and bible analysis were strange or bizarre pursuits for smart men of his class at that point in history. Likewise Euler.

        In general:

        Genius and madness combined make a compelling story, so they make movies about people like Nash and not Gauss (who invented non-Euclidean geometry independently, and sat on it because of Kant!), or Hilbert, or Emmy Noether. But making an accurate statement about whether great thinkers are unusually prone to bizarre ideas would seem to call for systematic investigation, keeping in mind the incidence of mental illness and disease among the general population. If the incidence of schizophrenia in men is about 1%, it shouldn’t be that surprising to find one John Nash among the top 100 mathematicians of all time (and Nash’s productive years came before his schizophrenic break).

        Actual point follows:

        Then again, maybe none of this is relevant. Kant was a philosopher; his arguments against masturbation, for the return of wives to abusive husbands, etc. were of a piece with his other philosophical work. If we find Newton’s work on bible codes to be crazy, that’s one thing; if we find Newton’s work on classification of cubics to be crazy, that’s quite another.

        • Sniffnoy says:

          I agree — well, my comment above is saying something pretty similar, I think. I just wanted to point out that Newton is actually more similar to the Kant example than is obvious; AIUI, to Newton, the mysticism was of a piece with the science.

          That said, I agree with you that it’s still not very similar — in the case of Newton, because while the mysticism may be wrong, it’s comprehensible enough that we can separate it out and say that it’s wrong, and that while Newton might have considered it of a piece with the science, we can see easily why it doesn’t need to be intertwined with the science.

        • St. Rev says:

          Yeah. In an important way, we distinguish mathematics from bible codes (and chemistry from alchemy, etc.) because of the contributions of people like Newton; it seems kind of mean to fault him for not seeing the distinction clearly at the time.

          Maybe I’m undermining my argument here. Not sure.

  16. Will Newsome says:

    ‘Nothing is of any use. We must go and misinterpret this.’

    — A Dwemer as quoted by Vivec

  17. Scott says:

    “By this point, Bayes’ Theorem may seem blatantly obvious or even tautological, rather than exciting and new. If so, this introduction has entirely succeeded in its purpose.”
    — Eliezer Yudkowsky, An Intuitive Explanation of Bayes’ Theorem.

    I’d say this “the sequences seem so obvious now” feeling speaks to their phenomenal success.

  18. misha says:

    It seems to me more that Will Newsome has some sort of complex sine-wave-like cycle of comprehensibility.

  19. Doug S. says:

    We’ve also moved in some interesting directions on friendships and relationships… I agree that this is the sort of direction we should be thinking in and that this is the sort of “invent new and better ways of interacting with people” that makes me excited to be part of a community trying them.

    https://xkcd.com/592/

    • Elizabeth says:

      Trying to ignore people’s complex needs, wants, motivations etc. and act like Perfect, Simple Rational Agents is a mistake. That doesn’t mean that the broader category “trying to improve on interaction” is entirely full of mistakes.

    • To me, adopting norms of talking about all kinds of issues with people, and removing norms that prevent one from talking privately with people, seems to improve a lot of areas, especially friendships and relationships. LW does this, and also talks about things more clearly (to the initiated) than baseline.

  20. aj says:

    I’m confused by the lumping in of Jennifer RM with Will Newsome, since my memory of Jennifer is of reading a few highly upvoted, reasonable-sounding comments of hers, whereas with Will hardly anything I’ve read of his has ever made sense.

    Could you point me towards some of Jennifer’s crazier writings, as well as some of Will’s previously-crazy-sounding-that-you-now-agree-with writings?

    • JenniferRM says:

      Maybe Scott had something in mind, but the things I can think of that I’ve written that were complicated were written in non-public fora or were only complicated by implication. To add to the comedy, I will try to respond in a suitably inscrutable way 🙂

      For me, factors that leap to mind as potentially relevant to what I’ve said and not said include: The Virtue Of Silence, philosophic malpractice, moral distress, subtle teaching incentives, false consciousness and trolling, self assessment, and so on.

      I have a rough theory about how to write things that get upvotes, and know that some things I could say would not get upvotes despite my perception of their importance. This knowledge is not entirely comforting because the mechanisms seem to have more to do with social psychology than epistemic virtue… more to do with intellectual objectification than with epistemic charity.

      (In some senses, Will’s public face has been more virtuous and less selfish than mine. He has self censored much less, and thereby created opportunities for people to learn, despite conscious awareness of the personal reputational costs to himself. Will’s youthful solution to the various double binds inherent to public speech is at the very least more daring than my own attempted solution which mostly amounts to cowardice and delay.)

      I read Scott’s blog, but mostly don’t comment because I find public conversation exhaustingly complex to do properly. By calling me and Will and Steve out as part of an only-partly-coherent vanguard, Scott simultaneously (1) flatters us in a way it would be hard for us to disavow without adding more epicycles to strategically constructed motivational psychologies that facilitate philosophic research, (2) signals to the wider community that he has special access, and (3) sort of forces a response in this venue from one or more of us because of criticisms and status moves latent in his “compliment”. There is an element here of a beautiful troll. Also good marketing.

      I have to admire the political deftness, even though it feels weird to be part of the proverbial sausage… personally I find status to often be a barrier to learning and would sort of prefer not to have attention called to myself. At the same time, I’d rather not be called incoherent. My solution here is to claim that I have more capacity for clarity than Scott says I do, but less courage for engaging in public speech than would naively be assumed… Also, Scott is a poop head who deserves to be challenged to a sword fight for dragging Steve, Will, and me “out of our respective labs” and subjecting us to unequilibrated signaling games.

      • Jonathan Weissman says:

        Hey, no fair saying Scott deserves to be challenged to a sword fight without actually challenging him to a sword fight.

        • JenniferRM says:

          Really challenging him to a duel would be on the opposite side of a whole bunch of Schelling fences whose existence I don’t understand well enough to feel comfortable flouting.

          (Also I don’t have my fencing gear with me. Also, I think the challenged person gets to pick weapons? Also, I have other stuff I should be working on. Also, I am somewhat flattered by the complimentary part of the compliment.)

        • Jonathan Weissman says:

          a whole bunch of Schelling fences whose existence I don’t understand well enough to feel comfortable flouting

          Did you just describe a Chesterton’s Schelling fence?

        • JenniferRM says:

          Yes!

          (I thought of just saying that shorter thing, but it seemed like I would be accused of being inscrutable if I didn’t unpack/taboo it to some degree…)

      • Said Achmiz says:

        Well, as promised, that was quite inscrutable.

        • JenniferRM says:

          If you have difficulty unpacking weird statements, it can help to ask questions. I probably don’t have time to respond, but someone else can probably help.

          Also, if you track me down in RL I might be able to explain better in a medium with lower feedback latency.

        • Said Achmiz says:

          Unfortunately, I don’t think tracking you down IRL is feasible (you folks are clustered in the Bay Area, aren’t you?), although someday, perhaps, at some rationalist party or something.

          As far as asking questions goes… here’s the thing. The #1 question that I always want to ask people who say inscrutable things in this general vein is:

          What is your goal, here? Are you attempting to communicate? Do you actually want me (or more generally, your audience) to understand the idea(s) you’re ostensibly trying to convey?

          For many utterers of the inscrutable, the true answer seems (call this case #1) to be “No; my goal is to increase my status by saying things that sound like they are very smart, too smart to be understood by most. It is not my intention to actually communicate any ideas to you.”

          If that’s the case, then asking questions is obviously pointless; it would serve only to lower my status (already potentially lowered even by admitting that I didn’t get the first utterance), and continue to raise theirs. Since in such a case there’s nothing to understand, the process would never result in any satisfactory outcome for me. (As an added note, I generally find myself wanting to punch people like this, though I abstain from doing so.)

          Other times (call this case #2), the true answer seems to be “No; I basically just like hearing myself talk. I enjoy reassuring myself that I’m very smart; or perhaps I had some other motivation for making that utterance. In any case, I don’t actually care much if you understand me, and am rather uninterested in explaining my thoughts to you.”

          Again, asking questions is pointless.

          Sometimes, though (case #3), it does seem that the person in question is really attempting to communicate some idea(s). Of course, they’re clearly not doing very well, but that is their goal. This case is more complicated than the other two. Whether it’s useful to try and ask questions of the utterer depends on various factors; but sometimes it is.

          The key thing here is that even if I think the speaker has a genuine desire to convey some idea(s), before I invest effort into attempting to penetrate the maze of inscrutability in which those ideas are allegedly hidden, I have to have some indication that they both really exist and are worth my while. There are, after all, any number of cranks on the internet. I have neither the time nor inclination to try and understand everyone who says inscrutable things.

          Your suggestion that I ask questions, disclaimed by the comment that you won’t be responding, but perhaps other people will, does not really disambiguate between cases #1 and #3. So not only do I not have any clear indication that your ideas are worth my time to tease out (my sincere apologies for the bluntness!), but I don’t even know whether you’re trying to communicate any ideas in the first place!

          This is why I requested that Scott present us with some example of an inscrutable thing you (or Will, or Steve) have said; with an explanation of what it actually means, in clear and comprehensible language; and a confirmation by the person in question that yes, this is what they meant (“so sorry, folks, I clearly don’t have Scott’s gift for clear writing; good thing for us all that we’ve got Scott, eh?”).

        • Douglas Knight says:

          Three people; three cases.

      • Will Newsome says:

        Will’s youthful solution to the various double binds inherent to public speech

        I AM OLDER THAN MUSIC.

        • JenniferRM says:

          All the senses of the word “I” that seem like they could make that true seem dubious. Did you solve the problem of discernment without telling me?

      • Multiheaded says:

        You’re just great, Jen.

      • Scott Alexander says:

        I wonder if challenging me to a duel because I mentioned you’re really smart and I have learned a lot from you is one of those things that will make perfect sense two years from now.

        I’m a mediocre fencer (well, was in college) and would be happy to duel you next time the two of us and appropriate gear are in the same location.

        • JenniferRM says:

          You’re the Prince of Salience and you aimed your salience lasers at me without asking!

          (As above, “personally I find status to often be a barrier to learning and would sort of prefer not to have attention called to myself”.)

          The sword thing was mostly just for lulz. Admittedly, it’s not the greatest move from the attention dodging perspective… I think maybe that part was a net miscalculation? Compared to actual dueling, it would probably be more fun to get together sometime to practice fencing a bit and talk about what a progressive dueling protocol might look like, and whether it would be good for such a thing to exist.

  21. zslastman says:

    One thing Less Wrong has done is make it much less rewarding for me to speak with a large majority of the people around me. I want to make people read the Sequences so they’ll stop having the same stupid arguments over and over, but the stuff about AI and cryonics tends to put people off, as does the sheer length. So I’m stuck thinking about Schelling fences and existential risks and reductionism all the time, while other people debate whether nuclear power counts as ‘green energy’.

    • Viliam Búr says:

      You could make a shorter version of the selected parts of the Sequences, just explaining the very basic concepts. (With hyperlinks to relevant parts of Sequences at the end.) Reduce the whole mini-sequence into one article; and maybe later add another if you see in the comments that there is some frequent misunderstanding. You could use the structure of the prepared Sequences e-book.

      For me the most frustrating thing is people not understanding that beliefs should be used to create (probabilistic) predictions, or at least they should be expected to somehow correspond to the territory. For example, this weekend I talked with a smart girl, and I thought “maybe this is LW material, I should talk a bit more and then invite her to a meetup”, when she mentioned people who can see human auras. I politely noted that some people could fake this ability, and she agreed that not all who claim to see auras really do. So I said: “imagine that there are two people: one of them really can see auras, another one is a fake; how could you (as a person who can’t see auras) experimentally decide which one is real”, and she said something like: it’s not possible. Sigh. So she has a university diploma in natural sciences, and yet the idea of an experiment is completely foreign to her. I can’t even… And that was one of the smarter people I met. If most people can’t reason correctly about such simple matters, how could I discuss with them anything more complicated?

      Seems to me that the only meaningful debate with most people is about their first-hand experiences. That is something sufficiently close to reality, and I can learn many interesting things. But when they start making any conclusions, it’s time to run away. On the other hand, many people like talking about their experiences, so this is a mutually acceptable solution for social talk.

  22. Alexander Stanislaw says:

    I hate to be a jerk/devil’s advocate but here goes.

    Perhaps being immersed in a memeplex such as LW for long enough is sufficient to make the source material seem obvious, whether or not the memes are correct. I’m sure that there are intelligent Christians our age who are re-reading C.S. Lewis and thinking, “wow this is so obvious”.

    I admit, I find it very hard to argue with the LW stance on what concepts are, for instance, and it drives me nuts when people try to define themselves into being correct, or argue about whether X is really Y, or equivocate between definitions of words. But apparently Gilbert of the Last Conformer disagrees with the LW stance on concepts, and he seems knowledgeable.

  23. Douglas Knight says:

    Effective Altruism:

    There are several points here: (1) effectiveness: Givewell, Lomborg, recent trends in economics; (2) giving lots of money (eg, tithing); (3) earning to give; (4) giving to the global poor, rather than the local poor or the local rich.

    Singer has been advocating giving all one’s wealth to the global poor for decades without apparently affecting anyone. He has been much more effective at convincing his students to become vegetarians than to do anything about poverty and I think this is a bit mysterious. At the very least, I wonder why he didn’t seem to notice the contrast and try to be more effective. Fairly abruptly, a bunch of philosophy grad students around him decided that the argument was convincing. Maybe there was a critical mass. Another suggestion I’ve heard is that he used to make very far arguments that people in rich countries should give everything to the poor, but in December 2006, he finally got around to looking at numbers and advocated a sliding scale starting at tithing.

    I wonder how important Bill Gates was as a role model.

    • Michael Edward Vassar says:

      I don’t know how critical this was, but in I believe 2007 Carl Schulman and I approached him about Givewell, and about the idea that deciding exactly where to give might be more important than deciding how much to give.

      • Douglas Knight says:

        It seems quite plausible that the combination of Givewell + Singer had an effect that Singer had not previously had on his students. But I think the timeline is wrong.

    • Adam Strandberg says:

      I would also say that a big difference now is that (at least some) people believe that donating to charities actually helps. I go to MIT and there are a lot of smart people here who are vehemently against the notion of donating to foreign aid.

      I see where they’re coming from. I recall doing a project on Ethiopia in my high school history class and being horrified at how ineffective or counterproductive virtually all the famous “aid” efforts were, and became convinced that any effort of the “throw money at it” sort was fundamentally flawed. It wasn’t until I ran into effective altruism through LW and read “Poor Economics” that I became convinced again that donating can be useful.

  24. Troy says:

    I don’t disagree with your rationale for rereading either history of philosophy, or essays formative in your own philosophical development. But I think there are other reasons too, similar to Alexander’s point above — views can seem obvious to you not only because they are obvious but because you aren’t accustomed to thinking about alternatives (or the best arguments for alternatives). This is especially true when a view is taken for granted in an era or community. C.S. Lewis said that we should read old books because they make different mistakes than current books:

    “People were no cleverer then than they are now; they made as many mistakes as we. But not the same mistakes. They will not flatter us in the errors we are already committing; and their own errors, being now open and palpable, will not endanger us.”

    As a philosopher, I think there are certain research programs in philosophy that are substantially better today than in the past. Much of this is in more formal areas, like probability and decision theory. But I think there are also untenable positions taken for granted by many philosophers today that reading older philosophers can help disabuse us of. I am neither a member of LW nor intimately knowledgeable of the community, but I suspect that there are ideas taken for granted there that I would similarly dispute (probably mostly having to do with naturalism about the mind).

    A separate though related reason to read history of philosophy is that often our own present-day concepts have historical baggage or connotations of which we are unaware. For example, the concept of a law of nature seems to have first come about in a theistic society in which laws of nature were conceived as divine policies for governing the world. Today many non-theistic scientists and philosophers talk about laws of nature in such a way that you would not realize the theistic origins of the idea. And yet, it is not clear that the idea makes sense apart from this theistic interpretation. Reading older philosophers helps us to see when concepts we take for granted have philosophical commitments which we now deny.

  25. peterdjones says:

    Any thoughts on FAI?

  26. JPH says:

    Finding out about the story, Friendship is Optimal, through LW is probably my highlight!

    http://www.fimfiction.net/story/62074/friendship-is-optimal

    The story originated with the author misspelling “paper clip optimiser” as “paper clopper optimiser”…

  27. Said Achmiz says:

    Scott, I think you overstate how accepted much of what you cite is, among Less Wrong readers. I’ve been reading LW since before LW existed (i.e., since 2007 on Overcoming Bias), have read all the Sequences (and found them almost obviously correct — though quite revelatory — pretty much immediately)… and yet my reaction to much of the post-Sequence stuff you mention is a whole bunch of *skepticalface*.

    For example: hyperbolic discounting? Why is it bad? Does it really explain as much as people imply? I reread the linked post and am still unconvinced and honestly baffled by why other people are so convinced.

    Efficient charity and efficient altruism? I am super skeptical of the whole movement (although it’s partly because most EA people have bizarre values that I don’t share — and seem to assume that those values are obviously the only possible values).

    Tell Culture? Just bleh.

    And the whole “these people are saying inscrutable crazy-sounding things, but we all (somehow) know they are very smart, and hey after spending a while in this community those crazy-sounding things are starting to sound right!” thing — well, I hope you can see how very, very suspicious that sounds. I think it would help a lot if you could make a post or three that went like: “Here’s a thing Will Newsome (or whoever) has said; as you can see, it sounds insane. In fact, this is what he meant, and this is why it is true and obvious and important. And now, Will Newsome will post a comment endorsing my explanation and confirming that this is in fact what he meant, so you know that I’m not just steelmanning inscrutable woo into something correct but unrelated.”

    (As a side note, I’d add Michael Vassar to the list of people whose posts I can almost never understand even a little; it just sounds like everything he says is predicated on some unspecified set of non-obvious assumptions, and I don’t even know what he’s saying, much less have the slightest clue whether it’s true.)

    In contrast, Eliezer’s writings, and yours, and some of Alicorn’s (the stuff on luminosity), are revelatory, well-written and easy to digest, and almost immediately obviously correct. That stuff, you can justifiably say, forms part of Less Wrong’s corpus of “wow, how did we ever not believe that?”

    • blacktrance says:

      I agree with almost everything in this comment. The Sequences are brilliant, so are many of Alicorn’s and Scott’s writings. The other Main posts are hit-or-miss, perhaps even with more “miss” than “hit”.

      I do like the Ask/Guess/Tell culture posts and discussions they spawned, though.

      • Said Achmiz says:

        I, too, like the discussion around Ask/Guess/Tell Culture, and think it’s beneficial to deconstruct and analyze these dynamics; I just don’t think Tell Culture is itself a good idea. (Certainly I don’t think we should go ahead and try to make it, and things of that nature, into The New Way We Do Things.)

        • nydwracu says:

          It’s greatly appealing, I think, to people with some sort of social anxiety. (That kind of anxiety, I suspect, can develop from having to navigate many different cultures that don’t signal their Ask/Guess status very strongly, above and beyond what comes out of having to navigate many different cultures to begin with and the concomitant lack of a consistent, reliable cultural identity and thede to fall back on.) But I have to question whether signaling games and status structures can be overridden, or worked with, in that particular way. I also question its utility among those drawn to it for the above reasons: lacking a reliable thede seems [at least through introspection] to lead to minimizing ‘networking’, i.e. making concrete use of connections, which is what the original post series seems especially concerned with. And at least part of the mechanism here appears to me to operate at a prerational level, on the belief that explicit statements of norms are always [or usually] misleading, that every culture’s rule-set is full of immensely dangerous unknown unknowns that just can’t be fully identified, and that therefore the only option is paralysis.

          So, on the one hand, it’s useful as a question: if this hasn’t been tried yet, why not? On the other hand, as a question of possibility, it’s one where the affirmative answer may be unthinkable, or at least approach unthinkability, unless the preceding paragraph is totally wrong.

      • suntzuanime says:

        To me Alicorn’s stuff seems like a paradigmatic example of the “misses”, rather than an example of the “hits”. Less Wrong as self-help culture rather than the pursuit of truth.

        • Said Achmiz says:

          I concur that “Less Wrong as self-help culture” is not an interesting direction for the community as a whole to move, and for this reason I dislike the endless “Rational dieting” and “Rational productivity” and “Cure akrasia in five easy (and rational) steps” posts that one sees there. However, I found the Luminosity sequence interesting, not because I went out and applied the techniques therein to my own life and profited thereby, but for the insights and concepts contained in it.

    • peterdjones says:

      Discounting seems reasonable to me…it just takes into account the fact that the further away a promised reward is, the more likely something is to intervene to stymie it. The alternative approach makes the epistemically inaccurate assumption that you can be certain some distant reward will be delivered.

      • peterdjones says:

        If you disagree, I’ll promise to pay you $100 in 50 years’ time on receipt of $10 next week. BTW, I’m 50….

      • Sniffnoy says:

        But there’s a difference between adjusting for uncertainty due to something being in the future, and inherently valuing it less due to it being in the future. Also, this comment seems irrelevant to the question of hyperbolic[0] vs. exponential discounting? The question isn’t about discounting vs. no discounting (though Eliezer’s certainly raised that question), but about the shape of the discounting curve.

        [0]Can I just remark on how much I dislike the name “hyperbolic discounting”? That is not what “hyperbolic” normally means! It should be called harmonic discounting.
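
        (For concreteness, here are the two curve shapes at issue, in their standard textbook forms; these are the usual definitions, not anything specific to the linked post. A reward of value A at delay D is valued at:

            Exponential:            V(D) = A · e^(−kD)    [a constant fraction of value lost per unit of delay]
            Hyperbolic (Mazur):     V(D) = A / (1 + kD)

        The behavioral difference: exponential discounting is time-consistent, so the preference between two dated rewards never flips as both draw nearer; under the hyperbolic curve, a smaller-sooner reward can overtake a larger-later one as it becomes imminent, which is the preference-reversal pattern the akrasia discussions lean on. And presumably the naming complaint is that V(D) = A/(1 + kD) decays like 1/D, i.e. harmonically, rather than like the hyperbolic functions sinh and cosh.)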

        • Viliam Búr says:

          By the way, why should discounting follow any curve?

          I mean, there is already a consensus that people are adaptation executers, not utility maximizers. Evolution gives us random adaptations which contribute to fitness (1) on average (2) in an ancient jungle. Given this, why should we assume that “valuing things in the future less” should precisely follow some cool mathematical equation, instead of some unspecified “less” which is the result of the random influence of various heuristics, e.g. how easy or difficult it is to imagine the given day in the future? (So for example we may care less about what happens “200 days later” than about what happens “next Christmas”, even if the next Christmas happens to be 200 days from now.)

    • Curious says:

      Ever since Scott complimented Leah Libresco for her supposed intellectual prowess, I have begun taking his endorsements of other people with a few buckets of salt.

    • St. Rev says:

      I think there’s a good (and, ironically, Bayesian) mathematical argument that hyperbolic discounting is a rational method under specific uncertainty assumptions.

      It’s funny that you find Vassar undecipherable; when I read him, my reaction is almost always “that’s exactly what I’d say about this if I were a bit smarter”.
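
      (One standard version of the argument in St. Rev’s first paragraph is Sozou’s 1998 construction; whether it is the one he has in mind here is a guess. Suppose you discount exponentially at hazard rate r, i.e. you expect a promised reward at delay t to survive with probability e^(−rt), but r itself is unknown, with an exponential prior of rate λ. Then the expected discount factor is

          ∫₀^∞ e^(−rt) · λe^(−λr) dr = λ/(λ + t) = 1/(1 + t/λ),

      which is exactly the hyperbolic form. Averaging a family of “rational” exponential curves over uncertainty about how risky waiting is yields a single hyperbolic curve.)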

    • John Salvatier says:

      The evidence for hyperbolic discounting actually happening is, I think, very strong. I think it’s empirically observed in human psychology experiments and biologically observed in monkey neuroeconomics experiments (they measure the strength of reward inputs into decision making areas and these fall off hyperbolically).

      Of course you can still doubt the importance or relevance.

  28. Error says:

    It wasn’t mentioned in your history above, and I don’t know enough of LW’s history to know if it deserves to be, but your post on Diseased Thinking was personally one of the most useful I’ve read on Less Wrong. A lot of what’s in it I think is also in the Sequences, but it was only after reading that post that I began to *notice* the use of ill-specified categories to smuggle moral conclusions. That was important to me as one of the final nails in the coffin of my interest in contemporary politics. I’m glad to have that time back. Or at least not to consume any more of it.

    • Said Achmiz says:

      Ooh, agreed. That one was excellent! (This is the post, btw.)

    • Scott Alexander says:

      I like that one too, but I think it was a pretty isolated discussion and not something that got internalized and built upon in the same way as some of the others. I don’t find myself referring to it every time I talk to a Less Wronger or getting enraged that I can’t refer to it every time I talk to a muggle.

  29. suntzuanime says:

    Yes, you are especially smart. What the heck?

  30. BenSix says:

    …and not as disgusted as I should have been by people making arguments like “If there’s any chance at all a criminal might re-offend, we shouldn’t let them out of jail”.

    I don’t find that disgusting: I’m just baffled that anyone could make it and not realise that by their logic no one should be sent to jail if there is any chance of their being innocent.

    • Said Achmiz says:

      Not true; such an apparent inconsistency is easily explained by an asymmetry of values. This person could simply value incarceration of the guilty very highly (effectively infinitely), and freedom of the innocent much less highly.

      • BenSix says:

        Given that both are dangerous for the same reasons – the deprivation of liberty, property and, perhaps, freedom from rape and murder – that would seem bizarre, but I take the point that our opinions can be nothing if not eccentric.

        The point, I suppose, of Less Wrong.

    • Error says:

      I wonder if those same people would support, on the same principle, jailing anyone currently not in jail that might have any chance at all of offending in the future. I.e. everybody.

  31. Shmi Nux says:

    Scott, I have googled around and it looks like the term “steelman” was coined on LW as well, shortly after Luke’s post on the hierarchy of arguing or something, though I cannot find the specific comment.

  32. Doug S. says:

    I wonder how class markers have changed in the 30 or so years since Paul Fussell wrote his book on class in America. 1983 not only pre-dates the dot-com boom and bust, it even pre-dates Microsoft’s IPO (which occurred in 1986)! And references to “video games” have to refer to Atari-era systems and video arcades; the NES was released in the United States in 1985. I also think clothing with writing on it has lost some of its stigma; you can’t copyright a fashion design, but you can trademark a logo, so fashion designers whose clothes are actually sold in stores have taken to putting logos and names on things so people can tell whether someone is wearing a cheap knockoff or not.

  33. Harvey says:

    You make this website seem like a cult.

  34. Pingback: Weekly review: Week ending March 14, 2014 - sacha chua :: living an awesome life

  35. Broken link: Humans Are Not Automatically Strategic. Needs http:// to make it Humans Are Not Automatically Strategic.

    In the future, should I fix obviously-broken links like this for you by editing the post, or just leave a comment about it and let you do it?

  36. The thing about Will and Steve and Jennifer saying things that seem inscrutable at first but make sense later on is dead on.

  37. Pingback: Meta-error: I like therefore I am | Meteuphoric

  38. a says:

    Write more, that’s all I have to say. Literally, it seems as though you relied on the video to make your point. You clearly know what you’re talking about; why throw away your intelligence on just posting videos to your site when you could be giving us something informative to read?

  39. Pingback: (Essays 2) Production Notes - Harry Potter and the Methods of Rationality: The Podcast