OT19: Don’t Thread On Me

This is the semimonthly open thread. Post about anything you want, ask random questions, whatever. Also:

1. Comments of the week are Scott McGreal actually reading the supplement of that growth mindset study, and gwern responding to the cactus-person story in the most gwernish way possible.

2. Worthy members of the in-group who need financial help: CyborgButterflies (donate here) and as always the guy who runs CrazyMeds (donate by clicking the yellow DONATE button on the right side here)

3. I offer you a statistical mystery a little closer to home than the ones we usually investigate around here: how come my blog readership has collapsed? The week-by-week chart looks like this:

Notice that the week of February 23rd it falls and has never recovered. In fact, I can pinpoint the specific day:

Between February 20th and February 21st, I lost about a third of my blog readership, and they haven’t come back.

Now, I did go on vacation starting February 20 and make fewer posts than normal during that time, but usually when I don’t post for a while I get a very gradual drop-off, whereas here, the day after a relatively popular post, everyone departs all of a sudden. And I’ve been back from vacation for a month and a half without anything getting better.

I would assume maybe WordPress changed its method of calculating statistics around that time, but I can’t find any evidence of this on the WordPress webpage. That suggests it might be a real thing. Did any of you leave around February 20th for some reason and not check the blog again until today? Did anything happen February 20th that tempted you to leave and you only barely hung on? I get self-esteem and occasionally money from blog hits, so this is kind of bothering me.

4. I want to clarify that when I discuss growth mindset, the strongest conclusion I can come to is that it’s not on as firm ground as some people seem to think. I do not endorse claims that I have “debunked” growth mindset or that it is “stupid”. There are still lots of excellent studies in favor, they just have to be interpreted in the context of other things.

This entry was posted in Uncategorized.

869 Responses to OT19: Don’t Thread On Me

  1. Julian says:

    Didn’t you make a comment around that time that there would be fewer posts? Maybe people are just taking a break and then will come back when they think you are back to normal (I don’t find this too plausible).

    Fewer posts means fewer people linking to your posts? (Maybe you can see these stats and this isn’t the issue.)

    Maybe readership is actually back to normal and the earlier period was the unusual one?

    Google has changed their algorithm for search a few times in the past few months. Maybe that has had an effect?

    Just some really quick thoughts that could be tested empirically, possibly.

    Also this may be a “Fooled by Randomness” kind of thing: starting Feb 18 and ending Mar 1 there is a pretty linear drop-off, if you only look at the beginning and ending values. Only when you look at the points in between does it look like a big drop. The big drop may just be random and not indicative of anything. If the drop started Feb 18, maybe we should look there for clues.

    We may also need to see a longer time period to identify a real break from long term trends.

    • Dude Man says:

      Didn’t you make a comment around that time that there would be fewer posts? Maybe people are just taking a break and then will come back when they think you are back to normal (I don’t find this too plausible).

      Maybe readership is actually back to normal and the earlier period was the unusual one?

      My guess is that it’s a combination of these two. In the few months before that graph starts, Scott posted a bunch of stuff that got a lot of outside attention; the untitled post in particular garnered a lot of attention and was posted two weeks before that graph starts. These posts brought in a lot of new readers, and after Scott announced he was taking a couple weeks off, some of the new readers decided they wouldn’t check while updates stopped and just forgot to come back.

      • I’m one of those crazy people who doesn’t use RSS, so when I want to know if there’s a new SSC post I’m reduced to actually visiting the site (like an animal). I probably check multiple times per day, more if it’s been a while since a post or if I’m particularly bored. If there are others like me, maybe we all started checking less frequently the day after Scott announced that there wouldn’t be any blogging for a while. That could explain the suddenness of the dropoff, and then its persistence could be part of some longer-term trend?

        • se23 says:

          I also actually visit the site to see if there’s a new Scott post. When he says there will be fewer posts, I still check just as often for that sweet, sweet intermittent reward. And hey, this past week it has paid off, despite him saying that there would be fewer posts.

      • onyomi says:

        I would agree that it was probably a return to normal after a few unusually popular posts.

        But I would also recommend not saying “expect fewer posts in the near future.” Your standard of “fewer posts” is still more posts than most bloggers. My general expectation of bloggers is for them to overpromise and underdeliver (not that I have any right to expect them to “deliver” since they are generally providing me with free entertainment/intellectual stimulation), so when I see “expect fewer posts for the near future,” I read it as “expect me to drop off the face of the Earth for an indefinite period.”

        You tend to underpromise and overdeliver, but newer readers may not know that.

        Oh, and post more about social justice, tribal politics, and similar things that make people angry.

        • Cauê says:

          Oh, and post more about social justice, tribal politics, and similar things that make people angry.

          He should, but not because it makes people angry. If anything, the way he does it probably helps people to be less angry and less stupid to each other about these things.

          • onyomi says:

            Yes, I also like Scott’s posts on these topics for the same reasons. But even when written about in an evenhanded, rational manner, these sorts of topics do excite people more. What sexy women and cute animals are to advertising, politics and social justice may be to blogging.

            And considering that people like Ezra Klein definitely read Scott, it may not be too much to hope for that his writing actually affects national conversations in a positive way.

    • Douglas Knight says:

      Yes, google is the only plausible way that traffic could decline overnight. However, I see people saying 4 February, not 20 February. Also, the analytics software should report proportion coming from google. (Scott, don’t you have google analytics?)

      I think it is pareidolia.

      • Douglas Knight says:

        Or Facebook algorithms, or maybe Twitter algorithms. But the same applies there: people ought to track the change dates and Scott ought to be able to determine whether his traffic from these sources declined.

      • FJ says:

        Thank you for using the word “pareidolia.” I learned a new word today thanks to you. I will now abuse this term by using it in every possible conversation for a week, driving my family nuts.

    • haishan says:

      “We may also need to see a longer time period to identify a real break from long term trends.”

      I know very little about time series analysis, but from what I understand there are ways to get at this question statistically.
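      For instance, a crude single-changepoint search in pure stdlib Python (the weekly numbers below are made up to mimic a sudden one-third drop, not Scott’s actual stats): for each candidate split, model the series as two constant levels and take the split that minimizes the squared error.

```python
def sse(xs):
    """Sum of squared deviations from the segment mean."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def best_changepoint(series):
    """Index k (1 <= k < len) that best splits the series into two levels."""
    return min(range(1, len(series)),
               key=lambda k: sse(series[:k]) + sse(series[k:]))

# Hypothetical weekly hit counts: steady, then a sudden ~1/3 drop.
weekly_hits = [90, 95, 100, 98, 102, 97, 65, 63, 66, 64, 62, 65]
print(best_changepoint(weekly_hits))  # → 6 (the week the level shifts)
```

      A real analysis would also want a significance test (was the fit improvement more than chance?), which is what proper changepoint methods add.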

    • gwern says:

      Needs moar data – not enough to show changepoint, and differences by traffic source matter. My first thought was ‘if fewer search hits, then must be Mobilegeddon, but if referrals, it’s lack of political/economics blogging prompting traffic from other bloggers, and if direct or RSS, may be lack of quality’. All quite different causes & traffic patterns, all easily checked.

  2. caryatis says:

    I just read a book of first-aid advice for backpackers, published in 1910. How badly wrong would I go if I were to follow this advice?

    My sense is that 21st-century medicine is much better when it comes to vaccines, treating serious illness, surgical techniques, and antibiotics. But I wouldn’t think that basic first-aid techniques have changed much (thinking about things like broken bones, sprains, cuts, burns, colds, flu, drowning), especially in the backpacking context. With pain management, if anything we’ve probably gone downhill. (The advice for removing a foreign object from the eye begins with my new favorite phrase: “First, cocainize the eye.”) Cocaine is also used for toothache and diarrhea, and calomel (mercury) for colds.

    Here’s the list of basic drugs to carry on a backpacking trip: Calomel, dosimetric trinity, chlorodyne, intestinal antiseptic, quinine sulphate, elaterin, phenacetine, sun cholera, apomorphia hydrochlorate, digitalin, morphine, strychnia, cocaine (50 cents a tube).

    EDIT: medicine used to be more poetic too. “If the patient is of strong physique and God smiles, he may not have septic fever.”

    • pku says:

      I know CPR has been modified a bunch of times since (did they even have CPR back then?)

    • svalbardcaretaker says:

      I’d strongly advise not relying on that book. A lot of the medical procedures, as well as the available tools/materials, have changed a lot. Even nowadays things like CPR regularly undergo changes on a <10 year scale.

      In my hypothermia book, essential for backpacking in New Zealand, there are a lot of techniques and data that only got invented/implemented after 1970. Nobody before then ran data analysis on success rates of different methods for rewarming patients!

    • caryatis says:

      Interesting. It actually doesn’t discuss either CPR or hypothermia care.

      • John Schilling says:

        I’m going to hazard a guess that the value of CPR on a backpacking trip is extremely small, in 1910 or today. Is there any discussion of straight artificial respiration, as might be used for a drowning victim whose heart is still beating? That would be more plausibly useful in that environment, and I think it was understood in 1910.

        Hypothermia treatment, that’s definitely useful to backpackers. And it is an area where the state of the art has improved enormously over the past century. In part due to actual Nazi medical research, which obviously wasn’t available in 1910. For that matter, I understand there have been significant improvements even since ~1980, when I was both studying and receiving hypothermia treatment.

    • zz says:

      I’m not really sure how badly wrong you would go without a better idea of what the advice is.

      From the specifics you’ve given, though… the drug list seems excessive. I’d include an antiseptic, maybe ibuprofen. I might have iodine pills if I can’t carry all the water I’ll need, but not with the first aid kit.

      • Gbdub says:

        On the other hand, your drug list seems too short. I’d say the iodine pills are a must (always assume you might get stuck at least one day longer than you plan). I’d also add Imodium – diarrhea is no joke, and dehydration is a big hazard in a wilderness setting. If you’re hiking with a group, I’d also strongly recommend Benadryl. Anyone with known risk of anaphylaxis probably carries an epi-pen or two, but that’s a fast acting medication – obviously necessary if breathing is immediately endangered, but if you don’t simultaneously pop an oral antihistamine (which is slower but longer lasting) it’s possible that symptoms will return before you can reach help.

        Other than that, you mostly need bandages etc. for wound and blister care, and ace bandages/athletic tape are seriously handy (and don’t weigh much). I’d also have a plan for turning your gear into a sling/splint for an arm/leg. Most injuries are going to be limb related, or cuts/burns/abrasions.

        The biggest thing with wilderness first aid is wrapping your mind around the time scale – most of us spend most of our time in places where appropriate first aid is “plug any gushing wound and call 911”. Once you get outside of cell phone coverage (and even some places with it), 911 might be hours away at best. One outcome of this is that, as mentioned, CPR isn’t super useful – if your heart stops that far from a hospital, you’re probably dead. Same goes for a lot of other conditions that might be survivable, but only if medical care is reached immediately. If it will kill you in 30 minutes, you’re dead. Fortunately most stuff isn’t in this category.

        Your biggest goals in wilderness first aid are 1) don’t get anybody else hurt trying to rescue 2) mitigate any immediate threats to life (airway, breathing, circulation (bleeding), “ABC”) 3) assess condition (what’s wrong, being particularly mindful of shock, head, and spinal injuries) 4) decide on treatment – evacuate, call for help, or carry on (for minor stuff) 5) stabilize/treat – wound care, pain mitigation, and mobility restoration.

        Source: wilderness first aid class from NOLS a couple years ago, which was pretty fun and informative. My local REI offered it. Would recommend.

        • John Schilling says:

          A 1910 guide obviously predates penicillin, and that’s a big deal. Unless you are certain of reaching a hospital within 24 hours, you probably do want a broad-spectrum antibiotic or two for an early start on dealing with potentially infected wounds or infectious diseases. Maybe specific antifungal or antiparasitics as well depending on where you are doing your backpacking. The good thing about being in the wild is that you’ll probably be dealing with wild strains of infection, rather than the resistant-to-everything versions that have evolved in and around modern hospitals, so basic antibiotics are still likely to work.

          These are usually prescription meds, but if you’re serious and if you have any sort of formal training – for me, the Eagle Scout badge was enough – it’s usually not a problem to get your family GP to write out a prescription for some Zithromax every year. It should not need to be noted that “Scott Alexander” is not your family GP.

          Pain medications beyond ibuprofen are a more sensitive matter these days, but still worth pursuing. There’s a world of difference between a day with a broken leg, and a day with a broken leg and no narcotics, and it goes beyond just the negative utils of experiencing pain. The victim’s ability to assist his own evacuation, or even just keep still when he needs to keep still, can be vital, and pain is very much Not Helpful when it comes to early healing. In the 1980s, our family had a standing prescription for Demerol, but doctors (and the FDA) are more sensitive about “gimme some narcotics, just in case” in the current political climate.

          The antihistamines and antidiarrhoeals are, as Gbdub notes, potentially vital and fortunately nonprescription.

        • zz says:

          I’m an Eagle scout and have taught the wilderness survival merit badge about 5 times. I’ve now lowered my confidence levels in everything I learned in BSA.

          (Except knots; relevant experts tell me that BSA does a fine job with those.)

    • I am pretty sure that advice for treating snake bite has changed in recent decades, and that sounds like the sort of thing that would be good to know if you were backpacking in the countryside. For example, I have seen older books (I think from the ’70’s) that advise making cuts on the bite and applying a tourniquet if the bite is on a limb. (In old movies people would try to suck out the poison, but I’m not sure if that was ever a seriously advised thing.) Nowadays, as I understand, the advice is to wrap the entire limb in a tight bandage, as tourniquets can be harmful, and cutting the wound apparently is useless. For all I know, there might be lots of old formerly accepted first aid advice that has been discredited in recent years.

      • Gbdub says:

        Removing the venom one way or another is apparently still potentially effective, but it’s sort of like rescue breathing – it’s fallen out of favor because the average amateur is likely to muck it up and do more harm than good. And adding an extra wound is almost always a bad idea.

        The reason tourniquets are a bad idea is because you can damage the limb by cutting off blood flow, and, even if it works as intended (keeping the venom in the limb), that’s bad – it increases the local tissue damage and makes it more likely you’ll lose the limb.

        • Douglas Knight says:

          Do you mean rescue breathing as opposed to CPR? What is the risk of rescue breathing? CPR appears to me to be exactly opposite to your theory: it appears to be extremely dangerous, and yet extremely popular. Maybe people who actually teach first aid give good advice about not doing it, or extensive training in how to do it, but it is extremely fashionable to teach to amateurs. For example, my high school made everyone take a one-day course on CPR. I’m pretty sure that had negative value by producing people who would act beyond their competence, but even if not, compared to a one-day course on basic first aid it was a really terrible decision.

          • Gbdub says:

            In the past decade or so there has been a push to remove rescue breathing from the CPR protocol for amateur rescuers, and recommend chest compressions only.

            The logic is that in the case of cardiac arrest, chest compressions are critical for delivering already-oxygenated blood to the brain, and interrupting this process to deliver rescue breaths is counterproductive (there is already enough blood oxygen for survival for a few minutes; the trick is getting it moving).

            Also, the ideal ratio of compressions to rescue breaths is complicated, and depends on the number of rescuers and what exactly caused the arrest. So there’s a theory that it’s better to teach everybody simple chest compressions rather than expect non professionals to remember all the rules.

            It looks like there have been a couple studies suggesting increased success with a chest compression only protocol, but I’m not familiar enough with the underlying issue, and in any case it doesn’t sound like the medical community has really reached a consensus.

            As for chest compressions being dangerous, I understand that it can cause injury, but if you really are in cardiac arrest I don’t see how it could be more dangerous than not doing chest compressions.

          • Jaskologist says:

            As my CPR trainer explained:

            Q: What do you do when you hear ribs crack?

            A: Keep going. Ribs heal. Death doesn’t.

            They also implied that people have been overselling the danger of chest compressions. But then, they also said that it usually doesn’t work.

          • Douglas Knight says:

            if you really are in cardiac arrest

            If you assume away the problem, you’re fine.

          • Garrett says:

            EMT here.
            The main focus of CPR is circulating blood (which still contains available oxygen) to the brain, namely via chest compressions. Many people (even in EMS) involved with CPR will take too much time away from compressions to provide ventilations. Lay people are even worse, hence the focus on hands-only CPR.
            The big problem with rescue breathing is that most people get hung up on the “mouth-to-mouth” aspect of it, and so they freeze instead of starting the important part, the compressions.
            Even in EMS we are starting to take the approach that ventilations shouldn’t be provided until 10 minutes into working a cardiac arrest.

        • Murphy says:

          I have a long standing issue with the removal of tourniquets as the recommendation for serious bleeding.

          The new fad appears to be “apply pressure above the wound”, which seems to have all the disadvantages of cutting off blood flow while being far less effective.

          If I accidentally open an artery in my arm or leg I’m quite happy to tell anyone who tries to stop me applying a tourniquet to fuck off: I’d rather lose some feeling in a limb than die.

          I’m of the opinion that it’s a legal choice rather than a medical one: someone who loses a limb is pretty much certain to sue someone, anyone. Someone who dies in an accident isn’t going to sue and their next of kin are unlikely to be able to go after the first aiders for failing to keep the person alive.

          Plus the numbers appear to be wrong in some recent first aid books: you can cut off blood flow to a limb for more than 10 minutes and still have it work perfectly well later. Hands and feet are pretty resistant to reasonably short-term loss of oxygen.

          Just looked it up, and the first first-aid guide I found that discusses it recommends getting signed written permission from the person before applying the tourniquet: this being the person who is currently bleeding to death. Fucking 100% a lawyer change to first aid rules rather than a medical one.


          better they bleed to death than sue you i guess

  3. Bryan Hann says:

    Late Wittgenstein. Thoughts?

    • Perhaps yours, first?

    • jaimeastorga2000 says:

      Better than early Wittgenstein. But unless you are interested in the history of philosophy, you are honestly better off just reading Eliezer Yudkowsky’s “A Human’s Guide to Words” instead.

      • Sam says:

        I thought it was interesting that the writer of Ex Machina was so enamored with *early* Wittgenstein in particular.

    • Brock says:

      Overrated. There are a few interesting ideas in PI, but they’re buried in a cryptic mess.

    • Protagoras says:

      interest in late Wittgenstein among philosophers has almost completely collapsed. My own diagnosis is that this is deserved, and the story is roughly this: early Wittgenstein was associated with Logical Positivism. Late Wittgenstein is anti-LPist, so when LP went spectacularly out of fashion, criticisms by someone who had been associated with the movement seemed particularly appealing (in an “even one of their own heroes figured out it couldn’t work” sort of way). Hardly anybody bothers to hate on LP any more, because it’s been dead for too long (there are now even substantial numbers who have come around to the view that it was unfairly maligned), so the main source of late Wittgenstein’s fashionability has disappeared.

      • David Moss says:

        “interest in late Wittgenstein among philosophers has almost completely collapsed.”

        That’s not my impression of the field at all. Maybe it is more true wherever you are. Here in the UK, though, that statement couldn’t be further from the truth. In one of the departments I’ve been in over the past few years, it was explicitly but unofficially said that you had to “like Wittgenstein” to get into the department; we had around 4 staff working explicitly on Wittgenstein and a couple of others who work with Wittgenstein but on other areas. In another department, the head of department was explicitly a late-Wittgenstein/Ryle scholar and we had about 4 (out of 13) primary Wittgenstein scholars on staff (again, these weren’t ‘just early’ Witt scholars). At Cambridge, where I did my undergrad, Wittgenstein was the only named philosopher to have a whole paper devoted to him, a number of the scholars there routinely cited Wittgenstein approvingly as the most important philosopher of the 20th century, and so on. I get a similar impression just from speaking to other philosophers generally; *a lot* explicitly cite Wittgenstein’s work as a heavy influence on their work, and it shows. And there’s plenty of activity on Wittgenstein directly as far as I can see: I could easily go to a couple of large Wittgenstein-specific conferences in the UK alone every year if I wanted to.

        • Protagoras says:

          Fair enough, I only know the U.S. scene. I know that there are some big differences between that and the UK scene, and I’ll take your word for it that this is one of them.

      • My impression is more along the lines of severe evaporative cooling: the remaining Witters really, really think he has The Answer.

        • David Moss says:

          Wittgensteinians (and I count myself among them) do tend to be pretty… intense, but I think ’twas ever thus (e.g. with a number of his students being life-long disciples). But I think Wittgenstein has had a fairly undiminished influence in philosophy more broadly: there are plenty of big names who’re clearly and explicitly influenced by him (Dennett, Hilary Putnam, Simon Blackburn, Michael Williams, Robert Brandom). Actually, it’s probably more revealing to try to list people who are definitely not influenced by Wittgenstein, because his influence is felt a lot more broadly. If anything I would use a metaphor of Wittgenstein having so permeated the atmosphere/changed the landscape that it’s hard to discern clearly where he’s influential, because his influence is just everywhere (apart from a couple of outposts that are clearly anti-Wittgenstein, like Fodorians or hyper-formalists or naturalists).

          • David Mathers says:

            ‘Naturalists’ are a rather large group in contemporary philosophy though, at least in the US. (For non-philosophers, a naturalist here isn’t just someone who doesn’t believe in God/the paranormal/mind-body dualism, but rather someone with a particular (very vaguely defined) view on the relationship between philosophy and science.)

    • Peter says:

      Philosophical Investigations helps me sleep at night. That is to say, my bedframe is broken, and I’m propping it up with a pile of books, PI included.

      I’ve seen ideas from PI pop up in various places. From my own inexpert viewpoint: the family-resemblances idea has been picked up by psychologists and linguists (e.g. Rosch, Lakoff) quite successfully IMO, and all that has found its way into EY’s guide as mentioned by jaimeastorga2000 above. The business with language games doesn’t seem to have been picked up as such – there’s been a fair bit of development of pragmatics, but not along Wittgensteinian lines.

      I’ve seen ideas from PI crop up in cultural anthropology: well, “seen” in the sense of someone on Language Log pointing to cultural anthropology and saying “the horror! the horror!”. Also in other areas which seem influenced by the whole postmodernism thing.

      I was reading a Daniel Dennett book, and he said he read PI and was very influenced by it… but got confused when he encountered lots of Wittgensteinians and found that he didn’t think like them at all. Perhaps the cryptic nature of PI makes it very ambiguous in practice, making it so easy to read your own personal ideas into it. Sort-of like a Rorschach test.

      My personal thoughts: there are some interesting ideas there; PI is fit for raiding, not so fit for fixing up.

  4. Evan Thomas says:

    “I’m all in favor of googaboogabloo,” Tom probably said.
    “Gather up the boys. We’re going after that no-good rotten scoundrel,” Tom possibly said.
    “The waiting is the hardest part,” said Tom pettily.
    “Fortunately, the knife only grazed my spleen,” Tom said organically.
    “I’m… urk… having a seizure,” Tom said twitchily.
    “I’m taking her flying in a hot air balloon,” Tom updated.
    “The trajectory of the stack of paper was a perfect parabola,” Tom remarked.
    “I’m conducting a performance review of our company’s telecommuters,” Tom elaborated.

  5. jaimeastorga2000 says:

    In an attempt to understand the distribution of story lengths and the empirical clusters they form, I copied the text of several short stories, novellas, and novels into a word count tool (then rounded the count to two significant digits, since different tools gave slightly different word counts), and tried to sort them into categories, carving reality at the joints as well as I could. Here’s what I got.

    I relied a lot on third-party and first-party descriptions to help me classify the boundary cases (for example, Iceman called Friendship is Optimal a novella and Wikipedia calls “Second Variety” a short story), and also on some other characteristics about the pieces themselves (like the fact that True Names has always been published as part of a collection but never by itself, or the fact that “The Star” won the Hugo award for best short story). Children’s novels were particularly frustrating, since it appears they can be novella length and still get published as standalone books. But the shortest adult novel I found was Fahrenheit 451, which is at least somewhat above the cutoff.

    Anyway, the whole point of this was to help me organize my fanfic reading list, which it has done. I can now sort fanfics into one of the three categories, and will be able to guesstimate how long it will take me to read them based on their length and the amount of time it has taken me to read similar pieces in the past.
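    A minimal sketch of the bookkeeping described above, in Python; the two-significant-digit rounding is from the comment, while the 17,500/40,000-word cutoffs are my own assumed boundaries, not jaimeastorga2000’s exact ones:

```python
from math import floor, log10

def round_2sf(n):
    """Round a word count to two significant digits."""
    return round(n, -(floor(log10(n)) - 1))

def story_category(words):
    """Bucket a word count into one of the three categories.

    The 17,500 and 40,000-word cutoffs are assumptions for illustration."""
    if words < 17_500:
        return "short story"
    if words < 40_000:
        return "novella"
    return "novel"

print(round_2sf(46118), story_category(46118))  # → 46000 novel
```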

    • Douglas Knight says:

      What was the point? If you want to look for clusters, why bother with labels? If you want to know how long it will take to read, why bother with clusters and not just stick with raw word counts? (Maybe the answer is that you don’t encounter words counts, but do encounter the labels “short story” and “novella”? Now you know that they are consistent, but cover broad ranges.)

      Are you aware of the Hugo categories? Half of your “short stories” count as Hugo novelettes, splitting non-novels into three equally common categories.

      • jaimeastorga2000 says:

        I have never seen the label “novelette” used outside of the Hugo and Nebula award categories, but I have seen several people call works novellas. There also doesn’t seem to be a break around the 7,500 word mark in the stories I sampled, but there does appear to be a several-thousand-word gap around the 17,500 and 40,000 word marks (though the latter only appears if one ignores the children’s novels and Friendship is Optimal).

        • John Schilling says:

          The three remaining SF magazines (Analog, Asimov’s, and F&SF) all break down their contents by novella/novelette/short story. Back when magazines were a major outlet for original genre fiction, this was an important distinction for the publishers. It may be useful for the readers now that internet publication is a thing, but I haven’t seen it come into common use there.

          • jaimeastorga2000 says:

            The three remaining SF magazines (Analog, Asimov’s, and F&SF) all break down their contents by novella/novelette/short story.

            Huh, so they do. I never noticed that. Thanks!

    • gattsuru says:

      Historically, novellas were more popular among genre fiction authors, and there are a number of award groups that retain the technical category, typically at <40,000 words. Now, this category is near-extinct, even compared to the already-sparse short story market. The cutoff between novella and short story was more arbitrary and generally relied more on structure or publishing method than word count, though award groups usually put the cutoff point around 7,500 words.

      Part of this reflects a drift in the size of the community and its expectations. Big-name publishers would regularly take in 40k-60k word books during the Asimov days, while since the early '90s most publishers prefer 80k+ word counts for any unsolicited manuscript. There's been the start of a revival for the novella category, between ePublishing and smaller boutique publishers, but it remains a pretty small category for general scifi and fantasy — even well-recognized authors usually can only get one released as part of an anthology or for children's lit.

      There probably is an underlying group of Natural Categories, but I don't think they tie strongly to any real-world usage. As a writer, it's hard to push the "Single Scene and Event" structure popularized in short stories beyond 15k words. Novellas and novelettes can stretch or compress to a greater extent than that, but under 10k words they rush very heavily, much past 30k a two-act structure tends to drag, and two-act structures start fraying after 55k words. There's another grouping of Very Very Long works that doesn't really have a name (for online works, usually 500k+ words, up to the various 1,000,000+ word monsters; for mainstream publishers, think Stephen King's longer novels or George R. R. Martin's oeuvre); these tend to use yet another group of narrative structures that don't resemble conventional act-based structures at all.

      As a reader, and for works of normal complexity, the average adult can manage around 16,000 words per hour (WPH), while people who read more regularly can average 20,000 to 30,000 WPH (with better stamina and comprehension). This is only a rough estimate, though, and you’ll usually get through works with simpler structures (one-scene, one-act, two-act) faster than more complicated ones (three-act, four-act, or saga) if reading for full comprehension, even if they could somehow have the exact same word count.
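
      Those figures imply some simple reading-time arithmetic; here is a minimal sketch, treating the quoted WPH rates as rough inputs from the comment rather than measured values:

```python
# Reading-time estimate from the rough words-per-hour (WPH) figures
# quoted above: ~16,000 WPH for an average adult, 20,000-30,000 WPH
# for regular readers. These rates are assumptions, not measurements.

def reading_time_hours(word_count, wph=16_000):
    """Estimated hours to read `word_count` words at `wph` words per hour."""
    return word_count / wph

# A 40,000-word novella at the average rate:
print(reading_time_hours(40_000))                        # 2.5 hours
# The same novella for a fast (30,000 WPH) reader:
print(round(reading_time_hours(40_000, wph=30_000), 2))  # 1.33 hours
```

      (As noted, structure matters too: these numbers only bound the simple case of reading at a steady rate for full comprehension.)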

      • Mary says:

        Between about 25,000 words and about 80,000 words, there is the Unpublishable Void: no markets.

        ‘cept nowadays you can go indie.

    • Deiseach says:

      While I applaud your industry, I don’t really get the point of it. It seems to me that it would only work for complete fanfic works, not works-in-progress (which could be anything from Chapter 68 to not-updated-since-2011).

      And if you’ve already read the story/novella, you have a good idea how long it took you, so if you’re looking at (say) a five-chapter new work, can’t you guesstimate “Yeah, I can get through that in two hours”, or however long it takes, without needing to go to such arrangements?

      That being said, I actually have no idea how long it takes me to read anything – I generally tend to keep reading straight through until finished, and I only stop if I’ve either started reading late at night (so I need to get at least five hours’ sleep) or the book is very, very long.

      Most recent book I read was 400 pages in length and I split that over two nights, because (a) I could only start reading it about 9 at night and (b) I have to get up for work in the mornings. If I’d started reading it on, say, Friday night I’d have read straight through until finished.

      • jaimeastorga2000 says:

        While I applaud your industry, I don’t really get the point of it. It seems to me that it would only work for complete fanfic works, not works-in-progress (which could be anything from Chapter 68 to not-updated-since-2011).

        I have pretty much committed to only reading finished fanfics (with one or two exceptions grandfathered in from before I adopted this policy). The little copy of gwern that lives in my head tells me that there is no reason why ongoing fanfiction should be any better quality-wise than finished fanfiction, and since there are so many good finished fanfics for me to read, why should I go through the trouble and uncertainty of following unfinished fanfiction? Incidentally, he has also convinced me that there is no point buying the latest science-fiction novels and stories and that I was better off just reading whatever award-winning decades-old science-fiction I happened to find at libraries and thrift-stores. Good ol’ gwern.

        • Deiseach says:

          You’re wise to stick to completed works.

          I think most of us have probably suffered the pain of getting really involved in an excellent story, reading the last instalment, and then looking at the date (to find out when the next chapter is likely) and discovering that the author hasn’t updated in one/two/since-the-Y2K-flap years 🙂

    • RCF says:

      I hope this doesn’t sound too critical, but: You misspelled “word”. I don’t think that reality has any joints to cut at; at best the publishing industry has joints to cut at. Your sample size isn’t very large, and it’s not clear what the inclusion criteria are; are they just works you’ve read? The presentation isn’t particularly reader friendly; could you at least right-justify the numbers?

  6. Ryan B says:

    My favorite posts of yours are the critical analyses of social justice stuff. With the Ferguson/Garner/Trayvon etc. stuff slowing down these last few months, perhaps people like me aren’t checking in as often? I’ve seen several Patheos bloggers asking similar questions lately, so maybe it’s something else. Could be blog fatigue, outrage fatigue, etc. There were so many social justice-y catastrophes this past fall, I wonder if a lot of people are just tired of the internet. I know I’ve quit Facebook and pared down my blog reading a lot in the past few months.

    I recall a post a while back where you compared your page views to their topics, and “things i’ll regret” was the runaway winner. Haven’t seen many of those lately. So… regret more things?

    • Scott Alexander says:

      Any links to the Patheos people talking about this?

    • suntzuanime says:

      I think this is one of those “be careful what you optimize for” things. The blog has seemed more technically-oriented and less controversial lately, which could explain a decline in hits, but it’s not clear that’s a bad thing. Maybe the answer is to cultivate a smug sense of superiority over your low traffic compared to the Gawker Media Empire, and then you won’t feel so bad about it.

      • Scott Alexander says:

        What you say makes sense except for the suddenness with which it happened.

        • Joe says:

          The weather is getting better; perhaps people are getting out more? There were a lot of snowed-in people this winter.

          • caryatis says:

            Spring break?

          • Richard says:

            This was my thought too, because Feb 20th was the day the weather turned for the better.

            How did it look last year? Lots of people were snowed in last winter too.

          • houseboatonstyx says:

            February 20th is famous to me because a family crisis began that day, which kept me occupied and offline for at least a week or so. It was a big deal to us, but I wouldn’t have thought it would cause such a disturbance in the Force. Perhaps that was vice versa.

          • Daniel Speyer says:

            In enough of the world at once? This is a pretty geographically diverse readership, I expect.

        • Izaak Weiss says:

          I also do want to say that your medical/psychological analyses are some of my favorite posts on this blog. Maybe it’s not needed, but I figured positive feedback for doing social justice stuff should be counterbalanced with positive feedback for technical medical/psychological stuff.

          • Joe says:

            I enjoy the fiction, ethics, and the more philosophically based posts.

          • Murphy says:

            My favorite is probably the steelman arguments for positions he opposes, generally they’re far more thought provoking than the actual arguments made by many of the sides he’s steelmaning.

        • Parker says:

          Broken website theory: for some reason, coding is getting a lot better. There are far fewer broken websites, which means there are fewer hooligans around who might happen upon a SSC post.

          Abortion: a sharp drop in 1990 means that there are fewer people starting to read it at around age 25, and nobody is replacing those who “age out” of reading SSC.

          Or maybe it’s an increased NSA presence online — people sense that and choose to just stay offline.

          Come to think of it, the decrease in numbers is probably due to a multi-factorial trend.


      • Simon says:

        That’s the same as my guess: around February or so the political posts became fewer and further between, and then it took until late in the month for people to notice.

      • Deiseach says:

        It could just be, as suggested, people paring down their feeds. I’ve done a bit of housekeeping myself about the blogs I follow, and the ones I’ve culled have been because:
        (a) this only makes me angry and if the only reason I’m following this is to be OUTRAGED and start typing angry responses, that’s not a good enough reason for either me or them
        (b) it’s not that they’re bad or worse than any of the others I follow, but they’re new on the list and I’m loading too much on my feeds and I need to drop someone and sorry, new person, it’s going to be you because I’m sticking with my old reliables.

        People may stop following not because you’re not a good writer but simply because they need to prune their reading lists and you’re the new guy so last in, first out 🙂

        • Ano says:

          Yeah, but what are the chances of so many people deciding to cut SSC at the same time? If it really were people just paring down their reading list, we would expect a gradual decline rather than 30% of the readership going *poof*.

          I suppose another important question is whether that 30% loss is among established, regular readers or with one-off readers who get linked here from other sites (usually in response to posts that SA regrets writing).

  7. Sam says:

    RE #3, have you tried comparing the drop in traffic with the trends in incoming traffic from the Financial Times, which has included you in its “Further Reading” link dumps several times in the past few months?

  8. tom says:

    Your crazymeds donation link is not working.

  9. Baby Beluga says:

    Does anybody else associate rationalism/Less Wrong with disliking spicy food, or am I projecting?

    • suntzuanime says:

      I associate it with the opposite, assuming both are partially caused by the “openness to experience” personality trait. But I admit I haven’t had the chance to evaluate many LWers’ culinary tastes.

    • I associate LW/rationalists (and nerds in general) with sensory issues, and sensory issues with a dislike of intensely flavored foods, including spicy food. I don’t think that it’s a majority, though: just a slightly higher than baseline dislike for spicy foods.

      Not me, though. I love spicy foods to death.

      • Douglas Knight says:

        I think that nerds in general are pretty enthusiastic about spicy food. Maybe that’s another sensory issue.

        • Limi says:

          From what I’ve seen, it’s either one extreme or the other – either no time for spicy food, or the hotter the better. I have watched dozens of nerds (as a participant) basically form ranks over this at the likes of Christmas and anniversary parties. It definitely seems like a sensory issue.

      • James Picone says:

        I hang out with a lot of young Australian computer-science/videogame nerds. Food groups are:
        – Pizza (Often covered-in-meat varieties)
        – Various Asian – once a week one of the groups visits a food court in what is roughly the Asian district in the city. Chinese is probably the most popular.
        – Indian. A different group gets Indian takeout on a weekly basis.
        – Pasta. It’s a fallback for when people don’t want to get takeout – probably mostly a convenience thing.

        There’s a Mongolian BBQ place that’s pretty popular with my friendship group as well, but that’s a pretty customisable taste.

        Out of that set, really only the Indian food is particularly spicy, and it’s not usually extremely-spicy curry that gets ordered (Vindaloo is about the limit).

    • Sniffnoy says:

      The closest thing I have to an association between these is that
      1. Razib Khan is kind of LW-adjacent
      2. Razib Khan likes absurdly spicy food.

      So, weakly the opposite.

    • Zykrom says:

      White People.

    • jaimeastorga2000 says:

      I mostly associate rationalists with the following passage from Harry Potter and the Methods of Rationality:

      Harry automatically started loading up his plate with whatever was in front of him, blue sausages with tiny glowing bits… and started eating his blue sausage. It was quite good, especially the glowing bits.

      Dinner passed with surprising rapidity. Harry tried to sample at least a little of all the weird new foods he saw. His curiosity couldn’t stand the thought of not knowing how something tasted. Thank goodness this wasn’t a restaurant where you had to order only one thing and you never found out what all the other things on the menu tasted like. Harry hated that, it was like a torture chamber for anyone with a spark of curiosity: Find out about only one of the mysteries on this list, ha ha ha!

      Then it was time for dessert, which Harry had completely forgotten to leave room for. He gave up after sampling a small bit of treacle tart. Surely all these things would pass around at least once again over the course of the school year.

      Nerds in general I tend to associate with this entry from ESR’s jargon file:


      Ethnic. Spicy. Oriental, esp. Chinese and most esp. Szechuan, Hunan, and Mandarin (hackers consider Cantonese vaguely déclassé). Hackers prefer the exotic; for example, the Japanese-food fans among them will eat with gusto such delicacies as fugu (poisonous pufferfish) and whale. Thai food has experienced flurries of popularity. Where available, high-quality Jewish delicatessen food is much esteemed. A visible minority of Southwestern and Pacific Coast hackers prefers Mexican.

      For those all-night hacks, pizza and microwaved burritos are big. Interestingly, though the mainstream culture has tended to think of hackers as incorrigible junk-food junkies, many have at least mildly health-foodist attitudes and are fairly discriminating about what they eat. This may be generational; anecdotal evidence suggests that the stereotype was more on the mark before the early 1980s.

      Note that I am relying on other people’s accounts because my taste in food is highly atypical. I am a very picky eater who tends to stick to the same few dishes, prefers plain, lightly seasoned food, and has a strong dislike of spicy food. I also eat my steaks well-done, which I am given to understand is considered a hanging offense in foodie circles.

      • It’s funny, I felt kind of insulted by that passage when I first read it, because I’m the kind of person who orders the same thing over and over again at restaurants – and just for that, I’m written off as a curiosity-lacking mutant!? I eventually realized that it’s probably just a typical mind thing, though – I’m a picky enough eater that most items on the menu at a restaurant are going to be of negative value to me, so when I find something I like I usually stick with it. I can imagine, though, that if most menu items were likely to taste good to me, then yeah, I would be frustrated by not being able to try all of them.

        • usenet chillfile says:

          Apropos of this discussion, my strategy for ordering at restaurants is to find the spiciest thing on the menu and then order it every time, except that first I check for any new/interesting spicy-sounding things. So I’m more varied at, say, a Thai place, than I am at a generic family restaurant.

        • porridgebear says:

          You may have just reached the saturation point in Feynman’s Restaurant Problem earlier than most.

        • Shieldfoss says:

          In the other direction: Last time I was in Italy (Note: I speak perhaps five words of Italian) I deliberately did not even attempt to find out what the dishes were, I just ordered whatever was first on the menu unless I recognized the name, in which case I ordered something else.

          This conquered Choice Paralysis utterly and also gave me a number of truly excellent surprises – and one dish that was absolutely terrible.* As such, I recommend the method to everybody who is not a picky eater.

          *I’m sure the chef had prepared it right and the issue was entirely predicated on my taste buds

        • The point of trying things is to find out what you like. Which is hot and sour soup, and Singapore noodles.

      • Nestor says:

        fugu (poisonous pufferfish) and whale.

        *screeching ethical roadblock*

        No, no whale for me thanks. The swallowing-live-octopus thing also seems unnecessarily cruel and unusual.

        Might give the fugu a try once, though I believe the farmed variety has no neurotoxin.

        • Rob says:

          CONTENT WARNING – Animal cruelty

          I believe fugu is also not great from an animal suffering perspective, because the fish has to be alive until unusually late in the preparation process, i.e. it’s still alive when it’s being cut up.

          [This video shows a living pufferfish being cut up]

          • Nestor says:

            Mammalian chauvinism is in effect, though the more I learn about fish the more sentient they seem.

            Would not consider fugu anything but a once off experience in any case.

      • Nornagest says:

        The Jargon File often strikes me as somewhat dated. This is one of those times, at least as regards the details, although I think the top-level trend (ethnic, vaguely health-foody) is still pretty accurate. I suspect that comes partly out of openness to experience and partly out of nerds’ wariness-to-hostility toward the cultural mainstream, and I’d expect both to be pretty stable.

        But those details? Americanized Chinese is pretty thoroughly mainstream now, and modal nerd tastes seem to be shifting towards the likes of Vietnamese, Korean barbecue, Indian, or Ethiopian. (I expect tech’s sizeable population of immigrants and children-of-immigrants also has something to do with this.) Modern nerds also seem to be overrepresented in pursuing home cooking and preservation methods that have largely fallen out of favor in the mainstream; I make my own pickles and kimchi, and I know a lot of nerds that maintain their own sourdough cultures.

        • Jiro says:

          The Jargon File was under criticism even many years ago for Eric Raymond writing as though he is typical of hackers. This may be one of the parts that is not so much dated as a case of one person typical-minding.

          • Anonymous says:

            But surely the food entry predates him.

          • Jiro says:

            He worked on it for a *long* time. The last pre-ESR version was 1983. He maintained it from 1991-2003. The food section was added between 2.2.1 and 2.3.1 and that was after he had started maintaining it.

            Edit: The main entries came from other people, but the end matter was written by him.

          • James says:

            I definitely remember thinking that the section on hacker politics seemed like ESR drastically self-projecting.

            Aside: he occasionally comments here; I wonder if he’ll see this discussion.

        • James says:

          To me its datedness is part of its charm, though. It paints a picture of a certain bygone era very evocatively, I think. It's probably best regarded as more of a historical document than a contemporary reference.

          The sourdough and kimchi connection rings a bell to me. Some of my arguably-geekish-though-not-quite-hacker friends do the same.

    • Deiseach says:

      Oddly, I’d have thought rationalists of that stripe would be very enthusiastic about spicy or ‘foreign’ (depending on what your native cuisine is and therefore what ‘foreign’ means to you) foods, because of the whole idea of being open to new experiences, examining your biases, and – sorry – a bit of showing off about being sophisticated in their tastes 🙂

    • James says:

      My somewhat caricatured, straw-rationalist vision of a LW type hates spicy food. I guess I get this partly from the enthusiasm for soylent/mealsquares, which suggests a view of food as more of an inconvenience than a source of sensory pleasure. (Actually, I suspect a lot of people have this attitude towards food; it’s just that rationalists, as is their wont, take it to an extreme.) I also remember reading a post or comment in which the poster was baffled by how anyone could enjoy spicy food, since the sensation of hotness is, strictly speaking, a form of pain.

      Then again, there is also the fabled hacker penchant for spicy food. As well as ESR’s mention of it in the jargon file, cited below (or is that above?), I seem to recall Steven Levy mentioning in his book Hackers that the original group of MIT hackers would competitively eat the spiciest thing on the menu at east Asian restaurants.

      Do I contradict myself? Very well, then I contradict myself.

      • haishan says:

        I don’t think this is so much a contradiction as it is time-differing preferences. Sometimes I want an interesting sensory experience, in which case I’ll go get Thai or make something out of Fuchsia Dunlop’s Sichuan cookbook, but sometimes I just need to replenish nutrients, in which case I’ll drink some Soylent or make myself a peanut butter sandwich or, rather too often, just eat some fast food.

        (uh, if the Thai/Sichuan wasn’t a clue, I greatly enjoy spicy foods)

    • onyomi says:

      I like spicy food, though I’m not sure whether I’m a rationalist.

    • Cauê says:

      There’s been some talk about flavors on LW last week. There’s even a poll that goes against this hypothesis.

      (I don’t have that association at all)

    • Anon256 says:

      I weakly associate LessWrong with kink/BDSM, and associate liking spicy food with masochism, so transitivity yields a (weak) association opposite from yours.

      • Matthew says:

        data point against:

        I’m dominant/sadist in the bedroom, but I like very spicy food.

        I think pain-inflicted-by-others and pain-inflicted-on-oneself are very different things, psychologically.

        For example, people perform feats of athletic endurance that are quite physically unpleasant, but this is to demonstrate to themselves and others how tough they are, not to show submission. If eating spicy food was about the pain (which I think is wrong anyway, since most people build up a tolerance to capsaicin), it would likely fall into the proving-your-mettle category, not the almost polar opposite showing-you’re-at-the-mercy-of-the-sadist category.

    • RCF says:

      I find it interesting how much the responses treat food as a philosophical issue. While psychology does affect sensory perception to some extent, how one experiences food is, I believe, determined primarily by culture and physiology. People who like spicy food aren’t more adventurous or braver or anything like that. They simply have taste receptors that produce pleasurable signals when presented with spicy foods.

    • Eli says:

      You’re projecting. I love spicy foods. In fact, prior to living with my fiancee, I basically put z’hug in everything I cooked except for sweets.

  10. What probability would you assign to the idea that there has been, on net, more pain than pleasure in our world so far? I’ve been thinking about this lately and I’m honestly not sure what I think.

    • Charlie says:

      Seems like a toughie that contains lots of hidden definitional/preference issues about pleasure and pain, in addition to the empirical question of what happens to an average organism.

      Suppose a grass plant grows happily for a month but in its second month suffers a drought that floods it with stress signals. Was the grass pleased at all by its normal growth? Was the pain of the drought more intense? (Pretend you agree with me that plants’ pain is measurable on a scale similar to animals’.)

      But if you held a gun to my head, I’d say 70% there’s been more pain than pleasure overall. This is with stress and strong disliking grouped into pain, but fulfilled wants not grouped with liking.

      • Jon Gunnarsson says:

        This seems like a weird example. Plants are pretty clearly not conscious, so grass can’t experience pleasure or pain.

        • Jiro says:

          If you have to use the word “clearly”, it often isn’t clear.

          • Jon Gunnarsson says:

            Do you want to tell me that plants are conscious? How is that supposed to work? They don’t have a brain or a nervous system, or anything comparable to it.

          • As is usual in biology, we might not know as much as we think we do. Mimosas appear to have memories.

          • Paul Torek says:

            Pain isn’t just a representation, and it isn’t just an aversive representation. (That would include itches, for one thing.) It’s an internal state that typically but not always signals damage. We learn to refer to it before we learn any neurology, but particular neural processes are what it is.

          • Murphy says:

            To play devil’s advocate: plants sense damage and release poison into their own tissue to defend themselves, warn other plants about herbivores by releasing chemicals into the air, and some will curl their leaves away from touches.

            Indeed Plant Neurobiology is a real area of study:


            Now, I wouldn’t bet money on any particular plant having notable information-processing ability, but I’m not so certain that it’s safe to say there are definitely no plants anywhere on Earth with an information-processing ability similar to that of an insect or crustacean.

        • Charlie says:

          By pain, I have included not just representations in neurons, but representations in general. A human who has stubbed their toe might represent pain by certain patterns of neurons firing, and certain changes in body chemistry. A tree getting eaten by beetles might represent pain by activating signalling pathways related to stress and injury, and attendant changes in tree chemistry. Both of these representations are correlated with similar sorts of insults to the organism in question, and both evolved to manage reactions to those insults in a conceptually similar way (though the reactions themselves are quite different).

          Now, I care much more about the human’s pain than the tree’s pain. But the question was about pain in general, not about pain weighted by how much I care about the organism in question.

    • usenet chillfile says:

      Looking at it evolutionarily, you’d have to ask whether pleasure-seeking or pain-avoidance is *generally* more effective in promoting fitness-increasing behaviors. That’s a hard question.

    • houseboatonstyx says:

      What probability would you assign to the idea that there has been, on net, more pain than pleasure in our world so far?

      In the human world, mostly pain. In the animal world, mostly pleasure. (Plants more pain perhaps.)

      In the wild, an unhealthy animal doesn’t live long enough to experience long-term pain. Plants can live on indefinitely in an unhealthy state, as do civilized humans.

      • Nonnamous says:

        What fraction of all animals that ever lived, lived in feed lots? I’d guess not that tiny.

        • I would guess very tiny. You may be forgetting just how small a fraction of Earth’s history contained feed lots.

          • porridgebear says:

            I’d guess that it might be significant for large animals, but for every one of those in a factory farm there are many more of the six-legged majority, often living on that large animal.

          • It can’t be significant for large animals. The claim was about animals that ever lived. There have been large animals for several hundred millions of years. There have been significant numbers of animals in feed lots for (I’m guessing) less than a century. So unless the modern population of feedlot animals is more than a million times as large as the average number of large animals over the past several hundred million years, the claim cannot be true.
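
            That Fermi argument can be made explicit; a quick sketch where both numbers are the comment's own rough figures, not data:

```python
# Back-of-the-envelope version of the argument above: feedlot animals
# can only account for a significant share of all large-animal lives
# if today's feedlot population outnumbers the long-run average wild
# population by roughly the ratio of the two time spans.
# Both figures below are illustrative assumptions from the comment.

years_of_large_animals = 300e6  # "several hundred million years"
years_of_feedlots = 100         # "less than a century", rounded up

required_factor = years_of_large_animals / years_of_feedlots
print(f"{required_factor:,.0f}")  # 3,000,000 -- millions of times larger
```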

          • Nonnamous says:

            This is a very good point. I was generalizing from humans (it seems intuitively plausible that the number of wild cows 10,000 years ago was, give or take an order of magnitude, equal to the number of humans then, and the current number of farmed cows is similarly close to the number of humans), but I forgot that wild animals have existed for a much longer time than humans.

          • “I was generalizing from humans”

            Do you think the number of humans now alive is a large fraction of the number that have ever lived? It isn’t–not even close. That’s a popular myth.

            The Atlas of World Population History has some estimates of world population c. 1000 A.D.—roughly 200 to 300 million. Figure a generation of 20 years (high infant mortality). That’s a billion people a century. So even if there had been no increase since then, it would give you more people dying just in the past millennium than are currently alive.

            A figure I’ve seen as an estimate for total number who have ever lived is around a hundred billion, but that’s very uncertain. The early period has low population but an enormous number of generations.
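
            The "billion people a century" step above can be checked directly; a sketch using only the comment's own figures (a steady population of roughly 250 million and 20-year generations):

```python
# With a steady population and 20-year generations, each century sees
# the population turn over 100 / 20 = 5 times. Numbers are the rough
# estimates from the comment (midpoint of 200-300 million c. 1000 A.D.).

population = 250e6
generation_years = 20
deaths_per_century = population * (100 / generation_years)
print(f"{deaths_per_century:.2e}")  # 1.25e+09, about a billion a century
```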

      • RCF says:

        Even healthy animals experience distress. A predator experiences distress every time it fails in a hunt. Prey experiences distress every time it’s hunted, even if it gets away. Social animals experience distress whenever they fail to reach the top of the status pyramid (and even then, they experience distress every time that status is threatened) and every social group can only have one top individual. Males experience distress when they fail to mate.

        • houseboatonstyx says:

          @ RCF
          Even healthy animals experience distress. A predator experiences distress every time it fails in a hunt. Prey experiences distress every time it’s hunted, even if it gets away.

          The prey, yes, during the hunt. But if prey lives in a place so dangerous that distress would be constant — then pretty soon it would be caught and eaten. If a predator lives in a place where food is so scarce that failing a hunt often causes actual distress, it is unlikely to live long and reproduce successfully.

          Social animals experience distress whenever they fail to reach the top of the status pyramid (and even then, they experience distress every time that status is threatened) and every social group can only have one top individual. Males experience distress when they fail to mate.

          I think this sort of thing is more likely to cause long-term distress in humans than in non-human animals.

    • Alex says:

      I’d go further and say that pain and pleasure units are not directly comparable. Kind of like asking if a car is faster or bluer.

      • houseboatonstyx says:

        I can’t help rounding it off to emotion. Lewis (quoted from memory) says “Coming out of the snow and warming one’s hands at a fire, no one minds the sequence, ‘Um, that’s warm, warmer, nice, very warm, that stings’. ” Different people will move their hands away at different degrees of objective heat — but usually with the happy emotion, ‘Ah, that’s better, that’s enough!’ Sometimes physical pleasure can cause emotional pain (the smile that brings the tear). Either one, below some threshold, can cause ‘irritation’ enough to blink, without being strong or long enough to interrupt whatever happy or unhappy thought was already going on.

        I think that in general animals take small physical pains less seriously than most humans do, but appreciate small pleasures more.

      • FacelessCraven says:

        concur on this point. pain is something we usually want to avoid, pleasure is something we try to maximize, but that doesn’t mean that the two are poles on a single axis. Especially for the more abstract forms of pain and pleasure.

    • Godzillarissa says:

      I don’t have an answer for that, but I do have a question:

      When I suggested that a factory farm chicken life could be net negative, most people disagreed that a life could ever be net negative. It went mostly along the lines of “everything is better than being dead”.

      By that reasoning, is it not necessary that there’s always less pain than pleasure, since being alive is always at least one iota more pleasure than any pain you experience?

      • Charlie says:

        The trouble with this view is that it doesn’t disentangle pleasure and pain from preferences. A preference for living is the thing that leads to you choosing to live. But we usually use the words pleasure and pain to refer to subjective experiences that aren’t directly preferences. People routinely make the choice to not take drugs that would increase their amount of pleasure, because they prefer not to.

        • Godzillarissa says:

          Yeah, shortly after I wrote that comment, I realized this will all be about words and how we define them in the end.

          I think I might define them in other ways than others do, which not only makes this discussion hard, but could also mean I got the last discussion wrong 🙁

    • Kaura says:

      I sort of agree with Alex above – I don’t think pain and pleasure are actually comparable like two opposite sides of a single hedonic factor (see for example how reward and punishment influence behaviour in strongly asymmetrical ways, according to this pretty interesting paper). But I guess you can investigate preferences to try to measure how you might value these two factors compared to each other in behavioural situations. Disregarding curiosity, and if it could all be translated to equivalent qualia comprehensible to humans, would you choose to experience everything that has been experienced in the world so far? Or replay the history of life with you as a randomly picked sentient being in it?

      To me, it seems that biological life here on Earth mostly works with suffering-based motivational systems where states of satisfaction and happiness are rare, pain and milder discomfort very frequent in comparison (and a stronger motivating factor, so also more intensely experienced) – the latter type of signal just is more useful in most situations.
      Since you asked for probabilities, I’m almost certain (>97%) that suffering has been “greater” in the sense described above. But I’m also very curious about the roots of your uncertainty on the matter, because it seems so suspiciously obvious to me and I’m probably missing something (and I would certainly like to be wrong about this.)

      • Paul Torek says:

        Thanks for that pretty interesting paper: it was. Also, your reformulations of the question are brilliant. Technically incoherent (because personal identity), but I think the spirit of the questions still works.

      • houseboatonstyx says:

        @ Kaura
        From the study:
        Surprisingly, the effects of the reward magnitude and the penalty magnitude revealed a pronounced asymmetry. The choice repetition effect of a reward scaled with the magnitude of the reward. In a marked contrast, the avoidance effect of a penalty was flat, not influenced by the magnitude of the penalty.

        This kind of fits with my notion that animals attend more closely to pleasure than to pain, and better remember the information learned. Food is necessary for survival, but absence of pain is not.

    • I’m not sure how you could quantify this for comparison, even in theory. I guess the only way would be to reduce pain/pleasure to some specific neural firing and then estimate based on that. Still, I can think of certain times where I’ve been physically uncomfortable but also happy, or vice versa, so I don’t know if that sort of thing would be an appropriate measure even for a non-selfish hedonist or eudaemonist (of which I am neither so I could be wrong).

      I tend to think pain and pleasure are valid considerations but maybe an imperfect proxy for something more significant and real, like survival / thriving / something else.

  11. chaosmage says:

    I usually read you via RSS feed – if WordPress even counts that, it counts it once. I come to the www page only to comment, and if I’m proud of that comment I will return to the article several times to check for answers because this comments system doesn’t notify me of answers automatically. So from me, you’re getting way more traffic for a post where I comment something substantial than for one where I don’t.

    Your recent posts are focused on topics where I’m glad to learn but don’t have interesting information or reasoning to contribute, so I wouldn’t be surprised to hear that the number of pageviews I generate here has declined a bit. I would expect it to go back up if you post far-ranging analyses or meditations again, or if you ask interesting questions that I feel qualified and invited to try to answer, like the discussion about who lacks which specific experiences.

    • Niall says:

      I’m a variation on this – I read through RSS and used to click through to read comments, but started to find it too time consuming to try to keep up. So I just gave up on reading comments, and so stopped clicking through to the website. Also the really annoying girl in my dept went on maternity leave on Feb 20, so I contribute to office chat a bit more, and read internet a bit less.

  12. Bernd says:

    Where’s my question? Are we only allowed to ask PC questions?

    • FacelessCraven says:

      no race or gender in the open threads is the global rule, I believe. I think I saw your post, and I’ve likewise had comments touching on the subject in question deleted on the one occasion I made them. It was somewhat disconcerting, but I’d hardly call the general environment here “PC”.

    • Deiseach says:

      Are we only allowed to ask PC questions?

      I’m sure if you’re using a Mac, no-one here will judge you (though I do have An Opinion, based on work experience, of the kinds of people who choose Macs over PCs).

  13. caryatis says:

    EDIT: response to question that was deleted, I guess.

    • Zykrom says:

      I think you’re missing the point. She’s not a “pre-pubescent girl” she’s a thousand year old cyborg construct.

      • Bernd says:


      • Limi says:

        Boy oh boy am I curious about the deleted post now.

        • jaimeastorga2000 says:


        • Bernd says:

          I posted a thread that argued that some paedophilic attraction would have been adaptive for men in ancestral times.

          It’s in /r/evopsych

          • Deiseach says:

            (1) That’s a tangled mess

            (2) Refine your definitions: paedophilia, hebephilia, ephebophilia, what? Also, if you’re talking about notional “people back in X period only lived to be Y years of age”, then consider e.g. “Romeo and Juliet”, where Juliet is somewhere around 14 and her mother was married off at 12; women tended to be married when they started their menses, which can be anywhere from 11-14; the age of marriage for women in Rome was 14; and men tended to be older when marrying and (given rates of death in childbed) could have multiple successive spouses. Granting all that, then yes, men being sexually attracted to much younger women could have been evolutionarily adaptive.

            But I don’t think it’s much of a useful argument to make, since women are not heifers, even if Teagasc is arguing you should bull 15-month-old heifers to increase lifetime output and for better calving practice.

          • Jiro says:

            But I don’t think it’s much of a useful argument to make, since women are not heifers

            That objection sounds like an isolated demand for rigor, since it would apply to any evopsych explanation of anything involving women.

          • Bernd says:

            It’s not a mess, you just don’t understand the elegant mathematical argument.

          • Cauê says:

            I think the deletion means our host would rather we didn’t discuss the topic here?

          • Deiseach says:

            Y’know, now I’m curious. What do we really know of relative life spans for humans in, say, the Neolithic?

            Okay, any complete skeletal remains can be broadly classified for age, but can we really say that men tended to be older than women? Maybe all the brash young hunters getting killed when going out to hunt large fierce animals, or getting killed in wars, meant that there were lots more older women around than older men to get their pick of the newly sexually mature – the cougar effect, if you will. A thirty year old woman might have been (the equivalent of) a toothless crone, but she could still bear children.

            So can we really start speculating about evolutionary adaptiveness for men preferring younger sexual partners, when it might well have gone the other way (older women, younger men)?

          • Nornagest says:

            A thirty year old woman might have been (the equivalent of) a toothless crone

            Nah. Life expectancies in the Neolithic through the Renaissance were a lot lower than modern ones, but most of that comes out of (extremely) high mortality rates in infancy and early childhood; if you made it to five in most places, you stood a good chance of making it to fifty. Thirty would not have been seen as old in most places; it may have been seen as pushing middle age, but no more.

            And life expectancies were actually lower in the Neolithic than they were in the Paleo-, if skeletal proxies are anything to go by — probably thanks to cramped living conditions and close proximity to livestock making it easier for disease to spread.

          • Shieldfoss says:

            Nah. Life expectancies in the Neolithic through the Renaissance were a lot lower than modern ones

            On a slightly related note: An Afghan refugee I know tells me that in Afghanistan, 45 is considered old.

            That’s not an impression that would be caused by high child mortality if everybody who reached 5 could be expected to reach 45. I don’t know (and, frankly, have not ever asked) whether this is due to the war or due to other causes.

          • Bernd says:


            I think he might have been exaggerating. According to the site below, the life expectancy at birth in Afghanistan is about 60, and at 5 it’s about 66.


            UK for comparison:


            Even in Sierra Leone the L.E. at birth is in the mid-forties, and at 5 it’s in the mid-fifties.


            Cavemen didn’t have short lives.

          • Nornagest says:

            I’m not saying that cavemen had adult life expectancies close to ours. They didn’t. I’m saying that the difference wasn’t dramatic enough for 30 to have been counted as old, despite life expectancies at birth in the mid-to-high thirties during the Upper Paleolithic.

            Equally importantly, we can’t simply assume that life expectancy increased monotonically as civilization grew more complex. Paleolithic adult skeletons are taller — a proxy for nutrition and general health — than sedentary populations until the Renaissance at the earliest; some populations don’t make up the gap until industrialization. Data is deficient for very early civilizations, but Roman life expectancy at birth was about twenty — again driven mostly by infant mortality.

            (There are some caveats. Infectious disease is the main killer among modern forager populations, but it’s not clear how well that generalizes to the Paleolithic.)

          • Bernd says:


            I’m not saying that cavemen had adult life expectancies close to ours.

            What do you mean? The evidence is that they did.

          • Nornagest says:

            What do you mean? The evidence is that they did, infant mortality aside.

            Define “close”? The sources I’ve seen vary, but I’ve seen estimates as low as 40 to 45ish (e.g. here or here). The error bars are quite wide, and considering some of the sampling issues involved (and in view of modern forager populations, e.g.), I’m inclined to shoot a bit higher; but first-world adult life expectancy is higher still, and by more than a little.

            Interestingly, once you start digging into the actuarial tables, you find that forager life expectancy as late as age 45 is quite high – as much as 20 more years (compare ~30 expected years at age 20). We’re not looking at a population where everyone would get to 45 or 50 and then keel over; we’re looking at a population where large portions of every age cohort would die, probably thanks mostly to infectious disease, violence, or predation.

          • Bernd says:


            In Sierra Leone, which has the worst living conditions and lowest LE in the world, LE at birth is still about 45 and at 5 it’s about 55.

            The fossil evidence shows that prehistoric people generally lived in productive habitats and were well fed, so their LE must surely have been (much?) higher than that.

          • Nornagest says:

            Extrapolating from modern sedentary populations, even marginal ones, is an extremely bad idea in this context. Even the most marginal wouldn’t be dealing with many of the issues that foragers would have — and, to be fair, would be dealing with a lot of new ones. The lower infant mortality rates should make that clear.

            Modern foragers rarely die of malnutrition, either — nor, famously, any of the so-called diseases of affluence that tend to kill us. That doesn’t mean they’re long-lived.

          • Bernd says:

            This is interesting:


            Modern foragers rarely die of malnutrition

            Yeah, I understand. But my point was that prehistoric people seem to have been healthy and robust.

          • Eric says:

            With regards to the Afghan refugee comment, I wonder if it comes not from the perspective that 45 is old because the average life expectancy is less than that, but from the fact that 45-year-old Afghans look old: that they are much more worn down than a Westerner of the same age.

  14. Steve Reilly says:

    What’s the easiest way to get a naltrexone prescription if you want to try the Sinclair method? Would most GPs just give you a prescription if you ask? Should you try a psychiatrist?

    • Loquat says:

      Just a month or so ago, an alcoholic in my family ordered some online from a pharmacy in India, which apparently had a reliable method of shipping it into the US without getting caught. He received it fairly promptly and it seems to be working as intended with no unexpected side effects.

      Your GP may or may not be willing to prescribe it for you – there apparently are increasing numbers of doctors becoming aware of the Sinclair method, and others may be open to considering it if you explain the theory to them. There is apparently also at least one online support community with a section to post about doctors who will prescribe it, so you may be able to find help there.

  15. Michael R says:

    I get self-esteem and occasionally money from blog hits, so this is kind of bothering me.

    If there was a Patreon link, I’d certainly chip in a couple of bucks a month. I’ll grant that it’s not great from an effective altruism point of view, but it’s certainly something that I’d like to support, and that I’d be proud to be seen signalling support.

    • Ano says:

      Think of it this way; if donating 100 dollars to SSC’s continued operation means that at some point down the line, SA writes a cracking post that persuades someone to donate 1000 dollars to charity, you will have donated your money very effectively indeed.

      • Douglas Knight says:

        Yes, if money were a bottleneck to its continued existence, that money would be well-spent. But blogs are cheap, so it isn’t. Moreover, Scott thinks that money would make him more stressed and less productive.

      • Artemium says:

        I 100% agree with this statement. Several of my friends and I joined the EA movement after reading SA’s posts, and I presume we are not the only ones.

        Some of SA’s posts here and on LW are among the most effective rationalist evangelism I’ve ever encountered, and even from a purely consequentialist perspective it would be perfectly rational to support Scott in his work.

  16. http://www.wired.com/2015/04/geeks-guide-kazuo-ishiguro/

    Kazuo Ishiguro is the author of Never Let Me Go and, more recently, The Buried Giant, an Arthurian fantasy. He loves and respects sf, but his background is mostly literary, so I found the interview interesting (I recommend the whole thing if you have an hour – the transcript is very incomplete) because it’s quite an alien take on my home subculture.

  17. the drop in readership could just be noise. gonna need more data

  18. Limi says:

    I’m sure this has been asked in the past, but I can’t find an answer – what’s the favourite rss reader around here these days? I’ve been using newsblur since the shuttering of Google Reader, but there has to be something better.

    • Feedly. Works fine, though I don’t quite love it.

    • James Picone says:

      I used to be a Google Reader devotee, after the shutdown I switched to feedly. It has essentially the features I used Google Reader for and a very similar web-interface, and it can sync across devices.

      Problems: I use my RSS reader by going through everything that’s popped up and opening it in a new tab. Feedly (and to an extent, approximately every RSS reader I’ve ever tried) makes that a pain – I have to hold CTRL, left click on the title of each entry in the RSS thing (and not the bit that says where it’s from, that just goes to the main page for the thing), and then mark-all-as-read. Mostly works out, unless it’s been a few days and I’ve got enough unread RSS things in a category that I have to scroll, because then I have to spend mental effort remembering my position.

      • Kevin says:

        I also use feedly and I also find it annoying to open many links in new tabs. So I wrote a user script that opens all the links for me when I hit the “Q” key on my keyboard. Maybe others will find it useful.


        I wrote it for Firefox’s Greasemonkey, but I suppose it would work for other browsers and user script addons.

    • Alex says:

      Feedly may be the favorite but digg is my favorite

    • Anderkent says:

      The Old Reader seems to work for me, though the ads have gotten a bit more annoying recently.

      • Elizabeth says:

        I don’t remember what, but something about Feedly annoyed me so much that I switched to The Old Reader. That works great for me (I use Adblock)

    • Error says:

      I use Firefox Live Bookmarks.

    • Bob says:

      I’ve been quite happy with https://www.inoreader.com/

  19. William says:

    New reader to the blog. Very much enjoying it, Scott. Thanks.

    As for #3, this does not explain recent traffic decline, but it’s something to keep in mind moving forward.

    On Tuesday, April 21, 2015 Google updated its algorithms to favor mobile friendly sites (aka ‘mobilegeddon’). The change boils down to this: from now on, if a website is mobile friendly it will appear higher in searches performed on mobile devices. I’m just guessing but I bet a significant slice of your traffic (maybe 10-15%) is coming from mobile organic search.

    Slate Star Codex is not mobile friendly. You can run a test with Google’s Mobile Friendly Test here: http://goo.gl/lTm3de

    It’s always a pain to make changes, but you should be able to switch to another WP theme without too much trouble.

    Google will roll out their update over the course of a few weeks, so the changes might not be noticeable immediately. If you’re not mobile friendly today, you can take action and Google will notice.

    Happy to lend a hand or answer any questions if you have any.

    Keep up the good work.

    • Randy M says:

      It’s funny that it might not be “mobile friendly”, but, being almost entirely text-based, it is quite easy to browse on my (not exactly top-of-the-line) phone.

      • Limi says:

        Yeah, I’m not sure I’ve ever actually come here on my pc – maybe once or twice when I first started commenting.

    • Gene says:

      Mobilegeddon was my first guess as well. Even though your drop-off occurred prior to this, it is possible that Google was testing the algo prior to the official release date. There are monthly changes in any case, and there was an earlier February update that caused some fluctuations: https://www.seroundtable.com/google-algorithm-update-19820.html

      You should check your Google Analytics reports and see where the traffic drop-off is occurring. If you see a large decline in Acquisition : Organic Search, then Mobilegeddon, or another update, is a reasonable bet; you should try to pass the Mobile Friendly Test that @William links to regardless.

      If you see a big drop in Direct Traffic I would look at RSS / syndication issues. Check your own WordPress admin settings, and if you get a lot of traffic from other services (Feedly, Twitter), check those sites and see if something has changed about how your content is getting displayed.

    • Muga Sofer says:

      For the record, I’ve found it awkward to read SSC posts on my phone.

      Of course, that just leads to me going back later on my PC, so it’s actually increasing your traffic…

  20. James Picone says:

    Cool tabletop game based around inductive reasoning: Zendo.

    3 players and up. One player is the ‘game master’, the others are ‘students’. Other pieces required are markers in three different colours (I bought some stones intended for use in vases from a garden/hardware store in black/white/transparent), and a collection of simple pieces that can be assembled into groups (‘koans’), with a few properties that vary between the pieces. The game was originally designed for Looney pyramids, but I use Lego (20 2×2, 2×3, and 2×4 blocks in four different colours).

    The GM comes up with a rule that divides assemblages of the game pieces into two categories – one category ‘has the buddha nature’, one doesn’t. Then they create two koans, one that has the nature, and one that doesn’t. Players then take it in turns to:
    – Build a koan
    – Either:
    – Ask the master to mark whether it has the Buddha-nature OR
    – Tell the master you would like to guess, in which case every player simultaneously and secretly decides whether they think the koan has the nature (by selecting a coloured stone), and everyone who gets it right gets an ‘answer stone’
    – Spend any number of answer stones to guess what the rule is. The master has to disprove the guessed rule, by constructing a counterexample. If they can’t, the student wins.

    The ‘null koan’, with no pieces in it, isn’t a valid koan. Also, there are some constraints on what the rule must be – it can’t vary in time or space. If the koans are constructed in a different order, they should have the same nature, and if the koans are moved to a different spot on the table, they should have the same nature.

    Finally, rule-guessing is generally compassionate – if there’s a counterexample to a guessed rule already on the table, the student can retract their guess and keep the answer stone.

    I’ve been playing the game a lot over the last month or so, and it’s very entertaining. Kind of the inductive mirror of Hanabi, or similar to a drawn-out version of the bidding step in 500. Also probably interesting from a rationalist perspective, given the ties to science and cognitive biases. Pretty easy to dig up enough pieces, too.
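
    The master’s “disprove by counterexample” step lends itself to a quick sketch. Here’s a hedged toy model of my own (bricks as (colour, size) pairs, rules as predicates on koans – not any official implementation), showing how a master might search for a counterexample to a guessed rule:

```python
from itertools import product

# Toy model: a koan is a tuple of bricks; each brick is a
# (colour, size) pair. Colours and sizes are illustrative only.
COLOURS = ["red", "yellow", "blue", "green"]
SIZES = ["2x2", "2x3", "2x4"]
BRICKS = list(product(COLOURS, SIZES))

def master_rule(koan):
    """The master's secret rule: the koan contains a red brick."""
    return any(colour == "red" for colour, _ in koan)

def guessed_rule(koan):
    """A student's guessed rule: the koan has at least two bricks."""
    return len(koan) >= 2

def find_counterexample(max_bricks=3):
    """Search small koans for one where the guess and the secret rule
    disagree. The master must build such a koan to disprove the guess;
    if none exists, the student wins."""
    for n in range(1, max_bricks + 1):  # the null koan is disallowed
        for koan in product(BRICKS, repeat=n):
            if master_rule(koan) != guessed_rule(koan):
                return koan
    return None
```

    Here a single red brick is such a counterexample: it has the Buddha-nature under the secret rule but fails the guess, so the master can disprove the guess. If the search turned up nothing, the guessed rule would be observationally equivalent to the secret one (over small koans, at least).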

    • Rauwyn says:

      I’ve come across a description of the game before, but never played; it does sound interesting. Have you seen the rationalism gothic post? I guess other rationalists play Zendo too.

      • James Picone says:

        I had not, in fact, seen that post. It is hilarious.

        I’ve only ever played it with Lego. I think there might be some interesting thinking to do on the way the tokens you’re using to construct koans influence the kinds of rules produced.

        For example, there are apparently huge classes of Zendo rules involving summing the pips on pyramids and doing things with it – prime sums have the Buddha-nature, or Fibonacci numbers, or whatever. You could do that with Lego bricks, but the people I play with very rarely do. Rules tend to be things like “There must be exactly four 2×2 bricks” or “A red block must touch a yellow block, but no red or yellow blocks on the bottom layer” or “At least two red blocks, but no red blocks touching”.

        Or maybe rules vary on a group-by-group basis in interesting ways.

        One thing I’ve noticed about the game is that it becomes very hard to spend answer stones as the game goes on – with enough examples on the table, any given rule you propose becomes more likely to be contradicted by something already on the table if it isn’t right. I have recently figured out that you can always spend an answer stone to force the master to build something that has or doesn’t have the Buddha-nature (“I propose that every example with the Buddha-nature on the table is explicitly named in the rule, and everything else doesn’t have the Buddha-nature” forces the master to build a new example with the Buddha-nature as a counterexample, unless there are no such examples, in which case you have just won (although possibly with the least-Zen winning move ever)), but I haven’t played enough games with people who know that to see if that helps.

        Similarly useful meta-hackery of the counterexample system: say you notice that all Buddha-natured koans contain a red block, but also that not all koans that contain a red block are Buddha-natured. You can’t work out, given the examples already present, what the differentiating feature is. “I propose that all koans that have a red block have the Buddha-nature, except for every counterexample already on the table” forces the master to build a non-red-containing Buddha-natured koan (breaking your half-pattern) or a red-containing non-Buddha-natured koan (providing more information about your half-pattern).

      • Error says:

        Rationalism Gothic: Wow. That was creepy as hell and I love it.

      • An explanation of Rationalism Gothic, for the similarly confused: it is a variation of the Regional Gothic meme.

        Regional Gothic is a Tumblr-based literary genre which applies facets of the traditional Southern Gothic genre to other distinct geographical regions. Posts in the genre often are written in the second person, in the format of a bulleted list that details several dark, depressing, moody or creepy aspects of the regional lifestyle.

    • I learned about Zendo from this Less Wrong comment. Subsequently I did a thing where I went to a local elementary school once a week for a while and played it with a group of students after school. It worked pretty well.

    • Secretariat says:

      I second this recommendation. This game seems to be somewhat popular in the LW memeplex. I had a lot of fun playing it at the Berkeley HPMOR party.

    • Sniffnoy says:

      I’ve always seen it played that the null koan is legal. I find it quite a useful test case, personally. Lego seems like an interesting variant.

      Note by the way that explanations of the rules usually include some common vocabulary to guide rule-making and guessing.

      • James Picone says:

        The biggest reason we disallow it is that it’s annoying to indicate – two turns later it’s just a stray stone on the table, unless everyone keeps reminding themselves. We haven’t come up with a trick for making it clear that there is a koan here and there just aren’t any bricks/pieces in it yet.

    • suntzuanime says:

      So the game is played between the students and the master’s only goal is to provide an entertaining game for the students?

      • Peter says:

        Sort of. Although if the rule turns out to be hard (it often is; masters tend to underestimate how hard the rule will be), the students will often share their thoughts about what’s going on, and co-operate a bit to help things along.

        • RCF says:

          Since each turn yields at most one bit of information, and often less than that, any rule carrying more than, say, 20 bits of complexity is going to take a while to guess.
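
          To make the arithmetic concrete (my own illustration, with an assumed rule-space size): each marked koan answers one yes/no question, so it carries at most one bit, and distinguishing among N candidate rules needs at least log2(N) bits:

```python
import math

def min_turns(num_candidate_rules):
    """Lower bound on the number of perfectly informative yes/no
    questions needed to single out one rule from a space of
    num_candidate_rules possibilities."""
    return math.ceil(math.log2(num_candidate_rules))

# Assumed for illustration: a rule space of 2**20 (~a million)
# candidate rules needs at least 20 ideal questions; since a real
# koan usually yields well under a bit, real games take longer.
lower_bound = min_turns(2**20)
```
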

      • David says:

        Yes. But you take turns to be the master (which is actually at least as much of a challenge, since you must always be ready to provide a disproof).

        I met up with the London meet-up and we played this game. So far not much luck getting anyone outside the LessWrongosphere into it.

      • Harald K says:

        It’s really more play than a competitive activity. But yes.

    • Peter says:

      Yeah, I’ve played a fair amount of Zendo. It’s notable just how surprisingly hard “stripy” rules are to guess – i.e. rules that go back and forth depending on how much of some property the koan has, like “odd number of pips” or “this XOR that”. It’s tempting to think up some rule, think “this seems a bit easy, needs spicing up a little, I know, let’s make the two properties XOR”, and to regret making your rule so hard later. I do bits of machine learning from time to time, and there’s an issue with some sorts of classifiers being unable to learn XOR, so it’s interesting like that.

      There was one story about someone who thought up a rule involving nesting pyramids inside of other pyramids, and he thought, “hmmm, someone might try to put a pyramid inside an opaque one, so you can’t see it. So I’ll add a clause to my rule saying that pyramids that you can’t see don’t count.” Anyway, the players constructed no such koans and eventually someone guessed the rule – but without the “hidden pyramids don’t count” clause. So the master was forced to construct a counterexample with a hidden pyramid, and chaos ensued.

      • AlphaGamma says:

        Possibly the best Zendo story I’ve heard is the one about a con where giant Zendo was played in some kind of large hall or gym using foam pyramids several feet tall. Players were failing to guess the rule until one player got tired of going up to the balcony overlooking the hall every time they wanted to see all the koans, and so took out a set of Icehouse pieces and started trying to recreate all of them in miniature.

        It turned out that the foam material of the large pyramids had a different coefficient of friction from normal Icehouse pieces, and the rule was “a koan has the buddha-nature iff it cannot be made with Icehouse pieces”

        • AlexC says:

          Yikes! That’s awesome and terrifying. 😀

        • RCF says:

          That is arguably not a valid rule. And was the Master creating each koan with icehouse pieces to see if it has the Buddha nature, or just guessing?

    • Harald K says:

      Zendo is cool! I wondered if LessWrongers had thought of combining Zendo with something probabilistic, like those calibration games. Then you would have a game which really modeled scientific inquiry (whether it would be fun is another question).

      Some years ago I came across some neat puzzles by a Swedish recreational maths guy. They are inductive, like Zendo. They are also underspecified like IQ test puzzles usually are: you have to work out yourself what the goal of the puzzle is from investigating it. It sounds really hard, but it’s actually doable, and very satisfying once you make it! It’s Java applets, so you’ll need a browser that can still handle that, but I heartily recommend them: http://www.mattesmedjan.se/spel/blindbox/english.html

      • Troy says:

        Playing those games makes me feel like the kids in the Growth Mindset studies who are given impossible puzzles to solve. And, like them… I quit.

        • Harald K says:

          I only solved three of them myself. Minor spoilers:

          Va gur rnfvrfg bar, xrrc cerffvat n fvatyr ohggba, frr gur frdhrapr gurl tb guebhtu, naq gel gb svaq jung’f qvssrerag sebz gur frdhrapr vg plpyrf guebhtu ba gur bgure ohggbaf.

          Bar bs gur chmmyrf znl unir fbzrguvat gb qb jvgu n fyvqvat chmmyr.
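          (The spoilers above are rot13-encoded, per the usual convention. For anyone who wants to read them, a minimal decoder using Python’s standard `codecs` module, which ships a `rot13` codec:)

          ```python
          import codecs

          def rot13(text: str) -> str:
              """Decode (or encode -- rot13 is its own inverse) a rot13 string."""
              return codecs.decode(text, "rot13")

          # A throwaway example, not the spoilers above:
          print(rot13("Uryyb"))  # -> Hello
          ```

          Applying it twice returns the original text, which is why the same function serves for both encoding and decoding.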

      • Jaskologist says:

        It sounds like you all may also enjoy Mastermind.

        • Harald K says:

          In Mastermind you use deductive reasoning, Zendo is inductive. That’s what makes it so cool.

    • Ilya Shpitser says:

      Posting to register my great admiration for Zendo.

    • Daniel Speyer says:

      I’ve only played it a little, but it seemed to me to require a sort of informal agreement about good faith rule design: how much can be hardcoded, how much computational complexity, etc.

      • James Picone says:

        Yeah, we’ve had a couple of off games where the Master had a rule that was way too hard, or a rule that was hard that they then marked inconsistently. You really want the Master to have some experience, particularly if you haven’t played before. Like finding a GM for a roleplaying game.

    • Kiya says:

      Zendo can also be played with strings of letters written on paper or chalkboard, if you find yourself with time to kill but no legos or pyramids. (I first encountered zendo as a time-wasting activity within a more immersive game, so I didn’t categorize it as something you might want to do for its own sake. Thanks for returning it to my attention.)

    • Peffern says:

      That sounds like a car game I play with my family, but with more formal structure. The game is called My Aunt Sally; essentially someone comes up with a rule for distinguishing words (either lexicographical or semantic) and gives three examples and counterexamples. Then players each guess example-counterexample pairs and the master corrects them until everyone knows the rule. Basically the same idea.
      Also, this is my first time responding to an SSC post, although I’ve read it for years. I could talk more I guess if people are interested.

    • 1729 says:

      I used to play “whiteboard Zendo”, in which koans are things drawn on a whiteboard. Needs several colors of dry erase markers. Convention is that koans which have the buddha-nature get circled in green; koans which don’t have the buddha-nature get circled in red.

      I like to play Zendo in parallel. There are no “turns”; the master just goes around labeling koans as they get generated, and if someone hasn’t got a koan (or a guess) ready they get skipped. Makes for a much busier game with less downtime.

      • Troy says:

        It seems that you could play Zendo online as well, with strings of certain kinds of characters: e.g., 0 and 1, English letters, etc.
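        (A text version of Zendo like the ones described above is easy to sketch. The secret rule below, “a koan has the buddha-nature iff it contains an even number of 1s,” is just an illustrative choice, as are the function names; a real master would of course pick their own predicate.)

        ```python
        # Minimal sketch of "text Zendo" over binary strings.
        # The master's secret rule is any predicate on koans; here,
        # "contains an even number of 1s" stands in as an example.

        def has_buddha_nature(koan: str) -> bool:
            """The master's secret rule (illustrative choice)."""
            return koan.count("1") % 2 == 0

        def label(koans):
            """Label each submitted koan as the master would."""
            return {k: has_buddha_nature(k) for k in koans}

        labels = label(["0", "11", "101", "1000"])
        # "0" and "11" have the buddha-nature; "1000" does not.
        ```

        Players would then submit new strings (or guesses at the rule) and update on the labels, which is exactly the inductive loop of the tabletop game.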

  21. Psycicle says:

    I donated to the CyborgButterflies fund.

    Congratulations in advance to the people below me who donate!

  22. Alex says:

    Lately I’ve been thinking that…

    1. Philosophy is useless. In reality, reasoning is used to justify pre-existing morality, so all philosophers are doing is selling pretty words to whoever wants to hear them.

    This is not true if the philosopher is able to get their ideas written into civil or religious law. But I’m not aware of much “philosophy” that makes it that far.

    2. I don’t think Noah Smith’s version of growth mindset is the same as Dweck’s. He’s right, but she’s wrong.

    3. I would like to know which of right-wing authoritarianism and social dominance orientation is more highly correlated (1) with the main right vs. left public opinion axis and (2) with Eysenck’s tough-mindedness vs. tender-mindedness public opinion axis. I heard somewhere that right-wing authoritarianism is closer to social conservatism and social dominance orientation is closer to economic conservatism. I also heard that social dominance orientation is correlated with the tender-mindedness facet of agreeableness. Maybe these are different sorts of “tender-mindedness.”

    4. (Game of Thrones Season 5 Episode 3 Spoiler)


    It will be nice to know if Tyrion survives his capture during today’s episode, and how Sansa, Theon and Ramsay navigate their situation. It seemed a long episode (but good).

    • Addict says:

      “This is not true if the philosopher is able to get their ideas written into civil or religious law. But I’m not aware of much “philosophy” that makes it that far.”

      The entire Western World has been heavily influenced by the Enlightenment, and the Enlightenment was 100% philosophy. Milton, Locke, Wren, Hooke, Wilkins, and that whole London gang were directly responsible for the Glorious Revolution which overthrew James II Stuart, the French Revolution which overthrew the Bourbons, and the American Revolution.

      First, government was about a ruler who dominated through the right of conquest. Then, with the Magna Carta, government was a contract between the rulers and the ruled, essentially saying that the ruled wouldn’t revolt if the ruler didn’t act like a despot. Then the idea came about that the people could govern themselves how they best saw fit. To deny the philosophical nature of this position is to do a great disservice to the thinkers who brought it about.

      From The Confusion, an excellent and well-researched book about the birth of the Enlightenment:

      Moseh laced his fingers together and stretched his arms, which was a noisy procedure. “I am going to bed,” he said. “If they are looking for reasons to burn you, Edmund, and if you are not giving them any, it follows that Jack and I will soon be dangling from the ceiling of the torture-chamber while clerks stand below us with dipped quills. We’ll need our rest.”

      “If any one of us breaks, all three of us burn,” said de Ath. “If all three of us can stand our ground, then I believe they will let us go.”

      “Sooner or later one of us will break,” Jack said wearily. “This Inquisition is as patient as Death. Nothing can stop it.”

      “Nothing,” said de Ath, “except for the Enlightenment.”

      “And what is that?” Moseh asked.

      “It sounds like one of those daft Catholicisms: The Annunciation, the Epiphany, and now the Enlightenment,” Jack said.

      “It is nothing of the sort. If my arms worked, I’d read you some of those letters,” said de Ath, turning his head a fraction of a degree towards some scrawled pages on the end of this table, weighed down by a Bible. “They are from brothers of mine in Europe. They tell a story—albeit in a fragmentary and patchwork way—of a sea-change that is spreading across Christendom, in large part because of men like Leibniz, Newton, and Descartes. It is a change in the way men think, and it is the doom of the Inquisition.”

      “Very good! Well, then, all we must needs do is hold out against the strappado, the bastinado, the water-torture, and the thongs for another two hundred years or so, which ought to be plenty of time for this new way of thinking to penetrate Mexico City,” said Jack.

      “Mexico City is run out of Madrid, and the Enlightenment has already stormed Madrid and taken it,” de Ath said. “The new King of Spain is a Bourbon, the grand-son of King Louis XIV of France.”

      “Feh!” said Moseh.

      “Eeew, him again!” said Jack. “Don’t tell me I’m to peg my hopes of freedom on Leroy!”

      “Many Englishmen share your feelings, which is why a war has been started to settle the issue, but for now Philip wears the crown,” said Edmund de Ath. “Not long after his coronation he was invited to the Inquisition’s auto da fé in Madrid, and sent his regrets.”

      “The King of Spain failed to turn up for an auto da fé!?” Moseh exclaimed.

      “It has shaken the Holy Office to its bones. The Inquisitor of Mexico will probe us once or twice more, but beyond that he’ll not press his luck. Scoff all you like at the Enlightenment. It is already here, in this very cell, and we shall owe our survival to it.”

      “In reality, reasoning is used to justify pre-existing morality, so all philosophers are doing is selling pretty words to whoever wants to hear them.”

      …Certainly a portion of philosophy is this. Perhaps you are simply saying that the rest of philosophy, the successful, useful portions of philosophy which have been so grounded in society as to be taken for granted, should be called by some other name, so as not to taint it by association with people having a wank? Or do you mean to deny the role of philosophy in shaping much of modern society?

      • Alex says:

        I’m going to bed, but in short…vast formless things!

        Unless you happen to be at a rare tipping point.

        • Carinthium says:

          I have a LOT of things to say here.

          1- You’re limiting philosophy to ethics here. What about epistemology, philosophy of mind etc.?

          2- You assume that the idea of true knowledge for its own sake is not valuable. To some people it is. If you want to argue that they can’t reach it, that’s another matter.
          (Side note- It’s incredibly rare I grant you, but there are philosophers in history who have greatly dissented against the views of the day, including in ethics, to the point where your description of rationalising intuitions is unfair)

          3- How can you deal with radical scepticism without philosophy? Any argument which appeals to empirical information on this point is automatically circular, leaving only non-empirical (i.e. philosophical) arguments.

          4- Argument by analogy is fallacious, although making a point clearer by the use of an analogy is legitimate.

          In this case, Paine was an exceptional writer. Assuming US independence was indeed a tipping point, this would give him far more influence than most. If it was indeed a tipping point, George III of all people must clearly have had a lot of influence on the outcome as well (even if he used it incompetently).

          5- Just checking, but the logical implication of what you are saying is that being a rationalist in the Eliezer Yudkowsky/Robin Hanson sense is useless, right? Their view clearly involves large amounts of philosophy.

          Significant numbers are swayed into cryonics by LessWrong beliefs. Whether they make the right or the wrong decision, this decision is rooted in a worldview created by this kind of rationalism.
          (Minor Side Note: I like the rationalist movement in this sense of the term, but am a bit queasy about calling it rationalism because there’s already a philosophical movement by that name which is grossly opposed to it)


          My own view, summarised:
          -I have arguments regarding the Sceptical problem too long to summarise here

          -If you don’t have a mind capable of dissent against majority morality and, whether through your own effort or others’, at the point where you learn to disregard your intuitions as philosophical evidence, philosophy is indeed useless for you. Most philosophers are only partway there, making them almost useless.

          Even then, you are probably grossly wrong but if you have a good mind as well you can improve on the cultural beliefs you were raised with.

          -If you don’t have the willpower and other capacities to act on what you have figured out, philosophy is ‘useful’ in your internal assessment of the world but will have little effect on how you actually act. Some people consider this to be useful, but most don’t.

          -Philosophy cannot change the world except in the highly unlikely event somebody who has truly taken these steps is in a position of sufficient power. But the individual with it can be better off.


          All that being said, believing that intuitions are not evidence is not a difficult step for somebody on Slate Star Codex. Most of us can handle dissenting against those around us as well.

          • Alex says:

            I can’t quite bring myself to take the Skeptical Argument seriously.

            Does Robin Hanson’s view of how to be “rationalist about X” involve large amounts of philosophy?

            I feel like, clearly, forming a personal “philosophy” has value, or I probably would not be here. But this seems like a broad interpretation of the word “philosophy”. Actually I concede that basic moral philosophy is useful, but mainly as a way of integrating, and describing precisely, different pre-existing intuitions. This takes a back seat to having the right intuitions.

          • Carinthium says:

            You seem to assume that intuitions are useful. Why? Why do you assume your intuitions are actually accurate?

            As an extension of that, what other than intuitions do you have against the skeptical argument?

            You chose an unusual case, but there are indeed philosophical background assumptions there. The idea that an empirical approach is superior on certain issues is philosophical, for example, as is an implicit rejection of the idea that truth should be pursued for its own sake.

            You seem to have picked one of the least philosophical posts, however.

          • Alex says:

            Intuitions are sometimes neither useful nor accurate. But I don’t think philosophy can do much to improve them; for that, we need experience. 🙂

          • Carinthium says:

            It’s true that philosophy is very bad at changing how we feel (unless it gets so far as changing social conventions themselves, in which case it’s easy).

            But you seem to be ignoring the possibility that people can, at least within limits, change their behaviour by overriding feelings with philosophical reasoning. I agree there’s a limit, but why do you assume it’s practically zero?

          • houseboatonstyx says:

            @ Alex
            Intuitions are sometimes neither useful nor accurate. But I don’t think philosophy can do much to improve them;

            In a situation where two or more of your intuitions conflict, we can use philosophy (or at any rate something above the level of the intuitions themselves) to judge which one to follow in this situation, or whether/how long to keep looking for a way to satisfy both of them. That’s a skill that can be improved.

          • Alex says:


            I agree. My main point is to say that you need intuitions as a foundation, so that some so-called “high-impact questions” are…not high impact. For example, you can’t “solve population ethics” and then convince the world to implement your result. That’s nuts.

        • Alex says:


          To clarify, I think political philosophy has a respectable chance of impact. The philosophers you mention did hasten the arrival of the Enlightenment. My comment about vast formless things was just intended to say that these philosophers did not cause the Enlightenment. If they were never born, other folks eventually would have written similar philosophy.

          What motivated my original comment was abstract questions in moral philosophy, like population ethics. I don’t think anyone grounds their politics on the answers to these questions. Systematized theories can resolve cases where intuitions conflict, which has a place, but compared with having the right intuitions, it’s a second-order issue.

          • Addict says:

            “If they were never born, other folks eventually would have written similar philosophy.”

            What makes you believe this? Having studied the time period in some detail, I am almost positive that without Wilkins and Hooke, the Enlightenment would not have happened.

          • Alex says:

            Have you read the vast formless things post?

      • Who wouldn't want to be anonymous says:

        First, government was about a ruler who dominated through the right of conquest. Then, with the Magna Carta, government was a contract between the rulers and the ruled, essentially saying that the ruled wouldn’t revolt if the ruler didn’t act like a despot.

        I am pretty sure this is American mythology with little relation to history, not least because the Anglo-Saxons were in the habit of choosing their kings and extracting from them promises to rule well. It is just a coincidence that choosing the son of the last king was sometimes an effective strategy for not fighting a war with what would otherwise be an irritated, dispossessed heir. In that light, the Magna Carta is not a novel invention. The Norman magnates, like the Anglo-Saxons before them, chose as their king someone who wasn’t the heir and in exchange received promises to rule well. Promises that were, by the way, ignored before the ink was dry. The fact that they were written down at all is likely due to the fact that the dispossessed heir was Duke of Normandy, and the previous Duke of Normandy had managed to conquer the kingdom. The usurping king was obliged to make very serious-sounding promises to get the barons to back the scheme.

        The ironic thing is that this myth gets it exactly backwards. Prior to the Norman invasion, the Earls were incredibly powerful. A few of the leading ones acting together were as powerful as the king, or more so. For example, Edward the Confessor was a puppet of the Godwin family because they controlled a handful of the earldoms. And when sufficiently irritated with their king, they were not at all shy about revolting. William the Bastard arguably had no intention of displacing the natives during the Conquest. But they kept revolting, and he was obliged to keep suppressing the revolts, confiscating their lands and titles in the process, until there were none left. Those lands were divided up and distributed amongst those loyal to him, so that by the time of the Domesday Book not even the leading few hundred of the new barons could challenge the king. Or his children, such as our usurper who signed the Magna Carta. Revolt was no longer an option against a tyrannical king, and the ability of the magnates of the land to choose who ruled them rapidly eroded.

      • ADifferentAnonymous says:

        Antithesis: political and economic forces change the world, and the philosophers whose ideas support the change rise to prominence. I’m pretty sure Marx mostly believed this, but it’s by no means an exclusively Marxist position. Historical causation is generally a fascinating open question; I had a professor who argued technological innovation was not a driving force in bringing about the Industrial Revolution.

        • someone says:

          Marx is an interesting point here, as his philosophy clearly had an enormous impact on the shape of the world for the following ~100 years.
          Also, we get into weird discussions if someone were to argue with Marx that no, he had no impact, it was a historical imperative determined by the means of production.

        • Carinthium says:

          In this thesis, what led to the rise of Christianity? I see no reason why it was uniquely better suited to Rome than the alternatives. Paganism had problems, but primarily from lack of credibility.

          Mohammed’s Islam clearly changed a lot. It won primarily because of Mohammed being an extraordinary individual, though. What was it about Arabia that made Islam, rather than say a Christian sect, inevitably the winner?

          • ADifferentAnonymous says:

            Maybe a Christian sect could have succeeded in Arabia, but only if it looked a lot like Islam. And similarly, Rome was ripe for conversion to something with certain traits that Christianity had. I don’t know the history of these cases well enough to say if this is plausible.

          • Carinthium says:

            I think that’s what they’d say. The question is how to demonstrate that certain traits made victory inevitable and what they were.

            Why, for instance, must religious intolerance necessarily succeed in Ancient Rome? Why is, say, Mohammed’s brilliance as a writer not considered a major factor?

    • Sylocat says:

      Tyrion is one of the few characters in GoT who I’m pretty sure has actual Plot Armor, so I think he’s going to survive.

      • Vladimir Slepnev says:

        I feel that GoT might have more characters with plot armor than most people admit. I’d be pretty surprised if Jon, Dany, Tyrion, Arya or Littlefinger died before the final season. In the final season, all bets are off.

        • Addict says:

          …Would you like to bet on that?

        • Quixote says:

          Tyrion is awesome in the books, but he seems much bigger in the show because of Peter Dinklage being such a powerful and charismatic actor. I’m not sure he actually has plot armor. But I also wouldn’t bet against him.

        • Susebron says:

          I don’t know much about the show. How far behind the books is it, and how much has it diverged? I’ve got some thoughts, but they’re all based on the books and there’s a special circle of hell devoted to people who give out ASOIAF spoilers.

          • James Picone says:


            EDIT: Looking through it, the wiki catalogues differences in /exhausting/ detail. Probably the most relevant ones are that the Stark children, Joffrey/Tommen/Myrcella and Daenerys are older in the TV show by roughly three years, and Robb’s wife is different in the books.

          • Loquat says:

            The show is simultaneously a book behind, a book ahead, and wandering off onto its own path altogether, depending on which character’s subplot you’re looking at. Cersei and Arya are still working on book 4, Tyrion and Daenerys are partway through book 5, Bran finished up his book 5 plot last season and has this season off, and Sansa, Brienne, and Jaime have all either finished or skipped their book 4 and 5 plots and are now doing things that actively conflict with the published material.

            The way the show’s been going, I am 99% certain Sansa will end the series both alive and in a position of relative power.

        • Bugmaster says:

          I feel that GoT might have more characters with plot armor than most people admit. I’d be pretty surprised if Jon, Dany, Tyrion, Arya or Littlefinger died before the final season. In the final season, all bets are off.

          [Edna Krabappel] Ha ! [/Edna Krabappel]

          That’s all I can say 🙂

          • InferentialDistance says:

            [Potential Spoiler]
            There is a fortuitously placed Red Priest near the only confirmed death among those characters. And another Red Priest was able to raise the dead. And the character’s death was a cliffhanger. I predict the nearby Red Priest will bring said character back from the dead.

          • Susebron says:

            [Potential spoiler]
            Also, said character has at least one other confirmed method of not dying, which was specifically foreshadowed in close proximity to said character.

          • James Picone says:

            [Potential spoiler]
            Also we didn’t see the corpse, just some text indicating that they were wounded, badly. Maybe they just flat out survive.

      • FacelessCraven says:

        I’m pretty sure he’s going to survive the series, but that doesn’t feel like “Plot Armor” to me. He’s highly intelligent and pragmatic, courageous enough to take serious risks when the payout is worthwhile, always looking for an angle.

        It seems to me that most major characters who die in Game of Thrones do so because they fail to accept the world as it is, rather than as they wish it to be. They make bad decisions, and those bad decisions catch up with them sooner or later. Tyrion makes very few bad decisions, and has a keen sense of when to roll the dice and when to play it cool. He always does what he can to ensure that those around him have an interest in his survival, and so he survives.

        Jon Snow would have seemed like a much better example of someone insulated by plot armor. He’s not very smart, has long been out of his depth, and usually just seems to be making it up as he goes along.

        • I have a notion that Martin puts cluefulness in the slot where a lot of other authors would put goodness.

          • Zorgon says:

            While you’re probably right in general, it’s seemed to me for a while that Daenerys is an exception to this. Jon Snow has been forced to learn cluefulness, while Daenerys stumbles from idiotic decision to idiotic decision with nothing like the degree of consequences that would be inflicted on any other character in the series.

            Then again, since we know GRRM revels in killing off popular characters, we may yet see him pop the “Khaleesi” bubble before the end.

          • Held In Escrow says:

            Daenerys is somewhat of a deconstruction of the “lost son of the king” trope in that she would be your standard fantasy protagonist anywhere else… but in ASoIaF she’s constantly screwing things up despite having plot armor. One of my friends calls her George W Bush with Dragons, because she goes Right to Protect all over the city-states without having a decent plan for how to unfuck the situation.

            I highly doubt she’s going to have a happy ending because of it.

          • FacelessCraven says:

            @Zorgon – “…while Daenerys stumbles from idiotic decision to idiotic decision with nothing like the degree of consequences that would be inflicted on any other character in the series.”

            …She has a dedicated core of fanatically loyal retainers, a near-unbeatable army, and three dragons. She’s got some truly impressive insulation from the bad consequences of her decisions.

            That said, the decisions she’s been faced with have been uniformly no-win for a while. It seems to me that her decision-making skills up to acquiring the Unsullied were fairly good. Starting from nothing, she acquired loyal followers, dragons, and a professional army. She had no land, but acquiring the army left her in de facto control of a city state. What should she have done from there?

        • John Schilling says:

          None of Tyrion’s positive attributes, in-universe, saved him from being executed for Joffrey’s murder. He didn’t have the sense to get out of King’s Landing before it was too late, his scheming was ineffectual, nobody had enough of a real interest in his survival to help him – but he’s the kind of person GRRM likes, and so Jaime also suddenly likes him enough to risk his life for Tyrion’s on no politically sound basis. Tyrion is the kind of person I like too, so I’m not complaining.

          Jon, Arya, and Daenerys have Plot Armor in the sense of being central to plots that have too much invested in them to be abandoned at this stage. Without Dany, nothing that happens on Essos can plausibly matter to anyone else we care about, and that plot is too big to turn into a giant shaggy dog story.

          Dany and Arya are also sympathetic, attractive, strong, independent female characters, which makes them nigh immortal. GRRM could perhaps have killed them off in the novels, when they were just novels, but it isn’t going to happen on-screen.

          • Alex says:

            Whether her ending is happy or sad I don’t know, but I agree that Dany cannot die before the end.

          • Susebron says:

            Dany is immortal not because she’s a strong female character, but because she has dragons. When it comes to plots, Essos is only the tip of the iceberg. If GRRM isn’t going to kill off literally everyone at the end, dragons are almost certainly going to get involved. There are only a few other characters who I would plausibly expect to be able to control a dragon, and none of them have access to any dragons. He’s not going to kill her off without giving dragons to someone else first.

          • FacelessCraven says:

            @John Schilling – ” but he’s the kind of person GRRM likes, and so Jamie also suddenly likes him enough to risk his life for Tyrion’s on no politically sound basis.”

            Jaime doesn’t appear to have ever given much of a crap about politics of any kind. He’s always done exactly what he wanted to, and it was long established that he’d do pretty much anything for Tyrion. There wasn’t a sound political basis for attacking Eddard Stark and murdering his men in the street over Tyrion’s arrest, for instance.

            For that matter, it seems plausible that Tywin would have actually sent him to the Wall rather than executing him, for reasons of family honor if nothing else.

          • John Schilling says:

            Ah, so when he pushed Brandon Stark out of a tower, that’s exactly what he wanted. Not how I had read his character at all.

            Like most everyone else in the Game, Jaime has been consistently written as understanding and accepting the rules even if he doesn’t seek the same victory condition, and as willing to do repugnant things if the rules demand it. If he were going to follow desire or principle with suicidal intensity, he’d have been dead long ago.

            And while I have deliberately avoided the books, the version of the story being told on television has never presented Jaime Lannister as suicidally devoted to Tyrion. He enjoys Tyrion’s company and conversation, he thinks Tyrion got a raw deal from their father, he sees Tyrion as kin – but the people who are trying to kill Tyrion are also kin, and more than kin, as is the person Tyrion is plausibly accused of killing, and since when does Jaime Lannister lay down his life for his family?

            So Jaime Lannister is now so great a plotter that he can rescue an accused traitor from the black cells of King’s Landing without undue risk, in spite of being an obvious suspect, and can command the support and silence of veteran spymaster Varys without fear of betrayal. Or his devotion to Tyrion was inexplicably boosted to nigh-suicidal levels for the purpose of ensuring that a favored character escapes having been written into a lethal corner.

          • InferentialDistance says:

            So now Jamie Lannister is now so great a plotter that he can rescue an accused traitor from the black cells of King’s Landing without undue risk in spite of being an obvious suspect

            Sort of like how he’s such a great plotter that he can murder the king and yet retain his position as a king’s bodyguard for the new monarch.

    • LTP says:

      “Philosophy is useless. In reality, reasoning is used to justify pre-existing morality, so all philosophers are doing is selling pretty words to whoever wants to hear them.”

      A great deal of philosophy has nothing to do with morality. For instance, the scientific revolution had philosophical foundations, and pretty much every liberal arts (i.e. humanities, social sciences, pure math, pure “hard” sciences) academic discipline has a philosophical basis without which it could not function.

      • Carinthium says:

        I agree with you, but would point out that Alex can argue in his defence that these foundations were created not by conscious philosophy but as a side effect of social forces, by purely empirical analysis without any philosophical training, or by some mix. There are plenty of possible mixes Alex could use as an argument.

        I don’t have enough empirical evidence to assess in detail whether this is right or not, but I figure since he’s asleep I may as well steelman him.

    • Harald K says:

      Strange that the Wikipedia political spectrum article doesn’t mention the F scale. It predates Eysenck’s, but otherwise sounds a lot like it.

      Fun fact: the F scale (controversially) asserted that belief in astrology was a typical authoritarian/fascist trait… and Eysenck was a fan of astrology.

      And who made the F Scale? None other than the Cultural Marxism bogeyman himself, Theodor W. Adorno! So maybe take scales developed by both these political hacks with a grain of salt.

      Chris Lightfoot gave his political survey to a representative sample of the UK population in 2005, and ran PCA on the answers.

      He found that the main axis of variation was actually more about something you could call authoritarianism: if you think prisons are too soft on criminals, that immigrants are ruining the country, and that the EU is the beast in Revelation, that says a lot more about which party you’ll vote for than your views on taxing the rich, privatizing the railroads, or genetically engineered crops. All in all a very interesting read; I recommend it. Unfortunately the site which had the actual survey has been taken over by a Japanese SEO spammer. Still sad that Chris Lightfoot is dead 🙁
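      For readers unfamiliar with the method: a minimal sketch of the Lightfoot-style analysis is to run PCA over a respondents-by-questions answer matrix and read off which questions load on the dominant component. The questions and answer data below are invented for illustration; nothing here comes from the actual survey:

```python
import numpy as np

# Hypothetical survey: 500 respondents answer four questions on a 1-5
# agreement scale. Two "authoritarianism" questions are driven by a
# shared latent trait; the two economic questions are independent noise.
rng = np.random.default_rng(0)
questions = [
    "prisons too soft on criminals",
    "immigrants are ruining the country",
    "raise taxes on the rich",
    "privatize the railroads",
]
latent = rng.normal(size=500)
answers = np.column_stack([
    3 + 1.5 * latent,
    3 + 1.4 * latent,
    3 + rng.normal(size=500),
    3 + rng.normal(size=500),
])

# PCA by hand: center the data, then take the eigenvector of the
# covariance matrix with the largest eigenvalue (the first component).
centered = answers - answers.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
first_axis = eigvecs[:, -1]  # eigh sorts eigenvalues ascending

# The two trait-driven questions dominate the main axis of variation.
for q, loading in zip(questions, first_axis):
    print(f"{q:35s} {loading:+.2f}")
```

      With real survey data the answer matrix simply replaces the synthetic one; whether the first component comes out as left/right or as authoritarianism depends entirely on which questions were asked, which is the point of contention in this thread.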

      • Peter says:

        Ah yes, Chris Lightfoot’s work was great. I remember him calling the main axis “the axis of UKIP”.

        That survey had me being surprisingly economic-right wing. I think it might be that I’m one of those weird people who likes both redistribution and markets-as-understood-by-economic-theory; I’m not the sort to arbitrarily say “such-and-such a thing is unfair and there should be a rule against it”. (checks link) Ah yes, The Economist. I don’t subscribe but I do like to read it when other people leave it lying around on the coffee-table, and it used to be my go-to magazine for train journeys.

        Somewhere in the comments of that link there’s the F Scale itself. I’m a liberal airhead, and almost a whining rotter. Hurrah!

        • Alex says:

          Hmm. Eysenck found the biggest axis to be left vs. right, but Lightfoot found the T-axis. Either (1) someone messed up, (2) doing PCA on opinions is irredeemably flawed or (3) UK opinion changed a lot in 50 years.

          • Peter says:

            Lurking somewhere around is a previous version of Lightfoot’s work, based on a survey he put together himself. The first axis seemed to be a combined left/right axis (both social and economic), and the second turned out to be an odd “idealism vs pragmatism” axis.

            I think a lot may depend on the questions.

          • Harald K says:

            I wouldn’t say it’s irredeemably flawed, but a current issue like the Iraq war can really matter a lot. Liberal-ish publications like The Economist supported it, as did Labour, but the smaller parties on the left and right were strongly opposed to it.

            People who want to go digging for deeply coded personality traits that dictate people’s politics need to accept that the very same policy may be “conservative” in one period and “radical” in another. But unless you have a clear explanation of which things are inherently conservative and which are only conservative right now, and why, it sounds like a good way to cheat yourself into the results you want.

      • Douglas Knight says:

        It does mention the F scale:

        Subsequent criticism of Eysenck’s research…The interpretation of tough-mindedness as a manifestation of “authoritarian” versus tender-minded “democratic” values was incompatible with the Frankfurt school’s single-axis model, which conceptualized authoritarianism as being a fundamental manifestation of conservatism, and many researchers took issue with the idea of “left-wing authoritarianism.”

    • By philosophy you appear to mean moral philosophy.

      I suppose there would have to be a pre-existing view in the person. They would need to think (1) there is some form of right and wrong, whether subjective/objective/personal/universal/whatever (2) they wish to follow/achieve/live it (3) innate knowledge of this is imperfect (4) study or thinking about it can improve their knowledge of it.

      In such a case moral philosophy is the rational option. The question is how many people think the above, and how many are really looking to justify or prove their pre-existing conceptions to others, as you suggest. I think both are true in some cases (some moral philosophers are noted for pretty unusual and not necessarily popular ways of living), but I can’t think of any easy way to separate and quantify the two. If you think of a way, please let us know.

      • Alex says:

        In such a case moral philosophy is the rational option.

        You are right. There’s a lot I don’t really see the usefulness of, but basic moral philosophy is important.

    • I don’t know what sources of information on “right wing authoritarianism” you are looking at, but you might be interested in my critique of Altemeyer’s work and his response:


      and stuff linked thereto.

    • magicman says:

      The Noah Smith thing underlines how special SSC is. So many blogs, even by smart people like NS, seem to rely on simple assertion, tone policing, etc. Even when I strongly disagree with SA, it’s not because he is misrepresenting arguments or ignoring evidence.
      I can’t help thinking that his argument applies much better to himself than to SA.

  23. OTC says:

    Dunno. I skim over RSS and never post. Dunno if it counts.

  24. Chris Billington says:

    Hey! I remember that around about that time (plus or minus a month because my memory is terrible), I stopped getting emails from wordpress that you had new posts.

    I wasn’t sure why, but having lost faith in wordpress RSS notifications and not bothering to investigate, I googled ‘how to get email notifications when a website updates’, and am now getting email notifications from something called blogtrottr instead.

    Actually, I think I assumed that you turned off your RSS feed because you didn’t like the attention you were getting over the social justice stuff. I didn’t bother to check whether this was the case, since I was able to restore email notifications after my five second trip to google.

    EDIT: nope, I was way off. My last email notification from wordpress was in September 2014, and was the ‘I am being framed’ post. It made sense to me that you might be trying to reduce publicity after that, so I didn’t think twice about the hypothesis that you had disabled RSS.

    • Douglas Knight says:

      Just as an aside, your experience has nothing to do with RSS.

      • Chris Billington says:

        Ah, of course. It was just WordPress’s functionality for subscribing to updates. I didn’t bother to think about what I had signed up for since it was working and I don’t subscribe to many things. I had just mentally grouped all these things together.

  25. Secretariat says:

    In the US it seems like more and more employers are being asked to provide more and more of their employees’ needs, not just supplying wages for productivity. There’s the employer mandate for health insurance, health insurance for the family, paid maternity leave, retirement programs (pensions, 401k), life insurance, and tuition programs. This appears to be what Wikipedia calls liberal corporatism (“capitalist companies are social institutions that should require their managers to do more than maximize net income, by recognizing the needs of their employees”).

    At the same time the labor force seems to be more flexible than ever. Big institutional employers that used to offer lifetime employment seem to be either dying (Kodak) or moving away from the model (GM). Corporate restructuring layoffs have been a norm for quite a while. At-will employment seems to be the norm in the US, especially with falling unionization rates. And people seem to be changing jobs more often voluntarily as well. Future Work Place says that the vast majority (91%) of millennials expect to stay in a job for less than three years. And let’s not forget about the flexible jobs of the so-called “sharing economy” (Internet Taxis, Taskrabbit, &c.), where it’s often the norm for workers to be individual independent contractors (as opposed to working for staffing firms), not employees. In these positions it’s not uncommon to split time between direct competitors. These folks aren’t full-time employees at all.

    These two forces seem to be at odds. So how does it make sense to have long-term core needs filled by employers rather than by government, when your employer can fire you for no reason at all and is likely to do so as it responds to the natural business cycle? Sure, it may make sense for some employers to provide benefits as part of an efficiency wage, but overall it doesn’t seem to work. Both of these trends seem to be accelerating, but they also seem to be on a collision course.

    I even recently saw an article attacking the liberal corporatist model from the left, even going so far as to question the gospel of the minimum wage and full-time-employees-should-not-be-on-food-stamps, favoring social democracy instead.

    So why are both of these trends accelerating at the same time when they seem to be at odds, and where might this situation end up?

    • ddreytes says:

      I have no data or real evidence here. But anecdotally, there seems to be in part a link between the two trends, in that companies are more and more incentivized to hire short-term contract workers, or non-full time, or etc, precisely to avoid paying benefits.

      As to the broader logic of it, as a system, I think you have a pretty goddamn good point there.

      • roystgnr says:

        “We keep cutting deeper and deeper into the goose, but we seem to be getting even fewer golden eggs out than ever!”

    • Kiya says:

      To take issue with one thing you mention without having an opinion on the overall point: the Forbes article that gives the 91% number cites in turn an infographic that does not, as far as I can find, actually make that claim. I’d also be inclined to distrust younger millennials’ (according to the infographic, millennials are 18- to 38-year-olds) self-reports as representative of their long-term career plans, as many of them are still in college or have just started working. I’m not sure exactly what the original survey asked due to the infographic dead end, but people in their twenties might plan to stay in their initial few jobs for only a few years, and then apply somewhere they prefer once they have more experience to put on their resume. This doesn’t explain the 38-year-olds.

    • John Schilling says:

      The trends are separated by about half a century. The one where corporations are asked to provide for their workers’ health care, retirement, etc, that one peaked in the mid-twentieth century. A lot of that was strong labor unions, and another big chunk was FDR’s wage freezes forcing corporations to find other ways to increase compensation for high-value employees. None of this was mandated by law, it just became industry standard practice, and in an environment where lifetime employment at a single corporation was the norm, it was not an unreasonable practice. Enforcement? If you ran a corporation and didn’t offer your employees free medical, you were running a corporation without employees.

      The old model is breaking down, people aren’t expected to work for one corporation their whole life, and that’s generally a good thing. But it means corporations have less of an interest in investing in the long-term health, prosperity, and productivity of their work force. People who have become accustomed to getting such things for “free” are now finding out how much they really cost, and reacting the way people usually do when faced with the loss of something to which they feel entitled – going to the biggest player in the game and saying, “the evil meanies are taking away my stuff, which is rightfully mine. Make them give it back!”

      Since we live in a democracy, this means that what was once industry standard practice is now becoming legal mandate, even as it becomes less sensible as policy. But very little of this is industry being asked to provide “more”, except where things like maternity leave are being implemented in a somewhat clumsy attempt at gender equality.

    • Jon Gunnarsson says:

      I don’t see a contradiction here. Employers are mostly giving these benefits instead of wage increases for tax reasons or because they’re forced to do so by law. As for why such laws exist, my guess is that voters are stupid and don’t understand economics. They think it’d be nice to get those benefits, but they don’t realise that for every dollar they get in non-wage benefits, they get (approximately) one dollar less in wages, and that this also goes for benefits they don’t actually want badly enough to pay for voluntarily.

      • ddreytes says:

        I think it’s stupid but I also think it’s mostly a compromise. I don’t think voters would be asking employers to provide those things if it was politically practical to ask the state to provide them. But, because that’s not politically realistic, we end up with a slightly nonsensical alternate position that is politically realistic.

        (Whether or not you think the state should provide those things, I think that’s at least a more logical position)

        • Jaskologist says:

          It’s not a compromise, it’s a back-door. It’s not that voters are necessarily clamoring for companies to be required to act as agents of the state providing ever-more benefits, it’s that politicians are trying to make them agents of the state, and this is one way of hiding that from the voters.

          Obamacare would be a good case-in-point. Obviously, the people writing it really wanted some sort of single-payer, but the people didn’t, so they instead made a Rube-Goldberg version of single-payer to try to hide it. As it happens, the voters weren’t thrilled with that either, hence the massive Democratic losses in Congress.

          But at no point in this were voters clamoring for nuns to buy birth control. That’s all on the Brahmins.

          • ddreytes says:

            Without getting too much into political narratology, I don’t think it’s true that the political actors responsible for the ACA all wanted single-payer. The narrative at the time among the left was all about Obama’s complete disinterest in single-payer, his refusal to even consider it. Of course that could all have been lies, but the reporting at the time was certainly about the left wing in Congress pushing for single-payer and the Obama administration pushing against it. Anyway, there were certainly people who came to the ACA as a compromise, but the idea that it’s entirely a back-door for single-payer doesn’t ring true to me.

            More broadly, it makes more sense to me to look at all of these things as the result of the complicated and conflicting interests and desires of numerous (sometimes interlocking) political groups – a very inefficient result of that process – rather than as the fully-formed product of some specific cadre of politicians.

          • Held In Escrow says:

            From actually talking with someone on the scorekeeping side of things, who had the politicos bounce ideas off them: the issue with the ACA is that Obama played it entirely hands-off. He gave it to Max Baucus to write, and that’s how we got what we have, thanks to Senator Baucus having the legislative skills of a rhesus monkey.

          • HeelBearCub says:

            It is definitely not the case that we got the ACA as it is because Democrats broadly were trying to shoe-horn in single-payer.

            Making Federal Employee health insurance available as an option to everyone on the exchanges passed the House. That might have ultimately led to single-payer; it certainly was making the biggest payer possible available as an option.

            But 0 Republicans were willing to vote for a bill that then needed 60 out of 60 Democratic votes in the Senate. Some of those Democrats were from quite conservative states and were themselves just barely left of center. They are the ones who determined how far left the bill could go.

            And this is also why Obama stayed relatively hands-off. The Senate was the tightest bottleneck, and the Senate puts a great deal of power in the hands of single Senators. Obama wanted the next step on the road to universal healthcare, not another failed attempt.

            And Obama was proven correct. The bill passed. Which is more than any other president that wanted to make health insurance universally available had been able to accomplish.

  26. I’m one of your readers who discovered you in July and dropped off around February of this year. It’s crazy that so many other people fell off too, if the data is accurate.

    Echoing others, there’s definitely been a drop-off of controversial posts since around then. I’m curious if you’ve tried charting viewership against posts tagged with “things I regret writing”. Another idea, although it’s a lot of effort, would be to do another mini-SSC survey and compare results.

    I have more complicated theories as to why the waves of culture war bubble up as they do, but it’s clearly out of scope. The gist of my theory is there’s a pedagogy around internet outrage culture, and (specifically) a moment of development where fighting online culture wars against THE DAMNED SJWs makes a lot of sense, but it’s never quite sustainable for all but the most socially outcast of people. And the most socially outcast of people typically find a way to exit themselves from any budding community / movement, leaving the progressive status quo largely intact.

    I’m sure there’s some Advanced Moldbug Theory that explains this way better than I can.

  27. Timothy Underwood says:

    Possibly a drop-off from something? I started reading regularly after the link from Vox, which was a few weeks before that. Are you losing long-time readers, or watching a temporary bump in regular readers (one small enough not to be visible) dissipate?

  28. Gwen S. says:

    Last month, future Daily Show host Trevor Noah was discovered to have made a bunch of tweets which demeaned women, Jews, atheists and transgender people. Noah refused to apologize, and Comedy Central came to his defense, saying “Like many comedians, Trevor Noah pushes boundaries; he is provocative and spares no one, himself included. To judge him or his comedy based on a handful of jokes is unfair. Trevor is a talented comedian with a bright future at Comedy Central.”

    I am really angered by the hypocrisy of Comedy Central. Comedy Central loves to make judgy videos about people who’ve made a handful of unfortunate remarks. Not just big names like Brendan Eich or Jon Kyl, but ordinary people like Crystal O’Connor, or the Redskins fans who were lured onto the Daily Show under false pretenses. Noah and his employer are saying “a big-picture view that doesn’t reduce a complex person to a handful of unfortunate remarks for me, but not for thee.”

    And it’s not even a conservative versus liberal issue. Women, Jews, atheists and trans people are supposed to be the type of people Comedy Central protects. Instead, they’ve sided with the big-name celebrity who wouldn’t even apologize. Talk about punching down.

    • For context: I don’t actually know who or what Comedy Central is, and although I vaguely recall watching various online clips that I think may have been from the Daily Show I’m not sure I am remembering correctly. It may or may not be on air in my country; I watch little live TV. I have no idea who Trevor Noah is and don’t recall reading about the criticism you describe.

      But “Women, Jews, atheists and trans people are supposed to be the type of people Comedy Central protects” sounds improbable. Surely Comedy Central’s mission is not to protect, but to entertain? To be funny?

      Again, I’ve never watched them, but from your description I would guess that those “judgy” videos are or were intended to be entertaining. They may also serve a social purpose by criticizing anti-social points of view, but I would expect that to be seen at most as a bonus; the primary purpose, I would assume, is entertainment. I would not imagine that affecting the employment of the people being made fun of was considered to be either a primary or secondary purpose.

      In similar ignorance, I would guess that the criticism of Trevor Noah was not intended to be entertaining and was intended to affect his employment prospects.

      From this perspective, it does not seem to me that Comedy Central’s position is hypocritical.

      • Peter says:

        Possibly: “being mean to people is funny, but also icky. Now if we’re mean to people who deserve it, then the ick factor goes away and we can be mean to people with a clean conscience.”

        Am I being excessively cynical here? Am I being hypocritical even?

        • Fazathra says:

          If by “people who deserve it” you mean people of the opposing tribe, then you are completely correct. The punching up/down rhetoric is just a paper thin rationalisation of this.

          • ddreytes says:

            The concept of “not punching down”, I think, made a lot more sense when it was a point about the specific ethos of a specific comedian. Whether or not you thought Colbert lived up to it, he at least meant something pretty definite by it (and I suspect something not entirely political).

            When you try to turn it into a moral imperative or a broad cultural maxim or some kind of foundational principle of politics, it makes a lot less sense.

          • Shieldfoss says:

            The concept of “not punching down”

            Honestly, I suspect the chain is even simpler, and adequately explained in the SSC post from some days ago about “intolerant” being the new accusation we use in these days of tolerance when we want to hit people we don’t tolerate: Bullies will just adapt to whatever our culture approves of as targets and bully those people. Right now, the accepted targets are “Whoever you can mention right after saying ‘punching up’,” and so the phrase gets associated with bullies.

          • Anonymous Coward says:

            I guess that’s technically true if you abstract “racists” and “minorities” into opposing tribes, but I hope you agree that it’s more acceptable to make fun of racists than it is to make fun of minorities. That’s what people mean by punching up vs. punching down.

          • ddreytes says:

            Eh, never mind.

          • Shieldfoss says:

            I guess that’s technically true if you abstract “racists” and “minorities” into opposing tribes, but I hope you agree that it’s more acceptable to make fun of racists than it is to make fun of minorities. That’s what people mean by punching up vs. punching down.

            That seems like a cached thought. I invite you to take five minutes to think it over (I will reply tomorrow with the counter-example that immediately sprang to mind when I read your post. I am tribally in agreement with you, but cannot in good intellectual conscience agree that you are accurately describing the world-as-is.)


            Eh, never mind.

            If the original was directed at me, I encourage you to repost – my first post in this thread was quick and off-the-cuff and I am likely to agree with any criticism you have of it. (For some reason, if I make a second block-quote, it doesn’t include the name, even if the name has changed to a different poster)

          • Jiro says:

            It’s more acceptable to make fun of racists than minorities if

            1) You’re in the US, or perhaps in other areas of the west, and

            2) “racists” and “minorities” refer to those groups who typically receive such labels in the US or the West, not to those groups who literally fit the definition.

            “Rude employees at the northernmost Target in Cleveland Ohio working in the electronics department last Thursday” is a minority of one, but it is okay for me to make jokes about that minority.

          • RCF says:

            “it’s more acceptable to make fun of racists than it is to make fun of minorities. That’s what people mean by punching up vs. punching down.”

            No, I think for many of the people who say that, the division is being “privileged” and “oppressed”, not “racist” and “minorities”. If you complain about a black person being racist, you’re tone trolling. And black people can’t be racist, because racism requires institutional oppression. Etc.

          • Shieldfoss says:

            That seems like a cached thought. I invite you to take five minutes to think it over (I will reply tomorrow with the counter-example that immediately sprang to mind when I read your post. I am tribally in agreement with you, but cannot in good intellectual conscience agree that you are accurately describing the world-as-is.)

            Returning to this: I was thinking specifically of the case of the pizza place that wouldn’t cater to homosexuals. In order to avoid the race/gender/openThread edict, I will not be performing any analysis here except to say that if you read that story as an example of weak homosexuals “punching up” against powerful bigots, then you are using non-standard versions of “weak” and “powerful.”

        • Faradn says:

          I’m not sure if it’s a matter of deserts but what is actually funny. Racist and sexist jokes tend to be lazy snowclones. Comedians can get away with third-rate racial humor because it taps into still-existing prejudices that spawn the crude and unimaginative parts of people’s cognition.

          Not saying racial humor is inherently unfunny, just statistically so.

          • Cauê says:

            Racist and sexist jokes work because they push against taboos. Compilations of “offensive jokes” on the internet include not only race and gender, but also things like incest, pedophilia, ludicrous violence, tragedies (e.g. cancer jokes, 9/11), and religion depending on the community. The pattern also shows up in comedians that focus on offensiveness (e.g. Jimmy Carr, Frankie Boyle) and in the general tone of Encyclopedia Dramatica, for instance.

            These themes have something in common, and it’s not “tapping into still-existing prejudices”, but “you can’t joke about that!!”

          • Faradn says:

            It’s not letting me reply directly – nested too far in, maybe?

            Yes, the “edgy” excuse. Sometimes it’s valid. The problem is, stereotypes are worn out and boring almost by definition. It’s possible to do interesting and truly edgy things with them–trouble is there’s little incentive to. People will guffaw at “edgy” humor that is anything but, because people are stupid.

          • Cauê says:

            because people are stupid.

            If that’s how you want to put it. But then that’s the reason people laugh at anything. A comedian’s job is to find ways to press the stupid buttons that make people stupidly and pointlessly laugh.

            I don’t think you can make the case that this reaction is any less stupid and pointless when triggered by some buttons than others.

    • Carinthium says:

      Question. I don’t actually know about this issue, but is there any logical, non-ad-hoc rule differentiating the remarks Comedy Central has mocked from the remarks Trevor Noah has made?

      My actual position is pro-Freedom of Speech with mockery being a bad idea, except mockery of logical holes in an argument or other irrationalities. I don’t know the facts, so I don’t know how well that fits.

      But to stigmatise people for demeaning these categories whilst allowing remarks of equal irrationality to slide is very hard to justify. How are people supposed to judge whether humans are in fact equal or whether demeaning claims are true if nobody is allowed to make them?

      The alternate position would be to say that humans are too stupid and irrational to be trusted with exposure to these sorts of things. But there are plenty of stupid mistakes people make that aren’t censored when publicly advocated.

      You may have a consistent position for all I know. But I thought I might as well check.

      • Gwen S. says:

        Not that I can see. That’s why I think it’s hypocritical of Comedy Central to mock people for making stupid comments, but deflect criticism when their own employee makes a stupid comment.

        • Comedy Central is definitely being inconsistent, but I don’t see why they should be bound to consistency on this issue. Their job is to entertain, not implement a rigorously consistent ethical framework about who to mock.

          At the object level, I support their decision because I want to do everything possible to erode the notion that having ever made a *-ist tweet or whatever makes you permanently unemployable.

    • AR+ says:

      These are completely different things. Comedy Central mocks people. People who dredge up problematic content like this are out to force people to kneel in supplication to their ideology, or else be professionally ruined. Nobody would care if this sort of dustup tended to end in people making judgy videos of the target’s unfortunate remarks.

      But that’s not the world we live in, and that’s not what Comedy Central is defending him from.

      • Gwen S. says:

        Maybe. To CC’s credit, Colbert did interview Andrew Sullivan who said that Brendan Eich shouldn’t have lost his job.

      • Gbdub says:

        Comedy Central actually does have an unfortunate habit of only mocking the “right” people, as any follower of South Park will be aware (in particular, South Park constantly mocks Jews, Catholics, Mormons, and Scientologists, but CC first censored and then stopped showing South Park episodes involving Mohammad).

        And The Daily Show / Colbert Report definitely have a political slant in who they are willing to mock / how cruel they are willing to be. I’m hoping a Republican wins the next presidential election, if for no other reason than that political comedians, who tend to be coastal liberals, will take the kid gloves off again.

        • Wrong Species says:

          South Park probably mocks environmentalists more than Jews or Mormons. They’re pretty well known for their moderate libertarian views. Of course, Comedy Central is a different story.

        • Careless says:

          The South Park thing isn’t really a political correctness issue, though, it’s a “they’re terrified of Muslims” issue.

      • RCF says:

        “People who dredge up problematic content like this are out to force people to kneel in supplication to their ideology, or else be professionally ruined.”

        That is just a bunch of demagogic nonsense.

    • Irrelevant says:

      Women, Jews, atheists and trans people are supposed to be the type of people Comedy Central protects.

      That is an utterly bizarre statement. Comedy Central doesn’t exist to protect anyone. The Daily Show doesn’t exist to protect anyone either, unless we’re extending the verb “protect” to include creating a sort of ballpit for disaffected Bush-era lite-liberals to hang out in.

      • Muga Sofer says:

        “The kind of people CC claims to be protecting when they mock these people,” perhaps.

    • RCF says:

      What is an example of a tweet that demeans a member of the mentioned groups, and what is an example of a judgy video?

  29. Alex says:

    Re: The readership drop-off. Ash Wednesday this year was Feb. 18, and maybe you have a large block of devout Christian readers who gave up the Internet for Lent. I wouldn’t worry too much as they should start trickling back now that we’re done with Easter celebrations.

    (Tongue firmly planted in cheek in case that was not clear)

    • haishan says:

      You’re joking, but I did commit to giving up commenting on this blog for Great Lent (starting 23 February Gregorian this year), with some degree of success. Also, “rationalist Lent” is a thing.

      • Irenist says:

        I gave up commenting on blogs for Lent. That meant that I would look at SSC to see if there was a new post, and maybe to read the comments, but I wouldn’t follow the comments as closely because I wouldn’t be waiting to see if I’d prompted any interesting replies. I found this blog through Leah Libresco, IIRC, and I may not have been the only Christian to have done so. I doubt it’s a major factor, but Lent is at least a small part of it.

    • Cerebral Paul Z. says:

      “No way am I giving up the Internet two years in a row,” Tom said unrelentingly.

    • Muga Sofer says:

      I did actually reduce my internet usage for Lent, although I didn’t cut it entirely.

  30. Max says:

    Well, I found this blog around February through the link on infoproc and find it fascinating, intelligent, articulate and thought-provoking. So please keep it up!

  31. Sylocat says:

    Jacobin (of Jacobinghazi fame) recently published an article about the search for extraterrestrial life and how some of the common fears and expectations are really just anthropocentrism on our part. There are parallels with the field of AI as well, in terms of how a truly alien intelligence might set societal priorities.

    • Creutzer says:

      It doesn’t seem implausible that the natural evolution of an intelligent species always ends in a certain region of mindspace, though. And it seems overwhelmingly likely that the resulting creatures will possess a certain degree of assholishness. So I don’t see at all how concerns about that can be dismissed as unjustified anthropocentrism. The article also makes the mistake of tying psychology to the economic system when human minds evolved long before current economic conditions came about.

    • Peter says:

      Politicized hacks look at astronomers, and see only themselves…

      • Peter says:

        Also – “That aliens would have imperial ambitions is taken as natural. Far from being the historical outcome of a specific organization of capital in the latter half of the second millennium…”.

        Um. Genghis Khan? Attila? Tamerlane? Caesar? Alexander? Shaka? Xerxes? Suleiman? I’m a bit hazier on the pre-1492 Americas, but I’m sure the Aztec and Inca empires didn’t just spring up from nowhere. Let’s add Pachacuti-Cusi Yupanqui… I couldn’t settle on a famous Aztec conqueror, but it’s pretty hard to deny that they engaged in plenty of violent expansionist imperialism.

        • FacelessCraven says:

          If you haven’t read it, I highly recommend Peter Watts’ first-contact novel Blindsight; it’s available for free on his website. One of the interesting parts is his theory that Technology Implies Belligerence:

          • DrBeat says:

            I highly recommend you not read that novel, because it cheats, and almost every detail of the setting exists not as a natural consequence of something, but because the author needs it to be that way to Make A Point.

          • Nornagest says:

            Would you mind explaining what you mean without using the word “cheat”, DrBeat?

          • FacelessCraven says:

            @DrBeat – I greatly enjoyed it and didn’t detect any cheating at all, but I’d be fascinated to hear your views on it. What about it seems so objectionable to you?

          • DrBeat says:

            Given the theme about uncaring evolution, the abilities of the starfish-alien-cells only make sense if they evolved in an environment with humans and their evolutionary strategy was to make humans feel bad about their ability to perceive the world.

            Since so much of the story is devoted to figuring out the aliens, this is kind of an enormous deal. The whole Big Important Point the author is making is about consciousness not being adaptive, and how ruthless evolutionary competition is, and how therefore consciousness will be crushed out by competition… and he does this by putting the humans up against aliens that are specifically evolved to be able to beat humans and make humans feel bad about how much of the world they can’t see without having consciousness.

            I’m not going to stop harping on this. The aliens, without consciousness, without ever having met humans or any Earthican life, adapted to be invisible to a quirk in human vision that I am pretty sure doesn’t even work that way (saccades are when we see by simulating movement, not when we can’t see), and doing so requires absolutely flawless, down-to-the-millisecond timing in reacting to a nerve impulse that has less than an inch to travel, carried out by an organism that is across the room.

            Making an intelligent organism able to do that would be bullshit. Making an unintelligent organism able to do that without ever having encountered Earthly life is super turbo ultra bullshit II: hyper fighting.

            Also, the vampires. The vampires only exist so that at the end they can kill off humanity and the author can Make A Point about how consciousness isn’t adaptive. There is no reason presented for humanity to bring vampires back to life. Everything they do, a computer does better, and the story establishes that we already have people whose job it is to interpret the outcomes of computers for people to understand, so it’s not like vampires are around because they are closer to human. They require specialized chemicals to be able to see right angles without stroking out, and we make those for them For Reasons, and even though their super-calculating malicious insentient intelligence doesn’t communicate with other vampires, they kill off almost everyone offscreen Because Of Reasons.

            Of course, the vampires, being insentient, can do anything, because sentience doesn’t do anything! We know it doesn’t do anything because the author describes how people make impulsive decisions before the ‘conscious thought’ part of their brains activates, and then handwaves away long-term consideration or planning as something that doesn’t count Because Reasons. He lays out an argument that denies or handwaves away every good thing sentience grants an organism, and talks about the huge evolutionary cost it has, and somehow doesn’t put together that he disproved his own argument; he has handwaved away all of the evolutionary benefits of sentience, showed the evolutionary cost of sentience, and then says that sentience is doomed because evolution doesn’t select for things with high cost and no benefit — his argument says that sentience shouldn’t exist, so clearly, his argument is wrong!

            The author wants to Make A Point with Serious, Hard SF about how being sentient isn’t adaptive. He creates the aliens in the story so that they somehow adapted to an environment nothing like their own, without having intelligence. Everything involving the vampires is a Second-Order Idiot Plot, as their existence, not to mention their conquest of Earth, requires every human in the world to be an idiot. He ends with handwaving about how the science proves him right even though it doesn’t, because without that handwave, his story’s Important Point collapses into “What if things that require thought didn’t actually require thought? That would make you feel bad, wouldn’t it?”

          • Bugmaster says:

            For once, I disagree completely with Dr. Beat. I do agree that some of the events in the book, and aspects of the setting, were probably chosen in order to make for a more interesting story; but then, this is why Blindsight is a work of fiction, and not a scientific article.

            (WARNING: very minor spoilers)

            The book makes a very compelling argument that consciousness is not the same thing as intelligence or agency. It further argues that consciousness is itself quite maladaptive. Of course, it’s still better to have both intelligence and consciousness than neither of those things, which is why humans have had such a good run so far.

            The “vampires” in the setting should be familiar to the crowd here; they’re basically unboxed AIs who have been created by engineering as much consciousness out of humans as possible. As such, they can re-purpose all that brainpower that humans routinely use to contemplate themselves to the task of actually achieving their goals. The vampires were created to perform specific jobs, and humans let them out of the box because of course humans would let something like that out of the box — if you’ve ever talked to a human, you’d know this was true.

            The aliens are a step beyond even the “vampires”. They are fully intelligent, and lack any kind of consciousness whatsoever, because they never developed any during their evolution.

            I agree that some of the powers ascribed to “vampires” as well as the aliens are a tad unrealistic; but again, this is a work of fiction, not a scientific treatise. Besides, I also find some of the powers that LessWrong-ians ascribe to AIs to be a tad unrealistic (to put it mildly), so I don’t see what the problem is.

            What makes Blindsight such a great book, IMO, is not merely its scientific accuracy (which is considerable, especially compared to your average SF), but the fact that it somehow makes you experience, in a very rough way, what a being who possesses intelligence but not human consciousness would be like. The answer is, “it would be a lot more efficient, for one”.

            “Think of all the things that you most cherish,” — Blindsight says — “All of the things that make you who you are, at the very core. Everything that is quintessentially Human. Well, all of that stuff is just an evolutionary dead-end that is holding you back. You’ll never get anywhere unless you dump that baggage, like the appendix that it is, and we can totally help you do that.”

            The fact that Blindsight says this very convincingly is what makes the book so utterly terrifying, and, IMO, such a wonderful read.

          • Nita says:

            The book makes a very compelling argument that consciousness is not the same thing as intelligence or agency.

            Does anyone actually believe they’re the same thing? A common belief is that consciousness is either necessary for agency or an inevitable side-effect of it. The book explores the possibility that this belief is wrong, but that’s not the same thing as making an argument against it (unless you consider “I can imagine it” an argument).

            To be honest, I was more disturbed by the assumption that there are no perpendicular lines in nature — now that’s cheating!

          • FacelessCraven says:

            [Deleting a bunch of stuff that Nita and Bugmaster said more eloquently]

            The core of the book is whether intelligence requires sentience. I think he makes a good case that intelligence without sentience is at least plausible, but if you strongly disagree, I can see why the book wouldn’t work well for you.

            Regarding two specific plot points you raised:

            The invisibility trick seems entirely plausible to me, given that I’ve experienced an extremely dramatic example of inattentional blindness, and that the Scrambler in question had plausible access to a real-time brain scan of the victim.

            Likewise, the Vampires seem to me to make much better sense than you’re giving them credit for. The book explicitly laid out that AI was of very limited utility, and that the vampires were essentially mass-producible and eminently controllable and manipulable geniuses. There’s also a reasonable suspicion that the AIs are actually the ones pushing the Vampire proliferation, as a tool to better interface with/control humanity. As the Theseus AI says, “you don’t like taking orders from machines. Easier this way.”

            I also strongly suspect that the Vampire “takeover” doesn’t work the way you seem to be thinking it does, and in general your read on Vampires doesn’t match mine at all. My read is that they push the already teetering human society over the edge by taking harmful action when doing so is possible without detection. They do this without needing coordination, because destroying human society is in their individual interest. A world where humans are reduced to prey is better for them, so they make that world one action at a time. They aren’t conquerors, they’re slow-acting poison in the water supply.

            There’s also the other possibility, that they’re simply continuing to act as tools of the human elite, or the AIs. None of the three options is even mutually exclusive, so it could be any combination of the three. Watts doesn’t specify in detail, because that is what the sequel is for.

          • DrBeat says:

            The fact that Blindsight says this very convincingly is what makes the book so utterly terrifying, and, IMO, such a wonderful read.

            And what I am saying is that no, no it doesn’t say that very convincingly. Every single element that allows it to appear convincing is cheated in. The author, through a proxy, confronts the single most obvious argument against his theory, and gives it a handwave that is, at best, a quarter-step above “Well that doesn’t count because of reasons.”

            And don’t tell me it’s okay because LW-ers ascribe bullshit powers to AI as well — I’m the guy who called that out as cheating too, remember?

          • Zykrom says:

            “The book explores the possibility that this belief is wrong, but that’s not the same thing as making an argument against it (unless you consider “I can imagine it” an argument).”

            Being fair, this is about the best we can do. Or at least, the best I’ve encountered and understood.

            Before I read Blindsight, I assumed that consciousness and agency had to go together precisely because I couldn’t imagine it being any other way.

        • Wrong Species says:

          My favorite part:

          “The idea that humans possess inherent traits is known as “biological determinism” — the notion that traits we observe in ourselves are natural, products of our biology, not of the cultural and historical situation we live in.”

          Because believing that people have inherent traits is literally the same as believing that culture doesn’t matter at all.

          • Nornagest says:

            So humans are made of meat because of contingencies of the cultural and historical situation, now?


          • Peter says:

            Made of meat, historical contingencies – my vote on this goes to “possibly technically true, but I just said ‘technically’”:

            Consider: if humans had somehow avoided all of the wars and pointless status symbols and so forth for a millennium or two, we might all be uploaded into computers or robots by now.

          • Evolution is sort of historical, isn’t it?

          • RCF says:

            And then they say “anthropologists are near-ubiquitous in their assertion that that biological determinism is flagrantly false”. (Double “that” from the original) So nothing is biologically determined?

        • FacelessCraven says:

          @DRBeat – “And what I am saying is that no, no it doesn’t say that very convincingly.”

          Why do you think consciousness is integral to intelligence?

          Numerous creatures, many of them insects, exhibit complex social and behavioral systems, even simple technology. Ants appear to engage in farming. What makes you certain that this behavior has an upper bound on complexity without a conscious component?

          What do you think of the various citations Watts provides from the neuroscience field, particularly things like savantism, or the “zen” induced by tDCS? Every skill I’ve ever studied seems to involve ingraining responses so they come instinctively, i.e. without mental effort, without thought. What specifically does consciousness contribute to problem-solving and complex skills?

          • Nornagest says:

            I’m not DrBeat, but for me, the best argument against the book’s thesis was the exact complexity that Watts invokes to give his aliens their competitive edge. I haven’t got a clue what consciousness consists of, but evolution doesn’t spit out complex maladaptive systems for no good reason; the very fact that we evolved it is excellent evidence either that it’s adaptive or that it’s very simple and energetically lightweight.

            Sexual selection or path dependence would offer a way out here, but I don’t recall the former being mentioned, and the vampires disprove the latter in the book’s universe.

            (That being said, I don’t feel that the book meaningfully “cheated”, at least beyond the ordinary standards of science fiction.)

          • FacelessCraven says:

            Let me just take a moment to bask in the delight of actually finding a community where an appreciable percentage of the population have read (and formed opinions on!) Blindsight.

            You people are awesome.

          • Deiseach says:

            Intelligence need not depend on consciousness, I’ll give you (and Watts) that. But agency is a different matter.

            It’s in the vampires’ interests (individually) to reduce humans to prey? But if they are not conscious, if they react and don’t have long-term planning and foresight, if they are biological drives that see a human and go “dinner” and go for what will let them turn humans into “happy meals on legs”, what is the agency there? How can they be agents, if there is no communication, no co-ordination, and no sense of “this is beneficial for me” because there is no sense of “me” and “my interests”, just appetite and a form of intellect being driven by those appetites and instinctual automatic reactions to stimuli?

          • FacelessCraven says:

            @Deiseach – “But if they are not conscious, if they react and don’t have long-term planning and foresight…”

            I think the argument is that they DO have long-term planning and foresight, and that those things don’t require consciousness either. Not only that, but they do a pretty decent job of simulating consciousness to interact with humans; they can learn human language and carry on a conversation.

            “How can they be agents, if there is no communication, no co-ordination, and no sense of “this is beneficial for me” because there is no sense of “me” and “my interests”, just appetite and a form of intellect being driven by those appetites and instinctual automatic reactions to stimuli?”

            Just truncate the “me”. “This is beneficial for me” reduces down to “this is beneficial.” Removing consciousness actually universalizes the Vampires’ perspective. To the extent that vampires have similar value profiles (arguably a very great extent, since they have no consciousness to drive individualism), they are going to pursue mutually beneficial outcomes without needing coordination.

            @Nornagest – “the very fact that we evolved it is excellent evidence either that it’s adaptive or that it’s very simple and energetically lightweight.”

            His argument didn’t seem to be that it was “maladaptive” in anything but relative terms. And as Watts notes, there’s a growing amount of experimental evidence that consciousness interferes with skill rather than increasing it. Our brains seem to work better in a number of ways when they aren’t generating subjective experience, or are at least generating less of it. One of his implied endgames, I think, is that humans *turn off their own consciousness* in pursuit of power.

          • James Picone says:

            Are the vampires just p-zombies?

          • FacelessCraven says:

            @James Picone – “Are the vampires just p-zombies?”

            No no, they’re VAMPIRES.

            sorry, couldn’t help it. Yeah, pretty much, with the thesis of the book being that lack of consciousness makes them a whole lot smarter than baseline humanity.

            Judging from the excerpts, the sequel involves humans reverse-engineering the process to allow consciousness to be turned on and off selectively. Corps hire people to have their consciousness switched off for the duration of their contract, hence actual “zombies”.

    • satanistgoblin says:

      Did you think this to be a worthwhile article? Or did you link it out of sadism? Because it is pure idiocy. Yes, maybe aliens would be nicer than us. Or maybe nastier! So, since we have no idea yet, why take any risks?

    • RCF says:

      Wow, that’s quite a parade of fallacies. There’s the whole “ascribe the entirety of negative human attributes to failing to follow your ideology”. Then there’s also how they take the concern about the possibility of hostility and say that people are assuming that aliens would be hostile. Are these people so mentally incompetent that they are unable to distinguish between assuming something, versus bringing up the possibility of it? Or do they think their readers are too stupid to recognize the difference?

  32. Simon says:

    I’m gonna admit that I don’t read SSC as often as I used to, mostly because I don’t have as much free time as last year.

    One possible explanation for the lower hit count after February is that I noticed that you don’t write as many political blog posts nor as often as in January and before. Aren’t the political ones those that get the most links from elsewhere?

  33. zslastman says:

    So, looking back at the posts from that time, there is one minor thing that turns me off the blog a bit, exemplified by your announcement of a friend’s wedding and your plugging of MealSquares. That thing is the sense that this is a blog for rationalists, but rationalists in the sense of ‘the actual social clique of people living in the SF Bay Area calling themselves rationalists’. Stuck as I am on another continent, it sometimes makes me feel a bit foolish to be so invested in a community of people I will never meet. I think to myself, “Maybe I should be reading more mainstream philosophy and newspapers, so I can have conversations with actual people around me. Nobody here wants to hear about Moloch.” I don’t mean this as criticism, of course; you guys are totally right to be forming a community. It’s just not practical in most places.

    I would never have pinpointed ~Feb 23rd as a focal point of that though, so I’m maybe privileging the hypothesis a bit.

    • frogcurious says:

      Generalising all Scott’s readers from myself, it’s this.
      The post which said, among other things, “I’ll see some of you at Ruby and Miranda’s wedding,” was on Feb 20.

      It sat at the top of the blog for a week.

      I remember looking at that post and having a sense that this blog is not for me. Not as in “I don’t like it,” but as in “this blog is written for people who know who Ruby and Miranda are. It’s not for me.”

      And I definitely haven’t checked it as much since then. It wasn’t in any way a conscious, rational decision. I just didn’t feel the same pull. I only came back here today to ask my kambo question.

      I think people who are high on the nerd spectrum, so to speak, are particularly over-sensitive to a sense of rejection or exclusion in the spaces where they’ve invested their nerdiness. Part of the reason behind getting nerdy about stuff is often that it’s a socially safe place to invest your sense of self. I wonder if Scott inadvertently scared off a bunch of readers who had started to invest their nerdy selves in SSC.

      • Irenist says:

        I wasn’t put off by that post, but I do seem to recall thinking “Oh, I guess he’ll be too busy to post for a while” and checking SSC less often for a bit thereafter.

      • Anonymous says:

        I actually had a similar reaction but had forgotten.

    • Bugmaster says:

      Huh, that’s interesting. I came to this blog from LessWrong, so I was always under the impression that the word “rationalist” referred to a very specific community of people who live in San Francisco, care about mental biases, either donate to or are on the payroll of MIRI, are all dating each other, support cryonics, have weird dietary habits, are led by Eliezer Yudkowsky, are good writers, etc.

      Some of these aspects simply don’t concern me personally (living in San Francisco, dating each other); others I don’t care about in general (meal squares, being led by anyone); still others I either disagree with or find downright silly (AI risk, cryonics); but some aspects of the rationality community are either useful or simply interesting (mental biases, science writing); so, overall, I don’t think the community is totally irrelevant to those outside of its physical presence.

      • Mark says:

        I would love for there to be a community about “mental biases, science writing” without any of the less wrong silliness. Can someone start a kickstarter?

  34. US says:

    Others have touched upon this, but I see pretty much no way the drop-off in readership has anything to do with something you did. Are the wordpress stats the only stats you have of your readers? I’d feel very deprived of data if that was all I had to go on, and I don’t get anywhere near 1% of your traffic (…nor would I want to get that kind of attention…). If you don’t have much data to go on, one way to investigate further might be to compare the development of comments over time with that of the hits; if comments haven’t dropped off as well, what’s going on is not what you’re worried about (old-timers giving up on your blog).

    A substantial proportion of all blog hits are first-timers who arrive via search engines. I’m not surprised you can’t find any announcements about changes in search algorithms or similar (update: apparently I can’t read – you didn’t look for those, but rather for wordpress announcements…); I think tweaks to the algorithms are mostly made without public announcement, and it seems to me that search providers have a clear incentive to withhold information about many of the changes they make, as withholding such information makes their algorithms harder to game for people in that business.

    For what it’s worth I don’t observe any significant changes to the wordpress stats of my own blog around that time when looking at the data, but n is arguably way too small for this to lend much support to the view that it’s not a wordpress thing.

  35. zz says:

    It occurs to me that dead children is often not a large enough denomination: dead children, like birds, don’t scale well. I don’t feel much difference between 2k dead children (~$6M), 20k dead children (~$66M), and 200k dead children (~$666M).

    Thus (and this probably only really works for Americans), I propose adding the denomination Columbine massacres. The Columbine shooting killed 12 children. At $3340 / child, that works out to $40k. Roughly speaking, $200k is a Columbine massacre a day for a work week, $1M is a Columbine massacre a day for a month, $15M is a Columbine massacre a day for a year, and $1B is a Columbine massacre a day for your entire life (assuming the transhumanists and life-extension folks fail, which is a pretty strong assumption).

    Now $6M (Columbine massacre a day for half a year), $66M (Columbine massacre a day for 4 years), and $666M (Columbine massacre a day for half your life) are closer to feeling as far apart as they actually are.

    (nb, these values change if you include the adult or the shooters who died—and I think you should—but the result remains correct at the Fermi level, which is where all the original calculations happened in the first place.)
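    The conversion above can be sketched in a few lines of Python; the $3,340-per-child figure and the 12-death toll are the ones used in the comment, and the day counts are rounded:

```python
# Sketch of the "Columbine massacre" denomination described above.
# Figures from the comment: ~$3,340 to save one child's life, and 12
# children killed at Columbine (adults and shooters excluded, as in the
# original Fermi estimate).
COST_PER_CHILD = 3340
CHILDREN_PER_MASSACRE = 12
COST_PER_MASSACRE = COST_PER_CHILD * CHILDREN_PER_MASSACRE  # $40,080, i.e. ~$40k

def columbine_days(dollars):
    """Express a dollar amount as N days of one massacre per day."""
    return dollars / COST_PER_MASSACRE

for amount in (200_000, 1_000_000, 15_000_000, 1_000_000_000):
    print(f"${amount:,} ~ a massacre a day for {columbine_days(amount):,.0f} days")
```

    Running it reproduces the rough figures in the comment: $200k comes out to about 5 days (a work week), $1M to about 25, $15M to about 374 (a year), and $1B to roughly 25,000 days (a lifetime).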

    • RCF says:

      This comment is rather bizarre. Changing to Columbine adds just one order of magnitude. The real work is done by having it repeat over a long time. So why are you treating Columbine as being the central innovation? Also, Columbine was a rather politicized event. You might not want to bring up those associations.

  36. Godzillarissa says:

    So, I was on holiday in Scotland (via the Netherlands from Germany) a few weeks ago and I noticed something that I’d like to have a few opinions on.

    The thing is, as far as I can tell, there’s not really a market for fair-trade-anything in Germany. We kinda have an “organic” shelf every now and then, and little shops that sell local goods along with vegan food etc. But it’s really not a big thing for most people around here (Bavaria, if you’re interested).

    The minute we got off the plane in Amsterdam, though, EVERYTHING was fair-trade, organic, like really in your face (and twice as expensive). Even the airplane food was made with ‘local bread’ (whatever that means, when you’re on a plane). And it continued all through Glasgow, onto the Isle Of Arran (but that was to be expected, since it’s a small island and all).

    Anyway, question time:
    Is fair-trade, locally grown, organic food this huge thing Germany didn’t pick up on yet? Or is it just a “how can we get tourists to pay more?”-label that is applied in airports and other tourist-dense areas?

    • Emily says:

      I think there’s a general trend for this stuff in the UK. It’s possible that it’s more prevalent in touristy areas, but it’s definitely not confined to them. (I live in the southwest, and I’ve noticed it mainly in various bits of the south. Interesting to know it’s in full force in Scotland.)

    • James says:

      Yeah, I’d say it’s pretty common across Britain. Fairtrade varieties of certain fairtrade-y goods (chocolate, bananas, wine?) can be obtained in most supermarkets.

      I can’t speak for anywhere else. I’m surprised it’s so scarce in Germany. I wonder if somewhere like Berlin is any different to Bavaria on this axis.

    • Deiseach says:

      I find that interesting, since the Anthroposophist, Steiner Schools, biodynamic stuff I know about through my sister all originates in Germany, and the local health store here has products like Weleda and Dr Hauschka, so my impression is that naturopathy and things like homeopathic remedies are, if not “big”, at least not uncommon in Germany?

      Not to mention the Reinheitsgebot which was a selling point in advertising German beers over here!

      I would therefore have expected the ‘organic, fair-trade, all natural’ things to do well there, and you are saying that is not so?

      • Godzillarissa says:

        It might be that I’m just not as exposed to it, due to my (very nearly nonexistent) social circles. But since the organic (or “bio” in German) hype tapered off, it’s not a mainstream thing. It’s just kinda there, and some people care, but most don’t, as far as I can tell from my personal experience.

        Re “Reinheitsgebot”: I also noticed that beer in the UK is much more diverse than in Germany. Which makes the vast bulk taste… very unexpected, while there’s also really good stuff. I’m not sure the “Reinheitsgebot” isn’t to blame for German beer’s sameness.

    • Franz_Panzer says:

      I’m surprised to hear that this is not a thing in Germany, because it certainly is in Austria. Fruit, vegetables and dairy products made “aus biologischer Landwirtschaft” (“from organic farming”) and similar labels (which I would say are the equivalent of the English “organic”) are everywhere. When you buy meat, you can see which farmer in which part of Austria it comes from. Supermarkets, even the discounters, have fresh bread, often even baked in the store.

      Now, I don’t know how “in your face” the advertising for those kinds of products is in the UK, so maybe I can’t say whether I would find it over the top or not. But to hear that this is basically not noticeable in Germany is surprising to me.

      • Godzillarissa says:

        I’m sure the actual difference got blown out of proportion by the marketing (which is really In Your Face).

        So I guess it is “a thing” here, too. It’s just that when you buy organic/bio, you’re basically signalling “I’m special”, while in the UK I had a feeling it’s just what you do.

        Regarding “freshly baked” goods (and a bit off-topic):
        I just always assume that’s frozen and de-frosted/re-baked at the supermarket and they make no claims for it to be “local” or “baked right here”.

        • Cadie says:

          Not sure how it works elsewhere, but in American supermarkets, “freshly baked” usually means that the product arrived frozen, and it was thawed and baked at the store. I worked in a store bakery for a while and we’d set up trays of frozen unbaked (but already risen, shaped, etc.) products in a large refrigerator, and early the next morning we’d bake them and add icing if needed. So they really were BAKED freshly at the store, but not prepared there except for the last step of icing donuts.

          • Godzillarissa says:

            That’s probably how they do it here, too. In hindsight I made it sound a bit different, sorry for that :/

            Anyway, if that’s what people understand and buy then more power to them, I guess. I myself wouldn’t call that “freshly baked”, though, even if it’s technically true.

    • Jon Gunnarsson says:

      Fellow Bavarian here. I suspect this is just part of the general trend of Germany being quite thrifty when it comes to buying food. Which raises the question of why that tendency exists, and I have no idea why.

      • Deiseach says:

        Which raises the question of why that tendency exists, and I have no idea why.

        Don’t mention the war(s)? 🙂

        • Jon Gunnarsson says:

          I don’t quite follow. Are you saying that Germans are frugal when it comes to buying food because of food shortages during the World Wars?

          • Godzillarissa says:

            I don’t know about people in general, but the elderly here seem to be very much influenced by the shortages during and post WWII. Many won’t buy organic food if it’s more expensive than non-organic food.
            The mentality seems to be “We didn’t have that before and still we lived.”.

    • Douglas Knight says:

      Organic is a strategy of bundling competent labor with expensive products, but Germans are generally competent, so there is no room for this niche.

    • chaosmage says:

      As a German, I find fair-trade and organic stuff in my very unremarkable local supermarket. There are a number of niche shops, a couple of vegan restaurants and vegan fast food joints, and even a “vegan only” supermarket. Of course this is in Leipzig, a big-ish city. Hannover is a similar-sized city and also has things like an “organic only” food supermarket. Berlin has even more of this; I once read a funny newspaper article about a Berlin drug dealer who specializes in organic fair trade cocaine.

      But there is a sharp urban/rural divide. When I’m out in the country, food stores compete on nothing but price. Even eggs that aren’t from factory farming can be hard to find.

      So my guess is you’re from rural, rather than urban, Bavaria.

      • Godzillarissa says:

        Hm… I grew up in a city with a population of ~25k and moved to one of ~60k about 5 years ago. Both are tourist-dense, so that maybe cancels out the relatively low population a bit. I also visited Augsburg pretty frequently and Munich on occasion, but apart from fancy restaurants it’s pretty much the same everywhere, afaict.

        And while there are specialty stores and the organic shelf in the supermarket, I don’t feel like it’s much of a thing for the general population. Sure, there’s the fancy, upper middle class “I only eat organic” crowd and the hipster students and the “hippies”, but that’s about it.

  37. frogcurious says:

    I’m on immunosuppressants for IBD (azathioprine) and I want to take kambo*. Since they’re both active on the immune system, I’m cautious about it. Also, the azathioprine works; not 100%, but it is effective, and I definitely don’t want to mess that effect up. Since people often explain kambo as “stimulating the immune system”, I’m a bit worried that it would undo the effect of the immunosuppressant, though I suspect that explanation is probably too simplified to be useful in this case.

    Should I just ask a pharmacist? It seems like kambo has hundreds of different active compounds, which complicates the question. And it is not particularly well known.

    This commentariat seems like a good place to crowdsource information which I could use to calculate the risks. Anyone know anything about this subject? Scott?

    (*not endorsing this link but it seems as good as any other on this subject http://www.heartoftheinitiate.com/files/Kambo-Scientific-Research-Healing-Treatments.pdf)

    • Rowan says:

      I’ve never before heard of kambo, but I’m enthusiastic enough about seeing someone else with IBD in the rationalsphere that I feel I need to add something. I do have experience with worries about other drugs that affect the immune system while taking azathioprine. In my case that meant reading about melatonin one day, discovering it supposedly boosted the immune system, and immediately ceasing to take it. It was listed as having a drug interaction with azathioprine and other immunosuppressants, but I suspect that got listed on relevant websites based on nothing more than the assumption derived from “boosts the immune system”, and that this has only happened to melatonin and not kambo because the former is wayyy more well-known. I noticed no change, although I did get worse in subsequent months, which, now that I’m thinking about it, may actually have been because the melatonin was being helpful instead of counteracting the immunosuppressant. But now I’m on Humira, and that’s a thing that’s working that I don’t want to mess with, at least until I’ve been on it for a while and things have stabilised enough for me to notice the effects of other changes I make.

      I suggest, if there’s this little information out there, you try self-experimentation and see if you can go full Gwern on the problem.

  38. Fazathra says:

    I’m fairly unversed in moral philosophy, so this is an open-ended question, but is there any philosophically consistent basis for “rights”, like a right to life, etc.? Rights do not seem to fit very well into a utilitarian or virtue-ethical framework, and only really fit into deontology if you have rules like “you must respect people’s rights”, which is basically arbitrary. Why do people believe there are such things as “human rights” which must be followed? Is there any real argument for thinking rights exist beyond the bare assertion that they do in places like the UN’s Declaration of Human Rights?

    • Carinthium says:

      Off the top of my head (I’m pressed for time), rights make no sense within a utilitarian system.

      For a deontologist, saying people have a ‘right’ to a certain thing is another way of phrasing which behaviours should not be done to them without their consent, which I could accept if there were a good moral argument for it. With such an argument behind it, I see no reason not to call them rights. For a virtue ethicist, general rules of behaviour are still possible, so ‘rights’ could exist in a broad sense.

      However, in practice I agree that there is no reasonable case for the existence of human rights of any sort, or for that matter moral truth of any sort.

      • porridgebear says:

        There’s also a moral agent focused version: “rights” as moral claims about what it is properly ethical to secure by force.

        Roderick Long has some lectures that focus on this version.

    • Peter says:

      I’ve seen the phrase “patient-centred deontology” used to describe rights-oriented moral philosophies – e.g. http://plato.stanford.edu/entries/ethics-deontological/#PatCenDeoThe. Personally I have little time for this. Incidentally, I also have little time for preference utilitarianism – to me, these two seem to have the same problem in reverse; they seem to have their agents and patients in a muddle. Possibly I need to expand on that if there’s demand.

      Bentham was certainly keen on legal rights, so some forms of rights fit into utilitarian systems, and it’s not too much of a push to go for customary rights too. I forget what Mill’s exact views were but ISTR them being more in favour of rights than Bentham’s. If you consider rule utilitarian systems, then conceivably the optimal rules might take the form of rights, and contract(arian|ualist) systems (which can often be shoehorned in under “deontology” if you like to keep the deontology/consequentialism/virtue ethics trilemma going) are pretty similar IMO and the same applies.

      I had a thought about natural rights, which until recently I’d been dismissing as nonsense upon stilts, but I’m wondering whether the tendency of people to get angry when some sorts of bad stuff happen to them or to people/things they care about gives some naturalness to whatever customary/hypothetically-optimal/whatever rights your system might grant. This is partly based on re-interpreting “endowed by the Creator” as a reference to evolution. At this stage this is an idea to play with, nothing more.

      I tend to go for multi-layered moral philosophies; there’s plenty of room for rights in some of the middle or upper layers, I think, whatever your deep underpinnings.

      • Carinthium says:

        I’m curious about your beliefs about agents and patients, for what it’s worth. Above all, I’m curious about your reasoning for getting to your beliefs about agents and patients in the first place.

        I’m also curious about what you mean by a multi-layered moral philosophy.

        Minor note: on your idea of natural rights, I have no actual logical argument against it, but I’m pretty sure it would lead to conclusions unpleasant to modern Western culture. If you’re curious, I’ll elaborate.

        • Peter says:

          Agents and patients. Well, classical hedonic utilitarianism is very much patient-centred, it looks at various experiences and calls things that lead to the good ones “good” and the bad ones “bad”. I might also call it agent-patient.

          Deontology… well, the term covers a multitude of sins, including what I’d like to call “Irreducible List Deontologies”[1], but the ones worth taking seriously are of the Kantian family. These I see as agent-agent, or even moral agent-moral agent. Less about acting on a patient, and more about interacting with another agent, maybe hypothetical ones that think as you do in the place of the real ones you actually encounter, but still, agent-agent.

          There’s a conception of animals as being patients but not moral agents; Bentham famously advises protecting animal welfare, in a fairly natural and direct manner, whereas Kant has to go through some IMO unsatisfying contortions to do this. On the other hand hedonic utilitarianism has some issues with trustworthiness; there are ways of dealing with these but I’m sure that others might opine that they’re unsatisfying contortions.

          So I see preference utilitarianism as trying to make utilitarianism more agent-agenty, and patient-centred deontology (PCD) as trying to make deontology more agent-patienty. And I’m at a loss to explain why, but these feel… like cheap ersatzes, “ugly hybrids”.

          I said something about “unsatisfying contortions” – actually I’m a big fan of the idea that a moral philosophy specifies a small, concise idea of what fundamentally determines morality, and that other things flow from that – this is the “multi-layer” thing. Some people seem to get terribly upset when confronted with a moral system that doesn’t make their favorite issue a fundamental thing – it’s not enough for some people for something to be very important, it has to be intrinsically important – you can see I have little time for this way of thinking, I may be biased. Anyway, for me, I suppose I think that preference utilitarianism and PCD are trying to push the “missing” bit into the fundamental level, rather than letting it flow from it.

          [1] An arbitrary list of prescriptions and proscriptions, “just because”, possibly backed with something like Divine Command, often repeated loudly and in a moralizing tone of voice. Closely related to various “Objective List” philosophies that Parfit identifies. As you can see I have little time for this, and also it gives me an opportunity to take a cheap sideswipe at antireductionism too.

          • Troy says:

            If I wanted to play Devil’s Advocate, my response would be that a theory should be as simple as possible, but no simpler: and, as it turns out, the moral landscape is complicated.

            As it happens, though, I share your desire for a simple fundamental system. On “Objective List” deontology, I am reminded of this marvelous quote from Geach’s essay, “Good and Evil”:

            “We must allow in the first place that the question ‘Why should I?’ or ‘Why shouldn’t I?’ is a reasonable question, which calls for an answer, not for abusive remarks about the wickedness of asking; and I think that the only relevant answer is an appeal to something the questioner wants. Since Kant’s time people have supposed that there is another sort of relevant reply–an appeal not to inclination but to the Sense of Duty. Now indeed a man may be got by training into a state of mind in which ‘You must not’ is a sufficient answer to ‘Why shouldn’t I’?; in which, giving this answer to himself, or hearing it given by others, strikes him with a quite peculiar awe; in which, perhaps, he even thinks he must not ask why he must not. … Moral philosophers of the Objectivist school, like Sir David Ross, would call this ‘apprehension of one’s obligations’; it does not worry them that, but for God’s grace, this sort of training can make a man apprehend practically anything as his obligations.”

            I don’t agree with him that the answer to this question must ultimately appeal to something the questioner wants, but I share his (and your) dislike for Objective List views that don’t give any kind of deeper explanation for the content of the List.

          • Carinthium says:

            Why should preference utilitarianism or patient-centered deontology be bad simply because of what they do? I’ve been given to understand preference utilitarianism at least has good actual reasoning behind it.

            Preference utilitarianism is definitely concise, and is ‘small’ in the same sense act utilitarianism is. I don’t know patient-centered deontology, though.

            What about the possibility that the moral landscape (which is a set of intuitive moral ‘beliefs’) is ultimately self-contradictory or otherwise unable to logically ‘fit’?

            If so, shouldn’t we just discard morality altogether as useless?

          • Peter says:

            Troy – well, your Devil’s Advocate – the moral landscape is indeed complicated, one of the appeals of consequentialist systems is that the link between act (or rule or whatever) and consequences is mediated by the complicatedness of the world as it is, thus you can have a complicated surface morality underpinned by a small clean kernel.

            (Also, having done a chemistry degree, it’s remarkable how much a messy and complicated world of surface phenomena can be reconciled with a relatively clean and simple theory.)

            Carinthium: as I say I’m at a slight loss to say why I feel what I do. I think, searching for bad reasons for my position that I might be guilty of, I’m a bit of a rule utilitarianism partisan (more or less of the acceptance varieties that Parfit a) likes and b) thinks reconcilable with his preferred version of Kantianism). In particular, the various dilemmas that people came up with suggest that classic act utilitarianism involves biting quite a lot of bullets; I think intuitions conflict and you’re going to have to bite a few, but it’s worth keeping the bullet count down, and I’m impressed by the way (my preferred forms of – you can assume this from here on) rule utilitarianism help avoid a great many of those bullets. Also, given that act utilitarianism is self-effacing, I think that rule utilitarianism is what it effaces itself into, which I think is a point in favour of it, although others may disagree. Anyway, given this, I sort of see preference utilitarianism as being a bit redundant – it sort-of fixes up classic act utilitarianism in a roughly similar way, but doesn’t give me the “ooh, that’s neat” that rule utilitarianism does for me.

            Also, preference utilitarianism, for me, seems a little tainted by association with Singer. Yes, “Singer is anathema” is closer to being anathema to me than Singer is, but that doesn’t mean I have to like his more… controversial… views.

          • Carinthium says:

            To sum up: I disagree with the idea of appealing to intuitions as part of a moral theory in the first place.

            Why do you think we should do that much? Do you have good reasoning behind it?

    • I think you’re right that deontology is the form of reasoning from which rights would be most likely to philosophically flow, though people from other schools of thought may support them for practical reasons (e.g. they’re good at achieving morally good outcomes). If I remember correctly, both libertarian philosophy and Kantian reasoning can provide a basis for rights-based moral thinking, but someone who knows either in more detail may wish to correct me on that.

      In practice I imagine support for a rights-based approach in the real world is one third pragmatism, one third emotionally based, and one third theological in some form. I think in general most philosophers usually see rights as more of a simple label for deeper and more complex moral questions.

    • ADifferentAnonymous says:

      I’d say rights aren’t basic in consequentialism. They can still make sense as political constructs. To first order you might parse “I declare a universal right to X” as “I don’t think it’s ever good for a government to take away/not provide X”.

      ETA: Consider an analogy: “Crimes” can’t be a basic entity in utilitarianism either, but having a knowable set of laws with somewhat predictable application leads to more utility than trying to punish everyone who decreases utility.

    • blacktrance says:

      Rights are a product of morally ideal law. As long as your preferred ethical system produces some ideal set of laws, it has room for rights.

    • TomA says:

      There is also the evolutionary aspect as it relates to all memetic traits. If the adoption and implementation of a social rights regime is advantageous to the survive and thrive imperative, then it will persist simply because it works.

    • cypher says:

      You can view rights as a heuristic approach to ethics based on common human failure modes, designed for relatively fast resolution of ethical problems.

      “Right to life” (as a negative right), for example, covers all possible ways an agent might kill someone else, and classifies them all as wrong (until we add our additional layers later). It’s short and easy to remember, and people seem to like phrasing it in that way instead of a command (“don’t kill.”)

      Utilitarianisms only generate rights as an intermediate node or a government policy, however, rather than as something true in itself.

    • For a slightly different angle on this …

      I live in the SF bay area and was vaguely aware of the existence of the LW/Rationalist community, mostly via my elder son. But what got me interested enough to start coming to the third Saturday Palo Alto parties was this blog. It struck me that a fair number of the commenters were people it would be interesting to talk with, and many seemed to be part of a social network near me.

      Of course, discovering that the host of the third Saturday parties was a fellow Kipling fan didn’t hurt.

    • I think rights can be part of a rule utilitarian approach. “Don’t violate property rights,” for instance, is a rule that, if generally followed, is likely to result in higher utility than “violate property rights whenever you think doing so increases total utility,” given the imperfections of human judgement. Similarly for “don’t murder people.”

      Think of it in terms of conventional economic efficiency proofs. If something is worth more to you than to its present owner, you can buy it. Your feeling free to steal it only changes things if it’s worth less to you than to the present owner, and in addition creates the costs associated with guarding property and trying to defeat the defenses. That isn’t a rigorous utilitarian argument because willingness to pay is an imperfect measure of utility, for familiar reasons, not to mention transaction costs, but it might produce better outcomes than any alternative rule.

      • Shieldfoss says:

        I think rights can be part of a rule utilitarian approach. “Don’t violate property rights,” for instance, is a rule that, if generally followed, is likely to result in higher utility than “violate property rights whenever you think doing so increases total utility,” given the imperfections of human judgement. Similarly for “don’t murder people.”

        This is my approach as well – I tend to phrase it as “Rights do not exist in any meaningful fashion as such, but are very useful heuristics so long as moral decisions are made by fallible humanity.”

        Though I suddenly realize rights might still be around even after all the important decisions start being made by AI – so long as the AI cares about us and our concerns – because we care about rights.

    • David says:

      Other commenters have already touched on the idea, but our host actually has proposed a model of rights as heuristics which are so useful in general, and where our human biases make us terrible judges of when they aren’t useful, that it makes sense to conceive of them as if they were inviolable laws of the universe. See part 6 of the Consequentialism FAQ (though I don’t know how much of that he still endorses).

    • RCF says:

      It seems quite clear to me that rights are simply another way of stating moral rules. “People have a right to their belongings” means “It is morally wrong to steal”.

      • Troy says:

        Counterexample: it is morally wrong for you to never let your brother play with your new toy. But your brother does not have a right to your toy.

  39. Nestor says:

    Your node of the ancestor simulation got dialed down, you’ve probably drifted away from whoever the true focus of this world instance is.

    Quick, try to figure out who it could be! Who did you stop hanging out with around that date? Social relevance is just the canary in the coalmine, the cognitive effects will soon follow, and you’ll end up as a less resource intensive p-zombie.

  40. Harald K says:

    I have been thinking a lot about that Credence Calibration Game that Scott posted in a comment recently. I know it’s old news to all you long-time fans of Scott and the LW crowd, but it seems so cool.

    Unfortunately, the app is no longer available. Bent Spoon games, which made the game for Android, is apparently no more. It’s a bit odd, since I know apps from other defunct publishers are not automatically removed from Google Play.

    Anyway, does anyone know a way to get that app?

    • Anonymous says:

      You can find the app here: http://rationality.org/calibration/

      I found it incredibly frustrating, because my 70% was extremely overconfident, like 55%, but on the other hand my 60% was extremely underconfident, like 75%. Which, okay, is weird, but I should be able to correct for that. I mean, if I just hit 70% every time I think 60% and vice versa, that should reverse the scores. But no matter how hard I tried, I couldn’t get reasonable scores.

      I basically used 60% for “I have no idea but I have some inclination of a guess”, but then when I tried to correct these scores later, I would think “I only have a guess… 60%, but aha! I should do 70% instead.” But those would invariably turn out wrong. On the flip side, when I thought “70%, wait a minute, I’m probably overconfident, I should do 60% instead”, I would invariably answer correctly.
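A quick way to see this kind of pattern in your own answers is to bucket them by stated confidence and compare each bucket against its actual hit rate. A minimal sketch (the log data here is hypothetical, not taken from the actual game):

```python
from collections import defaultdict

def calibration(records):
    """Group (stated_confidence, was_correct) records by stated
    confidence and return the empirical accuracy for each bucket."""
    buckets = defaultdict(list)
    for conf, correct in records:
        buckets[conf].append(correct)
    return {conf: sum(hits) / len(hits) for conf, hits in sorted(buckets.items())}

# Hypothetical log: you said 60% on five questions and got four right,
# said 70% on five and got only two right.
log = [(0.6, True)] * 4 + [(0.6, False)] + [(0.7, True)] * 2 + [(0.7, False)] * 3
print(calibration(log))  # {0.6: 0.8, 0.7: 0.4}
```

With a log like this, “underconfident at 60%, overconfident at 70%” shows up directly as the bucket accuracies being on the wrong sides of their labels.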


      • Harald K says:

        No, as you can see, the Android link from that site is dead. I don’t have Windows or Mac at home, so I can’t play it.

        On the other hand, it wouldn’t be terribly difficult to write a version of it myself. He won’t mind, will he?

  41. Daniel Speyer says:

    Do you have HTTP referer data for your traffic? If the sudden drop is in one source, that will probably tell you what’s going on. If it’s in everything at once, it’s more likely to be a coincidence.

    Maybe Google improved their algorithm with regards to Pakistani milf porn, and that stream of misplaced traffic went away. My plurality credence is that it’s something that silly.

  42. Troy says:

    Two thoughts on Growth Mindset, now that I’ve finally read through all of Scott’s recent posts on the subject:

    (1) Scott expresses skepticism of the Very Controversial Position that “Belief in the importance of ability directly saps a child’s good qualities in some complicated psychological way. … It shifts children into a mode where they must protect their claim to genius at all costs, whether that requires lying, cheating, self-sabotaging, or just avoiding intellectual effort entirely.”

    While I share much of Scott’s skepticism about growth mindset, I don’t think that it’s true for me that the VCP “really doesn’t match my experience” or that “The people I know who are most interested in issues of innate ability don’t behave at all like Dweck’s subjects.” With respect to the latter, I suspect that the LW-sphere is not a representative sample of bright people who believe in the importance of ability. On the other hand, the VCP seems semi-plausible to me in another domain with lots of bright people who implicitly believe themselves to be very talented, namely academia. I think academics in my own field do often feel a need to “protect their claim to genius at all costs.” If I am to be honest, I think I feel this need myself in the way in which I present myself to my colleagues. I don’t think it significantly biases the conclusions that I come to, but I think it does make me less likely to admit when I don’t understand something or to ask for help when I need it. And I think my experience of the latter is quite common among academics.

    (2) Arguing for the other side: if I wanted to explain the impressive positive results in growth mindset studies that Scott mentioned in his earlier posts, it seems like the best bet would be experimenter bias of some sort: in particular, I’d bet that children could tell how the experimenters wanted them to perform and performed accordingly. It seems like it ought to be possible to have assistants run the studies who are not familiar with the theory being tested and just read to the children the speech given to them. (e.g., experimenters could pay college students to do this.) That’s what I’d like to see on some of these impressive studies of Dweck et al.

  43. Troy says:

    Here’s an economics question that has puzzled me for some time. Keynes predicted (in his essay “Economic Possibilities for Our Grandchildren”) that as people got richer, they would work less — e.g., move from 40-hour weeks to 20-hour weeks. Although Keynes was right that there would continue to be substantial economic growth, this prediction has not been borne out. Why?

    Some possibilities:
    (1) People value leisure less than Keynes thought, or value money (or something that comes with money, like status) more than he thought.
    (2) Employers have incentives to not hire workers who will work for less of the week, e.g., because of costs associated with hiring more workers and ensuring their productivity.
    (3) People would like to work less, but artificial incentives produced by government regulation stand in the way. For example, requiring health insurance for “full-time” jobs but not “part-time” jobs makes 40+ hour a week jobs more attractive than they would otherwise be.
    (4) People would like to work less, but social conventions stand in the way. The 40-hour workweek is a well-established social convention, and employers are reluctant to break it when that’s what employees expect, and employees are reluctant to ask for something different when that’s not what employers are explicitly offering.

    (1) is not true of my own case, but I may not be representative. Perhaps (2) is true in lower-wage jobs with high turnover, but is it true when companies are hiring, say, office workers who tend to stick around for 10 years? (3) seems plausible for countries with these kinds of regulations, but are all western countries like this? In Canada, for instance, health care is not tied to your employer, but 40+ hour work weeks are still the norm.

    If I had to bet I’d put my money on (4), but I’m interested in seeing what other people think.

    • Held In Escrow says:

      There are several good explanations for this, but the one I’m fond of is that we normalize new additions to our standards of living. It costs more to have a house with running water, internet, and a smart AC/heat system than not to, but nobody wants to live in a house without those if they can help it. Thus although we’re making more money, we’re also having to spend more money to gain the same amount of happiness that our forefathers had. The human psyche is gaseous, expanding to fit whatever container it is put in, and it takes a lot of effort to compress it back again.

      There are of course strong social expectations in place in regards to 40 hour work weeks, with much of that being built around looking professional rather than being efficient, so that plays somewhat of a role.

      • Troy says:

        I think this is certainly part of it, but it’s possible to live in a house with most modern conveniences (e.g., the ones you list) for much less money than what’s considered middle class income in the U.S. — I’d say $20,000/year for a married couple in cheaper parts of the country, based on my personal experience. However, when you add in frivolous spending (eating out for lunch each day rather than packing a sandwich) expenses add up quickly. And I suspect that many Americans value what I would see as frivolous purchases more than I do.

        • Held In Escrow says:

          I think that’s a core component of it; people don’t want to live in the cheaper parts of the country. They want to live in the urban or suburban centers, they want neighborhood pools and rec centers, they want a 401k and knowing that they’ll get paid every other week.

        What’s frivolous for one person is a basic human right to another. Hell, when I had a $10-an-hour job I’d bring a cheap lunch every day. Now that I have one that pays over double that, I eat out and would be loath to go back to making a sandwich for lunch in the morning.

          Our needs expand to meet our means.

    • John Schilling says:

      (2) is definitely true for high-skill, long-term workers because of overhead effects. Assume it takes six months of full-time (40 hr/wk) effort for a new employee to get up to speed on whatever task or project they are working. And then ten hours a week to keep track of what everyone else on the project is doing, maintain currency in their professional skills, keep up with the literature, and of course complete the mandatory sexual-harassment prevention training, ethics training, desert tortoise awareness training, etc.

      If I hire one person to work forty hours a week for five years under such conditions, I get 7,020 hours of productive work. If I hire two people to work twenty hours a week for five years, I get a total of 4,160 hours of productive work. That’s 40% less productivity for the same cost; I really need to hire 3.375 half-time workers to replace one full-time.

      And that assumes that the “keep track of what everyone else on the project is doing” overhead is constant. If I’ve got three-and-change times as many people working on the project, that’s more people everyone has to talk to in order to understand what is going on. Probably an extra layer of management. More desert-tortoise awareness training staff, because I can’t train each person half or a third as much.

      But if I can get my people to work 60 hours a week, that’s about 12,130 hours of productivity from one man over five years. I will about break even, even if I have to pay time-and-a-half for overtime – which I maybe don’t if the guy is a salaried employee. And I can start cutting down on overhead and run a generally leaner project team.

      Where labor is not a commodity, where specific skills and knowledge and connections matter, you want the right man for the job and you want that man on the job right up to the edge of burnout.

      And if you’re asking to work 20 hours a week so you have time to raise your children, play golf, or whatever, understand that you are not offering one-third of what the 60 hr/wk overachiever is providing; you’re offering maybe one-sixth. This will be reflected in the salary and benefits you can negotiate, but it may be camouflaged (e.g. half the nominal full-time salary for a lower-ranked position, no insurance or pension, no bonuses or promotions).
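
      The overhead arithmetic above can be sketched directly. This is a minimal illustration using only the comment’s assumed figures (six months of full-time effort to ramp up, 10 hrs/week of fixed overhead, a five-year tenure):

```python
# Sketch of the overhead model in the comment above (assumed figures:
# six months of full-time ramp-up, 10 hrs/week of fixed overhead,
# a five-year tenure).
WEEKS = 52 * 5            # five years
RAMP_UP_HOURS = 26 * 40   # six months of full-time effort = 1,040 hours
OVERHEAD = 10             # hrs/week of meetings, training, etc.

def productive_hours(hours_per_week):
    """Productive hours one worker delivers over the five years."""
    ramp_weeks = RAMP_UP_HOURS / hours_per_week  # weeks spent getting up to speed
    return (WEEKS - ramp_weeks) * (hours_per_week - OVERHEAD)

full_time = productive_hours(40)   # 7,020 hours
half_time = productive_hours(20)   # 2,080 hours each, so 4,160 for two
overtime = productive_hours(60)    # ~12,133 hours ("about 12,130")
ratio = full_time / half_time      # 3.375 half-timers to replace one full-timer
```

      The half-timer loses twice: the ramp-up eats twice as many calendar weeks, and the fixed overhead eats half of each remaining week instead of a quarter.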

      • Troy says:

        Good points. I think that I would marginally prefer to make $20,000/year with a 20-hr workweek than $120,000/year with a 60-hr workweek (other things equal — e.g., I don’t really love my job), because I value leisure time much more than money after I’ve got enough money for a certain basic level of comfort. However, my preferences are probably atypical.

    • Jon Gunnarsson says:

      I think it’s a combination of (1) and (2). Since Keynes’s days, work in the developed world has become more complicated (less manual labour, more white collar labour), which means that the costs of education and training are higher. And since these are fixed costs, this development would tend to drive up the number of hours worked.

    • Douglas Knight says:

      When Keynes made that prediction, the poor worked more hours than the rich. Today, the rich work more hours than the poor.

        • Jon Gunnarsson says:

          The article you cited actually agrees that the rich today work more hours than the poor, so I don’t see how it is a counterpoint to what Douglas Knight wrote.

      • Douglas Knight says:

        Sorry, I didn’t mean to be cryptic. I meant to preface the claim with “Keep in mind that…” This fact does not in any way address the question. However, almost everything that people say about this topic is nonsense and can be filtered out by knowing this.

    • jaimeastorga2000 says:

      Zero-sum competitions and/or monopolies, such as land and housing, credentialed education and licensing, status signals, and women, eat virtually any surplus above what you need to survive. Several really smart people have come to this conclusion, such as Eliezer Yudkowsky, Michael Vassar, and Vladimir M. Thus, most of the gains of economic growth go to people who are positioned to capture economic rents, and most people benefit merely in the sense that they can afford to buy slightly nicer stuff with their meager leftovers (such as CD players instead of gramophones).

      • Any thoughts about men as a limited resource for women?

        Women spend quite a bit on being attractive.

        • jaimeastorga2000 says:

          Probably the biggest expense for most women looking for a man these days is going to college, where they can meet a variety of high-quality potential mates. Some women even attend university for this explicit purpose, often referred to as getting an MRS degree. Less common but also expensive is cosmetic surgery such as liposuction, breast augmentation, and face lifting.

          I don’t think clothing counts. As long as you don’t care about brand names or following the latest fashion, attractive clothing is quite cheap compared to the above expenses, and most of the obsession over designers and fashions seems to me more like a status competition with other women than an attempt to attract men; I doubt most men can even tell the difference.

          • FacelessCraven says:

            Perhaps the status contest between women has serious impacts on the ability to secure the limited pool of men?

          • Cadie says:

            At least at the middle-class level, if you’re in at least okay shape and know your body and personal style, you can dress yourself well *extremely* cheaply. Just be picky about the fit. Careful accessorizing and shopping at second-hand stores can get you clothes that are just different enough from what everyone else is wearing that you look even more fashionable because it’s subtly different but still similar and it looks good.

            Perhaps the rich are more attuned to designers and the small details that other people miss and that wouldn’t work. I don’t know. I just know that a $2.99 basic knit shirt from Thrift City and $4.99 jeans look about the same as spending $75 on a similar outfit at Ann Taylor or somewhere, except I’m not limited to this season’s colors which 8 times out of 10 don’t look good with my skin tone anyway, and I have more money left over to buy better shoes or the perfect earrings.

      • Cauê says:

        “slightly nicer stuff” doesn’t come close to doing it justice.

        Cars, televisions, computers/internet, telephones/cellphones/smartphones, these things are lifechangers (I’m sure we can come up with other examples). And all the small everyday luxuries we don’t usually think about are cumulatively important as well.

        The CD/gramophone looks cherrypicked as an example of incremental development of the same basic thing, but even then, how many people have access to mp3 players today compared with gramophones then?

      • Irenist says:

        Anecdotally, zero-sum competition is a huge factor in the U.S. because of bidding wars for housing in desirable public school districts. I grew up in a poor area, and as an adult I was once perfectly happy living alone in an unfurnished bedsit with no private bathroom (since I had a library card to keep my floor piled high with stacks of books), but now that I’m a parent I’d be uncomfortable raising my child in similar conditions, mostly because I’d be wary of our likely neighbors in an area as impoverished as the one where I was raised, or of sending her to the schools there. But to buy into a good school district, my wife and I had to buy more house than I would have wanted otherwise. I suspect this is far from uncommon. I think Elizabeth Warren’s “Two Income Trap” idea is partly about this: the gains of a second working spouse get wasted on school-district competition.

      • ADifferentAnonymous says:

        To be clear, it looks like EY and Vassar are talking about extraction of economic rent by monopolies, while Vladimir M is talking about zero-sum competitions, rather than all three of those authorities agreeing it’s a mix of both factors.

    • Quixote says:

      I’d bet on 4. I work at a large corporation; we’ve discussed this before and would like to move in the direction of other kinds of work arrangements, since the research shows they are more productive and boost retention. But it’s hard to shift the existing equilibrium. It’s really hard to change culture and even harder to change cultural backgrounds.

      • Troy says:

        Thanks, that’s interesting. It’s a good counterpoint to John Schilling above, too: there are benefits as well as costs to having more workers work fewer hours per week (in terms of productivity, retention, etc.). I don’t know what the optimal balance is, but I bet it’s less than 40 hours a week, at least for some lines of work.

        • Quixote says:

          Just for a bit more context, we would be looking to do something more like 60->45 rather than 40->20. So some of John’s points about fixed costs and the fixed ‘cost’ of training/learning periods apply differently.

          The trade-offs work differently for different people. At 80 hours a week almost no one who takes maternity leave comes back and works at their prior level. At 60 hours you still lose over half. If you get that down to 40, you keep more than 75% of people, so over time you can have a much better retention rate and retain internal knowledge.

          On the other hand, a lot of single folks would accept a greater than 20% pay cut for working 4 days a week instead of 5, on the theory of what’s the point of having money if you never have the free time to enjoy spending it.

          // all numbers in this post rounded to nearest pleasing sounding attractor

    • Doctor Mist says:

      I vote for (3), and would add that there is both a present-effect and a future-effect. Seeing what chaos the government is willing to inject into the economy makes it much harder to achieve any sense of security about how much money I “need”.

      I am a few days away from retirement, and by sensible computations could have retired some time ago. But if I have to include “What if the government decides to confiscate or inflate away half of what I have saved?” it’s a bit harder.

      Note also that (3) impacts (2). Part of the overhead of having an employee is imposed by the government, in the form of obstacles to firing, required benefits, etc.

    • Matt C says:

      Are you sure we’re not working a lot less than we did in Keynes’s time?

      People spend a lot more time in school than they used to, and most people live long enough to have a retirement now. Six more years of schooling and ten years of retirement (made up numbers) is a sizeable chunk of life not working compared to a guy in 1940.

      Also there are more people on disability, on unemployment, on the dole/welfare, etc.

      Women started doing paid work more since 1940, but if you’re counting hours worked we’d want to compare to the unpaid work women had been doing. Don’t know how that shakes out.

      I looked at https://stats.oecd.org/ and running the numbers back to 1950 shows a decline in hours worked for France, Sweden, and USA (these countries ran all the way back to 1950). Other countries show declines too. The stat I am looking at is labeled “Average annual hours actually worked per worker” so I do not think these declines are talking about the decrease in labor force participation mentioned above.

      It looks to me like Keynes was at least partly correct and we are working quite a bit less than we used to. I think the rest of the difference is mostly your 1). Most people don’t really mind working a 40 hour week and they like nice cars and central heat and iPhones.

      • Not evidence for Americans in general, but it does seem like popular culture is a lot more time-consuming.

        • Matt C says:

          I don’t understand what you mean. Are you saying today’s leisure activities are more time consuming than leisure used to be?

          • It may just be a matter of where I hang out, but it seems as though people don’t just have more media (including books) to consume, they’re consuming more of it and knowing more about the details.

    • Do you have data showing that people are not, on average, working fewer hours a year now than they were working when Keynes made that prediction? It wouldn’t be my guess.

      • Troy says:

        I thought this was the conventional wisdom on Keynes’s essay today, but I do not now remember where in particular I’ve read this.

        If we do work less now than we used to, I am confident that we don’t work as much less as Keynes thought; he thought that by now most of us would be working 15-hour weeks.

        • Suppose, what strikes me as more likely, that people are working fewer hours, but not nearly as many fewer as Keynes expected. Three possibilities occur to me:

          1. Jobs, on average, have become substantially less unpleasant over the past eighty or ninety years, so lower marginal utility of income with higher incomes is balanced, at least in part, by lower marginal disutility of labor.

          2. Keynes overestimated how rapidly MUI falls as income increases. That fits my more general impression that most people believe that, if their income doubled, they would have everything they wanted, and additional expenditure would be mainly display. The attitude makes sense, since people have little reason to think about ways of spending money that obviously make no sense at their current income. But when it does double … .

          3. A different version of 2, less consistent with the usual economic approach. The individual utility function actually resets as a result of a change in income. Someone who was mildly happy at $20,000 a year has an increase over time to $30,000. For a while he is very happy, then he gets used to it, and his utility slides back to what it used to be—and requires another increase to bring it back up. Which will again only be temporary.

          For an extreme example, a friend who spent time living with a pretty isolated Indian tribe in (I think) Central America—she thought she was only the second non-native speaker of their language—reported that they didn’t seem significantly less happy than the people she knew back in the U.S.

    • someone says:

      The somewhat famous “Bullshit Jobs” essay by David Graeber votes for #4, with the variation that lots of people already work less; we just have to stretch it out to 40h because of #4.
      That vague culture thing is also my favourite.

      Edit: The data I internalized (I’m too lazy to look for sources now) is this: agricultural society had somewhat fewer work hours than today, but with great variation over the year/situation; industrialisation brought 60-70h workweeks, from which workers fought gradually downward, combined with a shift to white-collar work, i.e. work people needed to be in good shape for and had to be competed over. The question is why the decline mysteriously stopped at 40h.

      • Held In Escrow says:

        An argument with no actual backing, and the only evidence he puts forth is factually wrong (unlimited supply of jobs for corporate lawyers? I had to check to make sure that wasn’t written decades ago). Accusations of an Illuminati-esque ruling class trying to prevent a popular revolution through meaningless work? I half expect Scooby-Doo and the gang to pull off his mask and show that it was Old Man Marx all along!

        • someone says:

          You are right, of course. I brought it up because it corresponds well with the anecdotal evidence I have from consulting and a law firm. These are kinds of work where you cannot infer the outcome from the number of hours put in. Still, everyone is busy signalling that they stay so very long at the office: look how hard they work. Which leads to things being done just to fill the hours. You know a legal argument that’s kind of bullshit? Doesn’t matter, we’ll add it, just because we can.
          I was hoping someone could bring a little flesh to that skeleton of an argument.

    • ADifferentAnonymous says:

      Robin Hanson thinks short hours signal bad things.

      “Once as a young man working at Lockheed, I decided to switch from working 40 to 30 hours per week, to spend more time on my independent research. My rate of advancement in the company didn’t just slow by 25%, it stopped completely — I was seen as not serious about my job. This suggests a signaling explanation for retirement: spreading our end-of-life play across the rest of our life would make us look less serious and productive as workers.”

  44. FedeV says:

    Scott, if you’d like some help poking around with the data, I can try to fit some kind of piece-wise linear model (http://en.wikipedia.org/wiki/Multivariate_adaptive_regression_splines) to see if I can identify the discontinuity or if it’s just random noise.

    Alternatively, you could use Google Analytics and check to see which part of your traffic fell the hardest. Facebook referrals? Google search results? I have absolutely zero knowledge of web analytics though.
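
    A minimal stand-in for the piecewise idea: scan every candidate breakpoint, fit a separate mean on each side, and keep the split that minimizes squared error. (Not MARS proper, and the daily hit counts below are made up for illustration.)

```python
# One-knot piecewise-constant fit: find the day where splitting the
# series into two flat segments best explains the data. A real drop
# in readership shows up as a sharp minimum at the drop date.

def best_breakpoint(y):
    """Return (index, error) of the split minimizing total squared error."""
    def sse(seg):
        mean = sum(seg) / len(seg)
        return sum((v - mean) ** 2 for v in seg)
    k = min(range(1, len(y)), key=lambda i: sse(y[:i]) + sse(y[i:]))
    return k, sse(y[:k]) + sse(y[k:])

# Hypothetical daily hits: steady ~3,000/day, then dropping to ~2,000/day.
hits = [3000, 3100, 2950, 3050, 3000, 2000, 2050, 1950, 2000, 2100]
k, err = best_breakpoint(hits)  # k == 5: the drop lands between day 4 and day 5
```

    If the minimum-error split is barely better than neighboring splits, the “discontinuity” is probably just noise; a deep, isolated minimum at February 20th would support a real, one-time event.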

  45. Navin Kumar says:

    I have a constrained effective altruism problem.

    Suppose you want to donate to Nepali relief efforts (for reasons of solidarity) – what’s the best charity to donate to?

    Thanks in advance.

  46. Helge Bjerck says:

    I went to a liberal arts college, but didn’t get a degree in any of the liberal arts, going for biology instead. In any case, this means that I was constantly barraged by people talking about how great the liberal arts are. One of the main claims is that a liberal arts degree teaches you how to think. I feel like this claim is taken for granted, even though it seems to have easily testable predictions (e.g. those with liberal arts degrees have greater critical thinking skills than those who do not; performance in liberal arts courses correlates with critical thinking skills).
    Has anybody ever tested this? Basically, I’m too lazy/snobby to read the psychological/sociological/pedagogical literature I would need to read in order to answer this question myself, and I’m reaching out to y’all to do my dirty work for me.

    • pneumatik says:

      I’m marginally less lazy than you. Short answer, no, it doesn’t. First link I could find is http://www.themarysue.com/college-broken-critical-thinking/ but I remember seeing this information elsewhere a few years ago.

      • LTP says:

        I believe certain majors have been shown to increase critical thinking abilities in other studies, namely philosophy and math, though I heard that second hand and don’t have a source for it.

        • Protagoras says:

          Philosophers are slightly less likely to make a narrow range of basic logical mistakes which are otherwise extremely common. They seem as prone to other forms of irrationality as anyone else, sadly. Similarly, extensive education in statistics seems to slightly reduce people’s susceptibility to some of the mistakes in dealing with probabilities that people commonly commit. In both cases, extensive study of the fields in question seems to be required before the effects become noticeable; one class as an undergrad doesn’t make a discernible difference. Reducing cognitive biases seems to be extremely difficult.

  47. yli says:

    Anyone else having problems with the commenting system?

    When I try to post a comment, about half the time it’ll never appear. If I try to post it again, though, it’ll say that it’s a duplicate comment. Maybe it’s going to a moderation queue? Input appreciated from anyone who knows what’s going on.

    When this happens, neither Firefox nor Chrome will work. Turning on a VPN so I’m coming from a different IP address doesn’t work either. Trying with my Ubuntu computer instead of my Win 7 one also doesn’t work. The only thing that (sometimes) works is waiting and trying again later. In the last open thread someone reported this, but said it went away when they stopped using hyperlinks – that doesn’t help for me.

    I hope this comment goes through. EDIT: I’m lucky today.

  48. yli says:

    I would guess that nothing happened to your readership and it’s just that the way the data collection works changed or something. In any case, the fact that the commenters can invent a bunch of rationalizations for why a drop would have happened recently is almost irrelevant. If you posted that graph on any previous open thread, people would have had no trouble coming up with other rationalizations for why a drop should have happened *then*. If you’d said the readership had suddenly increased, ditto. If you’d said the readership had stayed weirdly constant, same thing.

    “Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.”

    … Maybe this even was some kind of test of willingness to rationalize arbitrary claims. If so, I hope I pass. 🙂

    • 27chaos says:

      Given that we trust Scott to not randomly lie, it makes sense to look for explanations. If you approach every problem with the mindset that one of the stated premises is wrong, then you’ll be fooled less often by problems with wrong premises, but you’ll have a harder time answering problems which are correctly posed.

      If I asked you “X*2=4, what is X?” and you responded “maybe 4 is a lie, and actually 8 is supposed to be there, or 204398435784, or 6”, that answer would not make you a clever rationalist.

      • US says:

        (This was meant to be a reply to ‘yli’, not ’27chaos’).

        “I would guess that nothing happened to your readership and it’s just that the way the data collection works changed or something. In any case, the fact that the commenters can invent a bunch of rationalizations for why a drop would have happened recently is almost irrelevant.”

        He mentions your first option himself (“maybe WordPress changed its method of calculating statistics”) but also notes that this seems unlikely to be the cause, as they haven’t made any announcements about it (“I can’t find any evidence of this on the WordPress webpage”) — and you would expect them to announce changes with such a large effect — which is probably why people are willing to speculate about other causes. I also have a WordPress account and note in the comments that I can’t see a similar effect in my stats, making this explanation even less likely (although not much less likely, as n is small in my case – but it is independent support for a ‘not WordPress’ explanation).

        Multiple readers mention changes in a high-impact algorithm, e.g. Google’s, as a likely explanation, and to the extent that it’s not just noise I consider this explanation far more likely than changes in stats management by WordPress. Such an explanation is really hard to test, because Google will never tell you that they made these important changes and that they had these consequences, but an explanation being hard to test does not mean the explanation is wrong.

        I’m sure if he’d made up random data and asked us to explain what was going on, he’d also get a lot of explanations involving various factors, some more plausible than others. But that doesn’t mean something real isn’t going on in the data in this case, nor that the right explanation has not been provided by some of the contributors (given the abruptness of the change, conditional on it being a real effect and not just noise a monocausal explanation to me seems more likely). I’m pretty sure I’ve seen less impressive effect sizes in regression discontinuity designs applied in published papers.

    • Julie K says:

      I would have said that your strength as a fiction-writer is your ability to write something that readers find *less* confusing than reality. (i.e., no plot holes) 🙂

    • RCF says:

      Perhaps the numbers are normal, but Scott has developed anosognosia that causes him to think they’re off.

  49. Corey says:

    I’m kind of annoyed that Scott McGreal’s comment got elevated to comment of the week. McGreal correctly picks up on the dodginess of the apparent motivation for the collapsed analysis, but from my (Bayesian) point of view, his (and Scott Alexander’s) focus on statistical significance blinds him to the actual import of the data. To wit: the justification on offer for collapsing the categories may be wrong, but the collapsed analysis results reported in the paper are reasonable.

    (Of course, if Janet Johnson’s assessment is correct, the whole question of statistical methodology is moot anyway.)

  50. Cauê says:

    I’d like to push my luck and once again ask a question of the smart religious people in SSC, as I’ve done a couple of times before, to have a better, non-straw understanding of religion. Again I promise not to argue about it, and would ask other people to try not to rehearse the same old arguments again.

    So… the Bible. It’s not at all clear to me what place it occupies in “sophisticated theology” today. I’m not even sure what questions to ask, so I’ll try to make it open-ended: How important is it? What’s important about it? How do you see those problems that we atheists love to bring up (regarding science, morality, consistency)? Do the answers change significantly depending on which books we’re talking about (or, e.g., Old vs. New Testaments)?

    Thanks in advance for any responses.

    • Troy says:

      Okay, I’ll give it a shot: the Bible is important because it is a record of God’s progressive self-revelation in history, which began in the Old Testament but culminated in the incarnation. However, the Bible itself is not God’s revelation (a point on which many Christians are confused); it is a record of revelation. It is not inerrant, and should be read and understood in the light of (a) God’s primary revelation, Jesus Christ, and (b) God’s Church, which was responsible for codifying the Biblical canon.

      Beyond that, it’s very largely contextual. Books of the Bible are written in different genres with different aims and require different approaches for interpretation. You shouldn’t take Psalm 137 (“Blessed shall he be who takes your little ones and dashes them against the rock”) as giving you moral advice, but you should take it as a heartfelt expression of the depths of human emotion and the pain of the Jews in exile. Some books of the Bible are clearly not historical, and likely not intended as such (Jonah), others clearly are (the Gospels).

      The most difficult cases of Biblical interpretation, it seems to me, arise with respect to passages that appear to be presented as historical but impute to God morally problematic qualities, in particular the conquest narratives of the Old Testament. There are several interpretative strategies available here. One is to deny the historicity of these parts of the OT. The passages in question may still have value in, say, giving us a perspective on how the ancient Hebrews viewed God (inasmuch as they are ascribing to him these words and actions), or in serving as allegory for spiritual struggles (as many early Christian interpreters thought). Another strategy is to accept the historicity of these passages, and explain God’s apparently morally problematic commands by holding that God’s revelation is tailored to the culture and attitudes of the time.

      • “However, the Bible itself is not God’s revelation (a point on which many Christians are confused); it is a record of revelation. It is not inerrant, and should be read and understood in the light of (a) God’s primary revelation, Jesus Christ, and (b) God’s Church, which was responsible for codifying the Biblical canon.”

        Out of curiosity, how standard is that point of view? Is it part of the “official” teachings of a particular denomination of Christianity? I mean, you say that some Christians are confused about this, but I’m not sure why it wouldn’t simply count as having a “legitimate” theological difference of opinion.

        • Lutherans, Catholics, Orthodox, and probably Anglicans would sign on to that statement. The low-church Protestants would not; the Biblicism of those groups is such that they assert that the Bible itself, rather than the person of Christ, is the definitive revelation of God. I have even heard in some places that “the Bible is Jesus”, a view which most other groups regard as Bibliolatry.

          You might regard this as a “legitimate theological difference” — I think that the Baptists and other Protestants do — but partly because of issues like this, you won’t find very many Christian rationalists or nerds who identify with low-church Protestantism. So if you’re asking this crowd, you’re going to get overwhelmingly high-church answers. My experience with Protestant pastors leads me to think that most people with formal theological training would also agree with the statement above, with varying degrees of discomfort, but I’m not very confident that this would hold over a larger sample size.

          • Very interesting (to both you and Troy). Thanks!

          • FacelessCraven says:

            Low-Church-Protestant analogue here. We’d disagree with the Bible being errant per se; we shift the error over to the reader’s interpretation rather than the text itself. It seems to me that both approaches are intended to keep motivated interpretation from trumping reason in a theological debate. From our perspective, though, we’re equally worried about syncretism. Once you’ve agreed that the text itself isn’t infallible, it seems to me that fallibility becomes a fully-general answer to any conflict between the faith and the world. From a High Church perspective, how do you decide which parts of the bible are in error and which aren’t?

          • Troy says:

            thepenforests: I advocate a “big tent” approach to who to count as Christian, so I wouldn’t say that Biblical inerrantists aren’t Christians or anything like that. To that extent it’s a “legitimate theological disagreement.” However, I would agree with Mai La Dreapta that inerrantism is very much a modern Protestant concept, and that it can (but certainly does not always) come close to idolatry in placing the Bible above Christ.

          • Troy says:

            FacelessCraven: you ask, From a High Church perspective, how do you decide which parts of the bible are in error and which aren’t?

            The standard High Church answer is that you rely on the Church. I’m not entirely satisfied with that answer myself, inasmuch as it seems to presuppose ecclesial infallibility, which I don’t accept.

            My own view is that the answer to this question is (trivially) the same as the answer to how you decide what parts of any putative source of information are or are not in error: you figure out what your evidence supports about the reliability of that source on this instance.

            Giving content to that is a highly non-trivial task here just as in any other area of life, but here’s a sketch: we have extremely strong evidence, on ordinary historical grounds, to think that the New Testament books, at least, are generally reliable historical documents. From the NT we learn about Jesus, the founder of our faith and head of the Church, and because of the miracles that he and his disciples performed have strong reason to think that he was God incarnate. We also learn in the NT and other ancient sources about the formation of the Church, and what we learn gives us reason to trust the (especially early) Church as a generally (though not infallibly) reliable source. Inasmuch as the Bible was and is endorsed by the Church, this gives us additional reason to trust the Bible.

            Given that framework, that the Bible records some event as history or a certain Biblical author (say Paul) endorses some rule is strong evidence that that event occurred or that that rule is morally binding on us, but it is not infallible evidence, and should be weighed against both Church teaching and the teachings of Jesus. And Jesus is, I think, an infallible source (although our access to his teachings is fallible).

            To take my example from upthread: I think it’s fairly clear (though not indisputable) that Jesus taught pacifism and that the early Church was almost uniformly pacifist. Inasmuch as the conquest narratives of the OT portray a God that conflicts with this ethic, this gives us reason to doubt the accuracy of those narratives.

          • FacelessCraven says:

            @Troy – I’m rather embarrassed that I didn’t anticipate appeal to the authority of the Church itself. That seems obvious in retrospect. The rest of your explanation makes good sense, and sounds pretty similar to how we do things, with only a few variations and changes in emphasis.

            As for the OT conquering accounts, it seems to me that most people are willing to accept “the ends justify the means” logic, even when it’s utterly fallible humans doing the calculation. If we’re willing to accept dust specks versus torture, who’s to say that the sim programmer isn’t right to take similarly callous actions for what he can see as entirely sufficient ends?

          • Deiseach says:

            I’m tempted to say that a variety of Protestantism (not necessarily low-church but definitely of the Reformed strain) elevates the Epistles of St Paul above the Gospels; the amount of to-ing and fro-ing over, for instance, the concept of “justification” as worked out by Paul in these documents, and the corresponding lack of appeal to the words of Jesus in the Gospels, continually strikes me 🙂

          • We’d disagree with the Bible being errant per se;

            Well so would I, so as far as that goes we’re in agreement. I would state that the Scriptures are inerrant in everything that they intend to teach, but discerning what the Scriptures intend to teach in a particular passage requires wisdom guided by the Tradition.

            All groups do this to some extent. No one thinks that Genesis 30:37-43 is infallible advice about animal husbandry, for example: we all agree that that’s not the point of the passage. The same thing can be said for many other passages.

          • Troy says:

            I’m tempted to say that a variety of Protestantism (not necessarily low-church but definitely of the Reformed strain) elevates the Epistles of St Paul above the Gospels; the amount of to-ing and fro-ing over, for instance, the concept of “justification” as worked out by Paul in these documents, and the corresponding lack of appeal to the words of Jesus in the Gospels, continually strikes me

            Sounds plausible. I would add that it’s certain parts of Paul read through a broadly Augustinian lens.

            If we’re willing to accept dust specks versus torture, who’s to say that the sim programmer isn’t right to take similarly callous actions for what he can see as entirely sufficient ends?

            Well, yes, if you’re a consequentialist then any kind of action can be justified if the circumstances are right. Sounds like a good reason to not be a consequentialist. 🙂

    • FacelessCraven says:

      For reference, I was raised Christian, turned Atheist for a decade, and then rejoined Christianity. I’m pretty sure I don’t count as a “rationalist”; I haven’t read through all the sequences yet, but what I have read I greatly enjoy. I would like to think that I consistently attempt rationality, at least. With the caveats out of the way…

      We Christians are proceeding from the axiom that the world we live in is a sim created and maintained for our enjoyment and benefit, because the sim’s programmer values our existence. The Bible is seen as communication from the sim programmer, made in good faith and with perfect competency, but necessarily constrained by the values of the creator and hence the core rules of the sim (i.e., respect for free will, no deterministic brain-rewriting). The purpose of this communication is to explain the sim, its programmer and his values, so that we can effectively cooperate with him in the process of leaving the sim for eventual instantiation into baseline reality.

      In my experience, popular Atheist critique of the Bible is usually a pretty close analogue to the Daily Show’s critique of conservative thought. Most of it seems to revolve around shallow gotchas, straw-manning and sneers rather than engagement on the actual issues. This isn’t terribly surprising, given how “popular” communication in general works. Most arguments about morality and consistency I’ve encountered seemed easy to answer within an interpretation framework. The nasty ones, things like the Canaanite genocide, I have no good answer for but am warily willing to bite the bullet on.

      The scientific attacks that concern me are mainly archaeological/historical. A large portion of the Old Testament is pretty clearly a historical account, and it being a false account would trouble me a great deal. On the other hand, I’m pretty skeptical about how reliable “scientific findings” are in general, and the attacks along that axis didn’t seem conclusive to me even when I was an Atheist. [EDIT] – the potential for further scientific attacks also seems clear; I’m not sure my faith would be compatible with a world where the operations of the brain were provably deterministic and directly manipulable, but at the same time I’m very much hoping I live to see brain uploading. There’s obviously a conflict there, but one I’m willing to defer until it becomes a practical concern.

      It seems to me that the issue ultimately comes down to a personal choice. It was possible for me to build a coherent worldview around Atheist axioms, and I believe I’ve built a coherent worldview around Christian axioms. I prefer the Christian ones for a variety of interconnected reasons, of varying levels of rationality. This fits what I would expect from what I think I understand about God.

    • Irenist says:

      In addition to echoing what Troy said, I’d like to offer two points:

      1. Vatican II issued a dogmatic constitution called “Dei Verbum.” Contemporary Catholic thinkers generally strive to think in concert with it, and it might be useful reading for you. Among the points it makes are that God is the author of Scripture which unerringly teaches us what God intended to teach us through Scripture, but that He willed to work through autonomous human authors. These latter authors could be wrong about matters of historical or scientific fact, but since that didn’t impinge on the moral or creedal stuff God wills to communicate, He let them write what they thought. The Bible ain’t a science or history textbook, in other words, and shouldn’t ever be expected to be inerrant about stuff like that.

      2. Esotericism is a huge deal in understanding ancient writing. Not esotericism of the “spooky occult secrets” sort, but pedagogical and belle-lettristic esotericism. I’ve been asked in SSC comments before why, if God inspired the Bible, He didn’t CLEARLY LAY OUT what His plan was, and prove it was Him by dropping some modern science in there or something. The answer to the latter part is that He was working through autonomous human authors who didn’t know any modern science. The answer to the former is that ancient people never would’ve preserved a book that clearly laid out anything, because they scorned such books. They liked their books like we like our online RPGs: full of hidden Easter eggs. (Indeed, if you’ll pardon the pun, finding analogical “Easter” eggs in the Old Testament was kind of the main hobby for Christian exegetes for centuries.)

      From P.E. Gobry’s “pedagogical esotericism”-stressing review of Arthur M. Melzer’s recent Straussian work Philosophy Between the Lines:

      Philosophers did not just practice esotericism as a way of sneaking subversive ideas past the censors, but also as a pedagogical device, much in the way of Socrates’ insistent questioning. For the Ancient philosophers, philosophy was not just, perhaps not even primarily, a body of doctrine, but an attitude of the mind towards contemplation and relentless questioning. The task of making philosophers, then, was not primarily about imparting ideas, but about leading people towards a certain state of mind. The philosopher wanted his pupils to discover his ideas on their own, by studying the text and working hard to get past the literal meaning, and thereby growing into a philosophic mind and posture.

      In this regard, Melzer points out something else (in retrospect obvious, but which was quite an “Aha!” moment for me), which is the rarity of books in the era before the advent of the printing press, and the fact that the classical liberal arts curriculum included long study in “rhetoric” (i.e. the art of writing) which is something we have all-but forgotten. Everyone who was educated was trained in writing and reading between the lines. And because books were rare and expensive, owners of books, instead of the contemporary practice of reading a book once and then just moving on to the next, would typically reread the same book many times over their lifetime. Knowing this, authors would typically be alert to write in an esoteric style, concealing many layers of meaning into the text, so that the book would still be rewarding on the Nth reading. Just like, to the contrary, anyone writing a book today knows all-too-well that his book is competing with millions of other books, and so strives to make his argument as clear, literal and obvious as possible for fear that the reader just drop the book and move on to another.

      If this is how everyone understood the art of writing and the art of reading until very recently, then, certainly, this should have an impact on how we read the Bible. In fact, Strauss was first alerted to the reality of esoteric writing by his reading of Maimonides and Rashi, the two greatest Medieval rabbis. (Maimonides (like Aquinas) read Aristotle esoterically, as did every single Ancient commentator (Aristotle is the single author with the biggest secondary literature in the Ancient world), even though today Aristotle is considered as perhaps the most literal Ancient philosopher.)

      Even without referring to inspired spiritual senses, we should still realize that the Modern prejudice that the surface meaning of a text is almost always the most authentic is just that–a culturally-contingent prejudice. By contrast, educated readers and writers for the rest of history would have had precisely the opposite assumption: that it’s more likely that the surface meaning of the text is not the most authentic. And this is indeed how many rabbis and Church Fathers read the Bible.

      Source: http://www.patheos.com/blogs/inebriateme/2014/11/the-ancient-art-of-reading-and-biblical-interpretation/

      Anyway, that’s a lot. But as a “high church” type, if there was ONE concept I wish rationalists had in their head about the Bible, it would be how the modern secular, post-Protestant prejudice that a high quality book is necessarily a highly perspicuous book is 180 degrees from the stylistic canons of the Bible’s own era. Once I discovered the perspective of pedagogical esotericism, studying the Bible went for me from frustration at its ambiguity to delight in its intricacy. And I suddenly understood why the Church Fathers and the medieval Scholastics enjoyed commenting on it so much, and why moderns tend to dislike it so much.

      • darxan says:

        Apparently God kept final edit privilege w/r/t the Bible. From OrthodoxWiki:

        The Righteous Simeon was one of the seventy scholars who came to Alexandria to translate the Holy Scriptures into Greek. The completed work was called “The Septuagint,” and is the version of the Old Testament used by the Orthodox Church.

        St Simeon was translating a book of the Prophet Isaiah, and read the words: “Behold, a virgin shall conceive in the womb, and shall bring forth a Son” (Is 7:14). He thought that “virgin” was inaccurate, and he wanted to correct the text to read “woman.” At that moment an angel appeared to him and held back his hand saying, “You shall see these words fulfilled. You shall not die until you behold Christ the Lord born of a pure and spotless Virgin.” Tradition says he died at the great age of 360.

        What is funny is that God is using his divine authority to keep a mistranslation in the Bible, and poor Simeon is cursed to live 300 extra years because he wanted to correct it.

        • Zykrom says:

          For a sufficiently weird value of “cursed.”

          I’d better start learning biblical Hebrew so I can “mistranslate” something…

        • Irenist says:

          Great story!

          St. Augustine, for one, tended to view the Greek Septuagint as almost equal in authority to, and certainly as divinely inspired as, the original Hebrew of the Old Testament. When there’s an obvious discrepancy (like in different age values reported in some list of patriarchs in Genesis or something), Augustine will generally try to ferret out the symbolism God intended with the discrepancy. We high church types tend to enjoy mocking King James Only sorts, but heck if the attitudes of the Church Fathers (and the writers of the New Testament!) toward the Septuagint aren’t (sort of kind of very broadly) reminiscent.

          On stuff like the issue of whether Hebrew “almah” (roughly, maiden) was properly translated by the Septuagint as Greek “parthenos,” (virgin), I think a couple observations bear making:
          1. For all I know (or care), modern or medieval Hebrew “almah” just means “young woman” if the word survives at all. But contemporary English “maiden” doesn’t mean much more than that, whereas Elizabethan English “maiden” clearly implied virginity. The Septuagint translators were closer in time to the authorship of the prophetic texts than the rabbis who helped forge post-Temple Judaism after 70 CE. So maybe rather than assuming the legendary Seventy translators were idiots, we should assume that the Seventy were fluent in the Hebrew and Greek of their day, and translated accordingly?
            2. Given both its use for Old Testament citations in the inspired text of the New Testament and the canonical value assigned to it by the early Church (e.g., Augustine), the general Christian attitude toward the Septuagint has been that it was trustworthy in its own right. Modern fads in Biblical “higher criticism” have eroded this regard, and led many to fetishize dubious reconstructions of the intent in the minds of the ancient Hebrew scribes. But it is a basic principle of Christian exegesis that what we are interested in is what God intended to say through the text, a matter for which the understanding in the mind of the original scribe is one piece of the puzzle, but no more than that. I doubt the Psalmist was picturing Christ in particular when he was inspired to write of the suffering servant, but God knew what He was about. Like Hebrew scribal intent, the Septuagint, to which I for one stand with the Fathers in according a high independent value, is another appropriately consulted guide to what God meant. So while it’s certainly ambiguous whether “maiden” in the Hebrew means “virgin,” I’ll take the authority of the New Testament-and-Patristic-endorsed Septuagint on that question over some nineteenth century German higher-critical Biblical scholar, and certainly over somebody playing “gotcha” with it.

          • Douglas Knight says:

            1. For all I know (or care), modern or medieval Hebrew “almah” just means “young woman” if the word survives at all. But contemporary English “maiden” doesn’t mean much more than that, whereas Elizabethan English “maiden” clearly implied virginity. The Septuagint translators were closer in time to the authorship of the prophetic texts than the rabbis who helped forge post-Temple Judaism after 70 CE. So maybe rather than assuming the legendary Seventy translators were idiots, we should assume that the Seventy were fluent in the Hebrew and Greek of their day, and translated accordingly?

            Equally, why do you assume modern critics are idiots? I cannot adjudicate the actual translation, but I can see that the modern critics do neither of the things you claim: neither look to medieval Hebrew nor assume that the Septuagint was written by idiots.

          • Irenist says:

            @Douglas Knight:

            Fair enough. But the usual gotcha is “almah” didn’t mean virgin, so Christians are dopes for thinking it was prophetic. If Christians have independent reason for using the Septuagint to interpret the Hebrew, then the gotcha falls flat. That’s my (limited) point.

          • Matthew says:

            You know, there are these people out there called Jews, who’ve been relying on the Hebrew version continuously. You’ll find that the virginity interpretation has never had any currency in Jewish tradition.

          • Jaskologist says:

            Are you sure about that? The Jews were the ones who wrote the Septuagint in the first place.

          • Matthew says:

            Yes, I’m sure. They had no choice when writing the Septuagint, because ancient Greek had no way to make the distinction.

          • Douglas Knight says:

            What about “kore”?

          • Irenist says:


            You know, there are these people out there called Jews, who’ve been relying on the Hebrew version continuously.

            You know, Jews in the Hellenistic world were primarily Greek-speaking for centuries, and used the Septuagint in synagogue. (Unsurprisingly enough: How many American Jews look up Bible quotes in Hebrew, vs. googling them in English?)

            Wikipedia (yes, I know, but it’s quick) article on the Septuagint:

            Pre-Christian Jews, Philo and Josephus considered the Septuagint on equal standing with the Hebrew text.[33][34] Manuscripts of the Septuagint have been found among the Qumran Scrolls in the Dead Sea, and were thought to have been in use among Jews at the time.
            Starting approximately in the 2nd century CE, several factors led most Jews to abandon use of the LXX. The earliest gentile Christians of necessity used the LXX, as it was at the time the only Greek version of the Bible, and most, if not all, of these early non-Jewish Christians could not read Hebrew. The association of the LXX with a rival religion may have rendered it suspect in the eyes of the newer generation of Jews and Jewish scholars.[23] Instead, Jews used Hebrew/Aramaic Targum manuscripts later compiled by the Masoretes; and authoritative Aramaic translations, such as those of Onkelos and Rabbi Yonathan ben Uziel

            And Wikpedia on the “Development of the Hebrew Canon”:

            The Septuagint (LXX) is a Koine Greek translation of the Hebrew scriptures, translated in stages between the 3rd to 2nd century BCE in Alexandria, Egypt.
            Philo and Josephus (both associated with first century Hellenistic Judaism) ascribed divine inspiration to its translators, and the primary ancient account of the process is the circa 2nd century BCE Letter of Aristeas. Some of the Dead Sea Scrolls attest to Hebrew texts other than those on which the Masoretic Text was based; in some cases, these newly found texts accord with the Septuagint version.[10] Strong evidence exists that the Septuagint was the canon in place in first century Palestine. “Authors Archer and Chirichigno list 340 places where the New Testament cites the Septuagint but only 33 places where it cites from the Masoretic Text rather than the Septuagint.”[11]

            So the Septuagint predates widespread diaspora use of the Masoretic text by roughly 400 years, sometimes tracks better with the oldest manuscripts we have, enjoyed widespread currency among Jews throughout the Mediterranean world for centuries, and was only abandoned around the time that Christian proof-texting from the Septuagint became a serious irritant to the Jewish community (which abandonment couldn’t possibly have involved any animus or bias, right?), which, as Rodney Stark points out, was likely (given reasonable demographic assumptions) hemorrhaging less rabbinically inclined, more culturally Hellenized Jews to the less demanding, more assimilationist new Jesus sect. So rabbinical opinions on whether the once universally applauded Septuagint had mistranslated almah, or whether Jesus was the bastard son of some Roman legionary named Panthera, or any of the other opinions about Christianity issuing from the rabbis of that period, were just the impartial observations of philological experts, right? What could be more rationalist than to comfortingly buttress one’s atheism by taking the demographically threatened rabbis at their word about why a 400 year old, nigh-universally employed translation was suddenly unworthy, instead of wondering if maybe they had biases, too, just like the rest of us?

          • RCF says:


            “If Christians have independent reason for using the Septuagint to interpret the Hebrew, then the gotcha falls flat. That’s my (limited) point.”

            If the assertion that it was prophetic is made for evidentiary reasons, then the objection does not fall flat at all. The degree to which an event is evidence depends on how unlikely it is. The more doubt there is as to what the correct interpretation is, and the more possible interpretations there are, the more likely it is that one of them will fit what actually happens.

          • Irenist says:


            That’s a very good point. I was just thinking more of the “you guys are dopes, don’t you know that’s a mistranslation?” level of argument, which I think can be rebutted just by showing that Christian churches have been well aware of it for centuries and have thought about the matter in some detail.

            At the far more serious level you’re arguing at, though, you’re right that the possibility of an alternate intention in the mind of the original Hebrew scribe is an effective rebuttal to any prophetic claims.

            Now, in my case, I don’t share the enthusiasm for arguments rooted in the accuracy of prophecy. It’s still a licit form of argument among us high church types, but it’s rather fallen out of fashion. (Perhaps b/c we’re all insufferable snobs and prophecy stuff is sort of a Fundamentalist fixation. I don’t think I’m a horrible snob, but I’d be the last to know, wouldn’t I?)

            Anyhow, as I’ve already cluttered up this thread at great length: I have my own reasons for placing credence in the “parthenos” translation, about which your mileage may of course vary, and doubtless does.

            Still, “this prophecy came true” is an extraordinary claim, and the bare possibility that an alternate meaning was intended is enough to render the evidence decidedly not-extraordinary. Frankly, given how voluminous and symbolism-laden the Bible is, even if the virgin birth was prophesied, we’d be likely to get a few hits like that just on random chance, yeah? (This is my conscious reason for finding the prophecy-as-evidence apologetics far, far less compelling than, say, Augustine and Pascal did. But I also find them rather…vulgar…which is what makes me worry there’s some snobbery in there, too.)

            In sum: FWIW, I happily concede your point, which is an excellent one.

        • Deiseach says:

          On my reading, the point of the story is that the whole “almah doesn’t mean ‘virgin’, it just means ‘young woman'” interpretation (which is quite deliberately meant to deny the doctrine of the Virgin Birth) is not some modern discovery by smart independent thinkers, but that the old-timey orthodox (small “o”) theologians were quite aware of the objection and did not find it particularly convincing.

          Which, if you’re taking the bare literal meaning of the words at face value, it’s not: this is the equivalent of Isaiah saying “The young woman will have a baby when she’s married”. Whoa, hold on there with that crazy future prophecy stuff! Saying a married woman is going to get pregnant is a bit like saying that the sun is going to rise in the morning (unless she or her husband is sterile or there is use of contraception to prevent pregnancy) – not really all that startling when it comes to being a prophecy.

          Unless you invoke the context of the prophecy (that Isaiah was referring to a specific woman and a specific marriage and a specific hoped-for pregnancy in the context of the political and dynastic struggles of his time), or unless you take it as “the New foreshadowed by the Old” traditional exegesis, then it really has no use as a prophecy.

          • Jiro says:

            the point of the story is that the whole… interpretation (which is quite deliberately meant to deny the doctrine of the Virgin Birth) is not some modern discovery by smart independent thinkers, but that the old-timey orthodox (small “o”) theologians were quite aware of the objection and did not find it particularly convincing.

            Look at any argument about why Jews shouldn’t be treated as inferior beings (that doesn’t depend on something modern such as IQ tests), and I’ll bet you could find it addressed by old time theologians who didn’t find it very convincing. Likewise, look at any argument why the Biblical creation story should be interpreted figuratively.

            Theologians heavily use, and used, motivated reasoning, so the fact that something has been addressed by theologians has little relevance to whether it’s correct. Do you seriously think that if the objection was correct, theologians wouldn’t still have “addressed” it in a plausible-sounding way?

          • Deiseach says:

            Jiro, despite what you seem to fear, I have no intentions of dragging you to the baptismal font by the hair of your head.

            What I was saying is that the modern “Actually, almah is a mistranslation (you rubes)” gotcha is not some stunning new idea that nobody ever heard of before, but that those musty old scholastics were actually aware of the objection and the variant translation.

            We’ve had this before about Scriptural translation and why translators used certain words and not others; Augustine and Jerome had a correspondence in which Augustine asked, “So why did you use this term rather than that one, which is more accurate?” and Jerome explained his choices.

          • Jiro says:

            The point is that it’s going to be an idea theologians have heard of and “addressed” before completely independently of whether the idea has any merit.

      • Deiseach says:

        Has anyone ever read any of the von Daniken books that were so amazingly popular back in the 70s? You know, the ancient astronaut type of thing. Glyphs and ancient monuments showing Advanced Modern Technology they could not possibly have created themselves, so some hyper-advanced alien race must have given it to them.

        And how these items are interpreted as hyper-advanced alien technology is that they’re shown to be the tech of the time (the 70s) – which we’ve now gone far past, so to us those devices don’t look anything like proof that they were left by a civilisation capable of interstellar travel, precisely because the interpretations of the technology are so dated. Von Daniken is seeing what he wants to see and basing it on the technical ability of his time – “it has to be a rocket ship, look at the radio valves!” type of reasoning. The flip side of that is that if such ancient documents or paintings or sculptures genuinely showed tech of our day (or beyond), they would have been literally unrecognisable by the standards of the 70s.

        That’s how demands about “If the Bible is indeed the inspired word of God, why doesn’t God tell the scribes about DNA?” strike me. Suppose a 17th century sceptic had demanded that God tell the scribes about phlogiston, as that was the cutting edge science of the day. We now know that to be false. What would have been proof of its correctness at that time would nowadays be proof of its falsity (think of Joseph Smith’s papyri).

        I’m not saying DNA is false! But demands that “If this really is knowledge provided by an omniscient being, then it should include [insert Best Science of Our Time] as an explanation” strike me as falling into this trap: our Best Theories of the Day can be shown to be false later, and by putting them into a book, instead of proving its truth, it instead proves its falsity. And also, if we say “But why not put in something like DNA that is undeniably true and won’t be superseded by a later theory?”, then again we’ve got the “un-understandable by the knowledge of the day” problem; imagine how garbled the transmission of such knowledge would be after three thousand years (we could be looking at stylised images like the Assyrian Tree of Life motif representing the double helix!)

        In sum: the Bible is not about science, it’s about the right relationship between God and humanity. It’s like Scott’s cactus and bat people trying to talk about love and joy and the narrator wanting a specific mathematical solution first 🙂

        • It does seem to me that a benevolent God handing out rules should have mentioned something about boiling the drinking water.

          • Irenist says:

            A bunch of people are trapped inside a video game. Decades in the game are mere milliseconds in real life. Whether the people’s characters die young or old makes only a few milliseconds’ difference to playing time, and none to their post-game real life fates. However, whether they act morally within the game will (for, um, reasons) radically and irrevocably determine their real life fates. The game doesn’t have much runtime left. Do you send the players a message about morality, or one about how to survive another few milliseconds?

          • Clockwork Marx says:

            Also, you designed the game, the post game, and all of the minds you’re trying to communicate with.

          • Deiseach says:

            It does seem to me that a benevolent God handing out rules should have mentioned something about boiling the drinking water.

            Oh, you mean like all the purity rules in Leviticus? The ones that progressives in churches like to wax merry about, in the “shellfish argument” about same-sex marriage and gay rights, for instance?

            God did indeed give a list of dietary restrictions, if I’m recalling correctly, but I don’t see people taking those as evidence one way or the other for independent scientific corroboration of His existence 🙂

          • Irenist says:

            @Clockwork Marx:

            Sure, all the usual theodicy problems remain for Christian apologetics. I’ve no desire to debate them here, and I bet Scott would rather I not do that anyway.

            The (comparatively minor) point was just that the “in-Universe” interpretation of Biblical inspiration isn’t prima facie stupid just because there aren’t a lot of modern health tips in the Bible, because on the “in-Universe” view of the Christian mythos, a single sin is objectively worse than any amount of temporal misfortune, because the soul is eternal and the body isn’t. Now, the view that God should care more about our morals, and thus that it is entirely fitting that His Revelation should focus on moral and dogmatic matters rather than on relieving our temporal misfortunes may very well be a silly one. But not an obviously inconsistent one.

        • Jiro says:

          Phlogiston was pre-science. Science as we know it now rarely (I’d say pretty much never) overturns existing scientific wisdom (except when it shows that existing scientific wisdom is appropriate for the cases to which it has been applied but does not generalize).

          • Calling phlogiston “pre-science” is begging the question. Phlogiston was the accepted consensus of scholars of its day, and it correctly accounted for the available experimental evidence. If you’re allowed to banish phlogiston to the realm of non-science, then you can simply do the same for any other theory which becomes sufficiently out-of-date.

          • Deiseach says:

            So you want God to factor out a prime number, and then three thousand years later after transcription errors, this is run on a computer by modern day people, turns out wrong, and everyone goes “Ha! Told you deities don’t exist!”

            There are enough people gone and going crazy deriving secret Biblical codes based on mathematics that I’m quite glad God did not put mathematics into the Bible; gematria does not interest me particularly.

      • I had written a pretty long reply to the OP which seems to have been eaten by the comment monster, but fortunately your post covered most of the main points. In particular, I want to underline the fact that allegory and esotericism are the dominant interpretative frameworks by which the Fathers interpret the Old and New Testaments, and more importantly it’s how the New Testament authors interpret the Old. I generally presume that the literal meaning of the text is also true in the absence of any other consideration, but if the literal meaning becomes untenable, the spiritual meaning may remain. (And this approach to the problem, contra many internet atheists including Yudkowsky, is not a form of doublespeak invented after science cast the Scriptures into doubt, but was known and acknowledged in ancient times. St. Augustine most prominently discusses this.)

        • Deiseach says:

          From Verbum Domini by Pope Benedict XVI, on the four senses of Scripture (traditionally the literal and the spiritual, the spiritual being divided into the allegorical, the tropological or moral, and the anagogical):

          One may mention in this regard the medieval couplet which expresses the relationship between the different senses of Scripture:

          “Littera gesta docet, quid credas allegoria,
          Moralis quid agas, quo tendas anagogia.”

          “The letter speaks of deeds; allegory, about the faith;
          The moral, about our actions; anagogy, about our destiny.”

          So – for Catholicism (and I’m guessing the Orthodox and Oriental Churches as well), the Bible is indeed the word of God, but not the Word of God – the Word is not a book, but a Person, the logos, the beginning of St John’s Gospel “In the beginning was the Word” (yes, Scott, like that joke on your Twitter), the Word made Flesh, Jesus, God made Man, Second Person of the Trinity.

          • Mary says:

            One notes that in this structure, the “literal” encompasses both “literal” and “figurative.” It means what the passage obviously means.

            “It started to rain”’s literal meaning is “it started to rain.” “Her heart broke”’s literal meaning is “she suffered great anguish.”

            In a work of fiction, of course, any author could also have added spiritual meanings. The rain falling could also be an indication of grief. The woman’s heart being broken could also be the moment that the society she belongs to (and represents) rejects the man who broke her heart.

        • Carinthium says:

          Objection. What is to stop allegory readers from seeing what they want to see? Modern literary interpretation has enough trouble getting objective meaning, and there’s a lot less temptation to rationalise a fiction book than the Word of God.

          If God wanted to make himself clear, he could easily have been more literal.

          • Wrong Species says:

            Good point. For an omnipotent being God sure is terrible at sending messages clearly.

          • Troy says:

            If God wanted to make himself clear, he could easily have been more literal.

            One of Irenist’s points in the post that started this subthread is that God was working through autonomous human authors, and that ancients wouldn’t have respected or kept around a book that did not have layers of meaning. It’s not God who is not being literal; it’s the human authors of the Bible.

            The worry that we then don’t have anything solid enough to go on in understanding God’s will has been addressed elsewhere in this thread, I think. Above I suggested that the Bible is not itself God’s revelation; it is a record of that revelation. God’s revelation includes things like his interactions with the ancient Israelites and especially his incarnation in Christ. Becoming incarnate and modeling the kind of life that he wants humans to lead sounds to me like a pretty clear message from God.

          • Deiseach says:

            How much more literal than “I am the Lord thy God, thou shalt not have strange gods before me” can you get?

          • Carinthium says:

            Some bits are highly literal, I agree. But others really aren’t. How do you explain the bit where the Israelites are promised the land from the Euphrates to the Nile?

  51. XXX says:

    By how much do methods in clinical psychology improve in 25 years?
    I have the impression that consensus in psychology changes all the time; in fact, I have helped organise (the tech part of) some kind of biannual training for psychologists, which seems to be mandatory where I live. I wonder if it is very different for the subfield I mentioned.

    To be more precise: because my doctor suspected ADHD, I did some kind of concentration test that the guy told me pilots also have to do. I did it on a laptop running software that was clearly from the early nineties, and I had great results. But apparently some of the surveys I did showed that I am depressed, so I was fed antidepressants and my inability to concentrate was seen as a symptom.
    But now that I feel that is over, the concentration problem has only gotten worse. And I wonder whether a test that works with numbers and patterns is really that good a method for spotting concentration problems in someone who is naturally interested in such things. A professional did have a suspicion about it, so it’s not like I’m self-diagnosing; I’m just wondering whether I should try again or believe that test.

  52. Lambert says:

    Scott, if you are in need of some random controversial scientific issues about which to write 1000-word analyses, may I recommend/request the detrimental effects of aspartame on humans, and memory repression?

    A better meta-level solution to sate my curiosity on such topics would be to blog about how to reach the conclusions in such analyses in a relatively foolproof way.

    • Matthew says:

      Not necessarily on memory specifically, but seconding the request for Scott-quality review of Aspartame.

  53. 27chaos says:

    Maybe a lot of your readers are college students who are now busier.

  54. Albatross says:

    Wasn’t there a national hot button issue around February? I seem to remember some issues crossing over into other media at some point on feminism and rehab clinics. There could be readers who only check for certain issues.

    Open thread idea: What if the long decline in crime is due to police catching criminals instead of innocent people? The FBI crime lab made up most of its information. Each innocent person they convicted meant the criminal escaped to commit more crimes. Perhaps affordable portable video, DNA evidence, etc. have reduced crime by ruling out innocent suspects who would have been convicted in the past. When the police arrest and convict the correct person, future crime is reduced because the criminal is in prison. As prejudice and hunches have gradually been replaced by evidence and a diverse police force, more accurate convictions have reduced crime.

  55. sgr says:

    Statistically: yes, there was a statistically significant, but small, decrease in traffic around Feb 20.

    I took your plots and guessed some numerical values for whatever the vertical axis means (“readership units” will do for now). Then I ran an unequal-variance, one-sided t-test in R to see whether the mean was greater before Feb 20 than after.

    The result was significant at both weekly (p ~ 4e-5) and daily (p ~ 0.001) granularities:

    Welch Two Sample t-test

    data: weeklyHitsBefore and weeklyHitsAfter
    t = 7.3011, df = 8.017, p-value = 4.144e-05
    alternative hypothesis: true difference in means is greater than 0
    95 percent confidence interval:
    0.4938131 Inf
    sample estimates:
    mean of x mean of y
    2.7750 2.1125

    Welch Two Sample t-test

    data: dailyHitsBefore and dailyHitsAfter
    t = 4.2012, df = 8.574, p-value = 0.001282
    alternative hypothesis: true difference in means is greater than 0
    95 percent confidence interval:
    0.62889 Inf
    sample estimates:
    mean of x mean of y
    3.842857 2.722222

    However, though the result is statistically significant, the effect size is small. The weekly mean readership units declined from 2.78 to 2.11; the daily readership units (presumably different units) declined from 3.84 to 2.72.
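    For anyone who wants to sanity-check the arithmetic without R: below is a minimal Python sketch of the Welch (unequal-variance) t statistic and Welch-Satterthwaite degrees of freedom that R’s t.test computes when var.equal = FALSE. The sample values in it are made-up stand-ins for the eyeballed “readership units”, not the numbers actually used above.

```python
# Minimal sketch of Welch's unequal-variance t statistic, as computed by
# R's t.test(x, y, var.equal = FALSE). The data below are hypothetical
# stand-ins for "readership units" read off traffic plots.
from statistics import mean, variance

def welch_t(a, b):
    """Return (t statistic, Welch-Satterthwaite degrees of freedom)."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances (n - 1 denominator)
    se2 = va / na + vb / nb             # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical weekly readership units before and after Feb 20.
before = [2.7, 2.9, 2.8, 2.7]
after = [2.1, 2.2, 2.0, 2.15]
t, df = welch_t(before, after)
```

    The one-sided p-value then comes from the upper tail of the t distribution with df degrees of freedom (scipy.stats.t.sf(t, df) in Python, or pt(t, df, lower.tail = FALSE) in R). Significance alone doesn’t settle whether the drop is real rather than an artifact of autocorrelated traffic, though.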

    • Douglas Knight says:

      You should apply this procedure to data from several random processes, including a random walk. But first you should think hard about what you actually did. This is the hard part.

  56. Wrong Species says:

    Hypothetical: A group of really smart people get stranded on an island. They decide to make their peace with the island and try to build a civilization. Knowledge isn’t an issue because there is a diverse group of engineers, scientists, doctors and whatever else they might need. Would they be able to have electricity, factories or pretty much anything we associate with modern civilization?

    • Bugmaster says:

      It depends: how many people are in the group, and how many of them survive the first month?

      Imagine a situation where you just have a single guy with a super-genius level of intelligence, but an average human body. He knows exactly what to do in order to rebuild civilization. That night, he gets eaten by a puma.

      On the other hand, if 100,000 people of average intelligence land on that island and manage to survive, chances are decent that their remote descendants will rediscover electricity at least at some point — after all, this is what our remote ancestors did…

      • Wrong Species says:

        They all survive. I’m not sure about the size of the group, but any more than a thousand would feel like cheating (and a thousand is probably pushing it). I’m mostly wondering if it’s possible to build an advanced society with only the tools available on an island. So much of our society depends on global trade. How much could people accomplish without all of the resources we have at our disposal?

      • houseboatonstyx says:

        Lack of materials would be the most limiting factor. But your moderns would have a great advantage: they already know what works. It may have taken a CAD program to design the best shape for a windmill rotor, but now that we know what that shape is, we don’t need a computer or even metal to make one.

    • Faradn says:

      A good treatment of this idea is Jules Verne’s The Mysterious Island. It’s fiction, but it’s very thoroughly researched fiction (which won’t surprise you if you’re familiar with Verne).

    • It seems to me that the main limiting factor would be lack of materials. Factories would require metal, which may or may not be present on the island. Electricity, I suppose, could be generated through wind, tidal, or hydroelectric means, so you wouldn’t necessarily need fossil fuels, but again you’d need metal to create the generator and the wires.

    • ddreytes says:

      I guess I don’t really see what use there would really be for factories, in that situation, so I suppose they probably wouldn’t have them.

    • Dennis Ochei says:

      You’d need special kinds of stones to make knives and other tools with your bare hands. There would also need to be metal ore on the island. A fully exhaustive list of the requirements for the island would be difficult to specify, but I’m pretty confident all the things needed to create modern conveniences couldn’t be found on a single island.

    • According to Jules Verne (_The Mysterious Island_) a handful of people could do it pretty quickly. But I wasn’t convinced.

      • Jiro says:

        Jules Verne’s characters were given help by Captain Nemo, including a box of equipment and tools that “just washed up on the island” and a random grain of wheat that may or may not have actually come from him.

    • Irenist says:

      Well, there’s knowledge and there’s know-how. I think one of the biggest barriers (not necessarily insurmountable, just big), even for “a diverse group of engineers, scientists, doctors”, is that transitioning from “this rock has iron ore in it; let’s build a fire under it” to actual machine tools and whatnot probably requires a bunch of craft skill that isn’t that common nowadays, even if the abstract knowledge of what’s needed would be present in your group.

      • Paul Torek says:

        This, and you’re putting it mildly. Your group has to rediscover the technological quirks of, say, clay ovens in which to cook ores, and how to make charcoal for fuel. Yikes. I hope the rest of your engineers know more history than I do; I’ll be largely useless.

        • Wrong Species says:

          That’s the main thing I was thinking about. If civilization collapsed, we couldn’t just start at the same place we are now; we might have to retrace our steps. It probably wouldn’t take thousands of years to get where we are now, but it could take a lot longer than we would like.

      • Deiseach says:

        Not even that; on a basic level “Do we know what plants are safe to eat – will this mushroom kill us?” Can they butcher game? Can they hunt, fish, and so forth? Can they make shelters and clothing? The whole Robinson Crusoe bit, and Defoe cheated massively there by having Crusoe able to swim out to the ship and retrieve handy convenient tools he had no way of manufacturing for himself 🙂

  57. The downside of emotional intelligence–emotional intelligence makes it easier to manipulate other people, regardless of the intention or effect, and it’s distracting if your work involves things rather than people.

    Overview of research on emotional intelligence.

  58. It seems to me that conscientiousness involves a feeling of “this thing ought to be done well and thoroughly”– it’s an external requirement, rather than feeling “I am doing this thing because it makes sense to me and serves my goals.”

    I’m not sure where to go with this, but do people have ideas about where this sense of an external requirement comes from?

  59. Any podcasts that the SSC commentariat would recommend? I always like the idea of listening to podcasts but I’ve never found one that really grabs me. I was listening to Sam Harris’s Waking Up, which is pretty interesting, but it doesn’t have that many episodes.

    (All suggestions will be considered, but obviously if any get multiple recommendations they will be considered more strongly)

    • Wrong Species says:

      I would recommend Hardcore History by Dan Carlin if you’re into history, and EconTalk by Russ Roberts if you’re interested in economics.

      • I am interested in those things, and I’ll check them out. I just had a realization though, which is that (for better or for worse) what I was probably really looking for in a podcast was insight porn. That’s what really drew me into the LW/SSC sphere, realistically. And I’m not sure how to feel about that. I mean, there are worse things in the world than insight porn – insights are great, and it’s not like they’re all fake or meaningless. But on the other hand I probably undervalue the kind of steady, methodical learning that doesn’t come with the built-in reward of a dramatic “aha!” moment. I’ll have to ponder this.

        (Mind you, if anyone does know of a reliable insight porn podcast, I’m all ears)

        • Wrong Species says:

          I’m not exactly sure what you mean by insight porn, but I don’t think those count. This isn’t like a TED talk where some guy spends 20 minutes talking about a “world changing idea”. It’s more like a guy (or two) having a conversation about stuff they know that is pretty accessible to a decently smart person.

          • Oh sorry, maybe it’s not as widely used a term as I thought. I’ve seen it on LW from time to time, but that doesn’t mean much. Personally I would say TED talks sometimes fall into the insight porn category, but they’re more often inspiration porn if anything. Here’s an essay talking about the concept of insight porn (which (fittingly? ironically?) is itself pretty insightful):


            In any case, I probably shouldn’t only be looking for insight porn podcasts, so I thank you for the recommendations. I’ll probably give Hardcore History a try.

      • FacelessCraven says:

        Seconding Dan Carlin. Highly recommend Wrath of the Khans and Roadmap to Armageddon.

    • James Picone says:

      Welcome to Night Vale. Community radio from a town where all the conspiracy theories don’t go far enough.

    • LTP says:

      If you’re interested in non-LW philosophical conversation, The Partially Examined Life is excellent. They have a few bad episodes IMO, but overall it’s a great show.

      Basically, it’s a semi-formal roundtable with a bunch of young-ish philosophy PhD holders where they go over a reasonably sized section of a philosophical text.

    • Jaskologist says:

      History of Philosophy without any Gaps
      Laszlo Montgomery’s Chinese History Podcast
      History on the Run

      (There are a number of “Learn Foreign Language X” podcasts as well. They are likely useful as part of active study, but I have not found them helpful from a passive learning standpoint.)

    • Paul Torek says:

      Great question! I have had hit-and-miss luck with Philosophy Bites by Edmonds and Warburton, and Space Time Mind by Mandik and Brown. Both are philosophy. Both contain enough “hits” to make the “misses” tolerable. Thanks for the suggestions, y’all.

  60. Emlin says:

    Without having read the other comments, so they won’t influence my opinion (sorry if this turns out to be repetitive):
    I went back to around the drop-off period, and what I recall is that I got “behind” in reading/absorbing the posts around that time and meant to “come back when I have a bunch of time and feel really mentally ‘on’ so I can ‘catch up’”, and that didn’t really happen.

    Eventually, I just gave up on catching up and started from the present recently again.

    I don’t know how common this experience is, since it was certainly influenced by personal factors such as illness and a large project around that time. But I am also a reader who tends to take my time understanding the whole post and then read all the comments, so denser posts, and posts with more math, are going to get me further behind.

  61. onyomi says:

    This is an incredibly broad question, but does anyone have opinions about why Marxist and literary theoretical writings (and, God forbid, Marxist cultural/literary theoretical writings like Adorno) are so inscrutable? It’s not that they don’t contain some good insights, it’s that they make you fight so hard to find them.

    I say this as someone in the liberal arts wing of academia who has to deal with this stuff regularly and still finds it quite annoying. And I feel that at a certain point, “it’s not me, it’s them,” that is, I don’t think their ideas are genuinely so complex that it’s the ideas themselves I struggle to comprehend, but rather their manner of expressing them. This seems verified to me in that every time I do “get it,” I realize that I could have said the same thing far more simply.

    My working theories are all very uncharitable: one, only very-smart-but-very-confused minds become Marxists in the first place, so it’s no wonder their writing is like a maze; and/or two, it’s a way of translating a few minor insights into a tenured professorship, i.e. take the average amount of insight in a slatestarcodex post and somehow turn it into an impenetrable book-length study, the primary purpose of which is to make said insight seem more insightful by virtue of all the work the reader must go through (and intelligence the author must be perceived to have had) to get to it.

    I know I’m painting with a broad brush and that there are wide variations in the quality and readability of Marxist theorists (I find Bourdieu to be pretty readable, for example), but there is nonetheless a very noticeable trend. This also kind of applies to Continental philosophy more generally.

    Anybody have any better ideas?

    • Jon Gunnarsson says:

      I suspect your very uncharitable theory is correct. If they expressed their ideas clearly, the average person, or at least the intelligent layman, would be able to understand them, and then their theories would be exposed for the nonsense they are.

    • LTP says:

      Based on lurking in some philosophy-related parts of the web, and on being a philosophy major myself in a heavily analytic/anglosphere department, you’re not alone in your feelings about continental philosophy, and the Marxist and literary stuff in particular. I myself have limited exposure to primary sources of the continental kind, and I can’t speak about Marxist stuff specifically. But as for continental philosophy in general, based on listening to others’ perspectives, the charitable view is that it is much more reliant on antecedent knowledge of the history of philosophy and on the technical vocabulary of whatever type of continental philosophy you happen to be talking about (“continental”, as I’m sure you know, is more a sociological and stylistic category and less a substantive one). You may say that is intentionally obscurantist and cliquish, and maybe you’re right. But, to steelman them, I think many continentals of various types would say that they are often engaged in more ambitious and radical projects than analytic philosophers, and so they need specialized and vaguer language because they simply cannot express their ideas in plain language. The difficulty for non-experts to understand it is an unfortunate but necessary part of their projects.

      But, that’s me trying to be charitable, and your uncharitable interpretation may be correct.

      ETA: I hope this is clear, it’s late and I’m tired.

      • Douglas Knight says:

        That is just not compatible with Onyomi’s claimed experience of extracting simple claims expressible in simple language from these works. It seems to me that there are only two possibilities: either O is entirely mistaken and the claim is not in the work at all; or the claim is in the work and the author should have spelled it out, perhaps as a first approximation, or even as a pitfall. So either Onyomi is incompetent, the author is incompetent, or the author is intentionally obscure.

    • Harald K says:

      I think the standard explanation (read: Popper’s, which I loosely recount) for bad humanistic writing is that Kant’s big works were accidentally verbose because he didn’t have time to be succinct. Then Hegel took advantage of the impenetrability of the critiques, to write deliberately verbose and impenetrable works, convincing everyone that this was what “critique” and “dialectic” was all about, and that he was obviously the natural heir to Kant, and if you can’t understand why you’re obviously not a real intellectual.

      Seems the attitude was that if the hoi polloi can understand what you’re saying, you’re suited for the priesthood, not for being an academic philosopher.

      Marx was of course indebted to Hegel, getting the whole idea of inevitable historical progress from him. So it’s no surprise he was inscrutable, too. Hegel isn’t the only path through which inscrutability became a virtue, but he’s probably the relevant one for the things you mention.

      • Peter says:

        I seem to recall there being something about Hegel being a crypto-atheist too, who (felt he) had to hide his views by being obscure.

        • Protagoras says:

          The crypto-atheist explanation probably already applies to Kant. Kant did in fact encounter minor difficulties with official censorship, and likely would have encountered less minor difficulties if he had been less obscure.

          • Harald K says:

            If Kant was a crypto-atheist, the second necessary postulate would have been a colossal lie… and then Kant’s infamous views on lying would also have been a colossal lie… and at that point, you’ve got to wonder if Kant meant anything he said at all. I think we can safely say he was a theist.

          • Harald K says:

            Or wait, maybe you only mean that he was obscure in order to avoid censorship? That would be another matter…

          • Protagoras says:

            I am more committed to the “Kant was obscure to avoid censorship,” but I also believe, less confidently, that Kant was an atheist. However, I wouldn’t call the second necessary postulate a “colossal” lie. I merely think “God” is a misleading and basically inaccurate name for what Kant is talking about (and similarly for “immortality” and “freedom.”)

          • Cauê says:

            It’s been years since I read Kant, but I remember reading the Critique of Practical Reason and thinking “you need some pretty religious assumptions for this to even begin to make sense”.

            I especially remember him making a case for the existence of an afterlife, on the basis that, since we clearly aren’t rewarded for virtuous behavior in this life, an afterlife might balance it out.

            You may be using a pretty specific definition of “atheist” here, or my undergrad-era reading may be way off (very possible, actually), but this surprised me too.

          • Protagoras says:

            I’m pretty confident that your undergrad-era reading was way off (certainly mine was). Kant’s moral philosophy is too complicated for a comment; the best brief summary I can give is to say that Christine Korsgaard has convinced me that her interpretation of Kant’s moral philosophy is largely correct. Korsgaard’s interpretation of Kant also lines up with her own views about ethics, which do not make any appeal to religion. Korsgaard is a little easier to understand than Kant is, perhaps partly because there are fewer things she needs to be obscure about. So I’d say anyone who actually wants to understand Kant’s ethics should look at a lot of Korsgaard’s work. But it’s not easy; “easier to understand than Kant” is of course not saying much.

        • Lightman says:

          As someone who has read a lot of secondary reading on Hegel (seminar last semester) – the interpretation of Hegel’s religion is a very controversial question; some people read him as an atheist, some as the last great Christian philosopher, some as a sort of in-between. It’s a very complicated question, though God features pretty prominently in Hegel’s works.

    • James says:

      “Very-smart-but-very-confused” is a great phrase. I think it expresses what I was trying to say about Derrida when he came up in the comments to the last links thread.

    • Brock says:

      John Searle has said that he asked Michel Foucault (a colleague of his at Berkeley), “Why the hell do you write so badly?” Foucault replied, “If I wrote as clearly as you do, people in Paris wouldn’t take me seriously.”


      Historically, I think a lot of the bad writing of Marxists (and other philosophers in the Continental traditions) is a stylistic inheritance from Hegel.

      • onyomi says:

        I definitely suspected Hegel had something to do with it, though I think John Searle is probably right about the more direct cause: a culture in Europe (France and Germany especially, it seems) of viewing clear and succinct ideas as simplistic and naive (“not deep”). I guess this culture must have preceded Hegel (and maybe Kant), however, else I doubt he would have become as popular as he did.

        I actually run into this all the time in academia: simple answers of any kind are viewed as prima facie suspect and naive because, you know, the world is a complex place. What many in the liberal arts seem almost to have forgotten is that, all-else-equal, simpler answers are better.

        But simple answers shut down interminable conversations on intractable problems and so are actually undesirable, perhaps in the way that no one in the business of writing diet books wants to hear “eat less, exercise more.”

        American writers, even academic writers, are, in my experience, much better at grasping the value of simplicity where simplicity is possible. I absolutely take seriously Foucault’s and Bourdieu’s reported claims that, in order to be taken seriously in France, they *had* to be at least a little obscure.

        • James says:

          I actually run into this all the time in academia: simple answers of any kind are viewed as prima facie suspect and naive because, you know, the world is a complex place. What many in the liberal arts seem almost to have forgotten is that, all-else-equal, simpler answers are better.

          Steven Pinker:

          I’ll give you an anecdote that might give you the difference between the mindset of the scientist and the humanities scholar. I once went to an interdisciplinary conference with scientists and humanities professors. At the end of a talk exploring a painting, the speaker said: “Well, I hope to have complicated the subject matter in several ways.” I thought, that’s the difference between a scientist and a critic – the scientist would say: “I hope to have simplified the matter in several ways.”

          • Deiseach says:

            But sometimes subjects are more complicated than on a surface reading! I’m pulling this off the top of my head, but take David’s Oath of the Horatii.

            On the surface, it’s a perfect example of the clear, calm, Neo-Classical style in art, put to the service of, and representative of, the Enlightenment and Enlightenment values.

            But what are those values? Are Enlightenment values the valorising of the state over the individual? Because that is what David’s painting seems to be saying: an appeal to the French people (and note the prominent foreground character dressed in the colours of the Tricolore) to bond together as patriots and soldiers in the cause of –

            – what? France? The monarchy? The revolution (liberty, equality, fraternity)? But the Revolution was certainly not clear, calm Neo-Classicism in action!

            And David invents (the liberty of the artist) the scene that he is depicting; he shows the three brothers taking the swords and making the oath on those swords, from and in the hands of their father – which apparently was not the historical version of ‘what really happened’.

            Also, the salute/oath taking of the brothers looks, with our history behind us, uncomfortably like the Fascist salute – and apparently this is not a coincidence. Is Fascism the heir of the Enlightenment? It seems to have made a claim to be so!

            And that’s not even getting into the action of the event depicted, and whether we consider the Horatii admirable and virtuous, or robotic in their denial of the claims of blood and kinship; the emotional suffering of the women and how it is downplayed in favour of the heroism of the men, and much more!

          • onyomi says:


            I don’t think anyone is arguing for *over*simplification, just that, when possible, simpler is better.

            Though you do give a good example of what academics might call “unpacking.” In general, I prefer “unpacking” to “complicating” as a term for what we do, though there is a lot of overlap between these two in practice.

            I think I prefer “unpacking” because it implies that people have been chunking something which needs to be, at least temporarily, de-chunked and re-examined. “Complicating” on the other hand, seems to imply that simple answers are prima facie undesirable.

        • Harald K says:

          But simple answers shut down interminable conversations on intractable problems, and so are actually undesirable, perhaps in the way that no one in the business of writing diet books wants to hear “eat less, exercise more.”

          The professionalization of philosophy is also something Popper touches on as a problem, and an explanation of how Hegel could become so popular. I want to call this the “Vroomfondel and Majikthise” problem.

      • Lightman says:

        I strongly suspect that Searle is making up the Foucault quote. I also think that Foucault is not particularly difficult to follow, at least for a philosopher (not just for a continental philosopher, but for philosophers in general). Derrida is the real obscurantist, I think.

        • Irenist says:

          I’m inclined to agree, although in fairness to Derrida, his interests seem to be much more abstract and hermeneutic, whereas Foucault will often be describing actual, like, stuff that happened (say, the torture of the regicide with which he opens Discipline & Punish), which is a lot easier to describe concretely than whatever the heck Grammatology is.

          • Lightman says:

            I’ve actually only recently started reading Derrida (for class) and I do agree that he’s nowhere near as bad as people make him out to be, if you read him in context (i.e., if you’ve read Heidegger, Husserl, Nietzsche – which is of course a tall order for most people). “Differance” is still a very interesting essay, if difficult to understand. I feel like I can extract claims from it (though that kind of defeats his project, in a certain way – he wants us to think beyond the idea of propositional logic) but it’s hard for me to evaluate them. His texts are made less penetrable because he employs the “hermeneutic circle” technique that Heidegger pioneered (the idea that a text can only be interpreted as a whole; that reading should be circular, not linear; this causes difficulties on a first read, though makes rereading somewhat rewarding). He also has a tendency to undertake the procedures he’s describing while describing them.

            Derrida’s sycophants and followers are generally more obscurantist than he was.

        • onyomi says:

          Foucault is really not that bad at all. He belabors points, but his writing is reasonably clear, and he is usually belaboring by providing copious historical detail, which is more justifiable.

          Derrida is pretty bad, though he has nothing on Lacan, who clearly liked the cult-of-personality aspect, which strengthens my impression that “seeming deep” was very important to him (in his defense, he explicitly states that he tries to force the reader to work). That’s too bad, because I think he actually has some interesting things to say. I feel similarly about Deleuze.

          Even worse are Paul De Man, Guy Debord, and Adorno. As far as I can tell, Adorno wrote tens of thousands of words just to argue that advertising creates artificial needs. And at no point in all those words did he clearly address basic objections like, “why do many advertising campaigns fail?”, so far as I can recall. I’m sure I’m not doing justice to Adorno, but that’s the whole problem with him and many other theorists like him: it’s not that he has no interesting ideas, but that to do justice to him is entirely too much work for too little payoff in terms of interesting ideas.

          Not sure whether I should be glad that Marxists didn’t write more clearly because if they had, maybe their ideas would have gained more widespread currency, or unhappy that they didn’t write more clearly, because if they had the bad ideas among their ideas would have been debunked more quickly and thoroughly, and their successors wouldn’t dominate the liberal arts wing of academia. I lean toward wishing they had written clearly.

    • ddreytes says:

      My sense is that it’s primarily a stylistic & aesthetic difference that stems from the two coming out of different cultures and having different influences. I think continental thought has valued literary style more, which is not necessarily a compatible end with clarity; for instance, with Nietzsche. I think continental thinkers are much more likely to think that the point of philosophy is not to state truths but to inculcate modes of thought in the reader – see, again, Nietzsche – and that has similar effects.

      And sometimes I think it’s just a matter of the distinct personal tastes – I’ve often suspected, reading Heidegger, that he wrote the way he did mostly because he was just an asshole.

      But again, I think this is primarily an aesthetic difference.

      • Douglas Knight says:

        No, no, no. The question is why so few continental philosophers write like Nietzsche.

        • Protagoras says:

          Because it takes a very rare degree of talent to write like Nietzsche?

        • ddreytes says:

          Writing like Nietzsche (particularly writing as well as Nietzsche) is really, really hard.

        • Lightman says:

          Luce Irigaray wrote a feminist critique of Thus Spoke Zarathustra in the style of Thus Spoke Zarathustra. It’s interesting. “Marine Lover,” if you want to check it out.

    • Irenist says:

      I mentioned the antique practice of pedagogical esotericism upthread in relation to a question about the Bible. I don’t think it’s directly applicable to continental philosophy, but the subcultural urge to value the gnomic over the plainspoken might be coming from a similar place to the ancient preference. In ancient times, the main issue was that you weren’t going to own many books, so they’d better repay rereading. In modern continental academia, I imagine the denser stuff is more likely to be a fruitful field for doctoral candidates looking for texts to analyze, leading to more citations, etc. Noah Smith has done some blogging about the idea that simpler models might make for better macroeconomics than the DSGE models that are presently prestigious in the field. Maybe the attraction of continental philosophers to convoluted prose springs from similar incentives to those that attract macroeconomists to more prestigious, but sometimes computationally unwieldy, DSGE models.

    • Zykrom says:

      Is it possible that writing this way is actually a really good way to get a hardcore “fanbase?”

      Basically, you would want to write in a style that’s exactly obscure enough that most of your readers wouldn’t “get it” but a few would do so easily.

      So then when people start evaluating your work, the ‘chosen’ will be struck by how much the dissenters seem to be misunderstanding everything, and come to the conclusion that only idiots disagree with you.

      Even if at first you aren’t too impressed, seeing enough bad arguments against something will eventually make you more favorably disposed towards it.

      Also, it has the (more likely) benefit that people who know about your ideas will be the ones who either like your writing style or were invested enough to dig the point out of your awful prose.

      It’s possible that Mencius Moldbug benefited from this dynamic to some extent.

  62. LTP says:

    I wonder if there’s a name for this, or if at least some people here can relate.

    I sometimes call myself apolitical because I don’t have many *positive* political beliefs or convictions. Oh sure, if a ballot measure comes up I know how I’ll vote, but I don’t feel *that* strongly about it, usually.

    And yet, I do have very many and strong *negative* political beliefs. Basically, I’m anti-left, particularly of the radical (of one degree or another) sort. There is something that is just so repellent about that point of view. I could engage with a neo-monarchist and not feel much more than mild annoyance on a bad day, and yet people like Ta-Nehisi Coates or the Jacobin magazine folks, or most forms of internet social justice, for example, just make my blood boil even when I try to be charitable (ETA: sometimes even when I have leanings in their direction!). I would almost call it triggering; occasionally I’ll have literal panic attacks after reading this stuff. This is especially weird because I’m very blue tribe in my personal behavior and social values (“social” in the sense of how I treat people in and what I value in my personal relationships and communities, not in the sense of “society”).

    I’ve actually known people who were the inverse of me: anti-right but not really pro-anything. I’ve never met people like me offline, though.

    I usually just refrain from expressing my anti-left-ness because I fear people will assume I’m very red tribe in my non-political beliefs and behaviors when I’m not.

    Can anyone relate to this?

    • Nita says:

      I can anti-relate: I see radical leftists as misguided, yet sympathetic, but tend to get annoyed by neoreactionaries and such.

      It might be because I associate the former with “Justice Is Very Important”, and the latter with “Crush the Weak and the Weird for the Greater Good!” (nrx) or “I Do What I Want” (libertarians).

      Or it might be because I’ve never talked to a radical leftist in real life, while nationalists, traditionalists and libertarians are easy to find, so they seem like a more credible threat.

      • Wrong Species says:

        You have never met a radical leftist in real life but have met a self-proclaimed white nationalist? I have a very hard time believing that.

        • Nita says:

          I mean ordinary, ethnic nationalists, not “white nationalists” (they probably do believe in the superiority of white people as well, but their primary loyalty is to “their” people, specifically).

          I’ve also met an outspoken racist, but he was more anti-black than pro-white — you know, complaining that there are too many “monkeys” in Brussels and such (it was a Russian dude working for a big pharma company, so he went on a lot of business trips).

        • Whatever happened to Anonymous says:

          Missing, possibly relevant information:

          I think Nita is from Eastern Europe.

    • Do your panic attacks include specific scenarios?

    • Alexander Stanislaw says:

      This isn’t unique. A quick perusal of Friendly Atheist, the politics section of Fox News or many other partisan sources (I daresay most), will reveal that much of politics involves bashing the weakest arguments of your enemies or pointing them out at their worst. Making positive policy suggestions is rare because it is difficult.

    • onyomi says:

      I can relate to having more in common culturally and temperamentally with the blue tribe and yet feeling my blood boil when I read leftist tracts. That said, I have more distinct political views than it sounds like you have.

      The problem with the social justice warriors is obvious, but I’m curious, can you point to a specific quality or qualities in the writing of less willfully inflammatory leftists which trigger you (I can think of many examples in my own case, but curious to hear what it is you think is bothering you, specifically)?

      • LTP says:

        Hrm, I feel like it is a lot of things.

        Part of it certainly is being born and raised in the most blue-tribe place in my region of the US, where I’ve seen a lot of the mindless tribalism, the inconsistencies, the smugness, the social signalling reasons for being leftist, etc. among blue-tribe people in the real world. That makes a lot of leftist arguments seem fake and disingenuous to me, though I totally recognize that this is unfair and uncharitable.

        I think the biggest part of it is that there’s a utopianism underlying most leftism, even if only implicitly, which implies very unreasonable moral demands and judgments on society and individuals. Often I think the implication is that if you’re privileged in your society (as I am, being a white male American with upper middle class parents) then unless you devote your life to leftist activism you’re a bad person. When a leftist says society is “unjust”, they are saying it is unjust compared to a utopian society. Note that this isn’t just SJWs who say this, but marxists, socialists, environmentalists, and even the non-SJW social justice people as well. No matter what form of leftism you’re talking about, there is underlying it a theory of an all-pervasive system that is evil and oppressive. This means that pretty much everything I value, and even my own existence, is morally corrupt. And furthermore, because I’m in a privileged position, I can’t even argue effectively against the existence of the system, or even merely against the kind of system the leftist proposes, because either I’m obviously opposed for selfish reasons or I can’t see my invisible privilege. The upshot is that I cannot be a good person in my society unless I devote my life to toppling the system in some way. Furthermore, my moral intuitions (I have stronger moral intuitions than political ones) are just so different from those that lead people down that line of reasoning, so I don’t even think it’s true.

        I guess, in a way, I find leftism very dehumanizing and reductive (in the bad sense of the word) about human life, morality, and politics, and how these things relate to my life as a privileged (by their definitions) person (though I also think it can be dehumanizing about non-privileged people, too, especially those that are inconvenient for their ideology); it tells me my life is inherently morally corrupt; and, I can’t even defend myself against them (in their eyes) or have a discussion because their ideology has defenses against that built into it.

        ETA: Plus, I have a very strong negative reaction to many of the political tactics that happen to be much more common on the left, and even valorized by the left: disruptive protests, riots, public shaming, “education” that doesn’t involve any nuance or non-strawman dissenting views, etc.

        Finally, exacerbating all this, because some form of leftism is very prominent in my local blue-tribe community, it’s not some theoretical thing I only run into online. I can’t simply turn off my computer and be secure that I’ll very likely never have to reckon with it in my day-to-day life and personal relationships (unlike, say, radical feminism or neoreaction).

        Note, that in less inflammatory leftism much of this is at least partially implicit, or couched in nicer language, but it is still definitely there.

        I feel like that was rambling, I’ve never tried to put what is behind those emotions into words. I hope that gives you an idea of what’s behind them. What about your case? I’m curious about that.

        • onyomi says:

          I feel pretty similarly to you, though I did grow up in a family of moderate-ish Republicans in the South, and my grandfather was really into Ayn Rand (though I didn’t really know much about that till after I had already become a libertarian through other channels), so I don’t think there was as much of a “rebelling against” quality for me, though I have certainly been exposed to massive amounts of leftist thinking since living in New England for several years going to grad school in the humanities.

          For me, it’s just that, ever since I was first exposed to Libertarianism (Harry Browne), it has seemed blindingly, obviously correct to me. Not only on ethical grounds, but it seemed obvious to me that all the empirical evidence pointed to more libertarian societies being more successful. The fact that people in places like Detroit could still be voting for Democrats just beggars belief to me.

          Moreover, even before that, it always seemed intuitively obvious to me that rights of freedom of association should be pretty much absolute, and that it was outrageous to think that just because some people voted therefore you couldn’t enter into a voluntary agreement with someone to (pay for sex, buy drugs, pay less than the minimum wage, hire a non-union worker…).

          I think deep down what irks me especially is being treated like a child. I didn’t like being treated like a child even when I was a child, and I like it even less now. I once thought to summarize it: “The GOP wants to be your dad, the democrats want to be your mom, and the libertarians want to treat you like an adult.”

          Added to this is a kind of coy obscurantism I notice is very common among leftist intellectuals (see my comments on Adorno et al elsewhere in this same open thread) that really pisses me off. It’s like, if your ideas are so great then express them clearly and unequivocally.

          I am also disgusted by the more mainstream left’s blatant appeal to emotion and ignorance on the part of the general public: if you are against the affordable care act that must mean you want care to be unaffordable, etc.

          I also can’t stand identity politics and the sort of “victim olympics” that seems to have arisen.

          Mainstream GOPers sometimes make my blood boil, mostly when they are calling people “isolationists” for not wanting to bomb the hell out of everywhere, or when they are elevating what I consider to be minor social issues to the level that they determine the outcome of an election (though leftwing bullying of “intolerant” right wingers has recently reached the level where I feel more sympathy for the people who just didn’t want to make a gay wedding cake).

          And there’s the willingness to overlook repugnant means (the potential for violence and/or imprisonment implied by every law) in order to achieve desirable ends, as well as the hubris that says they always know what is best for everyone–again, treating people like children. Am reminded of this comic:


          Replace “eugenics” with all utopian ideas about what society “should” look like.

          • Jesse M. says:

            “Not only on ethical grounds, but it seemed obvious to me that all the empirical evidence pointed to more libertarian societies being more successful.”

            Leaving all ethical questions like “should people have the right to any voluntary associations they want?” aside, I think there is pretty decent empirical evidence that in terms of quality-of-life issues like health and crime and social mobility, people tend to be better off in states that use more progressive tax systems to reduce inequality, see for example the various charts towards the bottom of this article: http://www.nybooks.com/articles/archives/2010/apr/29/ill-fares-the-land/

          • Alexander Stanislaw says:

            it always seemed intuitively obvious to me that rights of freedom of association should be pretty much absolute

            Earnest question: do you think that racial segregation should be legal? For example, that private businesses should have the right to only allow people of a certain race to, say, rent an apartment?

            If not how do you square that with absolute freedom of association?

          • onyomi says:

            Yes, I think private individuals should be able to choose to associate or not associate with anyone on the basis of any criteria whatsoever. That IS freedom of association. Telling someone they can’t own a business that only sells to people with red hair named Paul is at odds with freedom of association.

            That said, I am, of course, against any *laws* enforcing racial or any other type of segregation as that would also be at odds with freedom of association. The law should neither say whom you must associate with nor whom you may not associate with.

          • Jesse M. says:

            “it was outrageous to think that just because some people voted therefore you couldn’t enter into a voluntary agreement with someone to (pay for sex, buy drugs, pay less than the minimum wage, hire a non-union worker…).”

            What would you say about a voluntary agreement to take a job under the condition that you vote for a certain politician? And what if all the good jobs available have such requirements, so the choice is basically one between accepting such an agreement or being poor?

          • Jaskologist says:


            I believe the term for that is “welfare.”

          • onyomi says:

            @Jesse M, I don’t see why I would want to adjust my commitment to freedom of association under those circumstances, though I also don’t see how those circumstances could ever obtain in the absence of a law of some kind enforcing them.

          • Cauê says:

            The thing with “what if all would-be options had the same unsavory restriction” is that, if that’s true, then this is a society that thinks those restrictions are acceptable, and would probably create laws that agree with this.

            If, however, this society has enough people who disagree with the restriction that it would be possible for laws to be made against it, then it also has enough people who will spontaneously offer options without the restriction.

            (the picture can be complicated in some ways, such as federal laws against local customs, but in general this kind of hypothetical scenario feels like cheating)

          • Jesse M. says:


            “I also don’t see how those circumstances could ever obtain in the absence of a law of some kind enforcing them”

            Do you mean you think this would only happen given a law saying that all companies must require employees to agree to vote for a particular candidate? If so I disagree, it doesn’t seem particularly implausible that most major companies would choose to adopt such hiring requirements in the absence of any laws for or against it, since there’d be very little downside to doing so (assuming a culture in which there are large numbers of qualified applicants that don’t object to such contracts) but a potential major upside in being able to tilt elections in favor of politicians who would do things to those corporations’ benefit, like spending more tax money on corporate welfare, striking down laws against pollution, etc.

            For another example, would you be fine with it being legal for people to voluntary sign away their freedoms to become slaves, even if the people who did this were typically doing so out of desperation? (say, because it was in the midst of a depression and they were facing starvation)

          • onyomi says:

            Firstly, I’m not really sure why the issue of voting, specifically, is relevant. Couldn’t it just be any weird condition on employment, like, say, “if you work for us you must always wear a funny hat in public”?

            Second, in the absence of a law enforcing uniformity (that is, foreclosing the option of doing otherwise), there will *always* be a downside to putting weird conditions on employment–namely that weird conditions are a disutility imposed on the employee. All else being equal, an employer is going to have to pay an employee at least a little bit more to do job x plus accept the weird condition than just to do job x.

            In the case of the vote buying scenario, the question is, is the employee’s single vote worth as much as the additional money the employer will need to pay to get him/her to accept this condition? In most cases I’d imagine the answer is no, though I don’t see it as a huge problem if it isn’t. Moreover, I’m not even sure this is illegal right now in the US. Is it? Regardless, even if a particular type of free association threw a big wrench in democracy, which I don’t think this would, I would still not favor curtailing free association for that reason, because I’m not in favor of democracy (because a majority of people voting to force the minority to do something doesn’t make it right).

            As for the “can you sell yourself into slavery” question, I would say that, in theory, yes you can, just as I’d be okay with you selling your organs, though I think most decent societies, with states or without, would not uphold more extreme versions of such contracts (say, you sold yourself into indefinite servitude as a desperate teenager and now you’re 40 but your “master” won’t allow you the possibility of buying back your freedom), due to, perhaps, failure of consideration.

          • I don’t think the vote buying case is a simple issue of freedom of contract. The votes are being cast somewhere. Assume, for simplicity, they are electing a politician, and assume we regard the polity in question as a morally legitimate actor (if not, switch to a vote used to decide some decision by a private actor entitled to make it—say what subject Scott will post on next week).

            The polity sets its voting rules. One rule might be “you are not allowed to vote in this election unless you are voting for your own preferred candidate—purchased votes are not allowed.” Someone who sells his vote and still casts it is violating the terms on which he is associating with the polity.

            Again, if you want to make the argument cleaner by avoiding the question of whether a government has the right to exist, let alone to make voting rules, suppose Scott announces that he will decide which of two topics to post on next on the basis of a vote, but that the condition for voting is that your vote has not been purchased.

          • onyomi says:

            This is a good point. This problem can be circumvented fairly easily by changing the terms on which the voting (private or public) is allowed, rather than the allowable terms of private contracts.

            Also, given that all voting in US elections is secret, I’m not sure how a firm could verify that an employee was voting for the right person anyway. That is, the mechanics of voting in the US seem already to largely preclude such a problem.

          • Harald K says:

            “…a society that thinks those restrictions are acceptable, and would probably create laws that agree with this.

            If, however, this society has enough people who disagree with the restriction that it would be possible for laws to be made against it, then it also has enough people who will spontaneously offer options without the restriction.”

            This is something Libertarians sadly do often: entirely discount the costs and difficulties of coordination, and assume that if society has found a way of coordinating (e.g. laws against discrimination), then coordination must have been trivial to achieve in other ways.

            I’ll tell you a fable. There once was a libertarian economist who had the great idea of issuing bonds for outcomes. “I can decide what should be done. Then I leave to the market to decide how to best achieve it!”, he said.

            He wanted a ditch dug in a field. To this purpose, he auctioned off bonds saying: “I will pay $100 to the holder of this bond if, at this date, an adequate ditch has been dug on my property.” With some legalese, of course, to ensure he got a good ditch.

            Some young men heard of this, and they thought “great! we can dig that ditch no problem!”. Together, they managed to buy all the bonds at a low price, and they set off to town to buy shovels. But then they heard the economist shouting at them from the distance: “No no, you silly people, what are you wasting money on shovels for? The bonds will ensure that the ditch will get dug! You don’t need them!”

          • Jaskologist says:

            What would you say about a voluntary agreement to take a job under the condition that you vote for a certain politician?

            Isn’t it precisely the point of the Eich affair et al. to condition employment on voting the correct way? What you call a nightmare scenario, I call social justice.

          • James Picone says:

            “To work here you must vote for candidate X” is qualitatively different to “To work here you must not support policy Y”, although they’re both bad.

          • Cauê says:

            Harald, I don’t see how your post is an answer to mine… If a [democratic] society has laws against discrimination, then it has many people who dislike discrimination. Simply by conducting their business as usual, these people will provide non-discriminatory options on the market, no coordination required.

            But insert general disclaimer here, as there are enough potential complications that I don’t feel like listing them.

          • Cerebral Paul Z. says:

            Following up on what Caue said: the Jim Crow laws can be viewed as the “solution” to the massive coordination problems which kept private discrimination in the market from providing as much segregation as the white majority of the time wanted. For example, the specific law at issue in Plessy v Ferguson was passed because the railroads balked at the expense of adding separate coaches for blacks; the railroad on which Plessy took his fateful ride cooperated with opponents of the law to provide a test case that they hoped would end in the law being struck down.

          • Cerebral Paul Z. says:

            On second thought, I may have been hasty calling it a coordination problem; more likely, the problem was simply a gap between the amount of segregation whites were willing to vote for and the amount they were willing to pay for.

          • “Also, given that all voting in US elections is secret”

            No longer the case, now that absentee voting is commonly an option.

          • Jesse M. says:

            “Firstly, I’m not really sure why the issue of voting, specifically, is relevant.”

            It’s specifically relevant because if my scenario would have any significant chance of happening in the type of society you outline, that would suggest that this type of society is inherently unstable, because it has a good chance of naturally turning into an oligarchy where the political system is controlled by large corporations, and there’s no reason to expect they will preserve all the libertarian freedoms of the original society (for example, they might plausibly think it was in their interests to outlaw unions, or to persecute people who criticize their government).

            “Second, in the absence of a law enforcing uniformity (that is, foreclosing the option of doing otherwise), there will *always* be a downside of putting weird conditions on employment–namely that weird conditions are a disutility imposed on the employee. All things equal, an employer is going to have to pay an employee at least a little bit more to do job x+accept weird condition than just to do job x.”

            This would only work if the prospective employee is getting offers from multiple companies that want to hire them to do job x, and at least some aren’t including the same condition in the offer. Do you think this would still be true for most employees in the midst of an economic recession or depression, for example? And what if all the biggest hirers in a given job market do have voting conditions, so that even if other companies without such conditions exist, they don’t have enough need or money to hire a significant fraction of the skilled people who want work in this field? Even without either of these conditions, there may be other reasons job applicants wouldn’t feel they had any perfectly equivalent alternative options, like if other job possibilities would require moving and there’s only one major company looking for people to do a given type of job in their home town (a factory town, for example), or cases where one company just seems like it’d be more interesting and rewarding to work at, like being a programmer at Apple or Google vs. some less exciting software job (or one with less prestige or opportunity for advancement) that pays about the same.

            “Moreover, I’m not even sure this is illegal right now in the US. Is it?”

            Even if it wouldn’t be illegal to require people to promise to vote for a given candidate, it’s illegal to follow someone into a voting booth or record yourself voting. But in the type of society you envision, this type of law would interfere with a person’s ability to form a voluntary contract with the provision “I promise to record myself voting on my cell phone and give the footage to you”, so presumably you would want to strike down such laws.

            “I’m not in favor of democracy (because a majority of people voting to force the minority to do something doesn’t make it right).”

            Constitutional democracy doesn’t allow a simple majority to do anything they like, there are rights built into the constitution which are much more difficult to change. In any case, what system do you prefer to determine what is “right”? If it’s some type of monarchy or dictatorship, then when the leadership changes hands, what if the new leader doesn’t believe that all mutually-agreed contracts should be legal?

            “As for the “can you sell yourself into slavery” question, I would say that, in theory, yes you can, just as I’d be okay with you selling your organs, though I think most decent societies, with states or without, would not uphold more extreme versions of such contracts (say, you sold yourself into indefinite servitude as a desperate teenager and now you’re 40 but your “master” won’t allow you the possibility of buying back your freedom), due to, perhaps, failure of consideration.”

            With teenagers there’s always the question of whether they have the mental competence to fully understand the consequences of their actions, but what if you sold yourself into slavery as a mentally competent adult, for example one who did it out of desperation during an economic depression when it seemed the only alternative was starvation? In general, do you think people should be able to break contracts with no significant legal penalties, even if the contract itself stipulated a significant penalty?

          • Harald K says:

            “If a [democratic] society has laws against discrimination, then it has many people who dislike discrimination. Simply by conducting their business as usual, these people will provide non-discriminatory options on the market, no coordination required.”

            Wrong, because there may be a collective action problem here. See, there can be profitable reasons to discriminate.

            Say that you ban all gypsies from your shop, because gypsies steal, you say. If you do this, you are of course a racist asshole, as it’s morally wrong to judge an entire group like that. It’s a great injustice to those gypsies who don’t steal, no matter how few there may be.

            But it is quite possible that gypsies on average do steal more than others. It’s also possible that there is a significant minority of the population (larger than the gypsies) who hate gypsies, and who refuse to go into the shop if they see a gypsy there. It’s quite possible for a solid majority to be opposed to racism, racial prejudice, collective punishment of racial groups, and still for it to be profitable to be a racist. You as a racist shop owner may get a competitive advantage over non-assholes.

            There are costs to being an asshole too. Thanks to shop owners like you, now the gypsies are marginalized and even more resentful and angry at the majority population, and they become even more inclined to steal and act antisocially (heh) on average. But that’s a cost carried by the whole community. The profit is private to you.

            There are lots of times in the world that it’s profitable to break ranks and be unjust. That we have succeeded in coordinating to stop that, in form of laws, does not mean that we could have easily done it in any other way.

          • Cauê says:

            I must confess that, when I put in the disclaimer about potential complications, “what if a minority is in fact so much more likely to steal that you’d lose more money from theft than you’d gain from doing business with them” was not something I had in mind. Unrealistic.

            More seriously, there’s the “other customers may avoid a store that serves gypsies” thing. But then you have, what, a bunch of gypsy customers looking for someone to take their money, and nobody will? No, of course many people will, especially in this society that dislikes discrimination so much that it can democratically pass a law against it.

            (I started this arguing only against Jesse’s “what if all the jobs available have such requirements” example – the “all” part is what I find very unlikely)

          • Harald K says:

            You simply refuse to believe that it can be profitable to be unjust if a majority is in favor of justice. You simply deny the possibility of coordination problems. That’s exactly the libertarian attitude that I complained about.

            It is not as unrealistic as you suggest that there may be some demographic it’s profitable to exclude. They may have little money to spend, and shops may have tight margins.

            If you think this is impossible for shops and shoplifting, consider banks and insurance. There, the costs associated with information are more out in the open. If it were legal to discriminate in credit access or insurance premiums on the basis of race, are you so sure there would be no way to make money on it? Are you so sure it wouldn’t have an impact on the affected groups?

            Even if we can pass laws against discrimination, it’s not certain there’s a principled majority against discrimination. It’s possible that consistent rejection of discrimination is merely something of a compromise, everybody’s second choice (classical antidemocrats used to argue this). If so, there may well be other stable equilibria. Reaching the “good” equilibrium may require coordination and credible commitments. You can’t assume it would go just fine without laws.

          • Jiro says:

            Here’s a simpler version. Keep the shoplifting example. However, the shop owners don’t completely keep gypsies out of their shops. Rather, they figure out that because gypsies have a higher rate of shoplifting, selling to gypsies is only profitable if they charge gypsy customers extra, to compensate for the increased risk of shoplifting.

            That doesn’t seem to allow the “of course, people will take the gypsies’ money” objection, because they are, indeed, willing to take gypsies’ money. Furthermore, any store that tries to take gypsies’ money *without* charging extra will lose money (because of the increased risk of shoplifting), so stores cannot profit by failing to discriminate.

            Is this a desirable situation?

          • InferentialDistance says:

            “Furthermore, any store who tries to take gypsies’ money *without* charging extra will lose money (because of the increased risk of shoplifting)”

            Maybe we should investigate ways to get gypsies to steal less rather than legislate shop-owners into being vulnerable?

          • Cauê says:

            Harald, we’re not saying incompatible things.

            Again, I never “denied the possibility of coordination problems”. I’m saying that some effects do not require coordination, which is completely different.

            Also, you seem to be arguing that it’s possible for some people to have economic incentives to discriminate. OK! Meanwhile, I’m arguing that it’s at least very unlikely for all people to simultaneously have such incentives.

            Perhaps you’re saying it takes coordination for there to be zero discrimination? Yes, it does. But it’s not required for the availability of many nondiscriminatory options on the market, alongside would-be options that are denied to some people.

            Jiro, this only happens if the expected losses from gypsy shoplifting are actually higher than the expected profits from gypsy customers. This doesn’t look realistic to me. Is this actually the case for any existing demographic?

          • Luke Somers says:

            For some stores, like, say, jewelry? Yeah, I can see profiling customers as being more profitable than letting everyone in. If P(sale) is low enough, it doesn’t take a lot of P(robbery) to drive profits negative.
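To put rough numbers on that (every figure below is invented purely for illustration, not real retail data), the break-even robbery rate is just P(sale) × margin ÷ loss-per-incident:

```python
# Back-of-envelope check of the claim above; all numbers here are
# hypothetical, chosen only to illustrate the expected-value logic.
p_sale = 0.10       # chance a given visitor actually buys something
margin = 500.0      # profit on one sale (jewelry-style markup)
loss = 5000.0       # cost of one robbery/theft incident

expected_gain = p_sale * margin             # expected profit per visitor
break_even_p_robbery = expected_gain / loss  # robbery rate that zeroes profit

print(expected_gain)         # → 50.0
print(break_even_p_robbery)  # → 0.01
```

So with these made-up numbers, a mere 1% chance of an incident per visitor already wipes out the expected profit, which is the sense in which a low P(sale) makes even a small P(robbery) decisive.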

          • Cauê says:

            Would you say that, in the case of jewelry stores, there are examples of actual, real world demographics that would be more profitable to keep out as a matter of policy, specifically on account of the expected losses from increased probability of shoplifting exceeding the expected gains from ordinary sales? What I meant by “unrealistic” is something like “I don’t think this actually happens in the real world”.

        • Nita says:

          “I think the biggest part of it is that there’s a utopianism underlying most leftism, even if only implicitly, which implies very unreasonable moral demands and judgments on society and individuals.”

          Well, that’s interesting. How do you feel about the utopianism of LW-style rationalists? Their goals (Friendly super-AI, immortality) are even more ambitious than leftists’.

          Also, Eliezer seems to believe that anyone who isn’t smart enough to work for MIRI should get the highest-paying job they can and donate as much as possible to FAI research. I suppose Scott would add GiveWell’s recommended charities and intelligence enhancement research to the list of acceptable donation targets. Are you bothered by them implicitly judging you ethically inferior?

          • Harald K says:

            I am really bothered by the implicit judgment by the LW crowd that right and wrong depend only on outcomes, that smart people are better at predicting outcomes, and therefore you should do as Eliezer Yudkowsky says even if it sounds crazy to your inferior brain.

            Granted, they don’t come out and directly say that very often. But it seems to me it follows from the things they believe. For all I know they just avoid saying it because they worry about the reactions it would provoke. In that case, they are really just smarter neoreactionaries.

          • FacelessCraven says:

            @Nita – “Well, that’s interesting. How do you feel about the utopianism of LW-style rationalists? Their goals (Friendly super-AI, immortality) are even more ambitious than leftists’.”

            Immortality and friendly AI are potential discrete technologies. For now they don’t exist, and those of us not actively working on making them exist can go on about our lives assuming they never will exist right up until they do. Compare to the Revolution, for which we need to more or less unmake our entire civilization so that something better can then rise from the ashes… eventually… at some unspecified date in the future.

          • Nita says:

            @ FacelessCraven

            There’s nothing “discrete” about FAI — the idea is that it will unmake our entire civilization / subsume and transform it in a completely unpredictable way (hence the term “singularity”). And you must do what you can to help create it, or else UFAI will destroy civilization instead, and it will be all your fault, you selfish fool.

            I see two differences between left-utopia and LW-utopia:

            1) left-utopia proposes that better versions of existing tools (human minds and labor, institutions, memes, machines and software) will be sufficient, while LW-utopia requires hypothetical new tools (superintelligence);

            2) although both left-utopians and LW-utopians consider the current situation terrible, only LW-utopians argue that failing to fulfill their goals means certain doom (“existential risk”).

          • Bugmaster says:

            @Harald K:
            I think there’s some merit to the LW philosophy, just not as much as some people might think.

            > that right and wrong depend only on outcomes

            I would provisionally agree with this. If you have the best intentions in mind, and you really want to help humanity, and you believe that the best way to do that is to implement some crazy policy (say, banning left-handed people or something like that), and you did that and it made things worse… then yeah, it doesn’t matter how good your intentions were; you were wrong. On the other hand, people can reasonably disagree on what “helping humanity” entails.

            > that smart people are better at predicting outcomes

            It depends: are they predicting outcomes in their area of expertise? Einstein was pretty smart (or so I’m told), but I bet there are lots of bookies who are better at predicting the outcomes of sports matches than he ever was.

            > and therefore you should do as Eliezer Yudkowsky says even if it sounds crazy to your inferior brain.

            This implies that Eliezer Yudkowsky is super-smart, and possibly even the smartest person ever. Even if we were willing to grant this dubious proposition, it doesn’t really matter. What matters is Eliezer Yudkowsky’s track record regarding the prediction of outcomes, in areas of interest to us. As far as I understand, that record is spotty at best…

          • Nornagest says:

            @Harald — If it makes you feel any better, I’m pretty sure that right and wrong depend only on outcomes, and that smart people are better (all else equal) at predicting outcomes, but it doesn’t remotely follow from that that you should do whatever Eliezer says, particularly if it sounds crazy.

            IQ isn’t magic. In particular, it doesn’t give you domain knowledge without (sometimes large) investments of effort, and even in situations where domain knowledge doesn’t seem to aid prediction much (see for example the articles on the limits of expert judgment that float around LW occasionally), I’m unaware of anything saying that raw IQ does better.

            The rationality program is largely an attempt to escape those problems by way of identifying and cultivating factors that lead to domain-independent good judgment, but the jury’s still out on how good an attempt it is.

          • onyomi says:

            I personally am not against utopianism in the sense of being cautiously optimistic about amazing new technologies, or in terms of looking for better forms of human organization. What I always object to is the “ideas so good we made them mandatory.”

            I also disagree with many LWers’ apparent tendency to embrace an “ends justify the means” sort of consequentialism or utilitarianism. I am not a consequentialist, though I do think consequences matter; they are just not the only thing that matters. Means matter too, and embracing repugnant means to achieve supposedly desirable ends strikes me as almost never long-term beneficial.

          • jaimeastorga2000 says:

            “embracing repugnant means to achieve supposedly desirable ends strikes me as almost never long-term beneficial.”

            I think the standard LW response would be that long-term benefits are consequences, and that if the outside view tells you that using repugnant means for seemingly good short-term benefits tends to end badly, that is a perfectly good reason to avoid using repugnant means even if it seems to you like a good idea at the time. See Eliezer Yudkowsky’s “Ends Don’t Justify Means (Among Humans)” and “Ethical Injunctions.”

          • onyomi says:

            Good point, but I also object to using repugnant means because they are repugnant, not simply because they tend to lead to bad consequences.

          • LTP says:

            “Well, that’s interesting. How do you feel about the utopianism of LW-style rationalists? Their goals (Friendly super-AI, immortality) are even more ambitious than leftists’.”

            Well, I’m one of those SSC readers who isn’t a big fan of LW rationality, Eliezer, or utilitarianism, and the unreasonable demands and utopianism are a big part of it, plus a few other things (I’m not opposed to techno-optimism, as long as it’s grounded). I do find much from that community to be annoying, and in fact I strongly disagree with them on many counts. However, I don’t feel *angry* at them most of the time (there are a few opinions/attitudes they’ve addressed that grind my gears, but only a few). I think I get less angry at the LW folks because it’s a small community whose members I won’t really encounter much in the real world unless I actively look for them.

        • Emile says:

          I feel somewhat like this – a lot of the fundamental philosophy around oppression and privilege and the environment and the status quo just seems wrong to me (as in, not a useful way to think and talk about reality), and, like you, I strongly dislike the shaming, the holier-than-thou preaching, the disruptive protests…

          However I wouldn’t say that form of leftism is prominent in my local community (I live in France), so it is something I’m mostly exposed to online.

          • Peter says:

            From the UK (well, from my parts of Cambridge, which are a weird bubble all of their own)… it seems to be something seeping into the UK, but it does seem to be more of an American thing. There’s a perpetual worry that anything bad from America is going to find a way across the Pond in a decade or two, possibly minus the redeeming features (I think PTerry said something about that in… Eric, was it?) – and faster in the UK than on the continent.

            Me: a slightly old-fashioned centre-left liberal/social democrat who doesn’t like the new rhetoric etc. and worries about becoming “politically homeless” so to speak.

        • “Often I think the implication is that if you’re privileged in your society (as I am, being a white male American with upper middle class parents) then unless you devote your life to leftist activism you’re a bad person. When a leftist says society is ‘unjust’, they are saying it is unjust compared to a utopian society. Note that this isn’t just SJW who say this, but marxists, socialists, environmentalists, and even the non-SJW social justice people as well. No matter what form of leftism you’re talking about, there is underlying it a theory of an all pervasive system that is evil and oppressive. This means that pretty much everything I value, and even my own existence, is morally corrupt.”

          Sure, there are people who think that way, but at least in my world, they are more active and prevalent in online fora than in actual politics.

          I’m more liberal and certainly more politically active than probably almost everyone here. Moreover, I live in, and have won elections in, one of the most politically liberal counties in America. And I don’t think I fit your description at all.

          Yes, there are problems that should be addressed, there are things that are unfair, but that doesn’t mean we should remake society from the ground up.

          I note that the most successful and liveable nations (specifically including the U.S.) have been the ones with mixed economies and pragmatic, incrementalist leadership.

          Admittedly, as a local government official, I have a deep investment in Making Stuff Work, rather than ideological purity or overthrowing the established order.

          In my case, Making Stuff Work includes extending the benefits of marriage to same-sex couples, which is to say, real-world people with real-world problems, that will benefit from a relatively trifling change in the law. I certainly don’t see that as overthrowing anything.

          “I cannot be a good person in my society, is the upshot, unless I devote my life to toppling the system in some way…. Note, that in less inflammatory leftism much of this is at least partially implicit, or couched in nicer language, but it is still definitely there.”

          I deny that I am doing this. Indeed, I reject the concept of political sins of omission. Most people have other things to do with their lives — things that are entirely worthwhile on their own terms.

          • I think this is true; it’s too easy to get caught up in all that goes on online and forget that it doesn’t necessarily represent the wider world.

          • ThirteenthLetter says:

            “In my case, Making Stuff Work includes extending the benefits of marriage to same-sex couples, which is to say, real-world people with real-world problems, that will benefit from a relatively trifling change in the law. I certainly don’t see that as overthrowing anything.”

            I agree with you on that specific point. But unfortunately we now live in a country where the next question has to be: what happens if a baker in your jurisdiction declines to cater a gay wedding for religious reasons? And even if you, personally, defended their right to do so, how long would you stay in office — or be willing or able to hold up against the activist backlash?

    • houseboatonstyx says:

      I present here as Far Left, because I’d like to see Environmentalism, Animal Rights, etc, pushed to extreme measures, and some pretty strong feminist measures (like quotas in legislatures and more female [preferably Lesbian] heads of state).

      But the SJ-type rhetoric does rather scare me. It’s like they’re tearing down rationality itself, just to bully people online — and if Coates has caught it too….!