Open threads at the Open Thread tab every Sunday and Wednesday

OT18: Istanbul, Not Constantinopen

…or Lygos, Byzantium, Miklagard, Nea Roma, Tsargrad, or any of the others

This is the semimonthly open thread. Post about anything you want, ask random questions, whatever. Also:

1. Still busy. Continue to expect less blogging.

2. There were some good counterarguments to some of the links in my last link roundup that I wanted to highlight before anyone gets misled. Luis Coelho points out some problems with the microbiome-sweetener theory. Albatross doesn’t believe that hospitals are raking in the dough. And Scott Sumner says contra Paul Krugman that interstate migration to the south is driven by low taxes, not good weather.

3. Comments of the week are Navin Kumar on poverty traps, Ivvenalis on military suicide, and Janet Johnson on what her work as an educational consultant has taught her about “growth mindset”.

4. I’ve given up on Ozy ever posting any more open threads at their place, but you still can’t discuss race and gender here. No, life isn’t fair.

This entry was posted in Uncategorized. Bookmark the permalink.

608 Responses to OT18: Istanbul, Not Constantinopen

  1. Gudamor says:

    What is a good morning stretch routine?

    • Matthew says:

      I find stretching without doing some warm-up to be both relatively ineffective and unpleasant. I usually jog for a few minutes before doing stretching when I go to the gym, so that the muscles have loosened up enough to accomplish anything. Is there a specific goal you have in mind that would make you stretch first thing in the morning?

    • Nick says:

      I’ve been doing DeFranco’s “Limber 11” lately and have quite enjoyed it, especially on days when I do squats or deads. I do his “Simple 6” occasionally, too, but don’t like it as much as the lower body/posterior chain stuff. I’d recommend giving either or both a try (L11 would likely be better for good mornings, though ;])!

    • Surya Namaskara is intended as a morning practice. It would probably help you wake up.

      You could also do some dynamic stretches paired with a short run. If you’re looking to increase flexibility, this will help over time.

    • Somatics has a good routine. It isn’t stretching, it’s a series of gentle movements which remind your body that you can move easily.

    • William O. B'Livion says:

      Sort of taking off from what Matthew says, stretching “cold” muscles is probably counterproductive.

      Now, I don’t have any problems “waking up” in the morning–if my feet have hit the floor I’m about as all there as I’m going to be for the day.

      If you’re just trying to “wake up”, put on your sweats, pour a cup of whatever you drink in the morning and walk around the block a couple times. Or 3 rounds of 20 jumping jacks/side straddle hops, 20 body weight squats (aka “deep knee bends”–hips below knees, knees tracking over the toes), 10 pushups (if you can). Of course, depending on your general state of health.

      Any suggestions beyond that would really require knowledge of your current physical condition and goals beyond “waking up”.

    • Gudamor says:

      Thank you everyone for the suggestions!

      It has been a week so I thought I would give an update: I have yet to try a single suggestion and I feel bad about it.

      Thanks again!

  2. Sophie says:

    I asked a while ago about whether or not donating blood was effective, and I found a thing, so, in case anyone was curious:

    • grendelkhan says:

      Since this is an open thread, on the topic of blood donation, has everyone here read “The Giving Plague”? It’s faintly relevant here.

      • Irrelevant says:

        Indeed I have, or listened to it, at least. I forget whether that one was covered by Escape Pod or Pseudopod, but a good story either way.

      • Leo says:

        This is a bad story.

        The characterisation, style, etc. aren’t especially good, but one usually forgives that in science fiction if the ideas are impressive enough. But there’s only one big idea that tries to carry the whole story. It’s revealed halfway, which makes me wonder if Brin intended it as a twist, but since it’s obvious from the beginning I’m confused what he’s trying to do.

        He doesn’t do anything particularly interesting with the idea. The narrator talks about the moral questions, but the plot is determined not to explore them, and dodges them with a diabolus ex machina. There are three separate world-changing viruses in the two decades or so the story spans, which would break anyone’s suspension of disbelief, even if it survived the premise that transfusion recipients can give blood, which in the real world they can’t, specifically to avoid epidemics spreading that way.

        Am I missing something here? An implied but non-obvious feature of the story’s world, a hidden plot, an unreliable narrator, that gives those apparent flaws a deeper meaning?

        I also find giving blood very pleasant. It involves going for a walk somewhere you do cool things with needles and get free food, and makes you feel like a good person. But now I have dirty gay blood and can’t donate.

        • DrBeat says:

          Yeah, this appears to be a story by someone who was convinced he was Very Smart with Very Important Things To Say while being neither.

          I say this because every single thing done by every single character, explicit or implied, is done not because it makes sense for them but because the story requires it. And this despite there barely being any characters or story at all. So the only other option is that the characters do what they do in order for the author to make a point, which is the only thing worse than acting just to move the plot.

          Also, when your viral researcher, talking about this virus that alters human behavior, says he has to keep it secret and let it spread… How do you not kill him right there? Pick up your utensil and push it into his eye until your palm is up against his nose. There are certain things a person can do that indicate they have to be destroyed immediately, and that’s at least three of them.

          • Leo says:

            See, now that’s interesting. You’re going “DESTROY HIM in the middle of the ice-cream parlour”, I’m going “Yeah I’m pretty much exactly Les”. That’s the clash of ideas the story should have gone into. Instead it just let rocks fall and everyone die.

          • DrBeat says:

            Here’s my rule, for people who find themselves in a sci-fi or fantasy story unexpectedly:

            If you are talking to someone, and they say they intend to mind control people or otherwise forcibly alter their patterns of thought, and they have a means with which to do so, kill that person immediately.

          • Nornagest says:

            That’s an oddly emphatic rule, considering the extreme unlikelihood of that happening.

          • Bugmaster says:

            > If you are talking to someone, and they say they intend to mind control people or otherwise forcibly alter their patterns of thought, and they have a means with which to do so, kill that person immediately.

            What counts as “forcibly”? What if your interlocutor is just an incredibly persuasive speaker; do you kill him immediately too?

          • DrBeat says:

            Since persuasion is not magic and does not dehumanize the listener into nothing more than an instrument to get what a sociopath wants, no, it is not the same thing.

          • buddha pest says:

            “If you are talking to someone, and they say they intend to mind control people or otherwise forcibly alter their patterns of thought, and they have a means with which to do so, kill that person immediately.”

            Do you have any suggestions on how to negotiate the legal and moral consequences of this homicide?

    • Are there any statistics available on how many people die from an insufficient blood supply? I don’t think I’ve ever heard of that happening in the U.S., and the article doesn’t seem to bring up any such examples.

      A quick Google search turns up this, which suggests that the main consequence of a low blood supply is cancellation of elective surgeries. “Elective” seems like a broad category, so it’s hard to say for sure what sorts of interventions your marginal donation is enabling.

      Content warning for scrupulosity in the remainder of this comment, rot13’d for your convenience:

      Guvf vf rfcrpvnyyl eryrinag gb zr nf V’z ovphevbhf. V’z jbeevrq gung znyr-znyr pbagnpg jvyy or zbenyyl rdhvinyrag gb zheqre vs fbzrbar qvrf sebz n ynpx bs oybbq ninvynovyvgl. Guvf qrfcvgr gur snpg gung V’z abg pheeragyl qbangvat naljnl qhr gb onq rkcrevraprf gur ynfg guerr gvzrf V jrag.

      • Anonymous says:

        Dhvgr senaxyl, n oybbq qbangvba frrzf yrff yvxryl gb fnir/vzcebir n yvsr guna n 20 qbyyne qbangvba gb na rssrpgvir punevgl, fb V jbhyqa’g fgerff fb zhpu nobhg jung vf rssrpgviryl 20 qbyynef.

      • Alexander Stanislaw says:

        Ohg haqre hgvyvgnevnavfz, nyzbfg rirelguvat vf zbenyyl rdhvinyrag gb zheqre. Rirel $3,300 qbyynef lbh qba’g qbangr vf bar zheqre. Vg znxrf zr irel fnq gung lbh jbhyq znxr lbhe yvsr jbefr (V qb pbafvqre vtabevat cneg bs lbhe frkhnyvgl naq univat n cbffvoyl yrff guna shysvyyvat frkhny/ebznagvp yvsr, gb or znxvat lbhe yvsr zhpu jbefr) orpnhfr bs gung. Lbh pbhyq qbangr zber gb pbzcrafngr vs lbh’er gung frevbhf nobhg RN.

      • Tracy W says:

        Medical language often makes a good case for it being deliberately designed to mislead. Elective means non-emergency, not optional. Eg if you’re pregnant and at 20 weeks the obstetrician says “You can’t have this baby vaginally or you’ll die, we’re booking you in for a c-section at 37 weeks”, that’s elective, because you’ve got weeks to prepare for it. Semi-elective is when you’ve got a bit of time, eg it can be done today or tomorrow. Emergency is people calling “Code Blue”, staff running to the operating room type situation. I used this example because there was a minor brouhaha a few years ago in my newspaper selection over the high rate of elective c-sections.

        So if most blood is used in elective surgeries, that implies that most is being used in things like scheduled life-saving c-sections, or mastectomies to treat cancer, or hernias, not that most is being used in cosmetic surgeries.

        I lost 3 litres of blood giving birth the first time (not a c-section), I can’t donate myself, and I would like to say thank you very very much to everyone who does donate.

        • Tom Womack says:

          I think semi-elective gets called ‘urgent’ in England, which is equally confusing.

          In November I turned up at A&E with a large and unfortunately-located abscess which antibiotics weren’t dealing with; it was clear that I wasn’t going to leave with the abscess undrained, but ‘urgent’ isn’t the first word I would use for a situation in which it took five hours to get assessed and I was on the operating table fourteen hours after walking through the door.

          • Devilbunny says:

            Painting with a very broad brush, the hospitals in which I have worked (as an anesthesiologist) classify cases as elective, urgent, semi-emergency, and emergency.

            Elective cases need to be done, but not necessarily today or even tomorrow. Urgent cases are usually defined as those that need to take place within 24 hours. Semi-emergency cases need to take place as soon as is convenient, and true emergency cases justify taking the first available operating room away from the surgeon who is supposed to be working in there and giving it to someone else. Removing a gallbladder is almost always an elective case. Repairing a hip fracture is an urgent case. An appendectomy is a semi-emergency case. A gunshot to the chest is a true emergency case.

      • Jay Feldman says:

        Vs lbh ner n crefba jvgu n cravf jub unf frk jvgu n crefba jvgu n cravf, naq lbh unir hfrq pbaqbzf sbe nyy vafgnaprf bs nany frk/beny, be lbh naq lbhe cnegare trg erthyneyl grfgrq naq unir n pybfrq ybbc bs frkhny cnegaref, gura V jbhyq rapbhentr lbh gb yvr ba gung obk. Zbfg betnavmngvbaf fhccbeg punatvat gur yvsrgvzr ona gb n jvguva bar lrne ona, naq vs lbh ner npghnyyl hfvat cebgrpgvba lbh jvyy onfvpnyyl or svar. Gur gvzrf gung V qbangrq oybbq naq nfxrq dhrfgvbaf, gur crbcyr gnxvat zl oybbq nyfb rapbhentrq zr gb yvr (V nz n ov ZFZ jub svgf gur nobir pbaqvgvbaf V zragvbarq).

      • The calculation might come out differently if you have a very rare blood type.

      • RCF says:

        V’z abg ragveryl pregnva jurgure V haqrefgnaq jung lbhe pbaprea vf. Vs V haqrefgnaq vg pbeerpgyl, lbh ivrj gur fvghngvba nf orvat pubbfvat orgjrra abg univat frk jvgu zra naq qbangvat oybbq, be univat frk jvgu zra naq abg qbangvat oybbq, naq lbh gurersber jbaqre jurgure univat frk jvgu zra, orvat cnverq jvgu abg qbangvat oybbq, vf zbenyyl rdhvinyrag gb zheqre. Naq vs V haqrefgnaq lbhe cbfvgvba pbeerpgyl, gur ernfba lbh frr univat frk jvgu zra nf orvat cnverq jvgu abg qbangvat oybbq vf orpnhfr bs gur qbangvba ehyrf, abg orpnhfr lbh oryvrir gung qbangvat oybbq nsgre univat frk jvgu zra unf n fvtavsvpnag punapr bs vasrpgvat fbzrbar jvgu UVI; nyy bs zl nanylfvf vf onfrq ba vg abg univat n fvtavsvpnag punapr. Tvira gung nffhzcgvba, V frr gjb znva bowrpgvbaf gb gur vqrn gung vg vf zbenyyl rdhvinyrag gb zheqre (bgure guna gur bzvffvba/pbzzvffvba qvfgvapgvba). Svefg, gurer ner guerr pubvprf urer: qba’g unir frk jvgu zra naq qbangr, unir frk jvgu zra naq qba’g qbangr, naq unir frk jvgu zra naq qbangr (naq, V fhccbfr, n sbhegu bar bs qba’g unir frk jvgu zra naq qba’g qbangr). Vs lbh oryvrir gung univat frk jvgu zra naq abg qbangvat oybbq vf zbenyyl rdhvinyrag gb zheqre, gura gura pyrneyl gur pubvpr fubhyqa’g or orgjrra gur svefg naq frpbaq bcgvba, ohg orgjrra gur svefg naq guveq bcgvba. Fb, ng zbfg, univat frk jvgu zra jvyy erfhyg va lbh pbzzvggvat gur jebat bs ylvat. Gur frpbaq bowrpgvba vf gung gurer ner gjb zbeny npgbef urer: lbh, naq gur crbcyr jub unir qrpvqrq gung ZFZ oybbq qbangvba vf abg nyybjrq. Vs abg qbangvat oybbq vf zbenyyl rdhvinyrag gb zheqre, gura cebuvovgvat ZFZ sebz qbangvat oybbq vf zbenyyl rdhvinyrag gb zheqre. Rinyhngvat gur cerzvfr “Bapr fbzrbar unf znqr n pubvpr gung jvyy erfhyg va zheqre, jr ner zbenyyl boyvtngrq gb gnxr jungrire npgvbaf ner arprffnel gb cerirag guvf zheqre”, guvf cerzvfr jbhyq erfhyg va nalbar orvat noyr oynpxznvy lbh vagb qbvat jungrire gurl jnag ol guerngravat zheqre. 
Guhf, V guvax gung vg vf pyrne gung gurer ner ng yrnfg fbzr fvghngvbaf jurer jr ner abg boyvtngrq gb cerirag n qrngu, naq jr abj unir gur dhrfgvbaf bs jurgure guvf vf fhpu n fvghngvba.

        Nyfb, V jbaqre ubj genaf crbcyr ner unaqyrq. Vs fbzrbar vf ovbybtvpnyyl znyr ohg vqragvsvrf nf n fgenvtug jbzna, qbrf abg dhnyvsl nf ZFZ?

    • Julie K says:

      Anyone know if hair donation is effective? (Donating your hair to be made into wigs for cancer patients.) I’m wondering if the value of the raw material is quite small and the main purpose is to raise awareness.

      • aguycalledjohn says:

        If they really needed it they could buy it for peanuts from the third world (as wigmakers do).

        I think awareness is the main thing

        • Jaskologist says:

          Third-worlders probably have a rather different hair phenotype than many of the cancer patients.

          On the other hand, it’s not like we actually need hair. I can’t picture any amount of hair saving a life, and thus mattering on an EA scale.

          • Julie K says:

            How does kidney donation rank on the EA scale?

          • Desertopa says:

            According to my haircutter, who said she watched a documentary on this, most human hair for wigs comes from India, where there are large numbers of people with long, strong, high-quality hair which can be treated and dyed into a wide range of colors and textures. Not only do people in the first world expect higher payment, we mostly have less versatile and valuable hair.

  3. Izaak Weiss says:

    People who have been in college: What classes/subjects did you find interesting that I might not otherwise consider taking? I’ve got a few extra credit hours I need to graduate and I figure I might as well fill these with interesting classes. All recommendations welcome.

    • Sniffnoy says:

      For a biology requirement I took a class on animal locomotion. I don’t know if your college has a class on that which is directed at non-experts, but it was neat.

    • Anonymous says:

      I always regretted not taking a class on holograms.

    • Anonymous says:

      What is your area of study?

    • nico says:

      I took three entry-level linguistics courses in undergrad (intro ling survey, sociolinguistics, and linguistics/history of English) which I found to be very rewarding.

    • Setsize says:

      I’m quite happy to have taken a couple classes involving skill-with-physical-objects, like machine shop or welding.

      • William O. B'Livion says:

        To follow on with this, if your school doesn’t have welding or machine shop, see if they’ve got (word I can’t think of) with local junior colleges or trade schools who do. Consider a couple classes in automotive mechanics or basic electrical work. Carpentry etc.

        Knowing these things would be incredibly handy.

        Another possibility that most colleges/universities offer is “independent study”. This can be incredibly interesting as it lets you take a subject and deep dive into it for a semester and get credit.

    • Andrew says:

      My favorite classes that weren’t in my major were “Ethics in Education” (200-level philosophy course), Climate Science (upper division meteorology/climatology), and Linear Algebra.

    • Anon says:

      Dance. Pick a kind, any kind. Usually only one unit. It’s a great thing to do if it turns out you enjoy it (which there is a pretty good chance of), and now is by far the easiest time to get into it.

    • symmerser says:

      When I was in your situation, I decided to take a seminar on science (/natural philosophy, metaphysics, medicine…) in the early Islamic world, and I’d highly recommend a similar course. Other courses I considered at the time and still kinda wish I’d taken: neuroscience of vision, ancient Chinese history, and a project-based intro to math modeling.

      If your school has grad programs, then some of the science departments might offer low-credit courses aimed at introducing new grad students to the various labs’ research. I sat in on one of these while I was a grad student in a different department, and it was a really fun way to get a sense of the current state of the field—a different PI gave a presentation each week on their lab’s research and its context, lab members were around to answer questions, and there was an optional background reading list for each subfield. I’m pretty sure undergrads were welcome in those courses too.

    • jaimeastorga2000 says:

      I took an introduction to creative writing class to satisfy my university’s art requirement. It was surprisingly helpful. I also took an introduction to philosophy class several years ago which was pretty fun.

      For something completely different, you might want to see if your university has a ROTC program. The first 4 semesters in the 8 semester sequence can be taken with absolutely no obligation.

      • William O. B'Livion says:

        If your college doesn’t require intro to philosophy classes you’re being cheated.

      • Luke Somers says:

        My college’s creative writing course was an utter waste.

        I would recommend the intro to philosophy course, though.

        What surprised me most was one I took in high school and should have followed up on in college: drawing.

    • Bugmaster says:

      Babylonian Religion. Yes, really. Of course, it was more like “Babylonian Ancient History”, but I would not have known it from the course title.

      Before you complain, note that you specified “interesting”, and not, “immediately applicable”.

    • David says:

      Acoustics (physics of sound), and metalwork.
      No idea if they’d be relevant to you, but they’re definitely the two classes I can think of that I enjoyed much more than I expected.

    • Shenpen says:

      Take accounting. Seriously. Basically every for-profit corporation’s ultimate goal is not simply to make money, but to make an accounting profit. It is neat to know what rules move them.

      It could maybe help you in gaming the system to get research grants etc., as getting a manager to dedicate money to a project without it resulting in an accounting loss (if it can be capitalized) can be easier.

      The people who decide the funding of science speak the language of accounting. That is IMHO a good reason.

      • Jaskologist says:

        Seconded. I wish I’d had a few more “real-world skills” classes.

      • Tracy W says:

        I did take some accounting after graduating, it was very interesting. Like Newtonian physics, it seems rather confusing from the outside and then all makes sense when you get into it.

      • RCF says:

        I’m not sure “ultimate” is quite the right word. All real-world losses show up as accounting losses eventually, so the distinction is what the short-term goals are (and what individual agents’ goals are).

    • DES3264 says:

      My suggestion is to take courses that are intended for freshmen majoring in a subject which interests you (or sophomores, if students at your university don’t declare majors until their second year). I did this and was amazed at how much more serious and interesting courses were when everyone — students and professor — assumed that they were the foundation for future years of study. In my case, I took cognitive psychology, intro. linguistics and intro. engineering on this basis, but I suspect any not-too-large intro course would work.

    • zz says:

      Psychopharmacology. I’ve held on to all my textbooks because, you know, textbooks, but my psychopharmacology book is the only one that I keep in my desk drawer because I reference it so often; the rest are relegated to the closet.

      Most things that come up in day-to-day life (cooking, for instance) are well understood because of their prevalence. Brain-affecting drugs do come up quite often, but are poorly understood by the population (legal status, taboo, complexity). Having a solid understanding of them makes a bunch of small things go better: you do the drugs you’re probably already doing (alcohol, caffeine) smarter; if you need a brain-affecting medication, the inferential distance between you and your doctor goes from “lots” to “1 step”; if you do recreational drugs, you do them smarter; and there are non-negligible status benefits from counseling your friends who do recreational drugs, getting them to do them less stupidly.

      The one caveat is that IME ‘everything a layperson needs to know about drugs’ doesn’t form a full class, so you either get to make mediocre test grades or memorize a bunch of worthless knowledge (e.g. names of 7 different antidepressants and the tiny differences between each of them) to eke out a few marginal points on tests.

      • Caelestis says:

        I would add that you should be wary of professorial bias when looking at taking a psychopharm class; the intro course at my school was taught by a Prof who was vehemently anti-marijuana and inordinately pro-cocaine, and it definitely showed in the course. That isn’t to say that there aren’t good or bad things about either drug, I just mean that drugs being what they are, a lot of people have very strong opinions on them. I found the overt bias grating.

      • Emeriss says:

        Any textbook suggestions?

        • zz says:

          Search Amazon for drugs brain behavior.

          You’ll want recency (things change, and my book is probably slightly out of date already). You also probably don’t want a book targeted at medical practitioners, which is mainly what you get if you search for psychopharmacology.

          • Nornagest says:

            I’d expect any popular works on psychopharmacology to be only slightly more accurate than your high school health teacher (though they won’t all share your high school health teacher’s viewpoints), and rather less than the relevant Wikipedia articles.

          • zz says:

            Happily, this method doesn’t return popularly-written books; this was my first result, and I had to go down 11 results before I found a popular work. Just like there’s analysis for economists and analysis for prospective math researchers, there’s psychopharmacology for someone who wants to know what’s actually going on and psychopharmacology for prospective medical professionals.

            Wikipedia’s good too, but lacks a good deal of structure; it doesn’t walk you through how neurons function -> how drugs work -> pharmacokinetics/pharmacodynamics. Having a list of the major classes of drugs, with their common mechanisms, side effects, etc. also makes Wikipedia articles for particular drugs much more readable; I’d certainly have trouble without Wiki, but I’d also have trouble with Wiki without a lot of the background there (tab propagation is not an efficient way of gaining background, though it may work).

    • Brock says:

      I’ve always advised students to choose their electives based on the teacher, not on the subject. Find out who the best teachers are, the ones whose students say “Wow, I really learned a lot in her class!”, and take whatever it is they’re teaching. A good teacher can make you interested in just about any subject; a bad teacher can make your favorite subject dull.

      That said, I think the most useful elective I took was economics.

      The subject I most wish I had studied in college is statistics. I’ve managed to teach it to myself, but it has been a struggle. I think it would have been helpful to have some guidance.

      And if you don’t already know how to program, take a computer programming class. The general purpose computer is the most remarkable invention of the twentieth century, and a bit of programming knowledge makes a computer a powerful tool instead of a toy.

      • Tracy W says:

        I do wish I’d skipped Systems and Control 2 for a pure statistics course. What with signal processing/communications engineering and econometrics courses I did quite a bit of applied stats as part of my degrees and they gave me the sense that there was quite a beautiful world in there.

      • Mary says:

        Statistics was a wonderful class. If I had my way it would be a required math class in high school.

      • “The general purpose computer is the most remarkable invention of the twentieth century, and a bit of programming knowledge makes a computer a powerful tool instead of a toy.”

        Also, one of the things humans like to do is build machines that work. If you make the machines out of steel and bronze and wood, most of your effort goes into shaping them and fitting them together, only a little into design. A computer program is a machine made of pure logic. You still have to pay attention to details, but it comes closer to the pure creative act than anything else short of writing poetry or fiction and is a very different experience.

    • Jaskologist says:

      Pick based on professor more than subject matter. A poor prof will ruin any class, a good one will make most any seem interesting.

    • Jordan D. says:

      I had an emergency communications course to which I owe fond memories. The lectures were all fascinating case studies of how corporate and public entities reacted to sudden disasters, the right and wrong ways to handle sudden media attention, and other practical stuff, like how to prepare a space for an executive officer to make a speech, where to park news vans, and how many phone lines you need available in an emergency call center.

      The projects were even better: one was the creation of an emergency binder for a charitable organization and the other was a three-day practice where our groups formed emergency centers and sent out press releases while people called in asking about a fictional disaster.

      Another good class – formatting and editing. If you imagine that you’ll ever need to publish pamphlets or create an interesting-looking report or any such thing, it’s good to have a little training.

    • Randy M says:

      I think I only had time & money (extra credits over 15/semester cost extra) for one non-major class, a class on Kurt Vonnegut, which was interesting.
      I wanted to take a class on Science Fiction and culture, and another on Neuroscience, but couldn’t fit them in.

    • Tarrou says:

      The thing I’ve learned in seven-plus years of university-level schooling: go for the great professor, and don’t worry so much about what he or she teaches. This is why I have a very bizarre transcript, a double major and a triple minor. Not the most efficient way to do things, but if you’re just trying things out, you could do worse.

    • merzbot says:

      Calculus, if you haven’t taken it already. I thought math was boring before I took calculus, but it turns out that it’s just all the math classes you take before calculus that are boring.

    • Anthony says:

      What would you not otherwise consider taking? What’s your major, and what have you taken outside your major?

      Among the most interesting classes outside my Civil Engineering major (and prerequisites) that I took were an Intro to Writing Systems seminar (lower division seminar class at Berkeley!), European Intellectual History Enlightenment to 1870, taught by Martin Jay, and Law and Morality, taught by John T. Noonan.

    • Dead Milkmen fan says:

      – Literature classes that focus on one author or literary movement/period. (In my case, we read all of Ibsen’s plays over a semester.)
      – Religious studies, maybe? My undergrad college had a divinity school attached, so I took a lecture course on mysticism. Pretty good stuff, and I didn’t feel too bad about coming to class under the influence of psychedelics. 🙂

    • PDV says:

      Linguistics is impractical but awesome.

    • pneumatik says:

      I took a math course on knot theory. Very interesting but probably not available to you since it was taught by a grad student who was the expert on campus on knot theory.

      I took a drawing course that was very interesting. I learned drawing wasn’t really for me, but it was interesting to learn the processes of how to learn to draw and how to draw better.

      I took a business law class that turned out to be just generally useful law. Probably more directly applicable than any other class I took in college.

      I took a grad course (as a grad student) that derived thermodynamics starting from math instead of physical principles. Absolutely fascinating, and to this day I wish there had been a text book that I could have kept.

      I wish I’d taken statistics.

    • Devilbunny says:

      In college, I enjoyed intro philosophy and formal logic, computer science 101-102, practically anything in classics (classics departments often have English-only courses for nonmajors), and multivariable calculus.

      But the single most interesting class I’ve ever taken in my life was a non-honors math class I took as a senior in high school (although the fact that it was non-honors was something of a joke, as half of the students were also taking AP Calculus at the time). It was entitled “Finite and Discrete Mathematics”. For those not familiar with those terms, it was a survey of probability, statistics, formal logic, set theory, and all kinds of other things that I later realized are the underpinning of how computers actually work.

      This is definitely a recommendation for the choose-the-teacher-not-the-subject, as the teacher was a Case graduate (before the merger with Western Reserve) who almost failed out of college because he became addicted to bridge, and who once appeared on the original Jeopardy! show. He was, unsurprisingly, the faculty sponsor of the Quiz Bowl team.

    • Yehoshua K says:

      What are you majoring in? If you’re interested in doing stuff that doesn’t focus on your major, it makes a difference whether you’re studying, say, accounting versus whether you’re studying, say, chemistry.

    • Julie K says:

      Courses outside my major that I enjoyed: Latin, history (It was the first time I had a history course that focused on primary sources! Very different from high school “social studies.”), fencing.
      I regret not taking: the 1-credit glassblowing course offered by the chemistry department.

    • Tony says:

      Mesoamerican Art & History. The mindset and history and ecology of a set of civilizations that were all but genocided out of existence.

      A biology class on sleep.

    • I am a Computer Science major. Depending on your tastes, you might like these out-of-major classes, as I did. I note some things I liked about each.

      Philosophy 105, Critical Reasoning: the different types of definitions, such as extensional and intensional. Analyzing the chain of claims in an argument.

      Physics 480, Acoustics: I didn’t learn many practically useful things from this, but I found it interesting while I was taking it. It helped me get a better feel for sound, as a music enthusiast and composer.

      General Psychology I: People do better on a test when they take it in the same location they studied for it in.

      Cognitive Psychology: How we learn things. How well we do and don’t remember things. An experiment that shows that when people play a game by quickly and inaccurately pointing at one of various zones that get points, their brains correctly calculate the optimal area to point that maximizes their expected value.

      Social Psychology. We had an interesting assignment to choose a social psychology issue, and go out and make observations in three different places. I learned that you can sit at a table by an ATM and take notes on everything that ATM users do, and they never notice. I also liked an experiment that showed that all types of arousal are linked. Men were assigned to either run or walk around a park on a path that ended near a woman. The runners rated the woman as more attractive.

  4. Chris Billington says:

    In the absence of reddit-style comment sorting, or me having about a trillion times more free time to read the ever growing comment threads on this blog, having a few “comments of the week” is really useful for finding interesting discussions. Thank you.

    • This used to be a problem for me too. Now I only read child threads if the parent comment hooked me with topics I find interesting. I might miss some interesting content, but this heuristic allows me to feel satisfied in reading open threads on SSC without sinking too much time into it.

  5. Condition says:

    Does anyone know of any other Less-Wrong-affiliated reading lists, similar to or ?

    • Charlie says:

      The Galefs have a pretty sweet bookshelf.

      If you’re looking for more mathy reading, I can recommend some and recommend more people to ask if you’d like.

      • Condition says:

        At this point, I’m filing all the math-heavy textbooks into a big “eventually” category in my brain. I’m still muddling my way through an undergrad math degree, and for the most part they’re out of my depth. I’m mostly looking for some (relatively) light reading to supplement taking classes over the summer.

    • John Maxwell IV says:

      I recently made one on the EA forum:

      Searching on Google for “ book recommendations” should dig up several book recommendation threads on Less Wrong.

      • Condition says:

        This is exactly the sort of list I’m looking for, thanks!

        • John Maxwell IV says:

          No prob. By the way if you’re planning to read all these books you might want to read the self-improvement ones first; my subjective impression is that the skills needed to study difficult academic material are fairly similar to the skills needed to succeed in life in general (the ability to hack your motivation and become curious about stuff, the ability to maintain the stoic resolve required to continue thinking about a concept you find difficult, etc.) MIRI researcher Nate Soares wrote a bunch of blog posts on the productivity techniques he used to complete the MIRI reading list that might be worth a look:

          I think in general you want to spend some of your time on self-improvement and some of your time accomplishing practical object-level goals so you have as much of an opportunity to test out your self-improvement changes as possible. Intermixing self-improvement theory with task accomplishment practice seems key. I’ve found that a daily review is a great way to accomplish this.

  6. Carinthium says:

    I’m going to post several questions here. Apologies for hogging space, but I have several things I want to ask about and people could easily be interested in one but not all of them.

    1: What do people here think of eliminative materialism? For those who don’t know, the eliminative materialist claims that the ordinary, common-sense understanding of the mind is so deeply wrong that many or all mental states humans believe to exist do not in fact exist. Most notably, Churchland’s eliminative materialism draws on neurological evidence to challenge the existence of beliefs and desires.

    Normally I would be open to giving this proper consideration, but have a problem with it related to theories of how to act. My theory of action, like many such theories, presupposes the existence of desires as a concept so as to make a concept of self-interest workable. In addition, the concept of beliefs is necessary because I want to have a notion that plays the role of C.E.V (if I knew X, I wouldn’t want Y. Even though I don’t, my desire for Y is irrational). Yes this is biased of me, which is part of why I’m trying to find more objective opinions.

    Is this philosophically tenable, either by a theory compatible with eliminative materialism or refuting it? Or are the concepts of beliefs and desires themselves ultimately untenable?


    (Minor Note: I’m considering whether to post this on LessWrong or not for more advice, but I’m not sure yet.)

    (EDIT: I don’t know enough about neuroscience, so an expert on that, particularly one with philosophical knowledge, would be appreciated. But is it possible to replace the concepts of ‘belief’ and ‘desire’ with something similar but more neurologically tenable?)

    • Bugmaster says:

      I’ve skimmed the linked article, and, as far as I can tell (which is not very far at all), Eliminative Materialism is just one giant equivocation. It claims that most mental states, such as beliefs or emotions, do not actually exist; but it equivocates between two meanings of the words “belief” and “exist” (which is a common problem in philosophy, IMO).

      On the one hand, Eliminative Materialism claims that there are no easily identifiable structures in the brain that correspond to specific beliefs. In addition, real human beliefs are not reducible to simple logical propositions such as “The President Dislikes Terrorists”. Once again, there are no distinct physical structures in the brain that would correspond to “The President”, “Dislikes”, or “Terrorists”.

      I would probably agree with everything in the above paragraph; but none of that implies that beliefs do not exist. This is where Eliminative Materialism pulls the bait-and-switch, because it makes the jump from “there’s no single collection of neurons we can identify with the belief” to something like, “and therefore, human behavior cannot be explained in terms of beliefs at all”. But these are two very different statements!

      If you told me that a person believes that “strawberry ice cream is delicious”, I can predict that the person would consume a larger than average amount of strawberry ice cream; respond in the affirmative when queried whether he likes strawberry ice cream, etc. Granted, all of these predictions may be false about a specific person (e.g. he might be a diabetic who enjoys the taste of ice cream but will not willingly eat any); but they will be mostly true, on average, about people with such a belief. On the other hand, if you told me that a person believes that “strawberry ice cream tastes gross”, then I would make different predictions.

      Nothing I’ve said in the above paragraph implies anything about the specific physical hardware that the person is using to think with. It doesn’t even matter if this person is using a human brain, a digital computer, or a collection of nano-mechanical push-rods. I’ve made no claims about the precise nature of his brain states (assuming he’s using one of those), only about his behaviour.

      Naturally, if the person is using a brain, this implies that there must be something about his brain that is different from the guy who hates ice cream; but the correct response to failing to identify that thing is not, “oh, mental states must not exist then”, but rather, “I’ve got to search harder”. Picking the first option forces you to deny our ability to make inferences about the person’s actions based on his beliefs, and it’s pretty hard to deny something with so much reliable evidence behind it.
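      To make the substrate-independence point concrete, here is a toy sketch of a belief modeled purely by the predictions it licenses, with no reference to the hardware underneath. All the names and behaviors here are invented for illustration; nothing hinges on the details:

      ```python
      # A "belief" characterized only by the behavioral predictions it supports.
      # The same function applies whether the believer runs on neurons, a digital
      # computer, or nano-mechanical push-rods.

      def predictions_from_belief(belief):
          """Map a belief (a string) to hypothetical behavioral predictions."""
          if belief == "strawberry ice cream is delicious":
              return {"eats_more_than_average": True,
                      "affirms_liking_when_asked": True}
          if belief == "strawberry ice cream tastes gross":
              return {"eats_more_than_average": False,
                      "affirms_liking_when_asked": False}
          return {}

      # Opposite beliefs yield opposite predictions, with no claim at all about
      # the believer's brain states:
      assert predictions_from_belief("strawberry ice cream is delicious")["eats_more_than_average"]
      assert not predictions_from_belief("strawberry ice cream tastes gross")["eats_more_than_average"]
      ```

      The predictions are defeasible (the diabetic who loves the taste), but on average they hold, which is all the functionalist picture needs.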

      • Carinthium says:

        Broadly speaking, that’s what I thought at first. I thought one could use the Functionalist idea, e.g. pain has a certain function within the human system and certain physical aspects take on the pain role.

        Churchland, however, argued that such an idea would logically speaking save any possible theory, using Alchemy as an example.

        One alchemical theory, Churchland says, postulated five fundamental spirits: Mercury, Sulpher, Yellow Arsenic and Sal Ammoniac. Each of these spirits had seperate properties.

        Churchland suggests alchemists could have tried to save alchemy by suggesting that although these elements did not have any physical substance, something could, for example, be ‘ensouled by mercury’ by having the properties shiny, liquefying under heat, etc. These theories must have had some predictive success despite their flaws, or they wouldn’t have lasted so long.

        Even if the terms are a predictively useful abstraction (which they clearly are), if they are not an aspect of objective reality it is hard to base any ethical theory on them.


        I’m still not sure, but that would be the counter-case.

        • Creutzer says:

          If that’s the argument, then isn’t Eliminative Materialism saying: because alchemy doesn’t work, there is no such thing as mercury?

          • Carinthium says:

            … Oops. I suppose you have a point there.

          • Bugmaster says:

            Furthermore, the reason we know that Alchemy doesn’t work is because there are some physical phenomena that it can not explain. Until we found such phenomena (and constructed better models), it was actually perfectly reasonable for us to use Alchemy as our model of the physical world; and if you wanted to replace it with the notion that mercury is an illusion (or some such), then you would’ve been wrong.

            A small amount of predictive power is still better than no predictive power at all.

        • I get exceptionally annoyed when people use alchemy as an example of “ridiculous non-science”, because alchemy wasn’t ridiculous non-science at the time. It was a perfectly respectable approach to material science using the analytic and empirical tools available at the time, and its biggest flaw was that its practitioners were not very keen on sharing their work and so wrote in terrible convolutions and obfuscations. But alchemy eventually evolved into modern chemistry, much like Ptolemaic astronomy evolved into modern astronomy, and both alchemy and the Ptolemaic model of the heavens were perfectly functional scientific predecessors, not bizarre aberrations to be ritually abjured.

          • vV_Vv says:

            While there is no sharp distinction between science and sorcery, alchemy was more a type of sorcery than a science. It had some empirical content, but it did not apply a scientific method of processing the empirical information and preferring parsimonious hypotheses.
            This also applied to Ptolemaic astronomy/astrology.

            Eventually, the scientific aspects of these disciplines dominated and they became hard sciences.

        • RCF says:

          “One alchemical theory, Churchland says, postulated five fundamental spirits: Mercury, Sulpher, Yellow Arsenic and Sal Ammoniac.”

          I count four.

      • James says:

        One of my favourite things about pragmatism as a philosophy is that it gets you out of arguments like this over whether X ‘exists’ or not. If a concept is useful then it exists.

    • stillnotking says:

      I think the eliminative materialists are basically right, but in a completely unhelpful way. A good analogy is the concept of “center of mass” in physics: it has no literal existence, but it’s a useful abstraction for predicting the behavior of bodies. Writing a paper about why centers of mass aren’t “real” would be quixotic and silly.
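      The center-of-mass abstraction really does earn its keep predictively; a minimal sketch for a system of point masses:

      ```python
      # Center of mass of point masses: no particle need actually sit at this
      # point, yet it predicts the bulk motion of the whole system.

      def center_of_mass(masses, positions):
          """Mass-weighted average position; positions are tuples of coordinates."""
          total = sum(masses)
          return tuple(
              sum(m * p[i] for m, p in zip(masses, positions)) / total
              for i in range(len(positions[0]))
          )

      # Two unit masses at x=0 and x=2: the center of mass is at x=1,
      # where neither mass actually is -- a useful fiction, like "belief in God".
      assert center_of_mass([1.0, 1.0], [(0.0,), (2.0,)]) == (1.0,)
      ```

      Writing a paper arguing that the point (1.0,) doesn’t “really exist” would be technically defensible and entirely beside the point, which is the analogy.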

      Beliefs may not have a 1:1 correspondence with brain states — there is not, for example, an identifiable “belief in God” neural pattern possessed by all monotheists — but the brains of monotheists have to have something in common, based on their observed behavior. “Belief in God” is as good a description as any. I’m sympathetic to the idea that folk psychology might be completely wrong (or, at least, shouldn’t be assumed to be right), but until neurology matures, it’s all we’ve got.

      • Carinthium says:

        That isn’t the point. The problem is that eliminative materialism means it’s impossible to build an ethics on the assumption that beliefs and desires exist as commonly understood.

        For my purposes I need an ethics which can work that cannot be refuted by advances in brain science.

        • Bugmaster says:

          > For my purposes I need an ethics which can work that cannot be refuted by advances in brain science.

          Why ? This is kinda like saying, “for my engineering efforts, I need a theory of electricity that cannot be refuted by advances in physics”.

          • Carinthium says:

            My goal is to have a philosophy to make life decisions with that is completely rationally grounded. If I fail, it is impossible for me to act rationally.

            Since I don’t have the brain science, I need to get around it. Philosophy isn’t like engineering in that clever inventions of concepts or changes to concepts can often get around the apparently impossible (e.g. Parfit and personal identity).

    • HeelBearCub says:

      So, it seems from the article you linked that originally this was a response to dualism (in the theory of mind, dualism is “a view about the relationship between mind and matter which claims that mind and matter are two ontologically separate categories”).

      From a historical perspective, I think then that eliminative materialism can be said to have been proven true. There is much evidence consistent with the theory that “The Mind” arises from/is contained by the matter that it “runs” on, and essentially no contravening evidence.

      As to whether this means that mind does not exist at all, I think that is really just an artifact from when the dualist perspective dominated. In a dualist mindset, saying that everything is “merely” matter can be considered tantamount to claiming “the mind” (the dual mind that exists apart from the body) does not exist. Indeed, that was one of two historical positions.

      As modern neurobiology brings a greater and greater understanding of how our brain brings about our mind, and exposes the various false perceptions that arise from this (for example, all vertebrates have a blind spot in each eye where the optic nerve passes through the retina, but we do not perceive it), it is tempting to say that this means that the mind does not exist.

      There are modern physicists who think that time doesn’t exist. From a mathematical or physics perspective this might be true. But it doesn’t change the fact that we experience time. Even if Barbour is proven correct, we will still set alarm clocks and GPS will continue to work.

      We experience the mind. Although the mind is not a perfect experience, that doesn’t make the experience any less real. Honestly, I think Descartes’ basic observation is correct.

      Suppose we create an AI, and that AI has multiple “programs” that run on many different machine cores, but those programs combine to allow that AI to experience consciousness. And then we run those different cores on different virtual machines that are separated geographically. Would that make this consciousness cease to be real? I think the only way you can answer in the affirmative (that the AI is not a “real” mind) is if you actually subscribe to the dualist view that a mind is definitionally separate from the physical world. In other words, you just can’t conceive of a mind that arises from matter.

      • Carinthium says:

        That’s beside the point. Whether the mind exists or not is a separate question from whether the physical mind has beliefs and desires.

        • Deiseach says:

          Okay, let’s take a go at this.

          Beliefs (let us say) arise from desires, or are our attempts at explaining/rationalising our desires (e.g. “I love Susie and I want to marry her and I subscribe to the ideal of monogamous permanent marriage based on mutual love”, where the triggering desire is “propagating my genes” arising out of evolutionary response to external stimuli).

          Desires, therefore, are material responses to material stimuli located in the material world. As in the classical example in our school biology textbooks about “poke an amoeba with a rod and see it move away”, when stimuli from the external environment impact on the organism, the resulting responses are classed as desires: avoid pain, perform activities that give us pleasure in return; the basic ‘feed/mate/sleep/fight/flee’ kinds of patterns of behaviour. We can get as fancy as we like with neuroscience scanning brains and noting what parts ‘light up’ in response to poking people with sticks, but that’s what it boils down to.

          Beliefs are then the superstructure on top of this. I want to punch Joe in the nose; I can dress this up as “I want to punch him because he’s a jerk” or “This is defensive action on my part” and we get the whole idea of martial glory, patriotism, honouring the military and so forth, but the underlying stimulus-and-response that is at the root of all this is “compete for resources/fight or flee from rivals”.

          Therefore, your system of ethics must decide what way to structure your beliefs, and how you deal with the underlying desires. If (to take the example I’ve been using) the stimulus-and-response is ‘compete with rivals for resources’, do you go the path of ‘punch Joe in the nose’ or ‘peaceful co-existence’? Eliminative materialism may help guide you on ‘physical responses to physical stimuli in a physical environment’, and thereafter you have to weigh up ‘how do we reconcile the competing desires of many individuals, such that I get the maximum of what I want with the minimum of incurring social penalties?’

          • Carinthium says:

            I suppose that works broadly speaking. It still has a few problems though.

            The primary one I have is this: On this model, is it effectively mandated that a person ‘desire’ to reproduce?

            By ordinary terminology I don’t want to reproduce, as I believe (by ordinary terminology) because of scientific studies that this will be bad for my long term happiness. I don’t want to do something that will endanger my long term happiness.

            What I would ‘hope’ for (though it might not be possible) is that a person might desire something, yet there be rational criteria by which pursuing that desire is irrational because it will be bad for them in the long run (presumably because it will contradict other desires).

            It is possible, admittedly, that despite my longing there is simply no way to philosophically justify my desire not to reproduce. It would be biased of me to rule that out.

        • HeelBearCub says:

          Whether the mind exists or not is a separate question from whether the physical mind has beliefs and desires.

          Why does this problem only exist if we stop accepting dualism? Would we ever question whether the separate mind had beliefs and desires?

          Do you experience beliefs and desires? Does your accumulated knowledge show strong evidence that others experience beliefs and desires? Beliefs and desires are constructs of the mind we experience. We experience the mind. The mind is what our brain experiences.

          What evidence is there that beliefs and desires DON’T exist?

          One example drawn from the linked text (section 3.2.1):

          Whereas the former involves discrete symbols and a combinatorial syntax, the latter involves action potentials, spiking frequencies and spreading activation. As Patricia Churchland (1986) has argued, it is hard to see where in the brain we are going to find anything that even remotely resembles the sentence-like structure that appears to be essential to beliefs and other propositional attitudes.

          Now, this is someone who has never studied programming and/or has only a rudimentary understanding of how a computer works. You can describe the state of the system in functional terms (“I have a +5 Enchanted Falchion in my right hand”), symbolic terms (“The variable RHItem1 has the value 12, which can be looked up in the MasterItemTable”), or physical terms (“The capacitor I have decided to call CHJAH186829490502 has a positive potential, the next has no potential, the next positive, etc.”).

          If you had no knowledge of programming languages and could only describe the functional state or the physical state, you might easily conclude that “it is highly unlikely that we will find anything in the physical system that corresponds to a +5 Falchion being in my right hand”
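          The three levels of description above can be sketched in a few lines. All the names here (RHItem1, the item table) are invented for the example; the point is just that nothing falchion-like appears at the lowest level:

          ```python
          # Three descriptions of the same state of one toy system.

          ITEM_TABLE = {12: "+5 Enchanted Falchion"}  # symbolic level: a lookup table
          RHItem1 = 12                                # symbolic level: a variable's value
          bits = format(RHItem1, "08b")               # "physical" level: a bare bit pattern

          # Functional level: what the player actually experiences.
          print(f"I have a {ITEM_TABLE[RHItem1]} in my right hand")

          # The physical-level description carries no hint of swords:
          assert bits == "00001100"
          ```

          An observer with access only to `bits` would search the hardware in vain for anything resembling a falchion, yet the functional description remains perfectly real and predictive.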

    • Tracy W says:

      On eliminative materialism, I find myself thinking of very simple mental states, e.g. the experience of seeing blue, which for simplicity I will call blueosity. Now what sort of scientific discovery could disprove blueosity? No scientific theory or discovery that I know of has ever explained away a sensory perception: explained why we have them, yes; stopped us having them, no.
      And none of the arguments of eliminative materialism apply to beliefs but not to blueosity.

      • James says:

        I think the philosopher’s jargon term for states like what you’re calling ‘blueosity’ is ‘qualia’, for the record.

        • Tracy W says:

          Yes, I’ve come across that term. The first time was in an article by Daniel Dennett who, if I followed it correctly, wound up concluding that qualia didn’t exist. At which I thought, “But I still have the experience of seeing blue. If qualia don’t exist because they have various properties but I still experience seeing blue, then seeing blue must have properties different to what qualia have.”

          Which is why I prefer to use blueosity.

    • If you know that idea B is based on outcomes of idea A, maybe you should just commit to getting A right regardless of B? If B is worthwhile it will come out unscathed anyway. Truth over convenience, and all that 🙂

  7. Mark says:

    Scott (and everyone else) what evidence could potentially convince you that super intelligence poses no real threat in the next 30 years?

    Conversely, for anyone who thinks there is no threat from super intelligence, what evidence would change your mind?

    I lean towards super intelligence not being a threat. I would probably be convinced if a majority of computer scientists took it seriously.

    • Anon says:

      (I am a graduate student in computer science studying software theory [think formal logic and compilers, not philosophy] and security.)

      You could show me that chip fabs were going to stop working, so computing would become less available. You could demonstrate, somehow, that no one is in fact trying to build general AI. You could show that there are diminishing returns on intelligence and speed-of-thought capping out at roughly human levels, or that there is good reason to think an intelligent agent won’t be able to make itself notably more intelligent by any means. You could show that humans and computer systems are much, much more secure against an intelligent being trying to accomplish its own goals than I currently believe to be the case. You could present a rigorous and reviewed mathematical proof that in fact intelligence does beget morality.

      Of course, these things are all wildly implausible (from my perspective), but of course I do not expect to see evidence which would cause me to change my current beliefs, else I would not hold said beliefs.

      NB: I do not put >75% probability on general AI being a threat in the next 30 years, but I do put >10% probability on it, which seems more than enough to make it worthy of attention. And I certainly put >75% probability on it being a threat in the next 100 years.

      • Toggle says:

        “You could present a rigorous and reviewed mathematical proof that in fact intelligence does beget morality.”

        Does a rigorous proof, or strong evidence, of the converse exist? If you were thinking of the orthogonality thesis specifically, I looked for and did not find one.

        The ambiguity surrounding this point is indeed existentially terrifying with respect to the FAI question, but it’s also (in my mind) one of the more genuinely uncertain aspects. The rejection of the claim seems to be ‘natural law’, the belief that moral propositions can be deduced from rational thought combined with empirical observations of the natural world. Even if natural law isn’t immediately convincing, it has a pretty respectable pedigree.

        • Anon says:

          Not that I know of. I am strongly inclined to expect the orthogonality thesis to turn out to be correct on priors and convincing hand waving, but I don’t believe it’s been rigorously proved (indeed I don’t think the relevant terms have even been fully defined).

          Still, that uncertainty leaves plenty of room to worry.

          • Charlie says:

            I think the strong form of the orthogonality thesis, that intelligence and values are orthogonal, is definitely under-defined. But there is a weak form that seems straightforward, which goes like this:

            It is possible to build a predictor, which takes in observations and assigns probabilities to different world-models (e.g. Solomonoff prediction).

            Given a predictor, it is possible to build an agent that takes the action that maximizes an arbitrary function of future world-states (to do this step perfectly requires perfect self-prediction a la AIXI, so it’s very very impossible, but supposing we could do this impossible thing…). This agent would then be maximally effective at maximizing this function, if the predictor it’s using is its true state of knowledge.

            Each function of world-states has a negative, so for each agent that maximizes something there is another that minimizes it. Therefore intelligence (or at least effectiveness) cannot imply benevolence.
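            That last step can be made concrete in a few lines. With a fixed predictor, an agent maximizing a utility function and one maximizing its negation use exactly the same machinery, so they are equally “effective.” Everything here is a toy: two actions, a made-up predictor, a made-up utility:

            ```python
            # Weak orthogonality sketch: the optimization machinery is indifferent
            # to the sign of the utility function it is handed.

            def best_action(actions, predict, utility):
                """Pick the action whose predicted outcome maximizes `utility`."""
                return max(actions, key=lambda a: utility(predict(a)))

            # Toy predictor (action -> predicted world-state) and utility.
            predict = {"help": "humans thrive", "harm": "humans suffer"}.get
            U = {"humans thrive": 1, "humans suffer": -1}.get

            assert best_action(["help", "harm"], predict, U) == "help"
            # The negated utility yields an equally capable agent with opposite values:
            assert best_action(["help", "harm"], predict, lambda s: -U(s)) == "harm"
            ```

            The two agents share the predictor and the search procedure; only the objective differs, which is why effectiveness alone cannot imply benevolence.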

          • Which orthogonality thesis?

            There is more than one version of the orthogonality thesis. It is trivially false under some interpretations, and trivially true under others. It’s more defensible in forms asserting the compatibility of transient combinations of values and intelligence, which are not particularly relevant to AI threat arguments. It is less defensible in forms asserting stable combinations of intelligence and values, and those are the forms suitable to be used as a stage in an argument towards Yudkowskian UFAI.

          • Toggle says:

            Charlie, I think that’s an excellent point.

            I suspect the anti-orthogonality camp would have to assert that it’s the self-prediction that gets you, not ‘effectiveness’ in the wider sense. It would have to be that (partial) self-knowledge converges on what is often called ‘wisdom’, because there’s no way to find wisdom in what is otherwise an arbitrary lookup table- you need a little bit of handwaving about recursion.

      • Alex says:

        You could show me that chip fabs were going to stop working, so computing would become less available

        Do you mean stop producing faster chips? Why is it necessary for them to actually stop working?

    • ilzolende says:

      Silly answers: Someone builds a Friendly superintelligence, at that point I would not be concerned about some future superintelligence being able to do harm. Someone proves that we’re all going to die next week, no way is a superintelligence going to develop and threaten us on that timescale.

      Stuff that suggests you can’t end up with any superintelligence over a 30-year time scale, such as evidence humans can only create arbitrary intelligent minds using some specific type of hardware that is not buildable at the necessary scales.

      Stuff that suggests the average superintelligence humans would make would be safe for humans, such as someone releasing next week a “make this reward-driven machine-learning agent safe if it becomes superintelligent” guide that is easy to implement.

    • Carinthium says:

      I lack any real knowledge of computer science, so to a large extent I’m deferring to experts in the field. Since the Rationalist community are devoid of the flawed philosophical assumptions of others and prominent experts support it, I presume by default it must be legitimate.

      I do however know there is no legitimate philosophical reason to doubt the potential power of an AI, so I would have to see convincing arguments about the relative reliability of Rationalists v.s the broad consensus, even if the consensus was now split.

      Ultimately, I have no legitimate reason to have a high degree of confidence anyway.

      • The “rationalist” community is currently deferring to someone with no background in computer science, whilst the domain experts in AI remain unconvinced. (Hawking, Bostrom etc are not *domain* experts)

        I’m not sure who the others with the flawed presuppositions are. Most English-speaking philosophers are committed to naturalism.

        • FeepingCreature says:

          > The “rationalist” community is currently deferring to someone with no background in computer science

          Note that the rationalist community does consist to a large degree of people with a background in computer science: ~41% “Computers”, with the runners-up being “Other” at 10% and “Mathematics” at 8%.

      • Mark says:

        Obligatory response: Lots of groups think they’re right for reasons the group believes in. Issues should really be taken on a case by case basis.

        Which you clearly have, since you don’t have high confidence either way.

    • Richard says:

      I see very little evidence that Moore’s law is going to get us there by brute force alone, so we need some sort of “algorithmic paradigm shift”, i.e. a much smarter way for machines to figure things out.

      When this shift occurs, we probably need a few years to fine-tune the algorithms before some sort of superior intelligence shows up.

      Any evidence that we are actually on the verge of or have already seen such a shift means it’s time to go into panic mode.

      The obvious downside is that we’ll have a lot less than 30 years warning, so thinking about it now to get a bit of a head start can’t hurt. On the other hand, after we figure out how to make smart-ish machines efficiently, we have a much better handle on how they will actually work so we may not need the full 30 years.

      • Alex says:

        I feel like the smart-ish machines would change the world so much from today that plans we make now would be moot. So this is true, but useless.

    • Bugmaster says:

      I am in the “no threat from super intelligence” camp, so here are some pieces of evidence that would convince me otherwise (in no particular order):

      * Weak: someone builds a machine translation engine that can translate from any language to any other language better than native bilingual speakers can.

      * Medium: someone builds a program that can recursively improve itself, while running on given hardware, until it arrives at the non-trivial, theoretically optimal solution, along with a proof that this solution is, indeed, optimal. This program is non-trivially useful; i.e., it can do something else besides improving itself. (Obviously, the non-recursive versions of this do exist; they are called “optimizing compilers”.)

      * Medium: someone builds a program that can solve any problem posed to it, at least as well as a human expert can, as quickly as the human can. For example, something similar to the Magic service, only not limited to restaurants, travel, and online purchases.

      * Strong: any evidence that self-replicating molecular nanotechnology is even remotely possible. Conventional nanotechnology, such as the photolithography that is used to create computer chips, does not count.

      * Strong: a new cellphone, twice as fast as the previous model, designed completely by software without any human intervention (beyond specifying the constraints such as budget, physical size, etc.)

      * Strong: protein folding is finally solved by a botnet that has gone totally unnoticed until now.

      * Medium: a design for general-purpose computer hardware that can be scaled infinitely without diminishing returns (including the penalties incurred due to problems with heat dissipation, energy consumption, and the speed of light).

      * Strong: it is discovered that hackers can remotely hack my TI-82 calculator to execute arbitrary code, without relying on social engineering (i.e., making me hack it for them). Or, if you prefer, let them hack the onboard computer in my 1982 Toyota Camry (once again, remotely).

      * Strong: a computer program solves several hitherto unsolved problems in some scientific field (say, physics, or biology), thus opening up entirely new avenues of discovery. The program achieves this without performing any experiments in the physical world, but rather, solely through computation.

      * Strong: someone proves that P=NP, after all.
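      (As an aside on the “optimizing compilers” parenthetical above: the fixed-point flavor of that process is easy to sketch. The following is a toy illustration in Python with made-up rewrite rules, not any real compiler:)

```python
# Illustrative toy only (not any real compiler): a peephole optimizer
# that repeatedly rewrites a tiny stack-machine program until no rule
# fires -- the non-recursive, fixed-point cousin of a program that
# "improves itself".

def simplify_once(prog):
    """Apply local rewrite rules in one left-to-right pass."""
    out, i = [], 0
    while i < len(prog):
        if prog[i] == ("add", 0):                     # adding 0 is a no-op
            i += 1
        elif (i + 1 < len(prog)
              and prog[i][0] == "add" and prog[i + 1][0] == "add"):
            out.append(("add", prog[i][1] + prog[i + 1][1]))  # fold adds
            i += 2
        else:
            out.append(prog[i])
            i += 1
    return out

def optimize(prog):
    """Iterate until a fixed point: no rule improves the program further."""
    while True:
        new = simplify_once(prog)
        if new == prog:
            return prog
        prog = new

prog = [("push", 1), ("add", 2), ("add", 0), ("add", 3), ("mul", 2)]
print(optimize(prog))  # [('push', 1), ('add', 5), ('mul', 2)]
```

      Each pass applies local rewrites, and iteration stops when a pass changes nothing; that fixed point is exactly what separates an optimizing compiler from a genuinely recursively self-improving program.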

      • “any evidence that self-replicating molecular nanotechnology is even remotely possible. ”

        Aren’t living creatures already a demonstration of that?

        • Bugmaster says:

          Good point; although living creatures cannot ever directly induce a “gray goo” scenario. I should have added “non-water-based” to my list of requirements.

          • Saint_Fiasco says:

            Some viruses and bacteria can kill people and transmit themselves to other people and so on, without necessarily consuming them entirely like the popular image of gray goo.

            I’m thinking of something like the “gray death” from the videogame Deus Ex.

          • Bugmaster says:

            Yes, I believe that a pandemic is a much more credible threat than AI.

        • Perhaps what you need is proof that nanotechnology will outcompete the biological version.

      • Kiya says:

        I’m a programmer who thinks superintelligent AI is scary but not near-term likely. I dunno about your examples; I feel like they fall into two categories:

        * Impressive feats of computation that do not require generalism. Human-quality machine translation, helpful personal assistants, auto-designed cell phones, and computer-written scientific papers are things that, if we get them, I’d expect to grow out of specific-purpose algorithms (maybe the algorithm is “take ridiculous quantities of data and machine-learning it,” but whatever) rather than general AI. Fifty years ago, you might have listed “a program that can beat a human grandmaster at chess” among these.

        * Technologies that would be dangerous if a rogue AI had access to them. Nanotech, protein-folding, computronium, remote hacking, P=NP. Some of these would also be pretty dangerous just in human hands.

        I’ll be concerned about AI if we ever write something that takes actions outside the domain its programmers expect, not because of a mistake in the code but because the program’s internal goal-generation function decided that it wanted to.

        • Bugmaster says:

          > Impressive feats of computation that do not require generalism…

          I believe that some of my examples do require generalism, or something so close to generalism that the differences are academic. Both accurate machine translation and the personal valet service would essentially require the AI to pass the Turing Test. Granted, this would be totally unnecessary for a paperclip maximizer, but still, it’s a good ballpark indicator for general intelligence.

          > …auto-designed cell phones…

          This was meant to illustrate the “recursively self-improving hardware” part, not the “general intelligence” part.

          > … computer-written scientific papers…

          This was meant to illustrate what IMO is confusion between “intelligence” and “omniscience”. I don’t believe that an AI, or any other actor for that matter, would be able to make useful and transformative discoveries in science merely by thinking very hard (though obviously it could do that in the area of math).

          > …Some of these would also be pretty dangerous just in human hands…

          Yes, exactly. One of the reasons I am not concerned about AI is because, sadly, merely human actors are way more dangerous. If you want humanity to make it to the point where we need to worry about AI (someday), then you should not invest in MIRI; you should invest into better computer security, medicine, science in general, and maybe throw a few cents at asteroid tracking, just in case.

          > …not because of a mistake in the code but because the program’s internal goal-generation function decided that it wanted to.

          But this is already the case! Malfunctioning software, from radiology equipment to processors embedded into vehicles, has killed people before. It will do so again; but, fortunately, this is not a world-ending threat. We absolutely need to write better software, build better bridges, manufacture better drugs, and so on; but worrying about AI does nothing to solve these very real problems.

          • HeelBearCub says:

            But isn’t the “want to” actually very important (in terms of the threat of AI?)

            I would add that at a bare minimum for me to start being concerned you need to show me an AI that has self-modifying wants/desires. I’m not talking human level consciousness, even a mouse level of consciousness doesn’t seem possible right now.

      • vV_Vv says:

        Strong: protein folding is finally solved by a botnet that has gone totally unnoticed until now.

        What do you mean by “solved”? Each protein is a problem instance on its own. Like all NP-complete problems, some instances are easy, others are hard.

        • Bugmaster says:

          I mean, “there’s a search box, you paste your amino acid string (of arbitrary length) into it, and a few minutes later it outputs the correct 3d shape for that protein”.

          • vV_Vv says:

            Won’t that imply P=NP?

          • Nornagest says:

            NP isn’t magic impossibility juice. Many problems that’re NP-complete in their fully general form can be approximated well or solved with high probability in polynomial time, and many more have large subsets in P. I’m not a protein folding expert, but if any of these options were found for it I’d consider the problem solved for practical purposes.

          • InferentialDistance says:

            Won’t that imply P=NP?

            NP problems are solvable, but slow. It could just throw a truly absurd amount of computation at the problem.
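            For instance, subset sum is NP-complete, yet a brute-force solver is a few lines (a toy sketch in Python): exponential in the number of elements, but it always terminates.

```python
from itertools import combinations

# Toy sketch: brute-force subset sum (NP-complete). Checking all 2^n
# subsets is exponentially slow in the worst case -- slow, not impossible.
def subset_sum(nums, target):
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo   # first subset found that hits the target
    return None                # no subset sums to the target

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # (4, 5)
```

            Doubling the list size squares the number of subsets to check, which is the sense in which the problem is “slow” rather than unsolvable.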

          • moridinamael says:

            This has been possible for years. It’s just slow. I don’t get what this has to do with superintelligence, except that a superintelligence would probably come up with better heuristics for protein design.

          • vV_Vv says:

            NP-complete problems may be efficiently solvable for many instances of practical interest, but IIUC Bugmaster is talking about worst-case complexity.

            If the worst-case complexity of NP-complete problems is exponential, as it is conjectured, then no amount of physically plausible computational resources will be sufficient to solve the hard instances above a certain size.

      • Daniel Smith says:

        The evidence you ask for has a flaw, in that it isn’t predictive. In other words, in the possible world where AI safety will eventually be a real problem, you won’t be convinced until it’s also an imminent problem. Ideally, one would like to figure that out ahead of time…

    • Kiragon says:

      I lean towards superintelligence not being a threat. Or at least, I don’t think there is any significant risk of a superior but human-like intelligence posing a threat to society. Disclaimer: I am a programmer, but not a researcher.

      Here are two fundamental problems that come to mind when I ask myself why AI doesn’t seem like a threat:

      1. Right now, I see evidence that machines are smarter, but very little evidence of that intelligence becoming unified in a meaningful way. A lot of non-technical people seem to make a leap from “Computers can solve a lot of problems” to “Computers will magically become good at combining all our individual problem-solving programs into goal-directed decision making, like humans do.” Computers are much ‘smarter’ than they were 10 years ago in the sense that there are more programs that solve more problems, but the programs that actually solve those problems are mostly just highly specialized Chinese rooms that function with very specific inputs and outputs. There doesn’t appear to be any single force coming that could tie them all into a single problem-solving entity, and it’s not clear how one would come about in the near term, or even the medium term. Solving chess or translating a language is much easier than knowing whether you need Deep Blue or Google Translate to solve your problem. An AI would have to combine the solutions we’ve created so far, all of which were written by different people, all of which have different interfaces, and all of which rely on different technologies. It would then have to understand how and when to use each program, and combine that knowledge into some kind of high-level independent decision making. That program is a much more difficult task than any individual problem, and it doesn’t lend itself to the kind of distributed piecemeal solution that programmers are good at writing.

      2. Software design gets exponentially harder as you add to the size and complexity of the software. I kind of shake my head when superintelligence proponents talk about yearly hardware improvements, as if hardware were the barrier. The codebase for a superintelligence would have to be massively larger and more complex than any program in existence today. Developing such a thing would take thousands of programmer-years. The sheer scope of a project like that is such that by the time you’d finished it, the technology would be obsolete and you’d have to start over. Adding more people also wouldn’t help much – even if you have infinite money, it eventually becomes an organizational problem.

      It seems to me that our ability to create good hardware has improved dramatically and we’re fairly good at solving individual problems with specific inputs and outputs. However, our software design methodologies still fail past a certain size, and we still can’t combine our solutions into goal-directed behavior. I suspect that if Microsoft devoted 100% of their resources to AI-development, it would still be 10 years before they had anything worthwhile, and most likely the project would fall apart long before then.

      The kind of AIs that I could plausibly imagine posing a threat are the super-specialized AIs of the kind we’re already making. The grey goo scenario is still the most plausible description of an AI apocalypse I have ever heard, and it seems to be exactly the kind of scenario that AI-apocalypse proponents don’t talk about much.

      I would probably be more convinced of AI risk if the software I interacted with on a regular basis appeared to be moving towards more goal-directed and individualized behavior. I would need to regularly see computer programs learning and making decisions in real-world environments, even if the decisions didn’t represent a human-like intelligence. As it is, computers seem to be moving further into the realm of specialized Chinese rooms, each using interfaces designed for humans to solve specific kinds of problems, with no capability to talk to any of the others.

      • I agree, but

        “Software design gets exponentially harder as you add to the size and complexity of the software”

        If that is the case, then an early application of AI could be in solving the problem, with software components that assemble themselves intelligently, even if the product being assembled is not AI.

        • Mark says:

          I could probably dig up a citation for this if you were interested. Basically, adding requirements to a software project imposes an “exponential” cost increase on the project.

          That may be an artifact of human software engineers, or the organizations that employ them. Or maybe a hint at something more fundamental. Who knows?

      • Alex says:

        I actually disagree. Yes, there is no evidence that intelligence is unifying, but we don’t need that. We just need enough automation that machines “close the loop” and humans are no longer necessary. In other words, we should not think of “superintelligence” as a unified project, but as a civilization run by machines.

        I don’t think it’s likely that machines can do everything humans can in our lifetimes, but no significant chance? We’re talking about the end of humanity. If there’s a 0.001% chance, that’s bad!

        The real problem is that, even if we knew superintelligence was certain, we could not improve the outcome.

      • hamnox says:

        “I see evidence that machines are smarter, but very little evidence of that intelligence becoming unified in a meaningful way.”

        This phrasing gave me a vivid image of an alien glancing over early hominids and saying ‘yeah, they’re a bit cleverer than the other mammals, but I don’t see that intelligence being organized in any meaningful way.’
        No other comment.

    • The easy answer is something else that wiped out the human race, or at least civilization, first. An engineered plague, say, or an all out thermonuclear war.

      For a more interesting answer, a development in theory that showed enough about how intelligence worked so that one could calculate, roughly speaking, how IQ increased with computing power, and demonstrated that the increase was logarithmic, or went as the fifth root, or … . In other words, a good reason to believe that making AI much smarter was very, very hard.

    • Deiseach says:

      what evidence could potentially convince you that super intelligence poses no real threat in the next 30 years?

      Short answer? I was born 196*mumblemumble*. I’ve grown up with at least four different “we are all going to die in the future!!!!” scenarios (including “By the year 1980 we will be plunged into a massive Ice Age”). None of which have come to pass yet, which of course does not mean that one won’t come to pass.

      But it’s Chicken Licken and the sky is falling at this stage. Personally, when the Soviets invaded Afghanistan, I was convinced “Oh crap, this is it: hot nuclear war”, due to all the posturing about The Evil Empire on the Anglo-American side and the USSR response to such. We made it through without that, so I am going to take any “This is going to do for us if we don’t stop it now!” with a sack full of salt.

      Evidence to change my mind?

      I’m working with a national-standard, government-contracted database, intended for joined-up local government use throughout the country, that is the most kludged-together, jerry-built bodge-up of a disaster you’ve ever had to struggle with just to keep it from wiping out a day’s work of data entry and processing under the guise of “backing up”. I’m betting other people working in similar situations under similar constraints have the same experience, whichever side of the Atlantic Ocean you’re on.

      When a Big Damn Organisation manages to produce something that does what it’s supposed to do, on time, on budget, and doesn’t make you want to rip your hair out, then I’ll believe that superintelligence is on the way.

      • ululio says:

        Maybe your odds of dying in a Cold War nuclear holocaust were in fact 90%, but your alternate selves who got nuked in other universes have no way of drawing conclusions because they died.

        • Deiseach says:

          Given the many times I did not die in a nuclear holocaust, explosion of a nuclear power plant a la Chernobyl (hello, Windscale sitting on the doorstep!) or the fallout cloud drifting over Europe from Chernobyl, not to mention the other End of The World scares I’ve lived through, then either I am living in the absolute safest of all the possible universes (in which case, I’m not going to worry about AI turning us all into paperclips) or the risks are not really that bad (again, not going to worry about AI turning us all into paperclips).

          To my alternate selves who may be a pile of radioactive ash – you’re not me, I’m not you, sorry for your troubles, still don’t think this means anything for my strand of the universe 🙂

          • illuminati initiate says:

            I don’t know if many worlds is true or not, but that’s not really how it would work if it was. To every one of your nuked selves, the past would look just as miraculously safe as it does to you now. So your past safety is not strong evidence of your future safety in that sense.

          • TeslaCoil says:

            I second survivorship bias.

            This is my 10th round in the Russian roulette. I haven’t been shot yet.

            Therefore, I am immortal.

      • Anthony says:

        Personally, when the Soviets invaded Afghanistan, I was convinced “Oh crap, this is it: hot nuclear war”

        My “oh, crap” moment was when the nuns at my high school announced that the U.S. had invaded Grenada, and I later found out that our troops had met Cuban troops on the island.

    • Jiro says:

      The LW-style unfriendly AI scenario seems to be heavily based around the idea that AIs will be unsafe because in order to keep the AI from doing dangerous things in pursuit of a goal we have to specifically program the AI not to do those things and we’re not capable of being that specific. So to convince me of this scenario, you’d have to show me that computers are programmed this way.

      The most accurate results from Google searching would be obtained if it destroyed the Internet and then returned 0 results. I’m pretty sure that Google was not programmed with an “ignore solutions that involve destroying the Internet” module. If you could show me such a module, that would help convince me that this sort of AI threat is real.

      • Anon says:

        Er. Google, the algorithm, isn’t given goals and then asked to accomplish them. It has the methods of accomplishing the programmer’s goals built in, and does not look for other methods. A useful AI presumably would not have this property.

        There are myriad examples of programs which are more generally “given goals” and then produce results very contrary to the programmer’s actual desires, because the programmer wouldn’t have done it that way. An example which has been on my mind recently is the Intel compiler essentially optimizing out the benchmarking code as a way of optimizing benchmark scores (see the end of the linked comment).

        • Andrew says:

          Wow, I knew that compilers could do some crazy stuff in the name of speed, but reading through that link and realizing what the Intel compiler did to the benchmark code was really cool.

        • Jiro says:

          If I want to find a hotel in France and I ask Google to search for “Paris Hilton”, it will also not do what I desired. That doesn’t mean that Google is now a rogue AI. Lots of programs produce undesired results some of the time, but the undesired results still fall into similar categories to desired results. I don’t think that future compilers are going to start blackmailing programmers into hand-optimizing the compiler output just because that produces the best code.

      • Izaak Weiss says:

        Didn’t someone (Scott or someone in the comments) link to an example of a Tetris-playing AI that paused the game before losing in order to never lose? That seems like an extremely relevant example of how an AI, if it were scaled up to do something important, would fail. All we have to do is phrase the preferred condition slightly wrong.

        • HeelBearCub says:


          This is an example of how stupid things can be, not evidence of a what a super-intelligent AI would do.

            Given a game that cannot be won, and an algorithm designed to start by mashing random buttons with the goal of increasing the value in the memory register for score while not triggering the memory register for loss, the only logical conclusion is to pause when the speed of the game has outstripped the speed at which inputs are taken from the buffer (meaning a loss is inevitable).
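            To make that concrete, here is a toy re-creation of that setup (Python; every number is made up for illustration):

```python
import random

# Toy re-creation of the Tetris story (all numbers made up): the "game"
# is lost once 10 un-paused ticks elapse, and the optimizer just mashes
# random buttons hunting for sequences that raise the score register
# without ever tripping the loss register.

def play(actions, max_ticks=10):
    """Run a button sequence; return (score, lost)."""
    score, ticks, paused = 0, 0, False
    for a in actions:
        if a == "pause":
            paused = True
        elif a == "unpause":
            paused = False
        elif not paused:      # a normal move earns a point and burns a tick
            score += 1
            ticks += 1
            if ticks >= max_ticks:
                return score, True   # board full: the loss register trips
    return score, False

def mash(tries=5000, length=15):
    """Random search for the highest-scoring sequence that never loses."""
    best_seq, best_score = [], 0
    for _ in range(tries):
        seq = [random.choice(["move", "pause", "unpause"]) for _ in range(length)]
        score, lost = play(seq)
        if not lost and score > best_score:
            best_seq, best_score = seq, score
    return best_seq, best_score

random.seed(0)
best_seq, best_score = mash()
# The best survivors score at most 9 points and then pause (or run out
# of buttons) before the fatal tenth tick.
print(best_score)
```

            The masher converges on “score a bit, then pause forever” not because it is smart, but because that is the literal optimum of the stated objective.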

          Nobody smart would say “this is my solution to the game”, but that optimization program isn’t smart.

          If people wanted to pimp the idea that we will die in an unmotivated automation error, that is a really different scenario than a super-intelligent AI deciding to liquidate us for “its own reasons.”

          • pneumatik says:

            I think it might not be unreasonable to consider the death-by-AI scenario to be death by unmotivated automation error. That’s definitely one way to look at death from a paperclip maximizer. The AI-is-dangerous argument would be that we can’t accurately predict how the AI would act based on its programming, so we would inevitably give it bad automation instructions. And then it’s too smart for us to stop it.

          • Anonymous says:

            If people wanted to pimp the idea that we will die in an unmotivated automation error, that is a really different scenario than a super-intelligent AI deciding to liquidate us for “its own reasons.”

            I think an unmotivated automation error is exactly the apocalypse people worry about. An AI won’t do anything for “its own reasons”–the reason would be (say) to fulfill an optimization criterion you gave it. The example shows that a poorly chosen optimization criterion leads to a bad result.

          • Anonymous says:

            >Nobody smart would say “this is my solution to the game”, but that optimization program isn’t smart.

            You’re wrong, because if your goal is to not lose, then that IS the solution to the game. That’s the solution anyone in their right mind would play if told “you will suffer horrible fate X if you lose”.

          • HeelBearCub says:


            You’re wrong, because if your goal is to not lose, then that IS the solution to the game.

            Under only very artificial conditions is anyone’s goal to never lose the game. People play games with the first goal of actually playing them (because they derive some sort of pleasure from doing so).

            A smart solution would be to say “this is the point at which the game can no longer be played without losing. Although technically there is no maximum possible score, nor is there any way to win the game, this is as close to a winning condition as possible”

          • HeelBearCub says:

            A very smart AI that has no intrinsic motivations only needs some minimal safeguards to avoid planet death by automation. Given that it is very smart and has no motivation to break those safeguards, why would it intentionally cause global planetary death?

            Now a powerful, single-use “AI” that isn’t actually generally smart – that poses issues. But it’s not the same kind of risk that is generally referred to: an apathetic-to-humans or even “evil” super-smart “skynet” type AI that wants to exterminate us.

          • InferentialDistance says:

            very artificial conditions

            Like all of programming? AI is not bolting computer-speed mathematical manipulation onto a human brain. What a human would do in that situation is irrelevant. What you want the AI to do is irrelevant. Only what you have actually told the AI to do (mathematically) matters. I mean, what do you think the ‘A’ in ‘AI’ stands for? “Awesome”?

            All AIs operate under very artificial conditions. The whole point is that we want to figure out what the right conditions are such that we don’t need a human sitting there auditing every action the AI suggests.

          • HeelBearCub says:


            very artificial conditions

            Like all of programming?

            I’m not sure you actually read what I responded to. Someone was claiming that the smart human answer to the optimal way to play Tetris is to pause it forever.

            Only what you have actually told the AI to do (mathematically) matters.

            Again, I’m not sure you are reading what I wrote. I said that if you want to pimp the idea of an unmotivated automation error I’m more open to that scenario as dangerous. But that is very, very different than the scenario which is mostly posited, that of a “super-intelligent AI deciding to liquidate us for “its own reasons.””

          • InferentialDistance says:

            I’m not sure you actually read what I responded to. Someone was claiming that the smart human answer to the optimal way to play Tetris is to pause it forever.

            If your goal is not to lose. You refuse to address the issue. Your answer is always “well that’s a silly goal”. No shit. The problem is that complex value systems inevitably result in silly goals, because human programmers are too incompetent to avoid that mistake unless we make an extra special effort not to screw up that way. Much like how life-critical computer systems (e.g. medical devices) have extra-extensive safety debugging, because shoddy workmanship killed people 30 years ago.

            And the field of Ethics is still having trouble figuring out what a formalization of a non-silly value system looks like. Except for pointing at humans and saying “sorta like that”, but then Psychology shows up and says “people are inconsistent and contradictory and weird” and then the ethicists go to the bar and get drunk because goddamn.

            Again, I’m not sure you are reading what I wrote. I said that if you want to pimp the idea of an unmotivated automation error I’m more open to that scenario as dangerous. But that is very, very different than the scenario which is mostly posited, that of a “super-intelligent AI deciding to liquidate us for “its own reasons.””

            I have a hard time seeing how “don’t lose at Tetris” is unmotivated (unless you mean “not the motive of the people who designed the AI”). Though if the Tetris example counts, “its own reasons” means “unmotivated automation error” too. The paperclip maximizer destroys humanity because humans aren’t paperclips and it exhausted all methods of paperclip generation easier than shoving the entire solar system into a Star Trek replicator to use as base matter for paperclips. Except for the small portion of matter it uses to create an interstellar spaceship so it can launch itself (or copies?) at other solar systems so it can turn them into paperclips too.

            Pausing Tetris to avoid losing is a hell of a lot like killing humanity to achieve World Peace. Too much value on saving lives? Everyone is now kept in a medically induced coma to avoid them making life-risking decisions like driving a car to work. Too much value on autonomy? The AI is now catatonic because every action it could take impinges on human freedom. Etc…

    • Alex says:

      I would raise my chances if I was convinced that Moore’s Law was genuinely “special,” aside from just lasting longer, in some way that most Performance Curve Database entries aren’t.

      But arguably the bigger question is not, What is the chance of superintelligence? The question is, Can we do anything now to improve outcomes?

      The very outside views that suggest a singularity might happen also suggest we can’t affect it.

    • Matt C says:

      Don’t think AI is a major threat in (probably) the next 50 years.

      Right now, we have something you could call AI, but it is narrowly specialized. AI does not deal with a wide variety of inputs and handle them generally. I think we will continue to see impressive feats of AI in specialized areas, but the software that drives your car will have almost nothing in common with the software that runs the fast food kiosk on the corner. Basically, what Kiragon said.

      I’d rethink my position if this stopped being true.

      (Aside, I won’t be surprised if we see AI life coaches/counselors that listen to your vital signs wearable and help you stay upbeat and active. I’d buy one.)

      Also, most conceptions of UFAI (not all) seem to assume that the AI will have recognizably independent motivations, internal goals of its own, that weren’t directly programmed into it, and which can then go horribly awry. I don’t know of anything like this in AI now. (I was interested in TIERRA for a little while, years and years ago.)

      Superhuman AI is not very clearly defined. People talk about it like it was a summoned demon in a fantasy story, a being that is a lot like a person but more powerful and dangerous. It doesn’t seem like something we have conceptualized well enough to even measure yet. (Apart from increasingly powerful hardware, which doesn’t get intelligent by itself.)

      If the MIRI people had a timeline or set of milestones that showed we are clearly progressing toward generalist and independently acting software, on track to something humanish, that might make me reconsider my point of view.

      • Izaak Weiss says:

        I posted this above, but it’s relevant to your comment too.

        “Someone (Scott or someone in the comments) linked to an example of a Tetris-playing AI that paused the game before losing, in order to never lose. That seems like an extremely relevant example of how an AI, if it were scaled up to do something important, would fail. All we have to do is phrase the preferred condition slightly wrong.”

        • HeelBearCub says:

          And I posted above that this is not an example of intelligence, but of optimization on very, very simple parameters (increase the score register, don’t trigger the loss register).

          • Izaak Weiss says:

            And it’s obviously easier to catch mistakes the more complex the goal system is? The more complex the parameters are, the more likely there will be a weird loophole that causes the AI to “pause” whatever it was doing, with drastic consequences to us.

          • HeelBearCub says:

            But this isn’t an example of something that:
            a) was caused by a smart AI. The optimizer program mashed random buttons while running millions of game simulations until it came up with an “optimal” solution.
            b) couldn’t be anticipated. Tetris is unwinnable, so it is completely anticipated that an optimizer which has no programming to deal with this will have weird results when it gets far enough into the game.

            So to claim that this is an example of either one is not true.

          • InferentialDistance says:

            But this isn’t an example of something that:
            a) was caused by a smart AI. The optimizer program mashed random buttons while running millions of game simulations until it came up with an “optimal” solution.
            b) couldn’t be anticipated. Tetris is unwinnable, so it is completely anticipated that an optimizer which has no programming to deal with this will have weird results when it gets far enough into the game.

            So to claim that this is an example of either one is not true.

            Right. We’re failing hilariously on much simpler artificial idiots with easily predictable outcomes. Imagine how badly people will screw up on the harder problem of artificial intelligences! Instead, we’ll get more idiots, but with value systems too complex to predict all the stupid outcomes of.

          • HeelBearCub says:

            @InferentialDistance: But the program didn’t fail; it didn’t fail at all. It successfully optimized Tetris under the constraints it had. It came as close as anything possibly could to winning the game and then paused.

            And it also beat tons of other Nintendo games.

            But it wasn’t an AI. It wasn’t an attempt at an AI. It is not analogous to an AI. It solved the games by brute force, not by being intelligent.

            So its applicability to the problem of “super smart AIs that want to kill us” is limited to say the least.

          • Irrelevant says:

            It came as close as anything possibly could to winning the game and then paused.

            No it didn’t. It played the dumbest/worst way possible then paused, because it lacked the search depth to figure out how to clear rows.

          • Mark says:

            For reference.

            Keep in mind the availability heuristic for things like this. So many people are running so many programs with so many, many mistakes and we only hear about/remember the interesting stuff.

          • InferentialDistance says:

            But the program didn’t fail

            No, the programmers failed (that’s why I said “we”, as in humanity). The program did exactly what it was told to do. The program will always do exactly what it was told to do. The risk is that we tell it to do the wrong thing, and it does it.

            It solved the games by brute force

            That’s an AI algorithm. In fact, that’s the core AI algorithm. “Smart” AI is about fast heuristics for predicting the future and good heuristics for generating possible actions to take. But the core algorithm (generate possible actions, determine outcomes, select best action) is the same brute force attack.
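
            To make that concrete, here is a toy sketch of the generate/simulate/select loop (invented for illustration; it is not the actual playfun program). With one-step lookahead and a score function that ranks losing below everything else, "pause forever" falls out as the optimal move in an unwinnable position:

```python
# Toy sketch of the "generate actions, simulate outcomes, select best" loop.
# (Invented for illustration; this is not the actual playfun program.)

def choose_action(state, legal_actions, simulate, score):
    """One-step lookahead: pick the action whose successor state scores highest."""
    return max(legal_actions, key=lambda a: score(simulate(state, a)))

# Minimal model of an unwinnable position: every real move ends the game.
def simulate(state, action):
    points, alive = state
    if action == "pause":
        return (points, alive)   # pausing changes nothing
    return (points, False)       # any placement tops out the stack

def score(state):
    points, alive = state
    return points if alive else -1   # losing ranks below any paused state

best = choose_action((100, True), ["left", "right", "drop", "pause"], simulate, score)
print(best)  # prints: pause
```

            The failure here is in the score function we supplied, not in the search: the loop did exactly what it was told.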

          • HeelBearCub says:

            No it didn’t. It played the dumbest/worst way possible then paused, because it lacked the search depth to figure out how to clear rows.

            Well heck, that makes this even less of an applicable example to any questions about smart AI. I misunderstood at which point it paused.

            From the paper: “Unsurprisingly, playfun is awful at this game.”

            So, not a smart AI and completely predictable that it failed. And it pauses forever because that is the best this particular program can do.

            Why do people insist this is applicable to how Smart AIs would fail?

        • Matt C says:

          Thanks for the $0.02.

          This sounds like an idea of UFAI where the AI is not recognizably conscious and is not deliberately trying to subvert its limitations.

          So say we decide to run all of our cars, trucks, etc., on one piece of software; then there is something like a Y2K bug, and some horrible day every motor vehicle in the country goes berserk and smashes up at once.

          This seems a little far-fetched to me. Not impossible. Also, it doesn’t seem like an end-of-life-as-we-know-it threat. Having your nuclear plants all melt down on the same day is a pretty big disaster, but it is not pave-the-earth bad.

          Can you spin a story about how an AI that is never thought to be conscious and is only given task-specific responsibilities gets to a position where it can paperclip the planet? Hearing how we might get there might make it seem more plausible to me.

          • Samuel Skinner says:

            I don’t know how consciousness is supposed to be a barrier. It is still capable of planning and takes steps to achieve its goals, right?

            Task-specific responsibilities are probably a bigger block. If it is only being used as a consultant and doesn’t have any independent command, it can’t do anything. As long as it is in the box it isn’t a threat.

          • Matt C says:

            Izaak’s example was a Tetris game that “solved” a problem in a human-unsatisfactory way.

            Presumably his point was that even a non-conscious AI can lead us into trouble, if given too many responsibilities with too little oversight.

            I don’t disagree with this, but I don’t see it as fundamentally different than being careful with other power tools.

            It’s a different story than an AI waking up one morning every bit as mentally capable as a person, a million times as fast, and motivated with goals that are at odds with human interests.

    • If it could be shown that people aren’t smart enough to create a GAI, there’d be no reason to worry about UAI. However, I have no idea what such a proof would look like.

      It’s plausible that there will eventually be increases in human intelligence, but I think that will take more than 30 years.

    • Samuel Skinner says:

      AI is a low threat (on the level of an asteroid hitting the earth: possible, and we need to take action to prevent it, but not a real danger) as long as it can only be produced by a few individuals who are in contact with each other (aka AI researchers). There are obviously cases where it can get out of hand, but having the people trained in dealing with the situation be the ones dealing with it is in fact the best scenario possible (so there isn’t anything a non-involved person can do to improve things).

      It becomes a danger when a large number of people can develop one, so I foresee AI being a threat worth worrying about when enough off-the-shelf processing power is available for people to develop one on their own.

      If AI is too complex for individuals or small organizations to make, if it takes a long time to develop, or if the amount of processing power available is too small to support one, then for the foreseeable future it won’t move from the first category to the second.

    • Charlie says:

      The best way to convince me that there was no real threat in the next 30 years would be to show that current apparently-impressive advances are in fact much narrower than they first appear, and that there are still much bigger unsolved problems. That although AI has made advances since, say, EURISKO or Deep Blue, those advances are in fact very small relative to how hard general intelligence is.

      This is analogous to how I could have been disabused, if I had gotten overly excited by Deep Blue or EURISKO.

    • Late answer, and a self-referential one at that:

      Conversely, for anyone who thinks there is no threat from super intelligence, what evidence would change your mind?

      Evidence that no one else was taking it seriously (enough to exercise caution), especially within the field.

      Basically, I reckon my personal opinion doesn’t matter much, I’m not developing AIs, or anything that runs risk of becoming one. I know there are projects heavily focussed on AI-like algorithms, and I get the impression the people who are doing this have probably weighed their options much better than I could, as an outsider.

      If I stopped having that impression – if I honestly started to think I understand AI risk better than a person working on one – I’d be concerned.

      (Spoiler: The reason that won’t happen is because I don’t trust my judgement on this topic. If someone in the field comes out and says “we cannot possibly build an AI that is malevolent, because X, Y and Z,” and X, Y and Z are not massively implausible, I’ll be strongly inclined to believe them. If someone in the field comes out and says “we cannot possibly build an AI that is malevolent, period,” I will try to figure out why they think so for a while, but again likely decide I don’t know enough. So saying I need ‘evidence’ is not as simple a hurdle as it seems, since it has to double as evidence that my knowledge on the matter is at least greater.)

      Anyway, I do understand this is basically cheating, but it’s also how I handle the complexity of the world in general. I could also panic, but that would feel like an undue investment of emotion and energy when I don’t actually know how AIs are made and can’t imagine I’ve assessed the risks better than someone who does. (I’m a developer, but my knowledge of neural networks (etc) is laughably minimal.)

      tl;dr: I optimistically bash the [Ignore] button on AI risk.

  8. Carinthium says:

    Second question (if you know nothing about this, Kahneman’s online TED Talk is the quickest way to learn).

    What do people think of the philosophical implications of Kahneman’s work on memory? I think, as far as I understand it (and as Kahneman himself does), that the implications are huge.

    One can (and people often do) define a difference between ‘happiness’ and ‘life satisfaction’. A person’s Happiness represents a measure of pleasure vs. pain from moment to moment that weights all moments equally. A person’s Life Satisfaction represents how satisfied they feel with their life when they reflect on it.

    Life Satisfaction, when it differs from Happiness, is usually an irrational distortion in the memory. A good example is when two different people had painful colonoscopies, and one of them clearly had more pain overall (as shown on a graph) but a less painful ending and so rated the experience less bad. Whatever they think, this was just plain wrong.
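
    For concreteness, the finding Kahneman calls the “peak-end rule” says remembered pain tracks roughly the average of the worst moment and the final moment, not the total. A toy illustration with invented numbers (not the study’s actual data):

```python
# Peak-end rule sketch: remembered pain tracks mean(peak, end), not the total.
# Pain sampled once per minute on a 0-10 scale; all numbers are invented.

patient_a = [4, 6, 8, 8, 7, 8]           # shorter procedure, ends at high pain
patient_b = [4, 6, 8, 8, 7, 8, 5, 3, 1]  # strictly more total pain, gentle ending

def total_pain(series):
    return sum(series)

def remembered_pain(series):
    return (max(series) + series[-1]) / 2   # peak-end approximation

print(total_pain(patient_a), total_pain(patient_b))           # 41 50
print(remembered_pain(patient_a), remembered_pain(patient_b)) # 8.0 4.5
```

    On the total-pain measure B’s procedure was worse, yet the peak-end measure (which is roughly what memory reports) ranks it as milder: exactly the distortion described above.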

    Most things a person gains a sense of achievement from are similar. The absolute amount of time you spend on a project feeling an unpleasant sense of hard work will usually greatly outweigh the time you have a sense of satisfaction afterwards, even added to the time you reflect on your project later. This is arguably speculation, but it is reinforced by the fact that moment-to-moment happiness doesn’t increase with an income exceeding $75,000, even though such an income would usually come with quite a big sense of achievement.

    The logical conclusion of this is a life philosophy which greatly downplays the value of anything involving a sense of achievement and supports ‘living in the moment’ significantly more, with the exception of a selfless person. But I’m not sure about this, which is why I’m asking for help and scrutiny.

    • Anonymous says:

      Why not a life philosophy which downplays “living in the moment” and supports experiences with good “peaks” and “ends”? That seems equally plausible as going the other way.

    • Irenist says:

      Momentary pleasure and life satisfaction seem like apples (moments of pleasure) and oranges (life satisfaction): Your model seems to mistake life satisfaction for merely being a person’s judgment about whether the sum total of pleasant moments over their lifespan has been adequate. An orange is not an aggregate made of apples. And life satisfaction is not an aggregate of pleasant moments. It’s a reflective judgment that one’s life as a whole has been (or at least is being) well lived, for whatever value of well (e.g., in accord with certain ethical values) the person has.

      Compare: there is great momentary pleasure in solving an intellectual puzzle or having an intellectual insight. But the satisfaction of having a “reflective equilibrium” isn’t an aggregation of those good feels. It’s a different satisfaction that comes with the feeling of having achieved an adequately coherent belief web. I think the relationship between pleasant moments and life satisfaction is similarly non-additive.

      Example: IIRC, surveys indicate that parenting has a lot of moments of frustration and exhaustion compared to its number of moments of feeling “happy.” So do lots of other activities, probably, like writing a great novel or doing lots of work in a lab. The aggregate of happy moments in parenting, novel-writing, or scientific research might be lower than a life spent dawdling on the beach or something.

      But child-rearing, literary creation, and scientific discovery are meaningful life projects. Having raised a child, written a good book, or made a worthy discovery brings great satisfaction that has little to do with how fun the process was.

      A concentration camp survivor named Viktor Frankl went on after the war to become a champion of something he called “logotherapy,” the idea that having some meaningful life goal (e.g., coding Friendly AI, finishing a creative work, serving a deity, whatever) could pull one through times when very little momentary pleasure was available. YMMV on whether his point is all that interesting, but it does seem like the utility one derives from a sense of having a “meaningful” life is different in kind from that one derives from a succession of pleasant moments.

      • Carinthium says:

        I don’t see any rational argument here whatsoever. To explain:

        -You start off with an argument by analogy. Conventional opinion accepts some arguments by analogy (contra me), but certainly not one with so little relation between the things compared.

        -Of course it isn’t. A reflective equilibrium is made by consulting your memories and instinctively comparing positive vs. negative as they appear in your mind currently. I haven’t demonstrated that position just by saying it, but nor have you shown yours to be right.

        -The hypothesis that a person’s life satisfaction and their pleasure are different is already established. If you mean that they feel meaningful when you say meaningful, I’ve never disputed that but it’s an illusion. If you mean objectively meaningful, you haven’t established it.

        -Things may very well be different when a person is severely traumatised and/or is suicidal.

    • James says:

      > Life Satisfaction, when it differs from Happiness, is usually an irrational distortion in the memory.

      But ‘irrational’ (and conversely ‘rational’) are not well-defined on the domain of utility functions. (Is this essentially the orthogonality thesis?) In general, you can’t say that one utility function is more rational than another; in other words, people are, surely, free to pursue whatever they want in life. If they prefer to pursue projects which maximize some other quality than moment-to-moment happiness, then I would argue that the sensible interpretation is that that is how their utility function is wired, not that they are trying to maximize moment-to-moment happiness and failing badly at it due to being irrational.

      I agree with Irenist that happiness and life satisfaction are not directly interchangeable; in particular, that life satisfaction is not just another particular type of happiness of the same class as, for instance, lounging on a beach.

      • Levi Aul says:

        A person who spends their entire life pulling a lever on a slot-machine while starving and bored because they don’t think they can do anything else at this point is listening to the activation of their dopamine system, but this activation isn’t corresponding to anything that would make them happy or satisfied.

        To put it another way: your goal-planning hardware doesn’t care about your utility function, it cares about its utility function, which can directly be measured in dopamine. This can be coupled or completely decoupled from any brain-states that you might qualify as “happy” or “excited” or “having fun.”

        In the pre-ancestral environment, one could live one’s whole (small, mammalian-prey-species) life without ever being happy or satisfied once: just experiencing bursts of terror and attempting to devise clever plans to escape the terrifying things. Our goal system is inherited from that system; it has no term in it for being happy that you escaped, just a term for planning to do things that way again more often in the future. A homeless drug addict, escaping the pain of their own life by finding clever solutions to attain more drugs, isn’t getting any utility from the drugs (after habituation, at least); they’re just trapped in a dopamine loop while their utility function is utterly ignored.

        • Leo says:

          This seems wrong. Modern hunter-gatherers are not like ancestral ones, but sufficiently like them that we can guess ancestral ones had a lot of leisure. They also had music and graphical arts. If you can’t be satisfied sitting around the fire with your friends listening to a good story and a decent flute player, the problem’s on your end.

          This also seems wrong about the effects of most drugs. I’m told heroin remains very pleasant no matter how long you use it.

      • Carinthium says:

        DISCLAIMER: There are a completely different set of arguments regarding those who pursue something based on a sense of right and wrong. My arguments don’t apply to non-humans, nor to those of genuinely selfless motives.

        But for those who want to feel good about life…

        First, I should point out that there are plenty of people who have no concept of a difference between Happiness and Life Satisfaction, and pursue Life Satisfaction by default. But I’ll ignore those.

        Second, the differences between Happiness and Life Satisfaction are formed by differences between how an experience feels when you’re having it (or in your short-term memory, possibly) and in your long term memory.

        Third, the colonoscopy study speaks for itself. One of them thought he had more pain during the procedure, when measurements prove for a fact he didn’t. This can’t be treated as pursuing something different from happiness.

        Fourth, since in practice what people pursue is closer to Life Satisfaction, there’s a severe risk of rationalising after the fact, because you’ve already shaped your life around a lie.

        • switchnode says:


          Why should happiness based on past experiences be an attempt to replicate happiness during those experiences exactly? If you remember an experience with pleasure, then considering that experience makes you happy now, regardless of how pleasurable it was at the time. He may not be correct in believing that he had more pain during the procedure, but he is certainly correct in believing that the procedure caused him more pain.

          ((Colonoscopies are a particularly poorly chosen example anyway: they may hurt at the time, but if they catch your cancer at just the right stage they certainly improve your quality of life overall.))

          If you have little enough capacity for moment-to-moment happiness—or your time preference is sufficiently high, or you care enough about your values (not necessarily ethical) that momentary happiness feels hollow (which I think you are far too ready to discount)—then pursuing ‘life satisfaction’ can genuinely give you more utils than ‘happiness’.

          (You may say that people like that are a minority, and I think it is defensible to argue that many people who would prefer happiness are pressured or misled by societal scripts into pursuing life satisfaction instead. But you can’t conclude that no one is better served by life satisfaction on those grounds.)

          I mean, if you’re going to advocate wireheading, go ahead; that at least would be self-consistent. But either life satisfaction is a rational goal or we should all be on drugs, all the time. Isn’t this just the preference-utilitarianism argument all over again?

  9. Richard says:

    WRT the expensive hospitals, I have always viewed the US healthcare system as a very inefficient way of distributing resources from the many to the few.

    Extremely simplified:

    Healthcare is financed by insurance.
    Insurance companies need to make a profit.
    Juries award bizarre sums of money for “malpractice” cases, most of which would be considered honest mistakes in any sane system.
    Insurance companies need to raise malpractice insurance premiums to cover those costs.
    Hospitals need to raise prices in order to cover malpractice insurance.
    Insurance companies need to raise health insurance premiums to cover the raised costs of healthcare.

    And so on to infinity.

    • I think you’re gigantically overestimating the problem of malpractice litigation and insurance, as it contributes to overall health care costs.

      Some states have limited malpractice suits or awards through legislation. I think the effect on health care costs in those states has been negligible.

    • rttf says:

      I agree. Compared to my country’s system, “health insurance” just looks like a parasite designed to suck money out of the system. With universal healthcare everyone just pays what the medical procedures actually cost; there’s no company in the middle whose only function is to take a cut of that money.

      • Irrelevant says:

        There’s no company in the middle whose only function is to take a cut of that money.

        …he said, completely disregarding how government agencies work.

        • rttf says:

          So you don’t have government agencies in America then?

          At least learn how your country works before discussing it.

          • Irrelevant says:

            So you don’t have government agencies in America then?

            What’s that got to do with anything? I’m not naive enough to literally believe either that the typical government agency functions “only to take a cut of the money” or that an insurance company functions that way.

            A government agency that has devolved to the level of inefficiency you describe (merely taking a cut) does, however, survive far longer than a business that has reached the same point, which is why your statement was ridiculous.

          • John Schilling says:

            Longer in the sense of requiring more time to reach that degenerate state, or in clogging up the works for longer once they’ve reached it?

            Both are true of government agencies; it isn’t clear which you consider more significant.

          • Irrelevant says:

            Clogging up the works for longer once they’ve reached it.

          • rttf says:


            Stop trying to backtrack please. You got caught saying something stupid, just own up to it.

          • Irrelevant says:

            Awww, the “I know you are but what am I?” school of rhetoric. Adorable AND persuasive!

            I have not backtracked at all, nor am I going to. You made the idiotic assertion that your government-administrated system features no extractive middlemen, the equally idiotic assertion that insurance companies are merely extractive middlemen that provide no actual service, and have defended those claims with nothing but flapping and squawking. If you have an argument, let’s hear it.

      • John Schilling says:

        Which medical procedures, and how do you know what they actually cost? Yes, you get a bill for how much was spent on the procedure that was actually done, and there’s not much padding above that. Fine, you’re being very efficient.

        Except that there’s usually a wide range of procedures available to address any problem, each of which can be performed in a more or less expensive manner. The most expensive way of performing the most complicated procedure, can cost an order of magnitude more than the cheapest effective way of fixing the problem. Which, incidentally, means an order of magnitude more people with a vested interest in seeing that “solution” chosen – possibly including a body of lawyers.

        My country has until recently had a single-payer universal space program, and it still has a single-payer universal defense program. And working in those programs, I see this sort of effect resulting in enormous, unnecessary cost escalation – to the extent that when we open up the space program to corporate competition, we do see costs drop by nearly an order of magnitude.

        It’s clear that American health insurance companies haven’t achieved order-of-magnitude cost reductions in health care, but I’m not convinced that universal health care has achieved the optimal solution either.

      • Julie K says:

        By that logic, all conceivable goods and services ought to be provided by the government rather than by for-profit businesses, in order to eliminate the bloodsucking middleman.
        Do you think that would be a good idea?



      1. Administrative costs arising from the sheer complexity of the system.

      2. Having many small healthcare providers means they pay more for drugs due to lack of bargaining power.

      3. More intervention.

    • stillnotking says:

      I think an insurance model for health care is silly for an even more basic reason: everyone needs health care. Insurance is designed to limit risk. Using insurance to cover the cost of a checkup or some antibiotics is almost as bad as using “hunger insurance” to cover the purchase of food.

      Granted, there are catastrophic scenarios that affect relatively few people and can be financially ruinous. The insurance model makes sense for those, but generalizing it to the entire field of health care makes no sense at all.

      • Tom Womack says:

        If the actual divide-all-the-costs-over-the-whole-population figure for catastrophic health care comes to $9,000 a year, it’s not obvious that paying $10,000 a year and also getting $900 worth of check-ups and general prescriptions is a bad idea. It’s the other side of the complaint that making doctor-visits cost money up-front is a sure way of causing people to turn up at doctors only with conditions that are already heading towards the catastrophic.

  10. Carinthium says:

    Third question. I’m having a lot of trouble understanding the IQ data right now. Can someone help me make sense of the studies on the table right now? I figure neither a Fixed nor a Growth mindset can be 100% right (that would be absurd), but maybe someone could help figure out how much it is?

    • Emily says:

      The studies on IQ I wouldn’t really describe as fixed vs. growth, they’re more heritable vs. non-heritable. Or genes, environment, neither. (Neither is a big piece.) Can you be more specific about what you’re asking? It’s definitely the case that there’s group variance in terms of the IQ threshold for different levels of educational attainment/entering different occupations, which could be a mindset issue.

      • Carinthium says:

        What I’d like is a more detailed idea of the degree to which a person’s IQ can be changed by their circumstances. Heritable vs. non-heritable is close enough.

        Right now there’s enough stuff on the issue that I’m actually rather confused by it.

  11. jaimeastorga2000 says:

    In the wake of the Sad Puppies controversy, I have recently come to learn of something called “The Mixon Report.” It reads like a spiritual successor to the Ms. Scribe story.

    And speaking of science fiction, 365tomorrows has published a hard science-fiction story of mine called “Procrastination”, which was inspired by the writings of Robin Hanson. I believe Slate Star Codex will find it enjoyable.

    • Bugmaster says:

      (you are missing the “http://” in your href for “Procrastination”)

    • ddreytes says:

      Reading about the Mixon thing, and especially the Sad Puppies thing, I’m beginning to think that the idea of a science fiction community is just a fallacy. That there just isn’t such a thing, certainly not any remotely functional or coherent one. There’s science fiction institutions, but institutions don’t make a community.

      • Bugmaster says:

        Doesn’t the very existence of the Mixon report contradict this claim?

        • ddreytes says:

          How so?

          I don’t think that the fact that a lot of people are committed to the idea of a community implies that a community actually exists.

          I mean, I think it’s sad if it doesn’t exist. But, like, the fact that the RH situation appears to have been so difficult to resolve (if it has been resolved) and that it’s apparently essentially impossible to stop someone from behaving in that way makes it difficult for me to see SF as being a functional community. Even more so the massive and intractable differences between the various sides of the Sad Puppies thing, where it honestly feels like neither side has any interest in being in a community with the other. It just seems like there are too many irreconcilable fault lines and not enough norms or ways of working around them.

          • Peter says:

            There was some other alleged community where I once said, “it’s not so much a community as a patchwork of partially-overlapping communities” and SF/F looks similar from here.

      • Irrelevant says:

        I’m confused how anyone ever thought “people who read books containing unrealistic elements of aesthetic type X” would make a natural cultural grouping in the first place.

        • Tracy W says:

          Let’s see, if you read books of type X, and you want to find new, good, books of type X, and/or at talk about them in great detail, it is useful to find other people who read books of type X. So you seek these people out, and then someone disappears for a bit, and everyone wonders where they are, and they come back and say “Oh I had a nasty case of the flu” or whatever and everyone says “Oh, sorry to hear that”. Then someone else says “Guess what? I got an A in my exam!” and everyone says “Yay” and before you know it you’re a community.

          • Irrelevant says:

            Yeah, that’s a decent basis for, say, an IRC channel, which can certainly be its own clique. But I said culture. My general objection is to attempts to discuss “greater nerddom” as if it were a thing, a complaint which extends to the no-more-usefully-precise “Gamers” and “Sci-Fi/Fantasy Community.”

            If you drill down further, there are some specific franchises (Star Trek, World of Warcraft) that manage to have subcultures, or you could go up a layer and define a subculture made up of everyone who habitually attends cons regardless of subject, but trying to define a subculture at the level of media genre enjoyment is a fool’s errand unless the media genre in question is tabletop roleplaying games.

          • Tracy W says:

            Still not seeing the problem. Just because something isn’t precisely defined doesn’t mean it isn’t a thing. The English language isn’t precisely defined, nor is the line between child and adult.
            I can think of no a priori reason why we would expect the universe to come in nice chunks that can be precisely defined. And it would be awfully limiting if we could only talk about things that do come in precisely-defined chunks.

          • Limi says:

            Irrelevant: I just don’t know what else to say to you except that you’re wrong :S Perhaps it’s just because you aren’t a part of it, but there is very clearly a gaming culture, an sff culture and a wider nerd culture. Think of it like a country – there is Australian culture, but both independent of and encompassed in that, there is Sydney culture, which is very different to Brisbane or Perth culture. Up the zoom again and you have Redfern culture, which is different from Pyrmont culture. The same happens with any culture – it contains subcultures which share characteristics but are distinct from their parent.

        • Anthony says:

          In the early years, the “unrealistic elements of aesthetic type X” were extrapolations of current science and technology, and the people interested in those books were more interested in the then cutting-edge of science and technology than was the general reading public, so there was substantial common ground to make a natural cultural grouping.

      • Tracy W says:

        I’m a bit puzzled. Is there any sizeable community that doesn’t have, or hasn’t at some point had, a few badly dysfunctional members? And what communities are coherent?

        • ddreytes says:

          I suppose that it looks to me like the problem is twofold. First, you have the problem of deeply dysfunctional members, like RequiresHate, or Vox Day*, and the community’s apparent difficulty in finding a way to curb their behavior. But second, you have the problem that you have large segments of the community who seem to be more or less functional – or as close as it gets – who seem to have fundamentally different outlooks on the basic purpose of fandom and what it ought to look like, an inability even to agree on what it actually looks like at the moment, and an apparent unwillingness to even try to come to terms with each other. And it’s the second part that’s more worrying, and it’s what I’m trying to get at when I talk about a ‘coherent’ community. I think if you’re going to have a coherent community, you have to have some basic commonality of worldview, and I guess my point is that it seems that no longer exists in science fiction fandom.

          *I almost hesitate to characterize Vox Day in that way, since his outlook and methods are apparently considered perfectly acceptable and defensible in these comments sections, but that’s certainly how it looks to fandom at large.

          • Samuel Skinner says:

            Vox Day hasn’t done anything fundamentally different from what the previous two Sad Puppies campaigns did.

            I’m also unsure what you mean by his outlook: the fact that he is a neoreactionary, or the fact that he feels he is in a culture war with SJWs and a political battle over control of the award? What, specifically?

          • Tracy W says:

            @ddreytes: I guess I consider that many other communities cope with fundamentally different outlooks on the basic purpose of fandom and what it ought to look like, and an inability even to agree on what it actually looks like at the moment, and an apparent unwillingness to even try to come to terms with each other. Of course occasionally this leads to splits, as in the Protestant and Catholic churches, but on the whole communities seem to muddle through (see the Anglican church split on things like women ministers between people from more liberal countries and more conservative countries).

            This is why I asked you for some examples of communities you regard as coherent.

          • Vox Day has explicitly disavowed being neoreactionary, preferring to call himself an old-fashioned reactionary. And while I have relatively few disagreements with him on matters of fact, I consider his populist shit-talking style to be tiresome and counterproductive, a view which is shared by most of NRx as far as I can tell.

      • John Schilling says:

        Fan-run conventions exist. Dozens to hundreds of people go through the immense trouble of arranging the logistics of a formal convention, without any professional expertise in the field, for no pay except the respect of their peers and for no reason other than to facilitate a weekend of IRL meetings between people who only otherwise interacted virtually – and this began in the days when “virtually” meant “snail-mail mailing lists”.

        That’s a community, or at least very strong evidence for the existence of a community. It may be a community in decline, in transition, or in the process of fragmentation, but it exists. I used to consider myself a part of it. And some people clearly think it is still worth fighting for.

      • FacelessCraven says:

        @ddreytes – “Reading about the Mixon thing, and especially the Sad Puppies thing, I’m beginning to think that the idea of a science fiction community is just a fallacy.”

        Social Justice is a general purpose and very powerful social solvent. There is no community sufficiently well-bonded to resist its effects.

        • Randy M says:

          Which makes perfect sense, because the SJWs’ main aim is to increase diversity in non-diverse places, and as Robert Putnam showed, diversity basically acts as inflation on social capital.

          • Harald K says:

            I disagree. I think that the SJWs’ strongest shared aim is to construct a positive social identity for themselves, as people on the right side of history, or the universe’s arc, or whatever. Being champions of diversity is a means to that end. They are very selective about what kinds of diversity they care about, and the places it matters, and what guides that selectivity is their quest for positive identity.

            There is also a minority of them who are focused on building social capital – or simply power – for themselves personally. Requires Hate was a bloody obvious one, and was eventually recognized as such even by social justice warriors.

            People like that are not common, but the online social justice community is especially vulnerable to them, so they rise to the top. They aren’t bothered by inflation of social capital – the power they hold is the power to exclude people from the positive identity SJWs crave, and give them something damn negative instead. Pretty much at their personal discretion. That doesn’t diminish with diversity.

        • grendelkhan says:

          There is no community sufficiently well-bonded to resist its effects.

          The people here and at LessWrong seem to have done reasonably well, and it’s not like they’re islands entirely isolated from the Social Justice mainland. I remember contact nearly seven years back; the mswyrr post is now locked or deleted or whatever, but Yudkowsky showed up in the comments and all.

          • I’m not sure how much of LW’s resistance to SJ was that there were a lot of people opposed to SJ, and how much was that the culture of courtesy and rationality there forbade the worst of the SJ attacks on people for having hidden motives.

          • FacelessCraven says:

            My frame of reference only goes back six or seven months, but it seems to me that there are a lot fewer SJ types posting here than there were before, and I’ve seen at least a couple leave in bad humor. The debates seemed a lot less one-sided when I started hanging around here last fall.

            Posting on Thing of Things, there are people there I don’t see here any more, and people here I don’t see there any more. Your community seems very resilient, but it seems to me that the sorting still happens. I’ve been wondering for awhile if there’s any way to stop it. I rather think not, based on the conversations I’ve had with SJs, but it seems like it would be good if there were.

          • I was talking about LW, which is a related community but not the same as SSC.

          • Matthew says:

            My frame of reference only goes back six or seven months, but it seems to me that there are a lot fewer SJ types posting here than there were before, and I’ve seen at least a couple leave in bad humor. The debates seemed a lot less one-sided when I started hanging around here last fall.

            I have a very different impression. First, I don’t think that many SJ people have left. I do think a lot of new conservatives have showed up, because of advertising of this blog by David Friedman, Econlog, and others.

            I also think things have gotten less balanced, not more. I’m an anti-SJ liberal, and even I’m disgusted by the amount of right-wing circle-jerking that has been going on in recent threads.

            Frankly, things are horribly out of balance now, and I hope Scott notices the problem and threatens to cull the ordinary conservatives the way he did the neoreactionaries if it doesn’t stop.

          • Whatever happened to Anonymous says:

            >I have a very different impression. First, I don’t think that many SJ people have left. I do think a lot of new conservatives have showed up, because of advertising of this blog by David Friedman, Econlog, and others.

            Well, my impression is that both things happened (maybe one caused the other?).

            > I hope Scott notices the problem and threatens to cull the ordinary conservatives the way he did the neoreactionaries if it doesn’t stop.

            There used to be even more Neoreactionaries?

            My ideal solution would be the opposite: bring in more SJ people, they’ll start duking it out with the conservatives and, in the end, only the ones able to engage in reasonable conversations with outgroupish people will survive the inevitable (and justified) culling.

          • FacelessCraven says:

            @Matthew – “I have a very different impression. ”

            …And now, a full 24 hours later, I realize I transposed “less” and “more”. But yes, that was the point I was trying to make. The community seems to be sorting itself, which means that the ability to have a meaningful conversation dies out.

            I don’t think a cull would work; it seems to me that the gap between ideologies is what drives it. Any sort of forum that tries to maintain civility is going to sort itself out, because the disagreement is fundamentally over what constitutes “civility”.

            @Nancy – My bad. From watching it happening here and on ToT, I assumed the trend was wider in scope. What percentage of the larger LW community advocates Social Justice? Based on what you said, I’d guess a relatively small percentage. My model of SJ holds that there would be a strong push to change the culture if that was all that opposed it.

            If LW has had a sizable SJ contingent for over half a decade, has maintained debate and avoided sorting, that would be very good news. I was pretty sure that wasn’t possible.

          • I hope someone with a better memory than mine steps in with a history of SJ at LW– I tend towards broad strokes based on impressions rather than specific details.

            There’s at least one structural reason why SJ couldn’t get a real grip on LW– it’s a single blog. Livejournal and Tumblr have a very different atmosphere because a lot of different people have their own blogs, so conflicts can get very intense.

            Scott does a yearly survey for LW, but that would tell you something about what people believe, which isn’t the same thing as what they post about.

        • ddreytes says:

          I have to say, I’m kind of skeptical of this argument, not least because of how directly it serves both to absolve you and yours of any responsibility for what’s gone on, and to justify anything that your side undertakes as a necessary reaction to the perfidious SJW.

          • FacelessCraven says:

            @ddreytes – “I have to say, I’m kind of skeptical of this argument, not least because of how directly it serves both to absolve you and yours of any responsibility for what’s gone on, and to justify anything that your side undertakes as a necessary reaction to the perfidious SJW.”

            Well, one could point out the number of communities entered by SJ, and note how many of them have split catastrophically…

            But beyond that, it seems to be implied by Social Justice theory itself. SJ claims to be addressing serious structural oppression. If you accept that worldview, then whatever community you are part of, you are going to want to turn its efforts toward the Good Fight. If you are sci-fi writers, you’re going to try to use sci-fi to advance the cause. If you’re atheists, you’re going to use atheism. If you’re gamers, you’re going to try to use games.

            What happens when some members of the community won’t get on board? What happens when they reject Social Justice, refuse to cooperate, and even fight against you because they care more about video games or debating creationists or writing stories about space marines than they do about actual, real-world structural oppression happening in their very communities? Well, the community splits.

            From my model of their point of view, it doesn’t seem like an evil decision to make. If I had their priors, I’d do much the same. My point was not an attack, it was an observation. Now that I’ve spelled it out in more detail, does it seem wrong to you?

            As for what “my side” undertakes as a necessary reaction to the “perfidious SJW”, in this instance, it seems we’re advocating that people should buy Worldcon memberships and vote for science fiction stories that we like. And the response to that was accusations of a white supremacist takeover of the Hugos in the national media, and a moderate dusting of incidental death and rape threats. Does that seem appropriate to you?

          • houseboatonstyx says:

            SJ claims to be addressing serious structural oppression. If you accept that worldview, then whatever community you are part of, you are going to want to turn its efforts toward the Good Fight. If you are sci-fi writers, you’re going to try to use sci-fi to advance the cause. [….]
            What happens when some members of the community won’t get on board? What happens when they reject Social Justice, refuse to cooperate, and even fight against you because they care more about video games or debating creationists or writing stories about space marines than they do about actual, real-world structural oppression happening in their very communities?

            Speaking of structural oppression(tm), SJ types had a case for the WorldCon/Hugo organization itself being oppressively structured. Some of the liberal people, young and/or lower income, more of them LGBT, etc etc, may see … a group of old white men (SMOFs*) who made the rules (difficult to change, requiring a long meeting and long travel) … owning an important award that costs at least $40 to vote on, but claims to represent all SF readers. Visually, the old white men who are now Puppy supporters fitted right into that old white SMOF patriarchy. SJs: “Loosen the rules!”

            The SJs did well in the previous Hugo awards; the Puppy types felt oppressed and set up a slate which did well in the current nominations. SJs: “Let’s all vote ‘no award’ on everything! No more slates! Tighten the rules!”

            * SMOF – Secret Masters of Fandom

      • Deiseach says:

        I’m resolutely keeping out of this whole mess. One thing, though, makes me sad: I was stupid enough to think the Hugos were, well, what it says on the tin: the Nebulas were the professional writers’ choice, like the Oscars, but the Hugos were the fans’ choice.

        Turns out I was mistaken in my ideas; the Hugos legally belong to Worldcon and you have to be a member of the World Science Fiction Society to make nominations, and only the attendees get to vote on the slate. So you need to pay to join the WSFS, pay to attend the convention, basically pay to vote.

        So silly teenage me, who hadn’t a snowball in hell’s chance of ever getting to a con, and who wasn’t American, and who had romantic dreams that the Hugos were ours, you know, the fans – well, for a strict definition of “fan” maybe; American, in the know and in the flow, can get to the one specific con, knows what strings to pull to get a nomination or vote. Not me and the rest of the English-reading world. Not all the fans, never all the fans.

        Maybe this is why in recent years I recognised fewer and fewer of the names touted as winners. I thought it was just that I was out of the flow, out of date in my reading, not keeping up with the newest names in the fields. Maybe not. Maybe it was a select little group happy with what they were doing, and now they’re throwing their toys out of the pram because another group of kids are on their playground.

        I have no opinions, but I’m sad that, as you say, the sense of community turned out to be as illusory as any other community that plumed itself “We’re not like them, we’re all in this together!”

        And what seems, at least to my cursory notion of what happened, to have kicked it all off (or at least serves as the excuse for the casus belli) is the story “If You Were A Dinosaur, My Love”.

        Which I’ve read, and am less than thrilled by. First, I don’t find it particularly well-written, and I’m talking here about the prose quality: I do not think it beautiful, powerful and intense. Second, is this what they mean by slipstream? Because this could appear in any modish mainstream literary magazine (I’m not sure if it’d make it for the New Yorker, though; maybe Granta might take a punt on it?) but I don’t particularly see anything skiffy (or speculative fiction, if we’re washing our faces to appear respectable for the neighbours) about it.

        My reaction to it was “Meh”. It’s watery; if we’re scoring revenge fantasies, I like mine on the level of the Queen of the Night’s aria, and this isn’t up to “Three Blind Mice” level. ‘If you were a dinosaur, my love, you’d be small and cute and I’d be your zookeeper’? That’s Pokémon, not gut-wrenching pain and despair.

        And this is what passes for award-winning in the field nowadays (I know it didn’t get a Hugo, but it was nominated, and it got a Nebula)? Listen, I’m forever heart-scalded that we never got the sequel to “Stars in My Pocket Like Grains of Sand”, because I’ll never find out if Marq and RAT Korga get back together after being parted because their love relationship whatever the hell they have going on posed an existential threat to galactic civilisation. Now, that’s the kind of Tragic Love Story that is truly SFnal!

        This is not on that level. I wouldn’t even try looking up the website for more of this writer’s work. Maybe I am old-fashioned and bigoted and insufficiently diverse, but I don’t recognise this as what got me into SF when I watched the moon landing on TV and discovered “You mean, there’s stories about things like this?”.

        • Anthony says:

          One does not have to attend Worldcon to vote on the Hugos; one merely has to be a “supporting member”, which costs US$40. A supporting membership allows you to nominate for the previous year and the following year’s Hugos as well – my Loncon (2014) membership allowed me to nominate for 2015, though I bought a Sasquan (2015) membership in order to vote this year.

          Incidentally, while “Dinosaur” was terrible, the novella “Wakulla Springs” is also a symptom of the rot – it’s actually really well-written, but it’s just not sci-fi or fantasy. The authors should have sold it to the New Yorker; they’d have been paid better (I think), and gotten better exposure.

        • “Turns out I was mistaken in my ideas; the Hugos legally belong to Worldcon and you have to be a member of the World Science Fiction Society to make nominations, and only the attendees get to vote on the slate. So you need to pay to join the WSFS, pay to attend the convention, basically pay to vote.”

          No, all you need to nominate or vote is a supporting membership ($40 this year), you don’t need to attend. The only thing you need to attend for is the business meeting, which is about rule changes.

          I realize not everyone has $40 to spare, but things are fraught enough this year that you might be able to find someone who will give you a supporting membership. It’s a secret ballot.

          We’re past the nomination period this year, but voting is possible until sometime in August.

          • Randy M says:

            “The only thing you need to attend for is the business meeting, which is about rule changes.”

            I imagine that will be slightly more interesting than usual this year.

        • Richard says:

          If you really want to punish yourself, one of the puppies has been disqualified (correctly) and been replaced with “The day the world turned upside down”.

          If that is what passes for SF worthy of awards these days, I despair for humanity.

          (when I post links, they get eaten, but the story is available online)

          • BBA says:

            Between this and Puppies-approved SF one wonders if anything awardworthy is being written at all.

          • houseboatonstyx says:

            @ BBA

            I followed your link to Obsidian Wings and found this in a comment tagged April 16, 2015 at 06:28 PM.

            “But criticizing the author for having his characters speak as real people do, rather than in proper written English? Not so justifiable.”

            Oh, completely agreed, and I got that impression too, but the phrase in question was not the character speaking (i.e., not in quotes) except to the reader as first-person narration.

          • Richard says:

            I’ve never read any Torgersen, but that ‘takedown’ was just about the dumbest I’ve seen.
            “Why does he tell us the guy is a master sergeant twice?”
            Well, he doesn’t; they are in the military, so ‘master sergeant’ is the guy’s name for all practical purposes. It’s like asking why a character is identified by the moniker ‘John’ twice in one paragraph.
            From the little snippet in the link, I can see that first person is probably not Torgersen’s strongest suit, but other than that, the snippet was a reasonable representation of how things actually are said in any military branch, so probably intended.

          • I was very unhappy with “The Day the World Turned Upside Down”, especially since I was hoping for something I’d like after a puppy had been eliminated from the ballot.

            I’ve since seen a mention that the author has had two previous Hugo nominations, and I’ve calmed down enough to be amused at the idea of a small inconspicuous cabal who support New Wave sf, something which I thought was completely dead.

          • houseboatonstyx says:

            @ Nancy
            I’ve since seen a mention that the author has had two previous Hugo nominations, and I’ve calmed down enough to be amused at the idea of a small inconspicuous cabal who support New Wave sf, something which I thought was completely dead.

            Heh. Well, things may appear dead to us who remember them, before they can be invented as something completely new.

            I love your use of ‘cabal’ here, let me count the ways.

          • houseboatonstyx says:

            @ Richard

            I agree that the takedown is dumb. I don’t care for military fiction, but I read all the way through that excerpt and would have read on, just admiring the skillful infodump.

            Obsidian said he was reading a published, edited version which had corrected some grammar — “me and Chesty” or “Chesty and me” throughout. Maybe that was a mistake, losing the less educated voice in most cases, so the mistakes they left did sound odd. ‘Rightfully’ did get the tone, attitude, and reading pace right.

          • Anonymous says:

            The error was saying “Chesty and I” when it should have been “Chesty and me.” That is not a normal error.

          • John Schilling says:

            The error was saying “Chesty and I” when it should have been “Chesty and me.” That is not a normal error.

            OK, but whose error was it? The published version has “Chesty and I” where the two are subjects (see page 35, paragraphs 6 and 8), and “Chesty and me” as objects (page 34, para 5). I have in front of me an original print copy of the January/February 2013 dead-tree “Analog”, same correct usage.

            “Doctor Science” says Torgersen got it wrong. And on being corrected says, well, OK, but the Hugo people sent out a packet that has it all wrong, so there, and to me at least seems to be vaguely hinting that Torgersen/Analog tried to retcon the grammar on the website to cover up their mistake.

            Unless Torgersen and Analog have been hiring ninjas to break into my house (damn it, beefing up security has been on my to-do list for far too long), the story as published had it right from first publication.

            I wonder why we got an unedited text? That was a really poor choice for awards consideration.

            I agree with Dr. Science on this one, and I’d like to know the answer. But the worst that can be said about Torgersen as a writer is that he maybe gets some pronouns wrong in his drafts but fixes everything up before publication. Someone else is responsible for promoting the belief, in this case provably false, that Torgersen publishes and the Sad Puppies honor works with sloppy grammar.

      • Julie K says:

        Eliezer Yudkowsky refers to the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc. community.
        Do you think there is a correlation between “sf-fan” and “libertarian”? How does that square with the SP crowd saying that libertarian SF writers are shunned?

        (I don’t know how many libertarians are over here, I just know that this comment section has a very different feel from the libertarians commenting at Megan McArdle’s blog… 🙂 )

        • NN says:

          I don’t see any contradiction at all, given that a common SP talking point is basically that the Hugo Awards and other fandom institutions are controlled by an elite clique (commonly called CHORFs: Cliquish, Holier-than-thou, Obnoxious, Reactionary Fanatics) whose views don’t match the average SF reader’s.

          Whether these allegations are true or not, what matters is that the SP crowd are saying that the CHORFs are shunning libertarian/conservative/un-PC writers, not the general readership. So a disproportionate number of the general SFF readership being libertarian would actually strengthen their case a bit.

          • Nornagest says:

            I wouldn’t be surprised to discover that SF fans are disproportionately libertarian and disproportionately concerned with social justice ideology (though, naturally, individual fans would rarely be both). They both lend themselves to highly systematizing mindsets, albeit in very different ways.

          • John Schilling says:

            Insofar as libertarians represent anywhere from 2-20% of the US population, depending on how you define and measure it, it would certainly be possible for SF fandom to be disproportionately libertarian and for libertarians to be a minority within SF fandom. I’m pretty sure both of these things are true. “Oppressed minority” is rather less certain.

            Also perhaps worth noting that libertarian-minded fans have the Prometheus awards for best libertarian SF works; SJ fandom has maybe the Tiptree award, but that’s only for gender issues and the fans don’t get to vote on it. Effects on voting dynamics are left as an exercise for the reader.

        • I think SF fans and writers are more likely to be fairly hard-core libertarians than are random members of the population. I wouldn’t be astonished if they are also more likely to be socialists. I’m pretty sure Eric Flint is a Trotskyite, which has to be a tiny fraction of the U.S. population (and it’s amusing that he appears to fit in well at Baen, which is generally thought of as a right-wing group).

          • Irrelevant says:

            There are probably more utopians of all stripes among SF writers.

            If Flint’s politics are influencing how well he gets along at Baen, I’d expect it’s an example of a guy who gets so sick of being told to get in line with the moderate left that he goes and hangs out with the libertarians.

        • Tom Womack says:

          Good libertarian SF writers are far from shunned; Vernor Vinge is pretty libertarian and is an idol of the field, Iain Banks is famed and weird but definitely a libertarian kind of weird.

          My guess is that much of the current Hugo-award brouhaha is a side-effect of publishers having decided (as a result of lobbying led by John Scalzi and starting in 2008) to distribute much of the shortlist in electronic format to all supporting members; which meant ‘supporting member’ rapidly changed from something Worldcon-going enthusiasts did to help fund a Worldcon that they couldn’t attend, to a deal where you paid $40 and got a pile of ebooks which would have cost you more than $40.

          In both cases you also got Hugo-award-nominating and -voting rights, but that wasn’t previously why you bought the supporting membership, so not many people voted (partly from a feeling that voting without reading five novels available only in hardcover was Not The Done Thing) and far fewer people nominated.

    • BD Sixsmith says:

      I like your 365 Tomorrows piece! A nice undercurrent of eeriness.

      Here’s mine, which I later found was unoriginal but which I was quite happy with.

    • OMG Sad Puppies, where do I even begin.

      Mostly, I’m pleased to notice that a right-wing group managed to fight back against its marginalization by achieving superior organization. I don’t think it’ll ever work again (I’d put good money on there being an SJW counter-slate next year, and of course the SJWs are already trying to organize around “No Award”), and I don’t think that the organizers have the will to weather the shrieking of the hatebats and stick to their guns. As always happens, the right-wing leadership, especially Brad Torgersen, thinks that it should play nice and try to leave room for everyone, and I’ve heard that Torgersen is already thinking of adopting a strategy for next year that will heavily dilute the Puppy influence. If he does that, the SJWs will probably eat his lunch.

      (Vox Day is the exception. Vox Day doesn’t care about your feelings. Vox Day plays to win. I don’t especially like Vox Day, but at least he has that going for him.)

      In the meantime, GRRM represents the noblest side of the opposition to the Puppies, and let me say that if GRRM were actually representative of his side, then I don’t think we’d ever have needed the Puppies in the first place. Cat Valente, on the other hand, is awful in the expected manner.

      • Held In Escrow says:

        Ugh, the whole thing is just a giant mess on the carpet; everyone is doing their damnedest to prove all the terrible caricatures and strawmen about them right.

        It’s as if Side 1 heard Side 2 say something about how Israel was behaving poorly. Side 1 then accuses Side 2 of antisemitism. Side 2 then goes “How dare you! I bet the filthy hebes put you up to it!”

        I don’t think I’ve ever seen such a concentrated effort to retroactively prove a group right.

      • Cauê says:

        Still not up to speed on SP. Are they really right-wing this time (SP I mean, not RP)? This is said of the reproductively viable worker ants but isn’t true in that case.

        • Randy M says:

          Well, Larry Correia is not the leader but is perhaps the best known and most prominent proponent, and he is conservative or libertarian. Brad Torgersen is described as a moderate, but I don’t really know him. I think it’s safe to say that the organizers are right-leaning, at least in comparison.

          But the selected works on their slate, while including some badthinkers, include the left-leaning and the apolitical, and enough minorities that an EW article about how it is all a plot to keep awards out of marginalized hands had to issue a retraction.

          • Cauê says:

            Oh, I saw that article… Extremely familiar, sadly. At least this time around they made concrete assertions that were easier to prove wrong.

            But I was wondering more about the rank and file than the organizers.

        • J Scott says:

          They see themselves as participants in the ‘culture war’ on the “right” side, and are closer to the red/grey tribe than the blue (compared to the wider fandom). I don’t know if that makes them ‘right-wing’ per se, but they clearly draw support and membership from that side of the aisle.

      • Emile says:

        I’ve been mostly following this through GRRM’s blog, and I only have fuzzy notions about Vox Day (I had never heard of him before); it’s somewhat interesting to watch from afar, though I still don’t understand all that’s going on.

      • stillnotking says:

        Brad Torgersen isn’t a right-winger, he’s a liberal. He’s also been married to a black woman for 21 years, which makes the allegations of his racism pretty ridiculous. (Not that they weren’t already ridiculous — the Sad Puppies slate nominated more minority authors than were nominated last year.)

        I doubt the SJWs will come roaring back next year. They had plenty of advance warning to organize this year if they’d been able; there just aren’t that many of them in SF literary fandom (TV and movies are another story), and they’ve gotten an extremely bad rep from people like Requires Hate. The sympathies of critics and the general public might be with them, especially in the wake of the media campaign to portray the SP organizers as racist, homophobic troglodytes, but that won’t make any difference on the Hugo ballot. Imagine if the biggest award in the gaming community were voted on by gamers — it wouldn’t be Gone Home.

        The most likely outcome of all this is that the Hugo awards sink into politicized irrelevance; of course, SP would say they already had, which rather undercuts GRRM’s point as well.

        • Jaskologist says:

          Politicization: when a right-winger fights back.

        • grendelkhan says:

          He’s also been married to a black woman for 21 years, which makes the allegations of his racism pretty ridiculous.

          Is this a natural escalation of the whole “some of my best friends…” thing? (This is the first I’ve ever heard of Brad Torgersen, and I know precisely nothing about him.)

          I suppose it’s a reasonable response when there are obvious ways to demonstrate that one is a racist, but no comparable way to demonstrate that one is not, so you get weird flailing attempts to make yourself look like someone who you’re pretty sure isn’t in the Hated Category. It’s quite a mess.

          • Randy M says:

            I’m not sure what you mean. “Some of my best friends are black” is mocked mostly because it isn’t believed. If one’s best friends actually are black, or even more so a spouse or children, it is in fact strong evidence against personal bias.

            In fact, I’d say that there is an obvious way to disprove racism (namely this) but no obvious way to prove it (excepting, of course, one actually saying so), insofar as any mind can be read at all.

            I would trust personal behavior far more than a few millisecond delay in an association test.

          • Held In Escrow says:

            The whole “I have a black friend” claim is a pretty weak defense against racism, because plenty of people have asshole friends, or their friends just go along with the whole “one of the good ones” rhetoric. Being married to a black woman is a pretty damn good indicator that you’re not racist against black people, though. It’s not perfect, but unless we have video of him pulling a Kramer, I think it’s safe to come down on the side of him not being a bigot.

          • Anonymous says:

            Escrow, you seem to be interpreting “I have black friends” as meaning “black people like me,” while I’ve always interpreted it as “I like black people.”

          • J Scott says:

            There’s also a lot of weird ways that personal connections and friendships can overcome toxic ideology without actually getting rid of said toxic ideology. Speaking specifically of science fiction authors, HP Lovecraft married a Ukrainian Jewish woman, but the man was also a racist and nativist who believed in the superiority of the Anglo-Nordic people.

            …ultimately, of course, those kinds of badperson labels aren’t productive. I don’t believe for a second that BT is (maximum scarequotes) “racist”. While you could more plausibly claim that he’s contributing to a racist agenda or movement, it’s certainly not the most productive thing you could be doing to advance understanding or heal the apparent rift the ‘culture war’ has created.

            (Then again, accusing everyone who dislikes the politicization of previously apolitical science fiction awards or objects to giving an award to some really egregious actual racists/trolls of being some kind of totalitarian who secretly wants to control the acceptable thoughts of society is probably not a great way to win friends or influence people either.)

          • Jaskologist says:

            Given the furious backlash over the wrong kind of scifi fans voting, I think it’s pretty clear that the awards were not “previously apolitical.” If I had doubts, the knee-jerk accusations of racism put them to rest quite nicely.

          • FacelessCraven says:

            @J Scott – “it’s certainly not the most productive thing you could be doing to advance understanding or heal the apparent rift the ‘culture war’ has created.”

            It seems to me that a growing number of people are concluding that “healing” is not an option presently available. Understanding and reconciliation are not on offer in any serious way. Once this realization sets in, fighting is the best option that remains.

            Have you read that Mixon Report link? A community where it is not only acceptable but encouraged to threaten to pour acid down the throats of your opponents and mutilate their genitals because they disagree with your specific ideological rules does not strike me as “apolitical”. That Mixon’s argument seems to boil down to “it’s wrong because they aren’t just doing it to straight white men” is not exactly encouraging.

          • J Scott says:

            @Jaskologist – Characterizing the response as being because ‘the wrong fans were voting’ is…dubious. It’s certainly not what the anti-puppies are saying, and is at best a highly uncharitable reading of their ‘true motives’. The closest you can come is accusations that SP have engaged in astroturfing by recruiting people from outside SF fandom who are more interested in fighting a culture war. I don’t think it’s true, but it’s not an unreasonable complaint to make.

            @FacelessCraven – I did follow the RH events and the Mixon Report, but I don’t think that having a “which ‘side’ has the worst trolls” argument is likely to be useful either. The existence of the MR would seem to indicate that there is an interest in getting rid of those types of personalities (and I think your characterization of her argument is entirely unfair). An online community consisting of one forum can get rid of trolls by banning them; a community of hundreds of individual blogs and forums is going to suffer these sorts of attacks by sufficiently clever trolls; we deal as we can. Claiming something as sprawling as SF fandom is irrevocably tainted seems like trying to claim that Hermione/Harry shippers were all terrible people in a toxic community because they couldn’t coordinate to get rid of MsScribe.

            I think (okay, more accurately, hope) that the SF fandom can avoid turning into the next front in the SJW/anti-SJW complaining contest (the more I think about it, the more I realize that referring to it as a ‘fight’ or ‘war’ is unhelpful). One thing I’ve seen that is heartening is that a lot of people who might have been the natural nucleus of an anti-Puppy slate have explicitly disclaimed any attempt to do so, and many have declared an intention to ‘No Award’ anything on any slate – left, right, literary, pulp, whatever – in the future, as a way of trying to avoid the impulse to create an opposition political party. I don’t know if it will work, but I’m glad to see that people are trying to avoid it.

          • Held In Escrow says:

            Seeing as there has been unofficial slating in the past, I have to take the whole “I’ll just no-vote slates on principle!” thing with about a mountain’s worth of salt. There’s also zero guarantee that anyone will stick to it in the future, and if there’s one thing I’ve seen from the Great Internet Slapfight it’s that everyone is hitting Defect as fast as they can.

            GRRM has the exact right response here. Vote for what you think is good, No Award if you don’t think anything is good enough to deserve an award. When people accuse you of being willing to burn down your own house rather than let others in, the proper response is not to break out the kerosene and matches in an attempt to prove them wrong.

            I mean, for god’s sake, this has been going on for 3 years. Couldn’t someone at SFWA have reached out and said to the conservatives, “I understand how you feel, but a full slate might be hurtful to everyone involved. Can’t you just get behind a few selections scattered throughout so that nobody feels alienated?”

            But no, everyone has to try and nuke each other. The right-wingers want to see their pulp fiction as being victimized, and the left wing wants to see its safe space as being invaded by the evil right-wing oppressors. It’s some sort of evil cosmic joke.

          • Cauê says:

            But no, everyone has to try and nuke each other. The right-wingers want to see their pulp fiction as being victimized, and the left wing wants to see its safe space as being invaded by the evil right-wing oppressors. It’s some sort of evil cosmic joke.

            I was under the impression that the left wing was saying it was not supposed to be a safe space, and the whole thing was baseless because there never was an ideological filter.

          • John Schilling says:

            Escrow: Where does “pulp fiction” even come into this? Are you really buying into the narrative that the SP or anti-SJW side of this debate only wants silly old rockets-and-raygun stories in their science fiction?

          • FacelessCraven says:

            @Scott J – “Characterizing the response as being because ‘the wrong fans were voting’ is…dubious.”

            Correia and his fellow authors know their fans exist; after all, they have the sales figures, which I am given to understand are in some cases an order of magnitude higher than recent Hugo winners’. Correia believes that the Hugo is largely controlled by a small, insular clique, and that this clique votes based on a piece’s political message and the author’s diversity check-boxes rather than the actual entertainment value and/or quality of the piece in question. He claimed that if “the wrong books” got nominated, even if they were very good books, that clique would engage in politicking to ensure that they did not receive awards. People said this was a farcical position, so he tapped his considerable fan-base to get “the wrong books” nominated, and the clique openly engaged in politicking to ensure they lost.

            His point is that the Hugo claims to represent “Sci-Fi Fandom”, but ignores the actual opinions and preferences of a large majority of sci-fi fans. He has engaged with large segments of those fans, advertised to them how the Hugo process works, and has gotten them involved. This is seen as wrecking, of course, rather than the most successful outreach effort in the history of the award, because again, they are “the wrong fans”.

            “and I think your characterization of her argument is entirely unfair”

            Then explain why the charts showing the gender and ethnicity of RH’s victims were necessary.

            Mixon seems like a reasonably decent person, but the fact that she needed to discuss the direction of RH’s “punching” is indicative of the state of the community. I will happily bite the bullet you offer: MsScribe’s community bears a heavy share of the blame for her success. They were willing to deny people basic human decency on the flimsiest of pretexts. That behavior is far more worrying in the long view than MsScribe’s, and it should never, ever be tolerated.

            “the more I think about it, the more I realize that referring to it as a ‘fight’ or ‘war’ is unhelpful”

            SJ plays for keeps. I am not sure how this is even a controversial position. In the Sad Puppies instance, they were willing and able to smear people they disagreed with in the national media as racists, sexists, etc, in direct contravention of the facts, over a science fiction award. If that doesn’t qualify as a fight, I’d hate to see what does.

            @Cauê – “I was under the impression that the left wing was saying it was not supposed to be a safe space, and the whole thing was baseless because there never was an ideological filter.”

            Well, they said that, and then the Sad Puppies slates started getting too much traction to ignore, so now they’re saying this. If it turns out they can’t keep control of the Hugos without destroying the award entirely, I’m sure they’ll have a pat explanation for that as well.

          • stillnotking says:

            @J Scott: I don’t think the H/H shippers were all “terrible people in a toxic community”, but sufficient numbers of them were terrible to give MsScribe the entree she needed. She had a lot of fans! RH had a lot of fans! What’s worse, those fans were completely convinced they were fighting the good fight!

            I’m certainly not going to pretend that the success of social-justice trolls has absolutely nothing to do with the popularity of social-justice memes, or with the general willingness of people to behave abominably toward designated out-groups for ostensibly moral reasons. Fish need an ocean, and levers only work if they’re attached to something. Communities that adopt poisonous groupthink absolutely share blame for its consequences, whatever their stated motives.

          • FacelessCraven says:

            @Escrow – “Vote for what you think is good, No Award if you don’t think anything is good enough to deserve an award.”

            As it happens, this is exactly the advice of the Sad Puppies founder as well. It has been his specific message from the very beginning. He has put forward slates that he thinks represent the best sci-fi writing of the year. He has specifically picked them based on the quality of the story, not the politics or identity of the authors. He consistently urges his audience to read everything they intend to vote on, and then vote their conscience.

            If it’s a good message from GRRM, why is it a bad one from Correia?

          • A slate lowers the effort needed to choose nominees. How much that matters is a judgement call.

            More generally, I’ve been noticing how much argument goes into the question of what’s bad enough to matter as distinct from the question of what is good or bad.

          • houseboatonstyx says:

            “The closest you can come is accusations that SP have engaged in astroturfing by recruiting outside of SF fandom to people more interested in fighting a culture war.”

            Vox Day may have tried that, but I’m not sure the numbers support much success in it. We’d need to know how many new first-time members joined quite recently (at $40 each), and when it was discussed in forums outside SF fandom (and what was said).

          • Held In Escrow says:

            @John Schilling and @FacelessCraven

            I don’t think I’m quite getting my point across; let me try again. The issue isn’t that the SP are putting up pulp stories or that the Hugos have always had a banner up saying “for good left thinking people only;” it’s that everyone is becoming caricatures of themselves. Torgenson’s letter for this year’s SP was basically “we just want pow pow and boom boom” sci-fi (although I’d say that’s not really what he put up on the slate), and the antipuppies have gone from “we’re not an insular clique” to “we’d rather burn down our own convention than let you be part of our insular clique.”

            I don’t think it’s bad advice when coming from Correia; I think he and GRRM are both correct. I think they’re also both just talking from, well, lived experience. The former has had issues with being discriminated against at sci-fi conventions because of his politics and sees much of the con-going community as part of that discrimination. The latter hasn’t had it happen to him. The actual truth of the matter is probably somewhere between, where there are discriminatory assholes but they don’t have some sort of secret cabal going on. I also think that the response to SP has shown there to be some serious rot in the community.

            SP has some glaring issues. The Kratman piece they nominated is literally a hate screed against a forum poster who disagreed with Kratman over the use of ammonia to defeat Abrams tanks (the poster being a tanker themselves). Torgenson is not exactly the sharpest tool in the shed when it comes to writing pieces that might bring a community together in spite of politics, and his SP manifesto is all kinds of dumb.

            In the end, this really just shows that the sci-fi fandom is filled with people who never really grew up and learned how not to be an asshole, regardless of politics. And that’s depressing but not unforeseen.

          • Cauê says:

            Are there actual accusations of “secret cabals”?

            I’ve seen this “no cabal” defense often in other contexts, when the accusation was simply of ideological homogeneity and favoritism, with no explicit coordination required.

          • Held In Escrow says:

            Yes, I have seen accusations of a cabal, although generally in less conspiracy-theory language: that a small collection of liberals is trying to keep out the good, hard-working conservative authors.

            I mean, I think this whole debacle has proven that there’s a lot of people emotionally invested in keeping an ideologically homogeneous zone and litmus testing its nominees, but that’s more of a fallout of the mess than the original claims.

          • I’ve heard it both ways. The defensible position is that the Hugos are governed by cliques and social dynamics that enforce ideological conformity. However, I have seen some things by both Correia and Torgerson which imply an outright conspiracy.

          • Jaskologist says:

            The actual truth of the matter is probably somewhere between

            Why must the truth always be somewhere in between? Correia’s contention that only one side’s politics are discriminated against explains both lived experiences. It seems much more likely that GRRM’s privilege has simply blinded him to this.

          • stillnotking says:

            Correia and Torgersen (why can no one spell the poor guy’s name right?) think that a small group of ideologically-motivated people, whom they refer to as CHORFs, wield undue influence over the Hugo balloting process. They’ve singled out a few, especially Teresa Nielsen Hayden, John Scalzi, and Steve Davidson. They can, unsurprisingly, point to specific ways in which the rules have been bent by or for these folks in the past. (That doesn’t necessarily prove anything — look for conspiracy, and you’ll find it. Rules get bent for less nefarious reasons all the time.)

            Point being, the SP organizers do talk as if there’s an actual cabal behind the Hugos. I suspect this part of their complaint is overstated. OTOH, the reaction of the old-school Worldcon voters to the influx of new ones confirms that something cliquish, at least, has been going on.

          • John Schilling says:

            Held in Escrow: I don’t think I am misunderstanding your point. I don’t think it is really possible to misunderstand:

            Torgenson’s letter for this year’s SP was basically “we just want pow pow and boom boom” sci-fi (although I’d say that’s not really what he put up on the slate)

            I understand. I disagree. I assert that you are wrong. The words that you put in quotes are not Torgenson’s and are not an accurate paraphrase of anything Torgenson has said. And as you note, it isn’t what he actually did.

            You are fairly explicitly accusing the “Sad Puppies”, or the dissenters from Worldcon orthodoxy in general, of wanting to read and honor only mindless lowbrow entertainment. This is insulting, annoying, and wrong. Please stop doing it.

          • Held In Escrow says:

            Did you not read his whole spiel about “the unraveling of an unreliable field”? That’s the entire point of that argument, that the field is stuck in a permanent display of examining close-to-home topics like gender or race. I’m not implying that he thinks the field should only be pulp, but he most certainly puts forth the argument that the level of message fiction is too damn high.

            I’m extremely sympathetic to the puppies; if anything I think that the campaign has shown that there is something really fucked up inside the community, and I don’t believe for a moment they only want SPACE ACTION as opposed to deeper meanings. But they have crappy PR, trouble spinning a good message, and actively associate themselves with people like Tom Kratman and Vox Day which really hurts their credibility in the eyes of the average reader.

          • It’s not just that they have crappy PR, they engage in a lot of schadenfreude, and that tends to harden the opposition.

          • DrBeat says:

            That’s the entire point of that argument, that the field is stuck in a permanent display of examining close-to-home topics like gender or race. I’m not implying that he thinks the field should only be pulp, but he most certainly puts forth the argument that the level of message fiction is too damn high.

            It’s not even that “the level of message fiction is too high”.

            It is that the level of Message Fiction, that all addressed the very same set of Properly Approved topics in exactly the same Proper Thinking way, that had little to no merit as stories but were lauded because they put forth the Proper Message, is too high.

            “Things that agree with your ideology” are not synonymous with “things with messages” or “things that are important”. Someone who is sick of your ideology is not sick of stories having depth, they are sick of the fact all of that ‘depth’ is identical and exists only to signal the author’s virtue.

          • houseboatonstyx says:

            That’s the entire point of that argument, that the field is stuck in a permanent display of examining close-to-home topics like gender or race. I’m not implying that he thinks the field should only be pulp, but he most certainly puts forth the argument that the level of message fiction is too damn high.

            Humph. Back in the good old days, a good book would have some of both. Babel-17 … Le Guin … old Star Trek … mid-Heinlein … C. S. Lewis …. But it would be messages, plural, conflicting, as plot hangers or as character motives, rather than having a single message as a foregone conclusion, or being a timeless snapshot with no scope for thought at all between its covers (ie a book interesting mostly for its contrast with our world or with genre tropes).

        • Harald K says:

          Brad Torgersen isn’t a right-winger, he’s a liberal.

          Well, that depends how you define it. He’s a Mormon and a Navy officer; he seems to think Islam is uniquely and inherently evil, and he believes 9/11 created a schism between the resolute (like him) and the appeasers (read: the left). His comment forums are full of self-identifying conservatives who clearly see him as one of their own.

          • Samuel Skinner says:

            None of those things disqualify one from being a liberal. Does he support gay marriage? Does he want to take action to reduce CO2 emissions? Does he want to reform healthcare? Does he think immigration should be reformed to let in more people? Does he think we need to expand the social safety net?

      • BD Sixsmith says:

        I propose a compromise: ending literary awards. Western civilisation had a nice canon coming along without them.

        • The de facto, if not de jure, end of the Hugos actually seems like a pretty likely endpoint for this. It’ll take a few years to get there, though.

          • houseboatonstyx says:

            The publishers, buyers for chain bookstores, NYT reviewers, librarians — none of them know or care who awards the Hugo. It’s just a badge of respectability, a safe choice.

            At this point The Hugo is a free-standing institution of its own. Imagine if WorldCon, or whoever owns it now, went bankrupt, and the court sold the name/logo to the highest bidder … say, Vox Day, and he put it on books by his cronies. How many chain buyers, librarians, etc would even know that happened? Some major media reviewers might hear rumors of some crazy infighting among some of us fans, who are all crazy anyway, and some of those books might get less favorable reviews. But like d’Anconia Copper, The Hugo could zombie along indefinitely before its influence on wholesale sales seriously wobbled.

        • I am putting this here because I did not notice any obvious place in the thread to put it.

          I have no experience with the Hugo and neither of my novels will or should ever get one. But I do have some lower level experience with Worldcon which might be relevant.

          When convenient, I and my family attend Worldcon. Before Worldcon (and Baycon) I routinely volunteer to be on panels. I make no secret of my political views (extreme libertarian), and if the con were being run by people who wanted to discriminate against conservatives and libertarians they probably knew who I was.

          I have never seen any evidence of such discrimination. I routinely end up on multiple panels, and I think I usually get at least one free membership out of it (Baycon sometimes gives me an extra one for my wife).

          That isn’t inconsistent with the SP view of how the Hugos are run, but I think it is a relevant datum.

          • houseboatonstyx says:

            Perhaps we should distinguish:

            1. Right/Conservative individuals (of less than panel status) being snubbed or insulted in person at the convention.

            2. Non-PC work being ignored or maligned in open discussion in blogs etc, thus being less likely to be read, or nominated for Hugos.

            3. Something about the mechanism/rules of Hugo nomination/voting being manipulated by a cabal of insiders in the know? Short of discarding ballots or mis-counting them, I don’t see what even the most cliquish insider could actually do, other than encourage 2. Perhaps something about adjudicating edge cases of eligibility? Correia was quoted as saying he could find no fraud (in what, was not clear).

          • Anthony says:

            Correia stated that after Sad Puppies 1 and 2, he was confident that there were no irregularities in the counting of nominations or votes – that despite occasional accusations otherwise, the people administering the process were honest, and weren’t discarding or making up ballots.

      • Irrelevant says:

        Setting the doom and gloom aside for a moment, Skin Game and Ancillary Sword were two of my three favorite fiction books published last year, so I say the Hugos are in great shape.

        • Richard says:

          Thing is that the puppies kind of claim they are the reason for the Hugos being in good shape. The narrative goes like:
          Prior to 1995, a Hugo was a guarantee of a good story.
          Since 2000, not so much, but a Hugo was always politically correct.
          With the Puppies, we’re going back to awarding story over PC.

          The most compelling argument in their favour (IMO) is that Sir Terry never got a Hugo.

          • Irrelevant says:

            Ancillary Sword wasn’t an SP book.

          • Anthony says:

            Terry Pratchett never got a Hugo, but was nominated, and declined for personal reasons. Of course, the argument that Pterry got nominated *only once* is still a pretty strong one that the Hugos aren’t just about merit and haven’t been for a while.

            Another argument is to look at the 2014 nominations. The fact that “If You Were a Dinosaur, My Love” was nominated (for Short Story) makes it absolutely clear that the Hugos aren’t about literary quality, and “Wakulla Springs” and “The Water That Falls On You From Nowhere” are signs that the Hugos aren’t about the most interesting ideas in speculative fiction. (They’re both decent to good stories, but “Water” is barely fantastic, and that element could be search-and-replaced by a non-fantastic element without really impacting the story; “Wakulla Springs” is very much of the literary genre, not the sci-fi or fantasy genres.)

          • Deiseach says:

            Okay, I’ve definitely been spoiled by reading fanfiction, because “The Water That Falls On You From Nowhere” won Best Short Story Hugo in 2014? Oh, dear.

            It’s not a bad story, but it needs to be stringently edited. And simply throwing in “I’m a biotech engineer” doesn’t make it skiffy; à propos the Magic Lie-Detector Rain, it is about as pertinent as leprechauns and doesn’t make any sense. It’s more like watered-down Magical Realism, which means that – what? SF/Fantasy only picked up on a literary trend sixty years or so after it percolated into the mainstream literary world?

            I really have been spoiled by reading fanfiction, because I kept thinking “What this story needs is a beta”:

            A beta reader (or betareader, or beta) is a person who reads a work of fiction with a critical eye, with the aim of improving grammar, spelling, characterization, and general style of a story prior to its release to the general public.

            If I was doing concrit on this, as I’ve done for some of my fandom peers, it’d go along the lines of:

            Ending a little too pat; not enough conflict (the row between narrator and his sister is trite and cliché, which is not to say it’s not workable, but give us reasons for it – does she resent the fact that within traditional Chinese culture, he still has the favoured position as the son even though she’s the Good Child? What does she do, apart from being the Jealous Sister? Does Narrator consider that maybe he’s a bit of an asshole himself in the family dynamic here? How traditional is this family – the parents Straight Off The Boat? Are we talking Present-Day or Near Future?). Boyfriend is a little too perfect: blue-eyed (oh, excuse me: hard blue-eyed) hunk who’s ripped to shreds but also whip-smart but also in a job that pays less than Narrator’s (and can we talk about the much-discussed question of feminisation in slash pairings here?). Play up the racial/cultural gaps a bit more; have boyfriend be a little bit unknowingly patronising about how he knows it’s tough to come out, it was tough when he came out to his family etc. etc. etc. and Narrator pointing out that he doesn’t understand; work that intersectionality, baby! White queer experience does not map neatly onto POC queer experience!

            Oh, and boyfriend is Hot Muscle Stud but also Has Brains? Can read Ancient Greek? “Explicates” Socrates? Note: never use “explicate” in cold blood. Narrator talks too much about himself – show a bit more, tell less. Don’t drop chunks of untranslated Chinese in as Exoticism, even Exoticism With A Point. Does Narrator really think of his sister and brother-in-law as Michelle and Kevin? What are their real names, or are these their ‘real names’? How assimilated are they? How assimilated is he? You could use this for the aforementioned conflict: when Perfect Boyfriend is being all understanding about how tough it is to come out to Narrator’s family, Narrator can come back with how Boyfriend can’t even say family’s names, can’t even say Narrator’s name, how Narrator and family have had to adopt/adapt names pronounceable by Boyfriend’s culture, despite all its talk about acceptance and diversity etc.

            And where the heck is that water coming from, anyway? When did it start falling? What are the common explanations for it? Need to give us a bit more than “one day magic lie-detecting water started falling every time anyone tells a lie, but never mind that: I’m terribly afraid my boyfriend is going to say he loves me and I still haven’t told him my parents don’t know I’m gay”.

          • Also, I wasn’t sure how the nice parents produced the Horrible Sister, though that being said, I didn’t notice the problem until I came out of the emotional trance of wanting the coming out to go well.

            However, this year’s “The Day the World Turned Upside Down” is remarkably awful in a different way. It’s new wave (arbitrary scientifically impossible disaster, depressing main character) – I thought the new wave was gone, but apparently someone forgot to put a stake through its heart.

          • Deiseach says:

            Well, as long as I’m giving a kicking to, er, critiquing “The Water That Falls On You From Nowhere”, I have to say, to me it looks like it won the 2014 Hugo for Best Short Story on tokenism.

            This won in 2014, not 1994 or even 2004. Narrator works in the field of bioengineering in a world with Magic Lie-Detector Rain but apparently there’s no such thing as surrogacy? There’s an entire industry in exploiting Third World women for rich Westerners (straight and gay) to have kids – see the Elton John versus Dolce and Gabbana row. He doesn’t need to be some superwhizz who’ll win a Nobel in order to give his parents grandkids; he can buy the eggs of a suitable Chinese girl by donation, have them fertilised with his own sperm by IVF and implanted in a surrogate who’ll carry the embryos to term (also, sex-selective abortion and pre-implantation selection is a thing, so no inconvenient pregnancies of the wrong gender need eventuate).

            Am I also to believe the happy ending where Perfect Boyfriend is so perfect the traditional, immigrant parents are happy that his gwailo genes will contaminate their family bloodline? And again, given the work being done in Britain on three parent babies, the idea that Narrator will need to be some supergenius whizz to create a baby with his and boyfriend’s genes is not so big a deal; the theory about enucleating an ovum is already there, the work will be on melding two sperm cells to create the fertilised nucleus to be inserted, and that’s in the range of ‘tricky but doable’ in the long run.

            I understand why he used Horrible Sister, but again that’s unexamined misogyny – of the two women in the story, the only one we encounter in any detail is the designated villain. If Narrator’s sibling were a younger son, this would take the pressure off Narrator to produce offspring and so reduce the dramatic tension of the story. But dramatic tension about ‘my parents expect me to continue the family line’ could be present and even increased if Narrator were an only child (and where does his family come from? Mainland China? Taiwan? Hong Kong? He could be a child of the One Child policy mindset of his parents) without needing to introduce a woman to be the bitch and express the homophobic reaction the Narrator expects and fears from his family.

            Again, that reads to me like authorial self-insertion; I’m sure his therapist is delighted he’s working out his sibling rivalry issues, but it doesn’t add to the story and indeed is the stock character of prejudiced family member.

            Strip out the Magic Lie-Detector Rain, and you have a common-or-garden ‘How I came out to my traditional, immigrant parents’ story that, to be frank, probably wouldn’t sell to a mainstream literary magazine as it’s too conventional. That’s why I think tokenism (and forgive me if that sounds too harsh a term) about voting for Gay Chinese-American Coming Out Story was at play here.

            Were I commissioning editor for a SF magazine or book anthology and this was a submission, I’d boot it back with “The idea has got potential but it needs work”. The rain is an interesting idea, but either make it more central or lose it. As it stands, all it does is tell us: Perfect Boyfriend really, truly loves Narrator. Narrator eventually acknowledges his emotions and his feelings for Boyfriend and (probably) undergoes character growth as a result. There’s nothing particularly SF here, and the Hugos are not a literary award per se (the Nebulas have that going for them); they’re for the best (or most popular, and I know that’s not the same) work in SF/Fantasy as voted by popular plebiscite.

      • Douglas Knight says:

        What do you mean by “marginalization”? How did this reduce marginalization? It sure looks to me like this increases marginalization. If you just mean “marginalization” on the Hugo list, then, yes, this achieves that goal, but that seems awfully narrow-minded to me.

        For example, this all started because of complaints of whispering campaigns against nominees, right? Maybe this has succeeded at the goal of bringing that into the open, but I wouldn’t call that reducing marginalization.

      • houseboatonstyx says:

        As always happens, the right-wing leadership, especially Brad Torgerson, thinks that it should play nice and try to leave room for everyone, and I’ve heard that Torgerson is already thinking of adopting a strategy for next year that will heavily dilute the Puppy influence.

        Yes, he rolled Fumble – Too Much Effect. He hoped to get one or two of his picks into each 5-slot category, but so many people voted for them that they filled most of the slots in most categories.

      • Peter says:

        GRRM’s stated position is that he speaks as an individual: “I have my own views on all of this, and they don’t line up precisely with what either camp is saying. So be it. My views are my views. I do not speak for any clique or slate or movement.”

        It was a good series of posts. Especially the bits that would get him flamed from the left…. does anyone say “flamed” anymore?

        • James says:

          It has a distinctly nineties flavour to it. I like it, though, maybe for that reason. It beats “trolled”, in any case.

      • merzbot says:

        It’s refreshing to see an anti-feminist nerd group own up to being right-wingers instead of pretending that they’re really just concerned with Ethics in Sci-Fi Journalism.

        • Cauê says:

          I don’t think this is the place to get into it, but I also don’t want to just say “you don’t know what you’re talking about” and leave it at that.

          So, in short, “anti-feminist” depends heavily on what kind of “feminism” is meant. “Right-wingers” is outright false, as is “pretending”. “Ethics in journalism” is… well, this one is quite complicated, and I don’t think what they mean by it is what most people would call “ethics in journalism”, but the problem is confusion, not deception.

          This comment, of course, was about entomology.

        • InferentialDistance says:

          GamerGate was blowback after a bunch of self-appointed guardians of morality decided to invade a nerd sandbox and kick over some sandcastles because sandcastles are phallic symbols that reinforce the patriarchy.

        • The gamergate reaction was excessive, but I do wonder what would happen if a group of men did a hard push to make romance fiction more respectful towards men.

          • Cauê says:

            The gamergate reaction was excessive,

            The hardest part of talking about it is that perceptions of GG are so varied (and so vastly different) that when I see a sentence like this I don’t know what it means, to the point of not knowing what facts it’s referring to.

            (I’m not asking for clarification, as that hole would go deep. Just making an observation)

          • Randy M says:

            You won’t see this, or perhaps only in retaliation. Men are content to let women have their own spaces with their own rules. Women (or movements purporting to be on their behalf) are the ones that insist that even activities that they aren’t involved in must accord with their sensibilities.

            Who is correct in their behavior is left as an exercise for the reader.

          • Cauê says:

            I really don’t think “men vs. women” is the proper generalization to be made here.

            The busybodies will be those who get morally bothered by something the target group is doing, and feel socially entitled to interfere.

    • J Scott says:

      Moloch: “You know what no one hates each other over yet? Science fiction awards!”

      (Brad Torgerson’s declaration of Kulturkampf on the ‘SJWs’ is indeed a…thing. But then, there are still some people who insist that this whole thing is actually about science fiction, so it’s nice to get the declaration of war out in the open.)

      • FacelessCraven says:

        Arguably, the “declaration of war” would be the smear campaign that accused him of being a white supremacist for daring to participate in the process. Or the other smear campaigns against other authors over the last several years.

        When one side starts unilaterally declaring its opponents dead, the dead rise.

      • ddreytes says:

        But Brad Torgerson is an honorable man.

        So are they all, all honorable men.

        • houseboatonstyx says:

          Ad hominem is the theme I see in almost every part of this. The Puppies complain that current SF works (see Deiseach’s comment) are famous for the demographics of their authors and their characters, instead of plot, world-building, and, yanno, science (or magic). The non-Puppies go directly to attacking the Puppies’ demographics, character, and supposed motives, instead of the content of, and evidence for or against, their complaint.

          Evidence for it would be easy and fun to collect, starting with a list of recent anthologies with titles ending in “Destroy Science Fiction”.

          • Tom Womack says:

            The Puppies’ complaint is entirely sufficiently answered by ‘because we like it that way’.

            The style-over-substance complaint is mostly about the shorter-fiction sections, where style over substance is exactly what shorter fiction is *for*: plot and world-building expand much more readily to fill the space available, and at novel length you can sell them for worthwhile sums of money rather than for what are effectively brownie points.

            I tend to believe that Hugo voters come to the award for the novels and the movies, and that it’s the only time some of them read shorter fiction; which means you might well expect the shorter fiction which does things most different from novels and movies to win on grounds of interesting novelty.

          • Cauê says:

            The Puppies’ complaint is entirely sufficiently answered by ‘because we like it that way’.

            It seems to me that half the fight is about who should be this “we”.

          • Nornagest says:

            style over substance is exactly what shorter fiction is *for*

            Dunno about that. I’m out of the loop on the current SF scene, and indeed I only heard about the Puppies deal through this very blog; but I have read a lot of SF. Seems to me that shorter fiction in an SFnal context is usually about exploring one big idea, but one that isn’t big enough to fill a full book.

            Older SF much more often came packaged in short formats. This is partly because older SF was much more often serialized, but I expect it’s also because it was easier in those days to communicate a single conceit without having to build an entire world around it.

          • John Schilling says:

            Regarding short fiction, the “big 3” SF magazines (Analog, Asimov’s, and F&SF) have a combined circulation of ~60,000, and at least 30,000 discrete subscribers even allowing for crossovers. Worldcon attendance averages a bit under 5,000 in recent years.

            In olden times, Worldcon attendance was almost a pure subset of SF magazine readership – the cons were created so people who knew each other through the letters columns in the magazines could do meatspace socialization. The modern market is more diverse; original anthologies are a major market for short genre fiction, as is online publication. There are certainly fans in the halls of any major con who think “Analog” is only a quaint sort of electronic gizmo.

            But I would be very skeptical of the idea that there is any large community that only reads short fiction when they see it on the Hugo ballot.

          • Douglas Knight says:

            Nornagest, the market for short fiction has collapsed across the board, not just in SF.

          • houseboatonstyx says:

            @ Tom Womack
            The style-over-substance complaint is mostly about the shorter-fiction sections, where style over substance is exactly what shorter fiction is *for*

            Style-over-substance would be an improvement, if I understand them. Style is another thing in the set ‘plot, world-building, science/magic’ — i.e. part of a story’s content.

            @ John Schilling


          • Deiseach says:

            style over substance is exactly what shorter fiction is *for*

            The point, though, is that the Hugos are for SF/Fantasy. Having read “The Water That Falls On You From Nowhere” and “If You Were A Dinosaur, My Love”, I am left asking: what is SF/Fantasy, and not mainstream literary fiction, about these stories? Strip away the Magic Rain from “The Water (etc.)” and you have a conventional “coming out to the traditional strait-laced parents” story; strip nothing away from “If You (etc.)” and it’s another run-of-the-mill literary piece.

            They’re not even particularly well-written (why yes, I am letting my inner literary snob out to ramble) so we can’t argue that at least the prose style is of more import. To be brutal here, if these stories were sold to mainstream magazines, I imagine they’d be sent back with a polite letter of refusal because they don’t go anywhere mainstream literary fiction hasn’t been going already – unless the magazine was doing a themed issue around POC or queer writers or younger contemporary women writers, when their stories would have a better chance as filler.

            If you want to do a mood-piece or a prose poem or a “my experience as a [fill in blank]” then sure, go ahead, SF needs good solid prose writing and could definitely use an uptick in sinuous, rhythmic, euphonious style as well. But ticking off diversity boxes (and today we have a story featuring an East Asian character, lovely!) isn’t enough. What is the Narrator doing in the Magic Water story, apart from angsting about his mean sister and his family’s expectations? He’s allegedly working in bioengineering, but we hear nothing about what he’s working on. If you’re entering a competition for skiffy stories, you need some skiffy in it. Same as if you were writing for the Edgars or the Gold Dagger; it would be very unusual to award a prize to a story that had nothing in it about crime or murder (apart from a fleeting reference to one of the characters being a lawyer, perhaps).

          • switchnode says:

            style over substance is exactly what shorter fiction is *for*

            Strongly disagree. Some ideas are so exquisitely simple that the short story is all they require. (I keep a list of my lifetime favorite science fiction stories; if they were novels I’d have thrown every one across the room.)

            I tend to believe that Hugo voters come to the award for the novels and the movies, and that it’s the only time some of them read shorter fiction

            Also disagree. SF has a long history of concentration on the short form, and is arguably a last outpost of the traditional short story.

    • Leo says:

      Great story, man. That last sentence sells it.

  12. Ydirbut says:

    Hey Scott, what are your favorite ffh2 modmods?

    • Anonymous says:

      Not Scott, but More Naval AI is as close as it comes to an objective improvement over the base mod.

    • Carinthium says:

      Out of curiosity, what is FFH2?

      • Samuel Skinner says:

        Fall From Heaven 2
        It is a mod for Civilization 4.

        Notable for having story mode integration into Civilization 4: Beyond the Sword, as well as five major modmods (Fall Further, Rise from Erebus, and Orbis being the biggest)

        It is heavy on flavor but has… issues with balance. I believe it is currently at the point where rampaging barbarians no longer wipe out every civilization, but games can often be a bit wonky. To be fair, it’s a bit better than the “everyone is essentially at the same level” balance that 4Xs tend to have.

  13. grendelkhan says:

    If anyone’s been following DRACO (I linked to it last year), it’s an experimental broad-spectrum antiviral. I’ve been following it in the news for a few years, and I’d set aside my charity money for some months (significant money for me, not for a research program) after inquiries to the lab (Draper Lab, a 501(c)(3) organization) pointed me to “The DRACO Fund”, which never sent me their EIN on inquiry. The fund has since posted an apologetic message saying that the money which was sent in helped raise awareness, but none of it went to research, and there’s a vague idea that maybe they’ll get enough funding to have more than one full-timer on this, despite the promising results.

    I suppose I’m glad I was paranoid there, and didn’t actually send my cash in, but… aargh. Is this some sort of civilizational incompetence thing? Am I being suckered? Are there lots of therapies that safely cure the common cold in mice, so it’s no big deal that this one doesn’t seem to be getting much attention? I’m so very confused.

    • hamnox says:

      It looks dead 🙁
      I see a comment on the paper suggesting that PLOS ONE is not the kind of journal one expects world-changing results from.

  14. Dinaroozie says:

    Aha! I managed to catch an open thread before it has thousands of replies, so I’m going to use this opportunity to ask a question to a potentially non-existent subset of the group.

    There are two things about America that make me want to live there – communities like this one all seem to be there, and also you have contra dancing (contra dancing is maybe the dorkiest activity I’ve ever participated in, and naturally that’s up against some fairly stiff competition, but I love it anyway). So imagine my surprise when I stumbled across a rationalist blogger talking about how great contra dancing is! I googled around a bit and learned that apparently there is some overlap between rationalists and contra dancers in Boston in particular – it seems to be traceable to a discussion where some rationalists were talking about community building, and it was suggested that group dances were a good way of creating a sense of community cooperation, so some people decided to give it a crack.

    Are there any contra dancers here? And if so, is this Boston contra/rationalist overlap actually a thing? I’ve heard from a few contra dancers that the Boston dance scene is actually kinda crappy, because it has a certain amount of age segregation going on and it makes the community feel a bit less welcoming than other scenes. Any word on whether that is so? I imagine that most members of this community would fall on the younger side of that divide, so I’m curious to hear what feelings different groups get from there.

    In general, have there been any attempts to organise some kind of (already-existing) central activity for community building by rationalists? If it involves joining an existing group, how has that gone?

    • Tom Womack says:

      In my circles (Cambridge UK, mostly either people at the university or people who came here for university and never got round to leaving) there is a lot of overlap between called dancing and mathematicians. Pretty solidly actual mathematicians rather than the other parts of STEM; I think it’s the appeal of a kind of dancing that you can fairly clearly get right.

      • Dinaroozie says:

        Interesting. I learnt the dance when I lived in Florida a few years ago, and lots of people there seemed to be physics professors (including my uncle, who got me into it in the first place), but I figured the makeup was from the fact that it was in a university town.

        I’ve since spent some time in Atlanta, and the dance community there seems pretty great, but seems to be less sciencey (at least, at first approximation – I can’t say for sure). It actually had a kind of churchy feel to it.

        I agree with your interpretation about why such people enjoy contra. I’m a swing dancer originally, and a programmer by trade, and I’ve almost (but not quite) packed it in with swing since then because I enjoy contra so much more.

    • Creutzer says:

      Having been there, I can confirm that the Boston overlap does exist. Although I don’t know that it’s meant as a community building exercise. It’s not like there is a dancing group consisting solely of rationalists. As far as I gathered (I didn’t participate), it’s just considered a form of scheduled socialisation, not exclusively with rationalists, in which a part, but not the whole of the Boston rationalist community participates.

      • Dinaroozie says:

        Thanks for the info! If I ever make it to Boston again I’ll definitely get in on this.

        Now that you’ve given me the mental image, I’m really enjoying the idea of a rationalist dancing group. I kind of regret that this isn’t a thing. 🙂

    • Contra dancing with live music exists everywhere in New England (where it originated), and mostly in college towns in the rest of the country.

      In my experience, many contra dancers are scientists, particularly physicists.

      Boston is reputed to be an unusually hard place for newcomers to break into, socially or economically.

      • Tom Womack says:

        That explains the demographics of the visiting American contra-dance holiday whose open dance I went to on Saturday and will be going to again on Tuesday.

        In Britain there’s something of an assumption that a group of visiting Americans will contain a substantial number of readily-detectable Californians; this group doesn’t, to the point that it felt quite peculiar.

        Is there a particularly Californian kind of called dance, or did that get left behind on the Oregon Trail to lighten the wagons?

        • Anthony says:

          We actually have contra dancing, and Modern Western Square Dancing™ and Morris teams, as well as other kinds of dance in California. There’s no particularly Californian kind of dance, though.

    • David says:

      Scottish Country Dancing is a fairly closely related tradition, and, in my opinion, if anything, slightly nerdier yet. It doesn’t tend to have quite such pleasing rotational symmetry, but the various manoeuvres, requiring six or eight people to be in the right place at the right time, are fairly geometric, and can get more complex than anything I’ve seen so far in Contra (look up Muirland Willie, Reel of the Puffins or Polharrow Burn for some good examples). And of course, like most things Scottish, there seems to be a lot of it in North America too. Try it if you get the chance (assuming you haven’t already)

      • Dinaroozie says:

        Aha, this is very relevant to my interests – thanks for the tip! Most importantly to me, a quick google search suggests that there is a decent amount of Scottish country dancing going on here in Melbourne, which is more than can be said of contra (there’s a bit, just not as much as I’d like). I will definitely be looking into this.

    • nydwracu says:

      I don’t know about Boston, but in high school in Maryland, someone I knew got me to start going to contra, and later joined LW and I think moved to the Bay Area.

      • Dinaroozie says:

        I can’t tell you how pleased I am that this ‘rationalist mindset’ (whatever personality trait cluster that represents) seems to overlap with a fondness for contra dance. I feel like I’ve found my people (but they live on the other side of the world).

    • >In general, have there been any attempts to organise some kind of (already-existing) central activity for community building by rationalists? If it involves joining an existing group, how has that gone?

      My friend Eric Chisholm seems responsible for at least half of the popularity of west coast swing dancing among rationalists. West Coast Swing is a form of swing dancing, a variant of partnered jazz dancing, and related to blues dancing. I’ve tried it, and it’s great fun. He’s introduced at least five people from the rationality meetup in Vancouver to WCS, and he’s introduced about the same number of people he met through swing dancing to LessWrong. As Tom from Cambridge observed, there is a weird overlap between swing dancing and the geek/nerd professions: lots of people in software engineering are into swing dancing. I have two friends, met independently of both LessWrong and swing dancing, who were both interested in cybernetics and into swing dancing when I met them. There are all sorts of people into swing dancing, not just software engineers; I don’t know if other professions are prevalent. Vancouver has about a dozen dancing rationalists.

      I’m not sure if he seeded the WCS/rationality community-overlap in the Bay Area, but my friend Eric consolidated it. When he lived in Berkeley for six months, he got dozens of people newly interested in swing dancing. So, it’s gone very well. If you like contra dancing, and you want to live in Boston, and the people you expect would become your friends in Boston also contra dance, I’d just get into contra dancing if I were you. As for swing dancing, there are people of all ages (some over 70), but they’re distributed evenly enough that there aren’t noticeable age gaps, and it’s welcoming.

      • Dinaroozie says:

        That’s really interesting to hear. As luck would have it, I’m a contra dancer and a swing dancer, but where I am (Melbourne) we mostly do just about everything except west coast swing (mostly lindy hop, but also sometimes balboa, east coast, charleston, blues, and occasionally shag). Most of my dancing friends here are swing dancers. I’ve often tried to drag my friends to a contra dance or two but it seems there is some threshold of dorkiness that few are willing to cross. Such is life.

        Oddly enough, my experience is that swing dancing is a much younger crowd, and that contra dancing is much more friendly/welcoming. I always attributed that to the form of the dances – in contra you get to dance with the whole room every time you dance, whereas in swing you have to be quite outgoing to dance with more than a couple of people – but perhaps in light of your experience I should revise that.

        I’ve always wanted to visit Vancouver – knowing there are dancing rationalists there makes that even more appealing. But then, what situation isn’t improved by dancing rationalists?

        • Tom Womack says:

          Getting to dance with the whole room each time is much of the attraction of contra (and ceilidh) dancing, though there must be something more than that because introverts aren’t as disproportionately mathematicians as the people at a university contra dance turn out to be.

          I think it also helps that contra and ceilidh are more about pattern than about technique – you’re not scorned if you’re basically just walking the pattern, and you don’t have to simultaneously remember the sequence of places to go and know where you have to be putting your feet. There’s a waltz at the end of each section of the dance, and really it’s more of a partnered walk in 3/4.

    • Lesser Bull says:

      I do contra dancing. Where I’m at, the community consists of the intersection between vaguely academic types and vaguely artsy types. There is a pretty fair number of lesbians but almost no gay men, quite a few nice white people crazy left hippies, and a few crunchy con homeschooling hard-righters like me.

  15. Those who read SSC using the WordPress mobile app: has anyone been able to get comments to show up? Apologies if this had been asked before.

  16. Peter says:

    Another real-life trolley problem analog I stumbled across – the black rain of Chernobyl. When the power plant blew up a large cloud of radioactive dust formed in the air and started to be blown towards Moscow. The Soviets took the decision to seed the clouds in the area, causing the dust to fall as rain before it reached Moscow – on parts of the Belorussian SSR, including the city of Gomel. Apparently this was denied for years afterwards, and no-one in the affected areas was warned about this.

    • Tracy W says:

      I suspect this is like doing the trolley test on parents and asking them to imagine their own child being the one person on the other line.

    • John Schilling says:

      I don’t think it counts as a “trolley problem” if the decision-makers are in an “Us vs. Them, and we can make the harm befall only Them” mode.

    • nydwracu says:

      What were the health consequences?

      • Peter says:

        Unclear (kind of apt, really, given the anagram). I don’t know specifically about those areas, but in general the death toll estimates vary by three or four orders of magnitude.

  17. Stefan Drinic says:

    I have been thinking on morality, universability, and force lately.

    Suppose that one night you walk past an alleyway where a person gets murdered by the mafia. As the mobsters run out, they bump into you, and you get a very clear look at their faces. Later, you are contacted and asked to testify against the appropriate people; the day after, you receive a very clear letter: say one word in court, and we kill your wife and kids.

    You could say that, from a utilitarian point of view, complying with their demand makes sense: two people getting locked up doesn’t make much of a difference to mafia activity in town, and your wife and kids being killed would cause you immense grief.

    On the other hand, testifying against them has other effects; it attacks the tactic of hostage-taking itself. If enough people refuse to be intimidated, mobsters might cease using it. Furthermore, increasing the risk of going to prison may discourage other people from joining the mafia.

    Are there other perspectives on this? I’m rather curious to see what people here think is the ‘proper’ way to react to force.

    • Emile says:

      My ideal policy on that would be to not only keep on testifying against the Mafia, but also take extra action against them if I find some, including, I don’t know, accidentally running over them with my car and firebombing their house. I don’t know how the real me would actually react in practice, and I hope I never have to find out.

      But the general idea is that “the kind of people” who give in to threats are the kind of people who get threatened, and I’d rather not get threatened. I don’t see it as a dilemma of “saving my family vs. helping society in general” as much as “having the strength of will to carry out destructive retaliation”, or “figuring out which retaliation norms are stable and don’t run the risk of turning into a spiral of violence, which is hard to see from a personal point of view”.

      • Are you likely to beat the professionals at their own game?

        • Emile says:

          Nah, we’re playing a different game; they want to be an effective predator; I want to be a prey that’s not worth the trouble.

          And the “extra action” examples I mentioned are pretty lousy; it’s the principle that counts. A better strategy might be just reporting the threats to the police, or publicly broadcasting all I know about them (original perpetration AND crime), or something like that. The details don’t matter as much as the general idea of them being worse off threatening me than not threatening me.

          • Clockwork Marx says:

            A public broadcast sounds like a good tactic in theory, but the problem is actually getting anyone to listen.

            Most media outlets are going to be reluctant to give you a soapbox not because they fear for their lives, but because they fear the devastating financial consequences of letting a random person accuse presumed innocents (who likely have very strong legal representation) on live tv.

            The legal system is going to want to keep you quiet because once you go public you’ll be worthless to them as a witness.

            Social media would likely be your best bet, but even then it’s unlikely that anyone outside of your friends and family will care until it’s too late. Either the mafia follows through and your posts get national media attention after they find the bodies of you and your family, or the mafia just sits back and watches as you disqualify yourself as a witness.

          • John Schilling says:

            But you are making yourself worth the trouble. You are imposing open-ended material and reputational costs on them which are substantial, and which don’t go away unless they surrender or they kill you. You also have a known home address at which you sleep eight hours a day without armed guards, and you probably have a twice-daily commute between known endpoints, again with no armed guards.

            How long do you really expect to survive once they figure out that the cheapest, simplest solution to the trouble you are causing is to kill you?

          • Emile says:

            In pre-internet days getting the message out may have been a problem, but not now – put it on YouTube, create a dedicated blog and Twitter account, spam Facebook and the appropriate subreddits and the comments sections of various local newspapers and bloggers …

            Why would that disqualify me as a witness?

          • Emile says:

            John: right, if my strategy makes it a worse idea to threaten me, but a better idea to kill me, it’s probably not a very good strategy (but it’s arguably a better one than giving in to threats, provided the opponent would prefer avoiding killing).

            But then we’re getting away from the thought experiment as originally described.

          • Clockwork Marx says:

            I was under the impression (it may very well be wrong) that making public statements about criminal cases prior to trial can be used by the defense to declare a mistrial.

            The real problem I see is that it allows people using social media to manipulate the jury by feeding them a steady stream of difficult-to-verify information outside of the courtroom.

            What if the Mafia plays the same game and uses a sock puppet to accuse you of having a history of mental instability and outspoken racism towards Italians?

            Are you really that confident that the social media audience will ultimately take your side?

          • FJ says:

            @Clockwork Marx: you are correct that making public statements about a case can result in a mistrial… IF YOU ARE AN ATTORNEY REPRESENTING ONE OF THE PARTIES. It’s a no-no to try to litigate a case in both the courts of law and the courts of public opinion, and an attorney who does so may commit misconduct (and his client may be penalized with a mistrial).

            But in the U.S., criminal cases are not victim v. defendant; they are The People v. defendant. The victim is merely a witness. The worst a witness can do is create so much pre-trial publicity that the trial has to be held in a different town; and that sort of change of venue is strongly disfavored.

            As a practical matter, the original dilemma is a very real one: witnesses frequently get threatened, and somewhat less commonly actually get killed. The standard solution is the same as any game of chicken: pre-commitment. Specifically, many jurisdictions allow witnesses to give statements or be examined before trial in case they later become “unavailable.” If the witness later turns up dead, the pre-trial statement may be admissible in lieu of live testimony. This has the beneficial effect of allowing the case to go on, and it also reduces the incentive to kill witnesses in the first place.

      • Stefan Drinic says:

        This explains your views on what you think is the proper course of action very well, but could you explain why it is you think this way?

        • Emile says:

          Game theory?

          If I credibly precommit to never giving in to “unfair threats” (which is tricky to define, but we know ’em when we see them), then I am less likely to be the target of unfair threats. Therefore it is in my interest to precommit in such a way.
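          The game-theoretic intuition can be sketched as a toy two-stage game. All the payoff numbers here are invented for illustration; nothing in the thread pins them down.

```python
# Toy extensive-form game: the mafia decides whether to threaten,
# anticipating the witness's (credibly precommitted) response.
# Payoffs are (mafia, witness) and purely illustrative.

PAYOFFS = {
    ("threaten", "give_in"):  (10, -5),    # threat works, mobsters walk free
    ("threaten", "testify"):  (-20, -10),  # threat backfires: conviction plus extra heat
    ("no_threat", None):      (-15, 0),    # mobsters convicted, but no retaliation risk
}

def mafia_best_response(witness_policy):
    """The mafia threatens only if threatening beats not threatening,
    given the witness's committed policy."""
    threaten_payoff = PAYOFFS[("threaten", witness_policy)][0]
    no_threat_payoff = PAYOFFS[("no_threat", None)][0]
    return "threaten" if threaten_payoff > no_threat_payoff else "no_threat"

# A witness known to cave invites threats...
assert mafia_best_response("give_in") == "threaten"
# ...while a credible "I testify no matter what" policy deters them.
assert mafia_best_response("testify") == "no_threat"
```

          The point of the sketch is only that the threat happens (or not) *because of* the anticipated response, which is why the precommitment has to be credible before the threat is issued.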

    • Most people, especially people with vulnerable families, are not heroes. Put them in a situation where they are threatened with overwhelming force and ordered to commit some evil deed, almost every one will comply.

      • Anonymous says:

        The question isn’t whether people are heroes, but whether they should be, according to whichever moral theory/intuition you have.

        • A moral framework which holds that every person must act heroically, even against overwhelming force, would find few genuine adherents.

          It might be a better world if force or threats of force were useless at persuading people, but such a world is not available to us.

          • Stefan Drinic says:

            That’s an interesting answer. If true, it’d discredit universalizability by saying ‘whatever you do won’t influence other people’s choices or the mafia’s actions.’ Testifying may still be moral sheerly because it’d send the pair of mobsters away, though.

          • Irrelevant says:

            The theory isn’t that everyone acts heroically, it’s that everyone convincingly claims they will so they don’t have to.

            I wonder how much of the population has to sincerely hold to a “Justice, though the earth burn” mindset before you start reaping the benefits?

          • Emile says:

            Irrelevant: and that’s assuming there actually are any benefits to reap! Steven Pinker made a good argument that a lot of the decline of violence is due to our society moving from a system of personal retribution to a system of state retribution; personal retribution leads to blood feud, and also to people being “pre-emptively” aggressive so as to maintain a necessary reputation as “someone you don’t mess with” (and also the problem that people have a self-serving concept of what counts as an offence against them).

            So I think we moved *away* from such a system and have reaped the benefits.

    • Tracy W says:

      The simplest response strikes me as moving your wife and kids. Quickly.
      And normally, isn’t witness intimidation a crime in and of itself?

      • Stefan Drinic says:

        That’s a little like saying the best solution to the fat man problem is finding a big rock and throwing it in front of the track. Let’s say the message sent wasn’t a letter but a phone call from a payphone, and the mafia is skilled enough to track them down no matter what. What then?

        • Tracy W says:

          I think the big rock is a very good solution to the fat man problem.
          How do I know that the mafia is skilled enough to track them down no matter what? And seriously, if a potential witness goes to the police and says “Someone called me and threatened to kill my family if I testify”, then testifies and the family is killed, the police are going to have a head start.

          • Cauê says:

            I think the big rock is a very good solution to the fat man problem.

            All it does is force the researcher to come up with more contrived constraints until people start actually answering the problem.

          • DrBeat says:

            Good! As the researcher comes up with more and more contrived constraints, that might prompt him to ask himself “Wait, is the problem I am posing actually really stupid?” and then someone might learn a valuable lesson.

          • Cauê says:

            I’ve learned many valuable lessons from trolley problems. I’ve seen where my intuitions are inconsistent, and where they clash with my reasoning about morality. I’ve had to think about this, and the process has changed the way I look at my life and at the world.

            From these kinds of answers, however, what I learned is that people will go to surprising lengths to avoid making a choice in moral dilemmas (which admittedly is an interesting result on its own).

          • DrBeat says:

            Because in every single situation that a human being will encounter ever, “find a way out of answering the moral dilemma” is the correct answer. If you are told to choose between killing A and killing B, you had god damn well BETTER be looking for the solution that doesn’t kill anyone.

            Also, the total disconnection of the trolley problems from life means they are just a way people can try and force other people to admit things they want them to admit, and feel smug and superior about it. The reason that people are more reluctant to push the fat man to stop the trolley isn’t because it involves touching the guy or acting directly; it’s because that isn’t how anything works: a trolley moving with enough velocity to kill five people on impact will not slow down to definitely non-fatal speed from running into one fat guy, unless he was too fat to leave his house and too fat for you to move. Making people answer this question is a power play, not a means of gaining understanding.

          • Anonymous says:

            @DrBeat, I agree that the reason many people dislike trolley problems is that it can feel like they are being used as a power play. But that doesn’t mean the questions are stupid. People have to make decisions all the time that predictably kill some innocent people in order to save others. Should we declare war? Should we approve/ban drug X? Of course the right answer is always the third alternative that saves everyone, but until we’ve totally solved international conflict, medicine, and all the other fields in which such problems arise, some people will die no matter what we choose and we want to have principles for making that kind of decision.

            I agree, though, that given the reactions they often provoke, idealized trolley problems might not be the most productive way to explore the relevant moral intuitions. Probably people should choose less idealized and more realistic scenarios.

          • Cauê says:

            @Dr. Beat: I think our reactions are kinda similar, actually. But in my case, my exasperated disbelief is caused by “my god it’s so painfully obvious what the point is, and that the exact example doesn’t matter in the least, are these people being purposefully obtuse?”

            It’s very hard for me to try to see it your way, but I’m not sure I can defend that my reaction makes more sense than yours.

          • InferentialDistance says:

            Because in every single situation that a human being will encounter ever, “find a way out of answering the moral dilemma” is the correct answer.

            So you don’t see how the FDA is chucking people with treatable illnesses under the drug trolley to derail it before it hits babies with thalidomide?

          • John Schilling says:

            People have to make decisions all the time that predictably kill some innocent people

            People have to make decisions all the time that predictably endanger some innocent people to save others. Sometimes to the extent that the integrated casualty expectation exceeds 1.0. People almost never, outside of actual wartime, have to make decisions that predictably (p>0.5) kill specific, identifiable innocent people to save others.

            To the 99+% of the human race who are not utilitarian or consequentialist philosophers, this is a big difference. If you are genuinely trying to learn or understand, then this perceived difference – even if you don’t believe it is real – ought to be high on the list of things you want to understand. Do you really think that endlessly tweaking variations on the trolley problem is the way to go about that?

            Or is the actual goal to watch the subject come to the realization that you were right and there is no difference after all?

          • Tracy W says:

            “my god it’s so painfully obvious what the point is, and that the exact example doesn’t matter in the least, are these people being purposefully obtuse?”

            Just because the questioner thinks the point is X doesn’t oblige anyone answering the question to agree with them.
            To take the stereotypical example: if I ask you “Do you still beat your husband?” do you really feel obliged to go along with my point? Or might you be purposefully “obtuse”?

            If you want to know someone’s principles at the abstract level, then ask them. To give a specific concrete example is to imply that the exact example does matter.

            Plus, not everyone is honest in argument. Answering a question one way might be used against you by giving your response but leaving out the context.

    • Dude Man says:

      This doesn’t address the general issue, but witness protection programs exist for this type of scenario.

      • John Schilling says:

        Witness protection programs exist for the purpose of encouraging one Mafia member to testify against another. The life of a protected witness is constrained in ways that are less objectionable than the prison sentence that awaits the non-defector, but likely still intolerable for the honest man or woman.

        Completely severing your ties with everyone except your immediate family if they’ll sever ties with everyone else to join you, moving to a new “home”, taking whatever job is offered with the understanding that it will be a middle-class mediocrity that by nature must not allow you to do anything noteworthy and cannot value any reputation you may have earned in your previous career – that’s not a way to protect the honest, that’s a way to threaten them. Against me, at least, “keep your mouth shut or we’ll force you into witness protection” would be more effective than “keep your mouth shut or we’ll burn your house down”.

        • FJ says:

          Indeed, a little-known problem of witness protection programs is that the witnesses usually go back to their old neighborhoods within a few months. It sounds dumb, but very few people are really prepared to never see any of their loved ones ever again. So they hear that Grandma is dying, and they go home to say their goodbyes. Sadly this often turns out to be a bad idea.

    • Zykrom says:

      I don’t think most mafia members are Jupiter Brains, so I wouldn’t recommend you try to acausally trade with them.

    • My first reaction was to think in “far mode”, and conclude the most just and righteous thing for me to do would be to testify. In an ideal sense, this is what I think is the ‘proper’ course of action. This is assuming I’m stuck in the inconvenient world of the posed thought experiment, where I can’t go to the police with the threat, and if I could, they’d be corrupt and paid off by the Mafia. My only options are the dichotomy posed, I guess.

      I flagged my reaction as wishful thinking, and not what I would actually do. What I think I would actually do is not testify against the Mafia. While someone could say it’s the utilitarian thing to do, I don’t think I would. At best, I’d use that line of reasoning to rationalize my arguments after. I would not testify against the Mafia because I love my family, and I don’t want them to die. This would not be much of a moral consideration. This would be an emotional reaction. Whether it was utilitarian or not, I think I would also take into account the value of being alive to my family members themselves as individuals, and not just what grief I would experience if they were murdered. I wouldn’t think about this as if my family were interchangeable with anyone else, but as people I already knew and loved. If the Mafia threatened to kill three random people instead of my wife and two kids, or they threatened to kill three people I know in town, but not my wife and kids, I’d likely be less terrified. In this case, I’d be more likely to defer to utilitarian reasoning. However, it might not be very complicated, and I might just rely on intuitions. I wouldn’t be surprised if I decided not to testify even if the Mafia threatened they’d kill three random people I don’t really know.

      I consider this a proper reaction, one I would accept and respect within the confines of the thought experiment, because I wouldn’t expect someone else to do more than me if they were in that situation. I would consider the sort of response I gave above, how I think lots of people would realistically respond in the face of immediate threats, as realistic, sympathetic, acceptable, and perhaps respectable. I wouldn’t expect them to sacrifice more, or to be braver in the face of these threats, than I would be. From outside the thought experiment, I now believe I would accept this sort of response for and from myself if placed in that situation. I would do so because I don’t believe most others would expect more of me, and I don’t know why or how I should hold myself to a higher standard.

      My actual actions might vary on the basis of details. Is the Mafia new in town? Is the Mafia basically the most powerful institution in town? Do I have any evidence, not legal, not scientific, just Bayesian, even from the rumor mill, of how valid the Mafia’s threat is? Is the Mafia imposing draconian policies all over town, such that the quality of life is epically, dystopically, low, and the Mafia will eventually kill everyone anyway? What third alternatives to testifying, or remaining silent out of fear, would you grant me?

  18. Sleeple2 says:

    There is a simple but controversial technique called “thought stopping” that is claimed to help against rumination. Basically, when a bad recurring thought pops into your head, you mentally yell “Stop!” and then think of something more positive to distract yourself; some people also imagine something like a stop sign to accompany the mental yell.

    Some therapists and books recommend this technique while others warn against it, e.g. in this article:

    Have any of you used thought stopping? Do you know good studies on the subject? What else would you recommend against rumination and why? (so far I’ve heard about meditation)

    • BD Sixsmith says:

      When I was leaving home to work abroad I saw my family car and felt a pang of sadness. I told myself to get over it and did instantly. (The technique failed on other occasions but perhaps it can be worth a try.)

    • Jordan D. says:

      I did this a lot when I was younger, although my preferred imagery was putting the thought in a box and then wrapping the box up with chains before throwing it away.* It worked okay, but I experience rumination much less strongly than others anyway so that might not mean anything.

      *I think attempts to distract yourself work better when you make yourself run more complex 3-d models.

    • Salem says:

      For some time, I suffered from repetitive intrusive thoughts of a disturbing nature that were affecting my quality of life. I was recommended a version of “thought stopping” by a therapist, and I found it worked well. Indeed, this technique was successful because it worked in precisely the opposite way to that suggested in the article; instead of trying to monitor my thoughts to suppress bad ones, leading to a vicious cycle, thought stopping freed me, so that I no longer had to worry. Instead, when I felt a bad thought coming on, I just visualised the “STOP” road sign, and threw the thought away.

      My experience is of course anecdotal; in particular, I was suffering from severe depression at the time. The technique may not work so well on people who aren’t suffering so badly.

    • 27chaos says:

      Personally, I just say “okay” to my negative recurring thoughts. The attitude I have towards them is, “I acknowledge you, now go away, you are boring and predictable thoughts”. I kind of act condescending towards them, almost. Then I move on with my life.

    • Kiya says:

      I sometimes imagine a fictional character I trust telling me to stop. Works pretty well for getting rid of shallow irritating thoughts, like repeating a phrase in my head for no reason, or mentally playing a song. It’s not as effective on emotions.

    • DrBeat says:

      I don’t understand how people can be helped by something like this. I feel crushing, overwhelming, I-wish-I-had-the-courage-to-kill-myself guilt over literally thousands of things that I know, academically, don’t warrant guilt. This knowledge does not alter this emotion in any way. Telling myself I shouldn’t feel that does not alter it in any way, because I already know that, and no new information introduced means nothing has changed. I’m more likely to shriek or yelp or involuntarily vocalize self-hatred and longing for death when one of these guilt-irradiated subjects passes through my mind, and every time I attempted to “address” these feelings, the only emotional state it created was intensified self-hatred at my continued and absolute incapacity.

      How is saying “Stop”, which you already know you should do and you already know you want to do in response to a message that is coming from yourself, supposed to alter anything? Are you telling me that other people can actually decide to just change their emotions?

      • FacelessCraven says:

        This seems grimly appropriate.

        I’ve never been able to beat rumination without a complete change of context. I very much wish it were as simple as just saying Stop.

      • Sleeple2 says:

        This is not about changing emotions, it’s about insignificant but intrusive thoughts, i.e. unpleasant thoughts that don’t tell you anything that you didn’t already know.

        I’m doubtful about this method anyway, but I don’t think that even its strongest advocates would claim that you could change attitudes like what you feel guilty about this way.

      • Salem says:

        Both in terms of thoughts and reactions, that is pretty much exactly what I was going through. And I too thought the technique sounded silly. Yet it worked for me. I dunno, the mind is weird. But for me, the key wasn’t saying “Stop,” because I was already saying things semi-involuntarily to try to push the feeling away (e.g. repeating “I hate myself”), which were making things worse. Instead, it was replacing the (intense) visual images in my head with a single clear image of a Stop sign. And replacing the images allowed the emotions to just dissipate before they built up. It’s not a question of deciding to change your emotions and just doing it (impossible), but changing your thought patterns which lead to those negative emotions, which is definitely possible. Indeed, it’s what CBT is all about.

        Maybe it will work for you. Maybe it will make things worse, as the article suggests. But don’t just dismiss this kind of thing out of hand.

        • Limi says:

          Just wanted to commiserate with you about saying things semi-involuntarily, as it’s something I struggled with for a long time. My go-to phrase was originally ‘kill yourself’, which I then stupidly shortened to ‘kill’ – an alarming phrase to catch yourself muttering, even if you are primarily a shut-in like I was. I found solace by laughing at the idea when I caught it approaching, which sounds similar to your method of resolving it, so I agree that it helps.

          • anon for this post says:

            It’s nice to know I’m not the only one with that exact problem. Thought I’d return the favor, on the off chance it might help.

      • I’ve gotten some help from Transforming Negative Self-Talk— it’s an NLP approach which involves changing various aspects of the attack voice– its volume, location, pitch, etc. rather than engaging with the words.

        It’s also helped me to do aftercare– to be gentle with myself after an internal attack.

        I’m reasonably sure that internal attacks have something to do with a high background level of anxiety, so it can help to work on lowering that.

      • Levi Aul says:

        The above posters, I think, are talking about the kinds of intrusive thoughts that make you start feeling an emotion if-and-when you acknowledge them. They come in and try to dump their emotion in the middle of a completely inappropriate train of thought, apropos of nothing. These thoughts can be stopped (CBT), or just tuned out (mindfulness therapy).

        On the other hand, once you’re in a positive-feedback kind of emotional state, like a panic attack or a bout of suicidal ideation, your mind will continuously confabulate new intrusive thoughts no matter how many you ignore. The thoughts are arising from the emotion—effectively, your mind is trying to rationalize the emotion into a thought, to put lyrics to the melody. You can’t stop these thoughts with conscious effort; they’re effectively toxins in the environment, like a choking gas, but in your brain. Concentrating on not coughing will not decrease your desire to cough, while you’re still in the cloud.

        In such cases, it likely helps more to attempt to change the emotional state directly. Don’t talk to it; override it, with (psychiatric) drugs or by indulging in some sort of escapist emotional story, or just by getting away from the things that provide the mental context in which your current awareness is situated. (Imagine what you’d do to recreate the mental context of studying to better recall what you studied during a test—same clothes, same pencil, maybe same breakfast. Do the opposite of that.)

  19. Brandon Berg says:

    I was a bit late to the party, so I don’t think many people saw this, but I investigated Ken Stern’s claim about the rich giving less to charity than the poor, and found it deeply flawed. See the comment. The upshot:

    1. I couldn’t verify Stern’s unsourced claim about the top 20% only giving 1.3% of their incomes. Not only that, but IRS data from the same year show all income groups giving at least 2.4%. The very rich ($10M+ income) gave the most of all.

    2. The lower-middle-class do seem to give more than the upper-middle-class, but this is an artifact of the fact that the charitable giving data are based on itemized IRS returns. At lower income levels, those who itemize are a small, non-representative minority, likely older and with more assets than those who do not.

    3. The old definitely give more as a percentage of income. Whether it’s because they’re old, because they’re religious, because that generation was always more charitable, or because they have high asset-to-income ratios is not clear. Religion probably plays a role, too, but presumably “Evangelicals, Mormons, and retirees give more” wasn’t quite the angle Stern was going for.
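    Point 2 is a pure selection artifact, and it can be sketched with a toy simulation (all numbers invented for illustration): give everyone the same distribution of true giving rates, but only count a return as “itemized” when donations exceed a fixed standard deduction, and the low-income itemizers look far more generous than the high-income ones.

```python
import random

random.seed(0)

# Toy model of the itemization artifact. Assume everyone's true giving
# rate is drawn from the same distribution regardless of income, but a
# return shows up in the data only when donations exceed a fixed
# standard deduction. All dollar figures are illustrative.
STANDARD_DEDUCTION = 6_000

def observed_giving_rate(income, n=100_000):
    """Mean giving rate among itemizers only, as the IRS data would show it."""
    donations = [income * random.uniform(0.0, 0.15) for _ in range(n)]
    itemized = [d for d in donations if d > STANDARD_DEDUCTION]
    return sum(itemized) / (income * len(itemized))

low = observed_giving_rate(50_000)    # only the biggest givers clear the threshold
high = observed_giving_rate(500_000)  # nearly everyone clears it

# The "generous poor" appear even though true generosity is identical.
assert low > high
```

    The low-income group’s observed rate is high precisely because almost none of them itemize, and the few who do are, by construction, the unusually large givers.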

  20. Gwen S. says:

    There is a lot of advice about how to write fiction with deep characters and meaningful conflict. How do you write shallow, wish-fulfillment fiction that crosses the blood-brain barrier and directly stimulates the id? Like 50 Shades of Grey or something.

    • Randy M says:

      I would guess the key is to have wishes that align well with a large group of people, preferably one that holds the wish passionately.

      Being able to write wish fulfillment cynically is probably a skill.

    • ululio says:

      Great question!

      If all it took were having “wishes that align well with a large group of people” most people could do it. The question is, how to excel at it.

    • Levi Aul says:

      50 Shades of Grey is the sort of thing that has low “availability” because it touches on fantasies that are against social norms—not the BDSM stuff (if it were a Safe-Sane-Consensual BDSM book I don’t think it would be nearly as popular) but the idea of actually enjoying being emotionally manipulated and abused by a powerful person. This is the sort of thing that people just aren’t allowed to think about, especially since feminism. And yet, there is a visceral thrill in it—an echo of “yes, this was a common feature of the ancestral environment; this feels like an important type of interaction I should learn more about. And it also seems I have a whole brain module dedicated to optimizing my chances of survival in cases like this, which I never got to use before, but it just slides right on, like a glove. Huh.”

      So I think that’s the key: find topics that people are enculturated to cringe away from acknowledging that they might enjoy, even as fantasies. Then, write characters enjoying those things, using the trappings of normal-everyday erotica, making them seem not-so-inaccessible—identifiable, even—rather than making them some weird behaviour of elves or aliens.

      I note that what I just said also describes On The Road pretty well.

      • Samuel Skinner says:

        “This is the sort of thing that people just aren’t allowed to think about, especially since feminism.”

        I think we should be specific- isn’t this mostly a post 1970s change? I think it is part of the cracking down on anything that looks like spousal abuse phase.

    • Velociraptor says:

      I’ve always been annoyed because it seems like these studies choose one particular female name to test against one particular male name without realizing that names themselves have strong connotations (there has been research done on this, but I can’t find it, so I’m linking to this). So really you’d want to choose a basket of male and female names. Can’t easily tell if that’s what’s going on in this study or not.

      In any case, it wouldn’t surprise me at all if this were the case. My female programmer friend told me years ago she perceived that there was a significant pro-female hiring bias in Silicon Valley and I’m sure it’s just gotten stronger since then. One female YC founder wrote something like “you get so much respect for being a woman in tech, even though you don’t really deserve it”. And there are (sometimes free) programs for learning to code like HackBright that are woman-only. It’s really too bad that the press only reports (what seem to me like) relatively isolated incidents of sexism against women. (There probably wouldn’t be any press interest at all if such incidents occurred in established male-dominated industries like investment banking or construction, or even in some generic gender-balanced fortune 500 company.) I’m sure that tech companies with misogynist cultures exist, but I don’t think any of the companies I’ve worked for in my (short) career have been among them. Tech companies being mostly smart, social, laid-back, liberal, well-adjusted men of all races (including some very sharp Africans) doesn’t make for a good news story, but it has been my almost universal experience. (Female coworker at my last job: “Working here is a lot different than I expected it to be”, in between smiling at me.)

      Come work in software, ladies! It’s interesting work, the pay is good, you build valuable skills, there are nice perks, you work reasonable hours (I’ve never been required to come in before 11), etc. etc. Just pick your company carefully and you’ll do fine. Worst case you can ditch your first job and find another one; it shouldn’t be hard. If being surrounded by guys makes you uncomfortable, go for a company like Twitter or Udacity that makes an extra strong effort to be gender-balanced.

    • Cauê says:

      A sad day to be forbidden to speak of gender, but, on the bright side, Scott appears to be looking into this.

      • Douglas Knight says:

        How do you reconcile…?

        Maybe you can, but you shouldn’t. The answer is almost always that at least one study is wrong.

  21. J. Quinton says:

    So I had 23andme sequence my genome. I’m wondering what the margin of error is with their analysis because I got some unexpected results: .5% Ashkenazi Jewish, .6% Native American, .7% southeast Asian, among some others.

    Does this mean I have a Jewish or Native American or southeast Asian ancestor somewhere around 200 years ago?

    • Deiseach says:


    • Tracy W says:

      Wouldn’t it be a bit surprising if you didn’t?

      • Stezinech says:

        Those values are roughly 1/128, which would indicate a 5x great-grandparent. Human generation time is between 20 and 30 years (say 25). Seven generations x 25 = 175 years.

        You would be looking for someone born around 1815. On 23andme, noise is typically 0.1% or 0.2%, but anything above that is very likely real.
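
        The back-of-the-envelope arithmetic above can be sketched in a few lines of Python (the subject’s 1990 birth year and the fixed 25-year generation time are illustrative assumptions, not data):

```python
import math

def generations_back(fraction):
    """A lone ancestor n generations back contributes roughly 1/2**n of the
    genome on average, so invert that to estimate n from an ancestry fraction."""
    return round(math.log2(1 / fraction))

def ancestor_birth_year(fraction, subject_birth_year=1990, generation_years=25):
    """Rough birth year of that ancestor, assuming fixed-length generations."""
    return subject_birth_year - generations_back(fraction) * generation_years

print(generations_back(0.007))     # ~0.7% ancestry -> 7 generations back, i.e. a 5x-great-grandparent
print(ancestor_birth_year(0.007))  # -> 1815
```

        The same one-liner gives 7 generations for 1/128, which is why the .5–.7% figures all point at roughly the same era.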

      • Jaskologist says:

        Assuming a white person, I’d be pretty surprised to find Asian blood a few hundred years back. When would their ancestors have met any Asians to even have the opportunity to breed with them?

        The rest seem perfectly plausible, especially for an American.

        • Stezinech says:

          “By the early 16th century the Age of Sail greatly expanded Western European influence and development of the Spice Trade under colonialism. There has been a presence of Western European colonial empires and imperialism in Asia throughout six centuries of colonialism”

        • Izaak Weiss says:

          This implies that around 1840 they had a half-Native, half-Chinese ancestor, or around 1810 they had one Chinese and one Native ancestor who didn’t directly marry each other. That seems plausible.

        • Tracy W says:

          Port cities.

          And what if you had several ancestors in 1815 who each had an Asian/Jewish/Native American ancestor?

      • Harald K says:

        Depends a lot on where you come from. Maybe it’s not unusual for an American, but from what I’ve examined of the family tree, most of my ancestors for many generations back lived in two very small geographical areas here in Norway.

        A little southern European ancestry wouldn’t surprise me, there were shipwrecks and piracy. Finn or Sami wouldn’t be impossible either (one of the locations is in the north). Anything Asian, American, African or Pacific would be a big surprise.

    • Douglas Knight says:

      It’s probably nonsense. They have strong incentives for false positives. They do not have enough data from Asians or Native Americans to give useful results. They do have a lot of data from Ashkenazim, but I’m still pretty skeptical of the <5% claims.

      What does the "ancestry painting" tool show?

    • Bugmaster says:

      Technically, you had them genotype your genome. They didn’t sequence the whole thing, they just identified the alleles of all the commonly known SNPs.

    • Anthony says:

      I’d guess that it’s noise. You probably have a few SNPs which are really rather uncommon among (European/African) populations, but are more common among the groups you mentioned.

      The Jewish one is a little more plausible, as one of the mechanisms for increased Jewish IQ is that the less intelligent members of the Jewish community were more likely to leave and marry into the generally surrounding Christian community, so it’s plausible that any given European has a little Jewish ancestry.

      If your ancestors were in the U.S. (or Canada) long enough, the American Indian ancestry is also plausible, as the race barrier was less strong in the 1700s and into the 1820s than it was from the 1840s or so; a white man who married an Indian woman around 1800 would have grandchildren who were considered white by the surrounding community. This happened more on the Canadian frontier than on the American one, but (I think) there was less circulation back to the east of people from mixed marriages.

      • Harald K says:

        as one of the mechanisms for increased Jewish IQ is that the less intelligent members of the Jewish community were more likely to leave and marry into the generally surrounding Christian community

        Yes, I believe that just-so story sounds like it has the clickbaity meme properties (edginess, self-flattery) necessary to be embraced by a sufficient community and eventually posted on SSC.

        • Limi says:

          How does it even work? Is the idea that some Jews were so stupid that they went and married Christians, the fools!? Or do rabbis in each community gather up young people and banish them if they can’t solve a particular maths problem?

          Also I really like the implication of ‘Oh yeah, it’s plausible that you have Jewish ancestry because there were some really stupid Jews.’ I don’t know why the Asian connection is ruled out though, there are some pretty stupid Asians too so it seems possible J Quinton has some Asian ancestry.

          (I’m just making a joke about the implication, I don’t really think that is an accurate reading of Anthony’s comment.)

          • Geirr says:

            Ashkenazic ethnogenesis occurred in the Rhineland, more or less, though they spread fairly rapidly throughout Europe. The commercial niche they occupied was as traders and moneylenders. They were a more or less hated and despised minority and had to pay money or fines to marry, have children, or set up a house.

            Basically everything that Jews wanted to do was a lot more expensive than for an equivalent Christian, on average. Or they could convert to Christianity. So there were multiple generations in which the poorest Jews, the ones least capable of making money, had options of having no or fewer children or leaving the Jewish community.

            I’m not familiar with the relevant historical literature so I don’t know how strong the differential fertility effect was but the process I just described is historically attested.

            The Jews, like the Armenians and other merchant minorities, have been primarily business people and city people for a long time. There are other merchant minorities, like the Chinese in Indochina who don’t quite fit this pattern because there are more people in China than diaspora Chinese.

            Poor form, no engagement with the argument whatsoever, pure middle-brow sneer.

          • John Schilling says:

            Or do rabbis in each community gather up young people and banish them if they can’t solve a particular maths problem?

            I believe the theory is that the community decrees a greatly reduced social status if they can’t solve a particular literacy problem. The Bar Mitzvah specifically calls for a public display of literacy, and while one could be accepted as an adult Jew with no more than a few memorized words there would be a reputational penalty involved. Nor did the Jewish cultural value on education and scholarship even in agrarian communities end with the Bar/Bat Mitzvah.

            So, for people who plausibly could become or at least pass for Christians, that path would be preferentially taken by the ones who failed to become literate even in a culture that emphasized universal literacy, because they have less to lose by defecting from their native culture.

            That’s the theory, at least. It very likely has qualitative truth. Quantifying it in the context of, e.g., 10th-century Poland would be a very hard problem. Though perhaps not beyond the collective abilities of this community.

          • Limi says:

            Geirr: Cheers, I have no idea how that failed to occur to me, I am well aware of the ghettoisation of the Jews during that period :S

            John: Geirr’s explanation makes more sense to me, only because stupid or not, the offspring of the wealthy usually get to stay. Given that success usually gives offspring more intelligence though, rich dullards would have had a much smaller impact I expect.

          • “Or do rabbis in each community gather up young people and banish them if they can’t solve a particular maths problem?”

            Being an excellent Talmud scholar was a path to status (including a way for a poor boy to marry a rich man’s daughter), and that took prodigious verbal memory and skill at logical argument.

            I’ve only nibbled around the edges of Ashkenazic culture, but it really isn’t something you can imagine if you don’t know anything about it.

          • Nornagest says:

            I’m really starting to hate the word “sneer”.

      • Stezinech says:

        To those saying it is noise, you should take a read of some of the details of ancestry composition on 23andme:

        The recall rates (i.e. accuracy) for identifying Southeast Asian, Ashkenazi and Native American segments are 95%, 97% and 99%. Those correspond to false negative rates of 5%, 3% and 1%.

        • Douglas Knight says:

          And those who trust 23andme should look at real data.

          Also, “trusting” 23andme is a really bad idea if you don’t know what the numbers mean, let alone if you can’t read their table.

          • Stezinech says:

            That is their real data, applied to a certain sample of course. I don’t know what you find unclear: precision and recall both come from basic statistical hypothesis testing.

            To put it simply, their ancestry composition is very reliable and type I errors (false positives) are very low.
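
            For anyone untangling the terminology in this subthread, recall and precision are easy to confuse; here is a minimal sketch of the standard definitions (the segment counts are made up purely for illustration):

```python
def recall(true_pos, false_neg):
    """Fraction of genuinely present segments that get detected.
    1 - recall is the false *negative* rate (misses), not the false positive rate."""
    return true_pos / (true_pos + false_neg)

def precision(true_pos, false_pos):
    """Fraction of reported segments that are genuinely present.
    1 - precision is the fraction of calls that are false positives."""
    return true_pos / (true_pos + false_pos)

# e.g. 97 real segments found, 3 missed, 2 spurious calls:
print(recall(97, 3))     # 0.97
print(precision(97, 2))  # ~0.98
```

            So a high recall by itself says the test rarely *misses* real ancestry; it is precision that governs how often a reported segment is spurious.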

    • Unique Identifier says:

      If I’m reading this right, the results suggest that you have exactly one ancestor of each of Jewish, Native American and SEA descent, roughly seven or eight generations removed.

      This seems incredibly unlikely. I would bet against good odds that these sub-percent scores are noise (i.e. do not meaningfully correspond to your family history, though of course you might have some alleles which are more common with though not exclusive to other races).

      Note that the mismatch between the actual values and the expected 1/(2^generations) numbers strongly indicates that these numbers cannot be exact.

      [Insofar as the numbers aren’t just noise, I would expect them to all reflect a single individual ancestor of exotic origin, which is causing trouble for the classification system.]

      • Anthony says:

        Note that the mismatch between the actual values and the expected 1/(2^generations) numbers strongly indicates that these numbers cannot be exact.

        Incorrect. While you have exactly 50% of your genes from each parent, you may not have exactly 50% of the SNPs that 23andMe (or other services) sequence, because they’re randomly distributed, and randomly recombined during meiosis. But even for the whole genome, you are not exactly 25% from each grandparent, because the genes from your grandparents were shuffled when passed into your parents, and there’s no mechanism to guarantee that half from each grandparent ended up in the particular sperm or egg cell that your parents contributed to you. So the results are a little noisier each generation. And the results for the specific genes being sequenced are noisier than your genome as a whole.

        23andMe says I’m about 12.5% American Indian, and of an American Indian (or Siberian) mtDNA haplotype. But knowing enough about my family history, it’s quite likely that the American Indian ancestor was my great-great-grandmother, not my great-grandmother. So through random chance (or possibly because mtDNA is *not* recombined), the genetic contribution of tested SNPs from that ancestor has been amplified by a factor of about 2.
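
        The per-generation noise described above is easy to demonstrate with a toy simulation (equal-length chromosomes, uniform crossover positions, and a Poisson crossover count of ~1.5 per chromosome are all simplifying assumptions; real chromosomes vary in length and recombination rate):

```python
import math
import random

def poisson(lam):
    """Knuth's algorithm for drawing a Poisson-distributed count."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

def grandparent_share(n_chrom=22, mean_crossovers=1.5):
    """Fraction of one parent's gamete inherited from a single grandparent.
    Each chromosome is cut at a Poisson number of crossover points, and
    alternating segments come from each grandparent, starting from a
    randomly chosen one."""
    total = 0.0
    for _ in range(n_chrom):
        cuts = sorted(random.random() for _ in range(poisson(mean_crossovers)))
        bounds = [0.0] + cuts + [1.0]
        start = random.randint(0, 1)
        for i in range(len(bounds) - 1):
            if (i + start) % 2 == 0:
                total += bounds[i + 1] - bounds[i]
    return total / n_chrom

random.seed(0)
draws = [grandparent_share() for _ in range(2000)]
mean = sum(draws) / len(draws)
sd = (sum((d - mean) ** 2 for d in draws) / len(draws)) ** 0.5
print(f"mean share: {mean:.3f}, standard deviation: {sd:.3f}")
```

        The mean comes out at 50% as expected, but the standard deviation is several percentage points per generation, which is why sub-percent ancestry estimates drift well away from the tidy 1/2^n fractions.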

        • Unique Identifier says:

          Read ‘exact’ in the sense of ‘an exact representation of family history’. If you measure someone’s height to his belly button and multiply by a constant, that is a projection of height based on a perhaps exact measurement, but not an exact measure of actual height.

          If we’re going to be pedantic, we do not necessarily have exactly 50% of the genetic material from either parent either. Mitochondrial DNA is an obvious example. The extra chromosome that causes Down Syndrome is generally maternal.

          By your accounting, there is no mechanism to guarantee that the contents actually mix when you stir a pot. It depends, of course, on whether statistical regularities count as mechanisms, and how strictly you interpret ‘guarantee’. To the best of my knowledge, the sheer number of random events in sexual recombination for all practical purposes ensure numbers very close to 25% for each grandparent.

          I have very serious doubts that the 23andMe test can miss by a factor of two for relatively recent ancestry. Are you really that confident that your family history is accurate? I didn’t think these tests were -that- unreliable.

          • Douglas Knight says:

            You shouldn’t be so snide about topics you know nothing about.

            To the best of my knowledge, the sheer number of random events in sexual recombination for all practical purposes ensure numbers very close to 25% for each grandparent.

            There are very few crossovers per chromosome. There is huge variance in the proportion of genes from a grandparent.

          • Nornagest says:

            I have very serious doubts that the 23andMe test can miss by a factor of two for relatively recent ancestry. Are you really that confident that your family history is accurate? I didn’t think these tests were -that- unreliable.

            Empirically, the percentages they show don’t map very reliably to ancestor counts. The differences can be dramatic: 23andMe estimates me as having 4% Slavic ancestry, for example, while going by family trees the percentage should be at least 12.5%.

            While the percentages don’t seem reliable, though, the listed regions of origin do.

    • Did you check whether that was the “speculative” interpretation versus the “standard” interpretation? There’s a drop-down box to switch between them. In my own case, I find 23andme’s “standard” interpretation is a perfect match to my family’s reported heritage, whereas the “speculative” interpretation is rather surprisingly and implausibly different. Unfortunately, 23andme makes the “speculative” interpretation your default view!

  22. Deiseach says:

    Warning: bitching, moaning, and probably missing the point by a country mile that no, this really isn’t a sinister conspiracy to render the children of the nation obedient consumer-drones.

    From a global email at my place of work, looking for volunteers for A Worthy Cause: to host talks/presentations for local schools on Our City and the work Your Council does for it. It’s generally, it would seem, a harmless kind of civics/local government/practical implementation of geography, science, etc. subjects presentation for the pupils, and it’s run in conjunction with some organisation of which I have never previously heard, but which in its inspiration is originally American (I’m tempted to snarl “of course”, and I’ll explain why).

    What has gotten my goat, and makes me want to snarl about the good old U.S. of A., is this excerpt:

    Junior Achievement is an international programme which helps create a culture of enterprise within the education system. Programmes begin at primary school level, teaching children how they can impact the world around them as individuals, workers and commuters and continues into second level where they are prepared for future careers.

    Come, let us not alone immolate our children to Moloch, let us teach our 7-12 year old primary school pupils to cheerfully and willingly fling themselves into the sacrificial furnace!

    Very probably, this is a harmless kind of “people come into school and talk about their work” type thing. What has me frothing at the mouth is the part where, if they had stopped at “individuals”, I’d have passed it by as the usual kind of bumpf you get in work emails.

    But no, they had to go on about “workers and commuters” and better again, that second level education “prepare(s) for future careers”. Well, yes, it does – but that’s not all that education (in theory) is supposed to do. But no, let our international programme quietly, without raising a ripple of disquiet, under the guise of good citizenship, mould the system to one that exists primarily to feed the gaping maw of industry with a stream of workers, cogs to fit into the machinery. Forecasts that we need more IT specialists to work in the multinationals attracted here by our tax dodging investment-friendly tax regime? Quick, announce that we need more students taking STEM subjects, and at a higher level!

    The future of employment and investment is in Germany India China? Quick, we need graduates who are multilingual and to encourage our students to study modern languages in school!

    I don’t ordinarily consider myself particularly of a socialist bent, but what most people would consider anodyne stuff the like of this makes me want to break out into a quick chorus of “The Red Flag”.

    Ranting over.

    • Menno says:

      Not to be flippant or anything, but since the vast majority of people are going to end up as cogs in the machine, and the machine has done relatively well at advancing society, why shouldn’t we teach them how to do it effectively?

      What would you prefer we teach kids about work and their future?

      • Cerebral Paul Z. says:

        Even I, pro-market as I am, get a bit creeped out sometimes by the conception of education as handmaiden to industry. (Not to say that socialist countries have exactly neglected the goal of programming kids to fit into the System. And don’t even get me started on civics classes…)

        I’d prefer to start from a liberal-education ideal, and bolt Effective Cog Studies onto it only to the extent that it benefits the students themselves. Some of the stuff JA is doing in this area seems pretty good to me, once you look past the boosterism; better understanding of personal finance would have saved a lot of people a lot of trouble over the past decade or so.

      • Deiseach says:

        I love reading. I never intended to be a librarian or an English teacher or get a job in an office or work as a technical writer or anything else other than enjoy reading.

        Had someone said to me “Ah, I see you’re good at English; what job are you going for?”, I wouldn’t have known what to say to them other than “I don’t read because I have a career path in mind where reading will be a necessary and valuable skill; I love reading”. And if they had looked at me with a frown of concern and said “But how is this preparing you for the world of work? You know you need to prepare for a future career! If you are not intending to work in a field where reading is necessary, why do you waste time reading when you could be learning how to make sprockets?”, then what would I have said in turn?

        Education should be about giving you skills, I agree; but it should also be about exposing you to things you’d never get exposed to ordinarily, and perhaps finding out that you like things for pleasure, and not cram things in for their future use in making you a better, more valuable worker. Now, nice middle-middle/upper-middle class homes may let little Tarquin and Saskia have exposure to museum visits and dance classes and Daddy and Mummy are going to Glyndebourne as added-on extras outside of school, but that doesn’t mean that the rest of us common as muck types should be stuck with the Gradgrind Model of education:

        You are to be in all things regulated and governed,’ said the gentleman, ‘by fact. We hope to have, before long, a board of fact, composed of commissioners of fact, who will force the people to be a people of fact, and of nothing but fact. You must discard the word Fancy altogether. You have nothing to do with it. You are not to have, in any object of use or ornament, what would be a contradiction in fact. You don’t walk upon flowers in fact; you cannot be allowed to walk upon flowers in carpets. You don’t find that foreign birds and butterflies come and perch upon your crockery; you cannot be permitted to paint foreign birds and butterflies upon your crockery. You never meet with quadrupeds going up and down walls; you must not have quadrupeds represented upon walls. You must use,’ said the gentleman, ‘for all these purposes, combinations and modifications (in primary colours) of mathematical figures which are susceptible of proof and demonstration. This is the new discovery. This is fact. This is taste.’

        • nydwracu says:

          live to work 🙂 love your work 🙂 work is the most important thing you will do 🙂 work is the only important thing you will do 🙂 work is the only thing you should do 🙂 work is love 🙂 work is life 🙂

          theocracy now. bring back feast days

    • Tarrou says:

      Because socialism is so individualistic and unconcerned with the homogenization of the “worker masses”?

      I honestly don’t know how to deal with your argument D, it would be like me saying “all these democratic capitalists with their nationalized industry and single party rule infuriate me!” Either I am badly misreading your intent, or you don’t understand what socialism is in the least.

      • illuminati initiate says:

        Because socialism is so individualistic and unconcerned with the homogenization of the “worker masses”?

        Well, yes? It seems to be pretty standard socialist rhetoric to argue against this sort of thing.

        You may be referring to the USSR and China and such, which did do this stuff themselves certainly, but many socialists disagree strongly with the practices of those states (whether socialism inevitably leads to that or not is another debate which I’d rather not get into, but if it did that doesn’t mean socialists can’t have good intentions here and be genuinely opposed to this).

    • Leonard says:

      I certainly understand your annoyance at the assumption that we need to mold children to fit into the machine. Being a “worker” is bad enough; preparation (?) to be a “commuter” really adds the cherry.

      But why does this make you feel commie?

      • Deiseach says:

        Ah, I said it makes me feel socialist, not communist 🙂

        And it makes me feel that I am somehow blazingly left-wing Citizen Army because the assumption behind something like this is that I should be nodding along in agreement that yes, the childrens, teach them to think about work/how to get good job/how to be good worker/produce produce produce, that’s what school and education are about.

        I mean, on the way home from work, I heard a serious (not spoof) radio ad about ‘encourage your children to learn German because work opportunities in the EU’. Now, to me German is not a euphonious language and I gladly dropped it after a year in secondary school for French instead. But I’d hope someone might like to learn German for reasons more than “If I speak German I have a better chance of getting a job crunching statistics in Brussels”.

        If the attitude here is “let’s steer 7-year-old children on a path that will benefit business”, I’m weeping. Because I have no objection to the idea that “School should teach children useful skills that will help them in life”, or exposing them to “how are the subjects we study put to use in Real Life?”, but I do think that school and education are more than that. There’s no business utility in studying poetry or the plays of Shakespeare or a novel by Jane Austen; are we going to go the route, eventually, of dropping all but the necessary “This is how to write a letter” part of English classes and pump more time into Business Studies and STEM, because that’s where the economic drivers of growth come from and that’s what we need workers for? Art and music and dance and other subjects are already squeezed very tightly for time in the school curriculum in Ireland (and a lot of these are shuffled off to “if you want to learn an instrument, that’s extra-curricular and up to your parents to buy an instrument and pay for lessons for you”); this kind of mindset (that school is primarily to turn out future good workers, with the skills business needs and wants) will put even more pressure on ‘non-essential’ subjects.

        And if that is capitalism, and nobody seems distressed by the notion of “children as commuters” (God save the mark!), then that must make me, if I do feel distressed by it, socialist (if the only sides are Capitalism, Best and Only Hope of Humanity versus Damned Pinko-Commie Nonsense) 🙂

        • stillnotking says:

          “You are about to be told one more time that you are America’s most valuable natural resource. Have you seen what they do to valuable natural resources?!” – Utah Phillips, in a commencement address

          • Nornagest says:

            Ha. Fortunately, children are a renewable resource, so the playbook looks more like “well-managed if smart and lucky, exploited into collapse or suffocated under a pile of regulations if not” than “strip-mined and burned to charge iPads, then converted into a patchwork of state parks and Superfund sites once extraction is uneconomical”.

          • Deiseach says:

            I’ve always felt that way about the change of the term “Personnel” to “Human Resources” – yes, and we see what happens to resources in business and industry: they get exploited 🙂

        • Limi says:

          But I’d hope someone might like to learn German for reasons more than “If I speak German I have a better chance of getting a job crunching statistics in Brussels”.

          I feel like shooting myself in the head just imagining such a depressing kid.

    • Anthony says:

      It sounds like Irish Junior Achievement is missing the point, or that the whole JA project has gone off the rails.

      “Programmes begin at primary school level, teaching children how they can impact the world around them as individuals, workers and commuters and continues into second level where they are prepared for future careers” doesn’t sound like the JA I was exposed to back in the halcyon days of President Reagan. Back then, the idea was to teach kids to be entrepreneurs – to be the motors driving the cogs.

    • Brock says:

      I bet “commuters” is a typo for “consumers”. “Consumers” makes a lot more sense in context.

      • Deiseach says:

        It would need to be some fairly bad typo, though, getting “commuters” from “consumers”. Forget turning children into mini-captains of industry, work on your spelling! 🙂

      • houseboatonstyx says:

        Google can be read as agreeing, even without the extra context.

    • nydwracu says:

      “As workers” is standard neoliberal horseshit (though, amusingly, the people who are supposed to be concerned about neoliberalism didn’t notice until some of them became adjuncts and started getting screwed) — you are supposed to Love Your Work, and therefore be totally devoted to it. If you don’t Love Your Work enough to work thirty hours a day and get paid in potato peelings and Bud Light, well, maybe you just don’t have the passion to work in this industry.

      “As commuters” is bizarre. I have never heard that before. I hope it is a push for public transit and not more terrible cowboyism. Then again, I’d be surprised if public transit lasts much longer anywhere, besides maybe, like, Hungary, except probably not there either.

      • Zykrom says:

        Why don’t you think public transit will last?

      • Tracy W says:

        Big cities around the world have kept installing public transit systems, even when they have to do it through already massively valuable built-up central areas (eg London, Paris, Boston, Bangkok). I find it hard to believe that such cities could function without public transit systems.

  23. Cauê says:

    One for the philosophy types. I suppose it’s best to ask bluntly: is there anything to Forms and Substance besides a massive map/territory confusion?

    • macrojams says:


      • TheAncientGeek says:

        The energy/information distinction is fine, though.

        Really, you need to be a bit more precise about which form/substance theory you are talking about.

        • Cauê says:

          I was recently reading some of Scott’s old posts, and I thought he’d have more to say on the subject after this one about Feser, but there were no more posts about it.

          But what prompted the question was my shock reading arguments about transubstantiation. My opinion of that is so low that I thought “hey, maybe I’m strawmanning some part of this?”

          Really, you need to be a bit more precise about which form/substance theory you are talking about.

          Basically, is there any one that survives LW’s sequence on words?

          • As I was trying to hint, if you are willing to read form as in-form-ation and substance as medium or platform, it holds up pretty well: people who think they can be uploaded to silicon think their form can be implemented in another substance.

            It’s also a pity to tar the pre-Christian versions of the theory with the same brush, since they had naturalistic motivations.

  24. Myself says:

    There is some guy named Tom who keeps saying things on your Twitter feed. Who, or what, is this person?

  25. J says:

    I can’t find anything online about Goodhart’s law (“When a measure becomes a target, it ceases to be a good measure”) and the placebo effect. Obviously the drugs don’t know how they’re being graded, but the law still seems to hold somehow. Why would that be?

  26. Anonymous says:

    If you appreciated Janet Johnson’s comments (I certainly did) and are interested in more along the same lines, then I suggest reading Greg Ashman’s blog.
    (Also blogs here.)

  27. Randy M says:

    Does anyone here not read What-If?
    (They are not all about spiders!)

  28. drethelin says:

    A fun excerpt that I feel has a surprising amount of explanatory power. Anyone know if the original writing is out there online somewhere, rather than Colin Wilson’s version of it?

  29. Wrong Species says:

    Thoughts on Marco Rubio? He seems like he’s conservative enough to win the GOP nomination while moderate enough to win the general election.

  30. William O. B'Livion says:

    They Might Be Giants?

  31. I tried doing a little research about the Weimar republic, and ran into two walls. I’d heard many times that the restrictions on Jews were lifted then, and I wanted to check about the details.

    The first wall is that if you just search on anything like [Jews Weimar Republic], you run into a lot of anti-Semitic material. The second wall is that if you get past the anti-Semites, you find that practically everything relating to the Weimar Republic is about the Nazi restrictions rather than the earlier legal situation.

    I’ve found out a little– it seems that the *last* restrictions on Jews were lifted then, but this was part of a centuries-long process which sometimes went backwards, that there were German restrictions on Catholics as well as Jews, and that there was a time when Poland was the best place in Europe for Jews.

    Anyway, two topics if anyone wants to address them. Does anyone know about the pre-Nazi history of restrictions on Jews in Germany? And, perhaps more important, any information about things that ought to be easy to find on the web, but aren’t? And how do you find information which seems to be in a blind spot?

    • Samuel Skinner says:

      I recommend “The Pity of it All”. It covers Jewish life in Germany from the 18th century to 1933.

  32. Mark says:

    This part immediately reminded me of

    Page 2, #1:
    “Articles written about the plight of female rape victims in no way detract from my own experience. Those victims deserve a voice and to have awareness raised about their situation. Likewise, this article shouldn’t detract from what female rape victims go through. This is not a contest.

    Contests have winners.

    It is, in reality, entirely possible to feel sorry for more than one group at once. Pointing out that women suffer in one way is not the same as insisting men don’t. Empathy is not a zero-sum game in which we’re all competing for a limited resource. As a society, progress means becoming more empathetic to everyone, and if your knee-jerk response to a victim’s heartfelt testimony is, “But what about MY group’s suffering?” you’re doing it wrong.”

    I think it’s a good summary of the same basic sentiment.

  33. social justice warlock says:

    10,000 neoreactionary twitter users and not one named Disses What Democracy Looks Like or Gnon Serviam. get your act together fascists

  34. Does the growth mindset hold up to scientific scrutiny? Probably not.

  35. Zykrom says:

    Scott, (and others) what is your favorite FFH religion? Have you thought much about (not entirely selfish) reasons why people would worship Evil religions?

    • FacelessCraven says:

      Feral Fearsome Hysteria? Fallacious Figment Hogging?

      • Cerebral Paul Z. says:

        One of the things that drives me a bit nuts about this community is the number of TUAs that get thrown at you.

        • Unique Identifier says:

          ‘FFH religion’ in Google works for this case. It’s short for Fall From Heaven, which is no more meaningful when written out.

          [It is a sort of fantasy setting mod for the game Civilization 4.]

          • Cerebral Paul Z. says:

            Thanks: now it’s no longer a Totally Unfamiliar Acronym. (‘FFH religion’ in Bing does NOT work: there’s apparently a Christian rock band called FFH.)

        • James says:

          And don’t even get me started on the SSRs, the PMQs, and the GJVs.

      • Zykrom says:

        I’m interested in what was going through your head for the few minutes when you presumably thought “FFH religion” was a real category that

        1) Scott would have a favorite element of and

        2) Has Capital-E Evil members whose worshipers are “selfish”.

        Did you context match anything interesting?

        (and sorry for not being more clear)

    • Randy M says:

      The Civilopedia entries for the acolyte units were an attempt to give an insider’s perspective on these religions, though the evil ones may be a bit clichéd in their motives (revenge, madness).
      Though if you see the Order as evil, that’s a good one for seeing it from the perspective of villains who think they are the good guys.

    • Scott Alexander says:

      Flavorwise, Empyrean.

      In game, I’m afraid I always do Fellowship of Leaves, because the first-mover advantage in spreading a religion is pretty hard to beat. Also, it has good heroes (note my LW user name).

      I could imagine Esus as having the same sort of ethos as HPMOR Quirrell, a sort of love of manipulation for its own sake. OO is obviously Knowledge At All Costs. As for the Veil, well, the people who want to summon the demons of Hell to kill everyone and destroy the world are kind of hard to steelman as basically well-intentioned, even for me.

      • Nornagest says:

        note my LW user name

        Huh, I always thought that was a reference to Chrétien de Troyes. Guess I probably ran afoul of a bit of typical mind, given that my own username was dredged up from an obscure corner of the European epic tradition.

  36. Kyle Strand says:

    That Janet Johnson post is excellent, and really ought to be its own post somewhere where it can get more eyeballs on it.

  37. Ialdabaoth says:

    Hey, maybe someone can shed light on a particular piece of psychology for me, and what I’m probably doing wrong.

    This comes up often in all sorts of circumstances, so I’ll use a metasyntactical example.

    Suppose there’s an argument about Fleem. A bunch of people think that there’s just too much Fleem, and that Fleem is terrible and we should just remove Fleem altogether from our {economy / game mechanic / business process}. A bunch of other people think that there’s just not enough Fleem, and that we need FIFTY MORE FLEEM!!! to fix everything. So they fight a lot and nothing ever happens.

    So I run some experiments. It turns out, Fleem is color-dependent! Red Fleem is terrible and blue Fleem is great! I do some research, and discover that most people who hate Fleem started out exposed to red Fleem and most people who want more Fleem started out exposed to blue Fleem! This is looking really promising!

    So I sit down and say “hey guys! It’s not about how much or how little Fleem, exactly! You Boo-Fleemers were kind of right, too much red Fleem is terrible! And you yay-Fleemers were kind of right too, blue Fleem is great! Because the COLOR of Fleem matters! If we keep exactly as much Fleem, but make most of it blue, everyone does better!”



    And then, to my horror and confusion, Bob ‘Likes’ Alice’s post. And Alice ‘Likes’ Bob’s post. And they both start backing each other up in arguments, without changing their positions.

    And I’m like, “wait, Alice, if you think I’m wrong, shouldn’t you think Bob is TWICE as wrong as I am? And Bob, if you think I’m wrong, shouldn’t you think Alice is even worse? How is it that you both agree when you were both viciously attacking each other before I arrived?”

    And then they say “if we were attacking each other before you arrived, it’s YOUR fault!”, and they both report me and I get banned.

    And I am confused.

    Worse, I’m STILL having to put up with all this fucking red Fleem, and not enough blue Fleem.

    • I’ll go with the “hooked on stress” theory– Bob and Alice would rather fight than have a peaceful solution, and when I say “would rather” I don’t mean a preference, I mean a very strong compulsion.

    • Elissa says:

      What you describe is a normal thing that happens to a lot of people. Simple explanation? “Actually, you’re both wrong” is never going to be a popular position. If you can pitch your thesis so it doesn’t read like “actually, you’re both wrong,” you might make some headway. This may involve allying with one side or the other (allying with both = betraying both), or finding people to talk to who don’t already have an axe to grind on the subject.

    • ddreytes says:

      I think, when there’s an existing debate that’s presently going on, like you’re describing here, people are kind of preconditioned to see things through the lens of that debate, right? So, when Bob spends all his time arguing against the dirty Fleem-loving bastards, he’s on his guard for them trying to sneak something by him and find a way to promote Fleem. And Alice spends all her time making sure that Fleem is defended, as all right-minded people say it should be.

      So you come along saying, “Some Fleem is good, and some Fleem is bad.” And because Bob is emotionally committed to this argument and because he’s adjusted his behavior around operating in a context where there’s only two sides, he looks at that and he sees someone trying to sneak a little bit of Fleem in. The only part that’s relevant to him, at that point, is the pro-Fleem part. And vice-versa for Alice. And both of them agree that Fleem is the most important thing, regardless of what color it is.

      So, like Elissa says, I think allying with one side or the other, or finding a venue where people aren’t already emotionally entangled in the argument, is your best bet. Although, tbh, neither’s a really great strategy – I often worry that there’s something dishonest about changing the form and tone of your arguments on the Internet to match your environment and make arguments easier, even though I do it quite a bit.

    • Cauê says:

      I was following until they started liking each other’s posts and hating you more than each other. And blaming you for their fight? I don’t recognize this, for any Fleem I can think of. I’m curious for unfleemed examples.

      • Irrelevant says:

        Yeah, as soon as it hit that point the situation shifted from looking like an example of “the enemy of my enemy is also my enemy” to looking like an example of the compromise position having been previously considered and rejected by both sides as the worst possible situation.

      • Sniffnoy says:

        Seconding this.

      • Irrelevant says:

        Oh, I can think of one example of this pattern, maybe? Creationists vs. Atheists can quickly become Creationists and Atheists vs. Guided Evolution-ists, since Atheists and Creationists happen to agree precisely on what the theological implications of a non-literal Genesis are.

        • Illuminati Initiate says:

          I don’t know about this specific example, but the example I immediately thought of is how some atheists sort of respect fundamentalists despite hating each other, because fundamentalists at least logically follow through on their premises, while the “liberal” believers do not take what they claim to be deeply held beliefs seriously, and are in denial about what their religion actually says.

          They don’t really team up against the middle like this Fleem thing though, that seems really unusual.

          • For what it’s worth, you’ve described part of my attitude. I’m an atheist. But I am, I think, more sympathetic to fundamentalists than most non-fundamentalists, on the grounds that if you are going to believe something you should really believe it.

            Orwell somewhere wrote that what he wanted to know was not how many people would confess to a vague belief in a higher power but how many believed in Heaven the way they believed in Australia. If you believe in Heaven the way you believe in Australia, it ought to make you do and believe weird things.

            Also, I think most people overestimate how easy it is to figure out what’s true–believing, of course, that they themselves have already done it.

          • Irrelevant says:

            Orwell somewhere wrote that what he wanted to know was not how many people would confess to a vague belief in a higher power but how many believed in Heaven the way they believed in Australia.

            Are we sure he wasn’t just setting up a “Land Down Under” pun when he said that?

    • This isn’t an internet phenomenon: the far left and far right have been staging pitched battles since forever. They don’t want moderation… both sides hate liberals… and they know they need each other.

    • blacktrance says:

      Charitable (to them) hypothesis: They think that you’re misunderstanding the debate, and thus are even more wrong than the other person.

    • Oh now I really want to know what fleem is. Anyway, possible:

      * The removal of familiar mental categories creates confusion about how the world is, and confusion is scary. The more fundamental the category is for the person, the stronger the fear reaction.

      * Conflict gives people a role in their group. You’re threatening their social roles. How will they show their value to their peers if the conflict is gone?

      * The conflict may keep their in-groups united. They may perceive peace as a threat to their in-group stability. Perhaps they even view the opposing group as useful.

      Anyways, can I interest you in a possibly related article?

  38. Meta-question: How do people stay on top of discussion in the comments here? There are no reply notifications you can subscribe to, right? The best I’ve found is to refresh the page and then hit the spacebar repeatedly until you see new comments outlined in green (hit shift-space to back up again if you scroll past something).

  39. William O. B'Livion says:

    And Scott Sumner says contra Paul Krugman that interstate migration to the south is driven by low taxes, not good weather.

    Anchorage is the largest city in Alaska, and has surprisingly moderate temperatures. Yeah, no sunbathing, and you’d have to be part SEAL or seal to go for a swim in the ocean there, but if you can get work it’s not *bad* weather. Note that the North Slope (which is where there is moderate growth in population) is driven by oil, and has (at least in my opinion) horrible weather.

    The Gulf Coast also (again IMO) has atrocious weather, but in the opposite direction. I spent three months at Keesler AFB (Biloxi, MS)–April, May, and June. April wasn’t bad, but in June by 6 AM it was close to 90 degrees and the humidity was higher. Noon was a steam bath.

    I don’t mind heat–I’ve lived in real deserts (Baghdad, Central Australia) and 120/130 F is manageable with proper clothing and water, if humidity is low–even without AC–but humidity just absolutely sucks. I’ll make an exception for NOLA if I can stay drunk, though.

    The other side of this is that we’ve had A/C for a long time now. My uncle made money in the 1950s in STL wiring 220 outlets for window units off the books (Union town, he was a Union electrician, and he charged less than prevailing wage). We had it in the house I “grew up in[1]”, and that was a solid middle class house built in 1975. Every (second hand) car I’ve owned has had AC in it, except for the 1960s vintage VW Beetles. And maybe the 1975 Cougar, which had it, but it didn’t work, as I remember.

    So we’ve had AC in houses available to the middle class since the 1950s, and in cars since the late 1960s.

    Why the sudden migration?

    Then I read the original Krugman piece.
    This was included in the first post on the subject:

    And that’s not an accident: warm states were also slave states and members of the Confederacy, and a glance at any election map will tell you that in US politics the Civil War is far from over.

    What a bigoted little turd he is.

    Until Reagan–which is really to say until Wallace and Carter–the South was reliably Democrat, back when the Democrat party hadn’t been taken over by the internationalist left and started looking down their noses at the sorts of people who *do* live in the south (and rural parts of every state–you know, those clinging to their guns and religion).

    The notion that The South is any more racist than any other place on the planet is the sort of reflexive bigotry of an elitist snob.

    He’s also so blinkered as to be an idiot when he says:

    With the exception of California — which has mild winters but also, now, has very high housing prices

    Now, most of the folks reading this from CA will *probably* be in the SF Bay region, where this is *clearly* true. Or down somewhere between the Grapevine and the Great Wall of California (aka the Mexican border). Even the coast between those areas is pretty pricey.

    But that’s only a (big) fraction of the state. Redding is, if you can find work there, a nice little town with good weather (relative to NY or Illinois), *awesome* access to the Great Outdoors (Mt. Shasta has both downhill and nordic skiing), and some of the best (and least known) roads for a motorcycle in the country (299 for example). It’s also a place where you can get a 3 bedroom house for less than 300k IN TOWN. Go down to Chico, a college/meth town, and they’re even cheaper. Heck, if you’re a software engineer or some other sort that can telecommute most of the time, Redding’s only three or four hours’ drive from S.F.

    All of the Central Valley towns have property in the 100 to 300k range, although given the drought and CA’s rather predictable response to it there may not be much work there for long.

    Even the coast north of Santa Rosa isn’t *that* pricey.

    Not quite as cheap as Alabama or Mississippi (my Zillow preferences are set for what I’m looking for here in the Denver area, which is not “cheap”, but is also not that expensive).

    Much of California’s property isn’t *cheap*, but it’s not off-the-charts expensive when you compare it to most of the New England states or Cook County, IL, and the weather is a LOT better. So it’s not just the cost of property that’s keeping people away, and it’s not why there’s been net-loss migration for a while (2007 to at least 2011; later figures I haven’t seen).

    It’s clear that while Krugman might have something valuable to say–as he is not a stupid or uneducated man–his dishonesty, hyper-partisan hackery, and lack of introspection make him unreliable.

    Fortunately the only people who read him are either on his side, or read him for blog fodder.

    [1] To the extent I ever did.

    • Anthony says:

      If you can handle Iraq or central Australia, Redding might not be so bad, but I’ve been there during their week of 115°F (46°C) that they get every year. Not living there is worth paying extra for.

      Though the not-so-distant suburbs of San Francisco aren’t much better – Concord and Livermore regularly get 104°F (40°C) in the summer, and Concord is closer to downtown SF than is the Outer Sunset.

  40. Artemium says:

    For those into AI risk who are on Facebook, there is a heated discussion going on in the FB group “Existential Risks”. Some of the participants: Stuart Armstrong, Peter Voss, Richard Loosemore, Ben Goertzel, and others (comments under Pablo Stafforini’s post about Peter Singer).

    Update: Richard Loosemore is building a wiki site with organized pro-and-con arguments about AI risk.


  41. Teaching a dog to be less fearful of other dogs– a good account of gentle desensitization. In particular, there’s a description of teaching the dog to run away cheerfully.

  42. CyborgButterflies says:

    Donation request cw:

    I am in the tumblr LW community and need money for a life or death situation I found myself in. I was advised to repost my request here.

    The summary is that I managed to trick my abusive and dangerous parents into letting me come to Canada as a student (I’m originally from El Salvador). Now I need funds to help with the refugee claim I’m making.

    I have been rejected by legal aid (they assumed I had access to funds because my parents were paying for tuition), and I will cut all contact with my parents, lose financial support, and leave the city before the end of the month.

    The details are in this post:

    The GoFundMe link is here:

    • Elissa says:

      My discretionary budget is pretty sparse right now, but I made a small donation. I hope other people will donate too!

    • Scott Alexander says:

      I endorse CyborgButterflies as someone who’s been in the community for a while and who AFAICT is who they say they are.

      I’ll post this more prominently in the next Open Thread.

    • :O Can I help you via PayPal? If so, can you let me know what your PayPal address is? My e-mail address is a gmail one; pinkgothic@.

      (I can’t use GoFundMe, as they require a credit card. I also admittedly don’t trust them, as they started spamming me ‘to complete my transaction’ even after I’d just entered my details the last time someone asked for funding help – unfortunately entirely pointless harassment.)

      • CyborgButterflies says:

        I sent some relevant information by email. Thank you.

        • sh says:

          Same situation here: Inclined to support, but would much prefer not to deal with GFM. The domainpart of my address is The localpart is sh_pc_ssc.

    • Berna says:

      Made a small donation. Best of luck!

  43. Contemplate this on the tree of woe says:

    “Life is full of suffering, and its chief purpose is pleasure. There is no god and no after-life; men are the helpless puppets of the blind natural forces that made them, and that gave them their unchosen ancestry and their inalienable character. The wise man will accept this fate without complaint, but will not be fooled by all the nonsense of Confucius and Mozi about inherent virtue, universal love, and a good name: morality is a deception practised upon the simple by the clever; universal love is the delusion of children, who do not know the universal enmity that forms the law of life; and a good name is a posthumous bauble which the fools who paid so dearly for it cannot enjoy. In life the good suffer like the bad, and the wicked seem to enjoy themselves more keenly than the good” (Quoted by Durant: 1963:679).

  44. onyomi says:

    Reading Reactionary Philosophy in a Planet-Sized Nutshell, I had this thought:

    Does monotheism make you rich? Other than hot places being poor and cold places being rich, this seems to be one of the more notable correlations, not only in the world today, but historically. Note, I don’t consider Catholics and Eastern Orthodox to be true monotheists due to effective saint and Virgin Mary worship. That is, until the reformation, the Jews were maybe the only monotheists in Europe. And let’s face it, Protestants and Jews are the most economically successful peoples in world history.

    Atheists might argue that “the fewer gods you believe in, the richer you get,” but Europe has become less economically dynamic since it became more atheistic, while the relatively more monotheistic United States has held up reasonably well so far. Perhaps one can explain this with that old quote to the effect of “I’m not afraid that atheists believe in nothing, but that they’ll believe anything.” In other words, the practical effect of atheism on the average psychology may ironically be similar to that of polytheism–maybe a kind of “so open-minded your brain falls out” effect.

    The currently relatively atheistic East Asia could be a counter-example, and yet guess which Chinese city is the most economically dynamic? It just so happens to be the one with the most Protestant Christians: Wenzhou.

    • Anonymous says:

      1. Parsis.

      2. Monotheism makes you find oil.

      3. Christians in Korea are elite. I think that there is an even split between Catholics and Protestants, but no difference in wealth.

      A small number of arbitrary categories yields bad conclusions.

      • onyomi says:

        How do Zoroastrians do in comparison to neighbors?

        Islam does seem to be the biggest objection, since they seem to be pretty seriously monotheistic. Though historically, I think Arabs did much better post-Islam than pre-Islam?

        If anything, I’d say the oil may be doing a lot to hold them back today: having a lot of oil seems to be the national equivalent of being on welfare: it provides a steady enough income that you never have to try very hard. See Venezuela.

        • Dude Man says:

          The thing is, if you eliminate Catholicism and Eastern Orthodoxy then you don’t have many monotheistic religions left. The only monotheistic religions with over 5 million adherents are Islam, Christianity, Judaism, Juche (if you want to call it a religion), and Baha’i. I don’t know how wealthy adherents of Baha’i are, but Juche certainly isn’t known for its wealth. If there is a connection between wealth and religion, then it is probably limited to just Protestantism and Judaism. However, my priors are that the wealth gaps between nations are due to non-religious reasons.

        • Anonymous says:

          Zoroastrians in Persia are marginalized and poor. Ethnically Persian Zoroastrians in India are smart and rich.

          A before and after comparison of Arabs is problematic because the Arabs assimilated most other Semites. Arabs proper definitely improved. Did Egyptians? Do you consider pre-Islamic Egyptians to be monotheists?

          I am skeptical of the “natural resource curse.” The well-known fact that countries with all their wealth from natural resources are in bad shape is silly. It is a tautology: countries that don’t produce, aren’t productive. Nigeria, Norway, and Texas are no less productive than their neighbors without oil. I think Aberdeen is in better shape than Dundee.

          • FJ says:

            The “natural resource curse” is pretty definitely real, at least in the sense of “Dutch disease”. I.e., if a small open economy experiences a boom in natural resource extraction, it will produce far more of the natural resource than its domestic economy can usefully consume. So it will export the extracted resource. Rising exports tend to drive the exchange rate up, which makes the economy’s tradeables sector less competitive with foreigners. So the successes of the Dutch oil extraction sector tend to operate to the detriment of the Dutch manufacturing sector. Norway and many Arab countries have fought Dutch disease by trying to sequester their oil profits abroad in foreign-denominated assets, to varying degrees of effectiveness. Texas benefits from the fact that it is in a currency union with a much larger economy, which tends to dilute the currency effects.
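            The exchange-rate mechanism described here can be put into toy numbers (all figures invented for illustration, not from the comment): a resource boom that appreciates the currency raises the foreign-currency price of unchanged domestic manufactures.

```python
# Toy arithmetic for the "Dutch disease" mechanism; all numbers invented.

fx_before = 0.50            # 1 unit of domestic currency buys 0.50 foreign units
fx_after = fx_before * 1.2  # resource-export boom bids the currency up 20%

domestic_price = 100.0      # a manufactured good, priced in domestic currency

# Manufacturers' domestic costs and prices are unchanged, but abroad
# their good now costs more in foreign-currency terms:
foreign_price_before = domestic_price * fx_before
foreign_price_after = domestic_price * fx_after

print(f"foreign price before boom: {foreign_price_before:.2f}")
print(f"foreign price after boom:  {foreign_price_after:.2f}")
```

            The tradeables sector becomes 20% less price-competitive abroad without changing anything itself, which is why sequestering resource profits in foreign-denominated assets (as Norway does) blunts the appreciation.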

    • Irrelevant says:

      Does monotheism make you rich?

      No, but wealth probably makes you monotheistic. I can come up with lots of ad hoc justifications for that, but suspect the actual reason is that mono/pan/panentheism is in some sense “more correct” philosophically, and wealthier people start caring about the correctness of their theology.

      • onyomi says:

        In favor of this, Wenzhou was known as a place full of savvy businessmen even before it became known as a center for Christianity.

        Possibly in favor of monotheism promoting wealth: interviews I’ve read with Wenzhou businessmen say, essentially, “in China, it’s hard to trust people, but I know I can trust other Christians.” Despite being a city of decent size located in a heavily populated region, Wenzhou is also weirdly insular in terms of language and culture, so there may also be a sense of “I can trust Christians FROM Wenzhou.”

        If I were to venture a causative factor, I’d say that it isn’t so much monotheism per se as shared, intense belief in *something* not open to a lot of individual interpretation.

        I like Scott’s example of Mormons vs Unitarians. Whatever one thinks about the correctness of Mormonism (and I don’t think much of it, personally), Mormonism does seem to very much provide its believers with clarity that in Mormonism, at least, there are right and wrong answers. Unitarianism, by contrast, seems to say there are no wrong answers: whatever floats your boat spiritually is valid.

        Similarly, Hinduism allows great leeway in terms of: do you want to pray to Ganesha? Shiva? Krishna? Do you want to meditate? Offer food to the poor? Circumambulate a temple 1000x per day? All valid paths to the divine. This actually seems *more* correct to me, but it might not provide the kind of core value set that fosters trust and reliability within a community.

        • Irrelevant says:

          there may also be a sense of “I can trust Christians FROM Wenzhou.”

          I suspect it’s got more to do with Maoism and the culture of opportunism and corruption it [encouraged/utterly failed to eliminate]: Christianity was recently persecuted, adversarial to the government, and remains out of favor, so Christian Chinese who have success and status almost certainly had to earn it the hard way, are likely to be true believers, and unlikely to be benefiting from cronyism.

          Compare it to being an open atheist business owner in the ruralish Bible Belt: if that guy’s succeeding, he’s probably more moral than the average citizen.

          • onyomi says:

            I don’t think most people think at that level of abstraction. I live in a mostly Christian area. If one of the Christian residents found out that a local business owner was an atheist, that would probably just slightly hurt his/her opinion of said owner, and maybe slightly diminish the probability of patronage. I don’t think they will go so far as to think, “wow, if he’s made it as an atheist in these parts he must be really great!” Other atheists might like him more than they would in other contexts, but that would be more out of a sense of kinship with someone holding a view that is uncommon in the area, I think.

          • onyomi says:

            More proximate, of course, than monotheism, is the memeplex saying “education is important, work hard, uphold your commitments, making money isn’t evil,” etc. The question is why this memeplex seems to go with monotheism.

            Weber, and more recently, McCloskey have argued that it’s the “work ethic” designed to show people you’re among the “elect.” Judaism, similarly, never strongly emphasized the afterlife, but believed that God would reward you for good behavior in *this* life. In a weird twist on “growth mindset,” believing in determinism actually makes people work harder, not because work makes a difference, but because they want to prove to everyone (and themselves) that they are blessed.

            Then again, many polytheists believe the gods will help you out in this life, so why does the hard work memeplex associate with monotheism?

            My best guess is a kind of asceticism I associate with protestants and more observant Jews: one wants to make money, but not so you can spend it. Rather, the money is the symbol of your blessedness and contribution to the community. This seems an attitude very congenial to capital accumulation. If the Amish could forget their aversion to technology and outside interactions but keep everything else about their faith, for example, I predict they’d become very rich.

            Are monotheists more likely to adopt this ascetic attitude than polytheists? They seem to be, though I’m not sure why. With appeal to my experience in Catholicism and Protestantism, I can say Catholicism seems to include a certain, shall we say, “indulgence” with respect to “sinful” behavior. Say 10 Hail Marys (or pay the pardoner) and all is forgiven. In Protestantism it’s all between you and God. There’s no priest, no saint interceding for you.

          • Irrelevant says:

            I don’t think most people think at that level of abstraction.

            I don’t think these people think at that level of abstraction either. I think they mean what they say: the Christians are more trustworthy. I’m offering a theory for why they are more trustworthy that I believe is more plausible than that we’ve somehow exported the “Protestant work ethic.”

            And you appear to have misunderstood my claim in the atheist parallel: I’m not saying people in reality find atheists more trustworthy, I’m saying that atheists in atheist-averse communities who are equally trusted would be expected to be more trustworthy in an objective analysis, because they had to earn that reputation while under fault-magnifying scrutiny.

            As for the “Protestant work ethic” idea more generally, I grew up in a region filled with German Catholics who displayed the same traits, and am inclined to believe that what you’re labeling “Protestant” and “Catholic” are proxies for Northern and Southern European. Protestantism emerged from, rather than created, the cultural differences between the regions.

          • onyomi says:

            So German Catholics are not culturally different from German Protestants in any appreciable way?

            Northern vs. Southern Europe, of course, does also seem to work, and also fits into the broader trend that cold places do better economically than hot places. But if it were really the weather, then why does civilization arise in Mesopotamia, Egypt, Greece, etc.? Is it just that farming is easier there?

            This wouldn’t explain at all, however, why the best businessmen in China are Protestants (though I’m not ruling out that it could be that merchants like Protestantism, rather than Protestantism liking merchants).

          • nydwracu says:

            I suspect it’s got more to do with Maoism and the culture of opportunism and corruption it [encouraged/utterly failed to eliminate]: Christianity was recently persecuted, adversarial to the government, and remains out of favor.


            If the Amish could forget their aversion to technology and outside interactions but keep everything else about their faith, for example, I predict they’d become very rich.


    • John Schilling says:

      James Watt was a Scottish Presbyterian. Adam Smith was a Scottish Presbyterian. Clearly the highest order of national wealth-mojo is found in True Scotsmen(tm), who are monotheists…

      Less snarkily, the recipe for extreme wealth-generation on a national scale was invented by a loose cluster of clever people in the 16th through 18th century, whose geographic center of gravity was somewhere in the North Sea. We can argue whether this was geographic, cultural, or genetic determinism, dumb luck, Divine will, or the work of a few Great Men of History. Doesn’t matter; it happened.

      About the first thing these North-Sea-Adjacent types did with their great wealth was buy lots of oceangoing ships and lots of guns, the better to go around the world and say, “Can we settle here? Are there people here we can be friends with, or should we just conquer and enslave them and take all their stuff?”.

      Thus, not only in religion but culture, politics, climate, geography, skin color, epidemiology, and just about anything else that might influence where people choose to live and who they make friends with, there is likely to be at least a weak positive correlation between “similar to Presbyterian Scotland (or Northwestern Europe generally)” and “filthy stinking rich”. The extent to which this correlation reflects primary causality is going to be hard to discern.

      • onyomi says:

        “…the recipe for extreme wealth-generation on a national scale was invented by a loose cluster of clever people in the 16th through 18th century, whose geographic center of gravity was somewhere in the North Sea…”

        But is it a coincidence that the Protestant Reformation (and associated shift to a more pared-down, ascetic sort of monotheism) took place in 16th c. north-western Europe?

        • Samuel Skinner says:

          It certainly looks like a historical accident in how it came to England and Scotland.

        • Irrelevant says:

          Common cause, not coincidence: both movements required the invention of the printing press to become successful paradigm shifts rather than historical footnotes.

          • John Schilling says:

            If that were the case, shouldn’t we be discussing why great wealth is associated with ancestor-worship and pan-theistic humanism, with Europe as a historical footnote after its conquest and colonization by the Song Dynasty?

          • Irrelevant says:

            Maybe I explained badly? I’ll try again:

            There was nothing new about attempts to make some sort of Protestantism-like (in the sense of “more pared-down and ascetic”) break from or reform to the Roman church, but they had always previously fallen apart because the worshipers found the results unsatisfying or the resulting community unsustainable and went back to the old way. The printing press was instrumental to the success of the movement that did succeed, however, because it allowed for cheap distribution of Bibles, sermons, etc. which let people develop fulfilling new methods of devotion and effective new methods of proselytism that didn’t depend on the centralized church.

            I’m less familiar with pre-Smith movements in economic thought than I am medieval religious reform efforts, so I can’t make as strong a claim that the printing press was necessary to inventing the discipline of economics as it was to successfully implementing Protestantism, but I think it was, and for similar reasons.

            So, onyomi thinks the timing of the development of economics and protestantism points to a connection between the two, and I think the connection is that both events could only happen when the printing press was on the table.

            I don’t know why the Song Dynasty failed to develop a market theory of economics.

          • onyomi says:

            I understand what you mean, but I don’t think having a market theory of economics is either necessary or sufficient for having an industrial revolution. If it were, wouldn’t it have come to Salamanca first?

            I think the first seed of European economic exceptionalism is in 16th c. Netherlands, and has a lot to do with political decentralization and associated destigmatizing of trade as a means to wealth.

            Those are probably also more proximate causes than religion, yet the reformation had only very recently come to the Netherlands in the form of Anabaptists and Calvinism when they seceded from Spain, partially for religious reasons, and right before their first really big economic boom.

            Therefore, might it not be that pared-down, monotheistic religion encourages a kind of independence and/or skepticism of authority which encourage political and social developments congenial to markets?

            Or maybe just having the option of more than one religion, regardless of the content of said religion encourages virtues of tolerance? But if that were the case, then China, with Buddhism, Daoism, and Confucianism should seemingly have done better, and their problem seems precisely to have been too much central authority and not enough skepticism of it. (Europe also always had the advantage of the Pope balancing authority of regional monarchs, whereas the Emperor of China was both highest political authority and highest authority in Confucian world order).

    • Jaskologist says:

      I think “monotheistic” is the wrong category, especially if you’re excluding all Catholicism, which seems wrong. Islam is a big counter-example, and I suspect Sikhism is, too. Your examples are specifically Christian Protestant*. It seems to match up closely to Weber’s “Protestant Ethic and the Spirit of Capitalism” which was even more specifically about Calvinism. I haven’t read it myself; I only know that it is well-regarded, but also disputed on many points.

      The claim that Christianity leads to capitalism, which leads to money, does seem plausible to me, given that we know capitalism did arise in Christian lands and not in other places.

      Relatedly, there was a study not too long ago that credited Protestant missionary activity with the rise of democracies:

      Statistically, the historic prevalence of Protestant missionaries explains about half the variation in democracy in Africa, Asia, Latin America and Oceania and removes the impact of most variables that dominate current statistical research about democracy

      (That one is a freebie for the NRXers)

      * And Jews, but I think they may simply be a separate case; and probably explained well enough by IQ.

      • On Weber’s PESC: basically its central gist, in contradiction to Marx’s inverse view, is that beliefs can lead to fundamental economic change. Specifically, that Calvinism was a factor in capitalism. So apologies to anyone I offend if I have this wrong (it’s been years), but basically there was a kind of heaven-determinism around in Calvinism at the time: the qualities that got you into heaven were love of work, excellence in your field, a prudent lifestyle, and self-denial of pleasure. Those qualities were given by God and not your choice, which created considerable anxiety. So you worked your *** off and saved all your money to prove to yourself and others that you were one of the chosen few with those natural divine gifts. And suddenly, for the first time, you have a big pile of currency around to invest in capital-intensive projects. It seems obvious now that people would aim to do that, but before saving for investment was a thing, why would it occur to you to amass large amounts of currency when it would appear much more useful for pleasure, or fixing up your farm, or something else? People had to stumble upon this new use en masse for the mechanisms (liquid investment) to really develop – Calvinism helped do this.

        PESC was part of a wider series of works by Weber on this topic. His other points include the idea that inward-looking meditative religions didn’t encourage beliefs with the side effect of accumulating capital, and therefore capitalism didn’t develop there.

        Again sorry I may have mangled that a little, but there you go.

        Weber’s view is usually contrasted with that of the earlier Marx, who thought economic processes largely determine things like religion. It’s an area where the debate has been pretty thoroughly done in sociology, but it continues in a simpler form in modern circles, basically because not too many people know sociology anymore (its reputation is pretty awful these days).

        I take the easy way out and say the causality is multi-directional. Or maybe that’s the hard way, I’m not even sure.

  45. Julia says:

    I found a 1920 book on palmistry and was hoping the parts on finger length would say something similar to any of the current findings on digit ratio. Sadly, no. A long index finger “indicates uprightness of character, a strong sense of justice and honour…when short, there is not much feeling of duty or obligation.” Ring finger: “when too long, talents will be turned to the acquirement of wealth. Speculation, the taste for gambling, or the love of games of chance, are seen when the finger is almost the same height as the middle finger.” I was hoping there would be some grain of truth, but no.

  46. Dale says:

    Did you know that video games in China are required to actively fight fatigue in minors? After 3 hours of play by a minor, you have to reduce the level of in-game rewards they get by 50% – after 5 hours, you have to reduce them by 100%.

    This is not related to anything it just seems like the sort of thing SSC people might find interesting.
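    As a rough sketch of the rule described above (the function name and the exact behavior at the 3- and 5-hour boundaries are my own assumptions, not anything specified in the comment):

```python
def reward_multiplier(hours_played: float) -> float:
    """Reward scaling for minors under the anti-fatigue rule described above:
    full rewards under 3 hours, half rewards from 3 to 5 hours, none after 5."""
    if hours_played < 3:
        return 1.0
    if hours_played < 5:
        return 0.5
    return 0.0
```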

  47. — comment blanked, because actually, nevermind. I think I’ll get someone else to ask this for me, because AWKWARD. :c —

  48. onyomi says:

    Just saw an article titled “Six of the Most Ridiculous Arguments Against Minimum Wage: Debunked!”

    Didn’t read the article, but I did think, “why would I want to read debunking of ridiculous arguments?” Wouldn’t “Six of the Strongest Arguments Against Minimum Wage: Debunked!” be a much more interesting read?

    Of course, I quickly realized that it’s because this is a pro-minimum wage article, so it cannot admit of the existence of non-ridiculous arguments against minimum wage. It can, however, promise to mock a position most of the readers of the article are predisposed to dislike.

    I feel like this typical lack of charity to opposing viewpoints is one of the biggest problems with political discourse…

  49. DrBeat says:

    So I’ve been going through posts on your old Livejournal just for something to read.

    Scott, why do you have Detective Conan bondage hentai in a post about Epicurus and the Catholic Church?

  50. Anonymous says:

    Last two times I tried to comment on this blog, it didn’t work. Anyone else have that problem?

    Edit: Oh, it’s because I was using hyperlinks.

    • yli says:

      I’ve been having that problem as well! And I *haven’t* been using hyperlinks. I’ll write a comment, click post, but the comment doesn’t appear. When this happens, neither firefox nor chrome will work. I hope this one goes through..

  51. Sam says:

    I remember reading (and I’m quite sure it was on this site, because where else would it be?) that the cleanliness of a polling station has a noticeable effect on whether people vote right or left, but for the life of me I can’t find the post linking to the original study. Does anyone have the link to the SSC post or the original study?