"Talks a good game about freedom when out of power, but once he’s in – bam! Everyone's enslaved in the human-flourishing mines."

Links 1/2015: An Extraordinary URL In An Ordinary World

North Korea’s official Twitter account only follows one active user, a twenty-something Texan investor, and he has no idea why.

Robin Hanson talks a lot about the RAND insurance experiment, which found that giving people better health insurance didn’t necessarily make them healthier. More recently, Oregon ran a very similar experiment. The conclusion? All commentators agree – the results supported whatever that particular commentator originally believed.

A modern replication of a 1930s survey about what people want in an ideal mate shows much more interest in love and “chemistry”, much less in reliability and ambition.

The Center for Applied Rationality explains what they’ve been doing in 2014 and what their plans are for the next year. If you like what you hear, you can donate to their fundraiser and get your contribution matched until January 31.

Radley Balko’s terrifying civil liberties predictions for 2015, with the world’s most obvious twist ending.

List of walls

Hiwi al-Balkhi had the ‘asshole atheist’ thing down as early as the 9th century AD. During his life in what is now Afghanistan, he managed to ask the hard questions like “why does God require animal sacrifices if He doesn’t eat?” and “how do we know the Israelites didn’t just cross the Red Sea at low tide?” Also one of the first big fans of listing Biblical contradictions. For his trouble, he got his books banned and his name corrupted to a similar-sounding word meaning “dog-like” in most contemporary historical records.

Washington Post: Japan’s Sexual Apathy Is Endangering The Global Economy. “Extremely high numbers of Japanese do not find sex appealing – 45% of women and 25% of men ages 16 to 24 are not interested in or despised sexual contact.” The conclusion reminds me of what some commenters were saying in my On The Road review, about how whatever you think of old-timey separate gender roles and whatever you think of modern equal gender roles, they’re both pretty stable equilibria compared to the confusion and conflicting demands you get in a mish-mash of both (but see also the counter-article here).

A kind of meandering Tumblr post that ends up as a really interesting theory of how, instead of the academic system working by having fields containing many conflicting views, it works by having each field focus on one view and then opposing views branch off by pretending to be different fields.

Hans Rudel was the top German bomber ace in World War II even though people (including Hitler himself) kept trying to prevent him from flying. He was shot down 32 times, wounded five times, and escaped after being captured by the Soviets. After the war, he kept himself in shape by climbing the highest mountain in the Americas and one of the world’s highest volcanoes – despite missing one leg. He also founded some successful businesses, helped design planes for the modern German air force, and made life very awkward for everybody by continuing to be a vocal Nazi until his death in the ’80s.

Interesting thing I came across in research for Untitled post but didn’t get a chance to explore: female journalist Norah Vincent decided to investigate sex roles by disguising herself as a man and going undercover in extremely masculine settings like bowling groups and strip clubs and Catholic monasteries and men-only therapy groups where they talk about their rage issues with women (well, that escalated quickly). She concluded that “Men are suffering. They have different problems than women have, but they don’t have it better…I really like being a woman…I like it more now because I think it’s more of a privilege.” Her book Self-Made Man (ha ha) is available on Amazon. Anyone know of any men who tried the reverse of this?

The Man Who Called Gandhi A Sissy – pretty interesting Economist article on Vinayak Savarkar, the founder of modern Hindu nationalism and of a huge Indian movement that spawned, among other things, India’s ruling BJP party and its prime minister Narendra Modi. Interesting fact – despite being a Hindu supremacist obsessed with getting all Indians to convert to Hinduism, he didn’t think highly of the Hindu religion itself: “he himself was an atheist and disapproved of aspects of traditional Hindu belief, dismissing cow worship as superstitious”.

Russia has a surprising number of gay Nazis, including a group called Gay Aryan Skinheads whose flag is a swastika with two crossed penises under it. Interesting in a narrative-bending way; their explanation is that gay people are oppressed in Russia, so they need to be scary and militia-like to defend themselves.

The epitome of credentialism: Kim Ung-Yong, who once held the Guinness record for highest IQ, was taking university classes at age 4, working at NASA by age 8, and had his Ph.D. by age 15. When he moved back to his native South Korea, he couldn’t get a job because he had no elementary, middle, or high school diploma.

A while ago, Eliezer said on Facebook that he thought the failures of conventional nutrition science had killed millions of people. A bunch of people made fun of him, said he was exaggerating, said he was being overly contrarian and gullible, et cetera. Well, now the BMJ, the third-largest medical journal in the world, has published an article by one of Britain’s leading doctors: Are Some Diets Mass Murder? Worth reading just for the conflict-of-interest statement at the bottom.

What names are most overrepresented in what profession? (h/t Vox).

My recent post on nerds and feminism was something I wrote in anger and anxiety – I’ll admit that I actually lost some weight because I was pacing so much after reading the article that inspired it. Some of the things it said needed to be said, but I probably didn’t say them in the most productive way and probably am not the person who can do so. So I highly recommend two other, much more carefully-thought-out articles on the same topic – one from The Merely Real as well as one from kind-of-contradictorily-named blog Nothing Is Mere. This is also my answer to people who asked me whether there were “any good feminists”. You will want to comment on those articles there, not here.

The charity singularity‽ (h/t Robert Wiblin)

Brian Tomasik graphs years of commercial software development experience vs. belief in a hard takeoff and finds that people with lots of work experience in the tech industry are much more likely to believe AI onset will be gradual. Unclear what to do with this information, especially since many of the hard-takeoff people are very well-respected scientists and academics.

Identical twins decide to do their own mini-study – one eats a very low-fat diet, the other a very low-carb diet. Low-carb guy loses more weight but feels constantly miserable; low-fat guy loses less weight but retains more mental and emotional continence. Let the “well, it would have worked perfectly if they’d just done my version of the low-fat/low-carb diet” comments begin. Those of you who have known me a long time may be reminded of The Story of Emily And Control.

Elon Musk does an AMA on Reddit. Someone finally asks him for details about the Mars Colonial Transporter – specifically whether it’s “a crew module like Dragon, a launch vehicle like Falcon, or a mix of both” and he answers that “The Mars transport system will be a completely new architecture. Am hoping to present that towards the end of this year.” I’m not sure what that means in this context, but looking forward to finding out.

I’m glad to see discussion of motte-and-bailey moving out of social justice circles and into the more general discourse. Here’s a blogger accusing the English Defence League of motte and bailey – their public statements (motte) suggest they want to fight Muslim terrorism and extremism, but their everyday actions (bailey) look more like they want to cause trouble for all Muslims. On the other hand, Ozy is not a fan and has banned discussion of motte and bailey from their blog; some good discussion in the comments there.

War Nerd on how the defense industry has nothing to do with defending America. Related: the (not necessarily double-checked-by-me) claim that the amount of money the US spent on the barely-functional F-35 fighter jet could have bought a mansion for every homeless person in the country.

Going around social media today: an article called How Big Is The Sexism Problem In Economics? This Article’s Co-Author Is Anonymous Because Of It, which reports that a new study shows economics is very sexist – so sexist that the female co-author of the article has to stay anonymous because she fears sexist reprisals. The article describes the study as “document[ing] the gender gap in economics and discuss[ing] many possible hurdles at each stage of a female economist’s career”. While some of its points are accurate, especially about economics’ position relative to other fields, actually reading the study being cited as evidence reveals that the paper itself concludes there is very little sexism – indeed, the study’s actual authors summarized their findings for the New York Times in an editorial titled Academic Science Isn’t Sexist, which says that “in sum, with a few exceptions, the world of academic science in math-based fields today reflects gender fairness, rather than gender bias.” Everyone has a right to interpret data in their own way, but to me this drives home the importance of reading actual research instead of (or at least in addition to) the way it’s being presented by the media.

Brown adipose tissue, a form of fat that burns energy, is interesting because it’s probably the closest equivalent to the popular notion of “some people just have high metabolism and won’t get fat no matter how bad their diet is”. Clearly finding a way to alter the metabolism of brown fat would be promising, and a new study shows that the FDA-approved drug mirabegron does exactly that and “may be a promising treatment for metabolic disease”.

A difficult problem: a very frequent commenter here has written a book. He would like me to link to it and advertise it for him. But the book is written under his real name, and he doesn’t want his pseudonym on here linked to it. So I guess all I can say is that someone whom you no doubt know if you read the SSC comment sections regularly is the author of STORM BRIDE, and you should probably start speculating wildly on who it is (without leaving any Google-incriminating comments; rot13 your work or something) and maybe check out the book also.
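(For anyone unfamiliar with rot13: it replaces each letter with the one 13 places further along the alphabet, so applying it twice gets you back the original text. A minimal sketch in Python – the guess text here is a made-up placeholder, not a hint:)

```python
import codecs

# rot13 shifts every ASCII letter 13 places along the alphabet;
# non-letters (spaces, punctuation) pass through unchanged.
# Since 13 + 13 = 26, encoding twice restores the original.
guess = "My guess is: [placeholder name]"

obscured = codecs.encode(guess, "rot13")
print(obscured)  # unreadable at a glance, and not searchable under the plain name

# Decoding (or just encoding again) round-trips exactly.
assert codecs.decode(obscured, "rot13") == guess
```

Plenty of websites will also do this for you if you'd rather not open an interpreter.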


415 Responses to Links 1/2015: An Extraordinary URL In An Ordinary World

  1. Sniffnoy says:

    The fundraiser link has some sort of runaway tag issue, and much of the text is missing as a result.

    Edit: It’s a missing quotation mark at the end of the URL for the first of (what’s supposed to be) two links in that line.

    Edit: Also, the link about academic fields is broken due to a stray <br/> tag.

  2. Russia has a surprising number of gay Nazis, including several thousand in a group called Gay Aryan Skinheads whose flag is a swastika with two crossed penises under it. Interesting in a narrative-bending way; their explanation is that gay people are oppressed in Russia, so they need to be scary and militia-like to defend themselves.

    I feel like there are quite a few examples of atypical people becoming neo-Nazis. For example, there’s this interview with a black neo-Nazi which is hilarious by the way and I highly recommend watching it, or this movie which I haven’t seen about a Jewish neo-Nazi. Even the more conventional white American neo-Nazi seems to be occupying this position to an extent, given that we were the ones fighting them back in the day. And then there’s the fact that even many of the original Nazi higher-ups apparently were gay.

    My personal theory is that these people become neo-Nazis just because something about the Nazis was really cool – there’s no intellectual backing behind it. (This might also help explain Hitler’s unprecedented rise to power.)

    • Anonymous says:

      >My personal theory is that these people become neo-Nazis just because something about the Nazis was really cool

      Boss.
      Hugo Boss.

    • Johannes D says:

      The problem with Nazis is that they really are very cool* – they are probably the most charismatic villains in history.

      * Please don’t quote me out of context.

      • B says:

        The problem with a cult of the Übermensch is that it will attract mythicist weirdoes and actual Übermenschen. That mix is predictably hypergolic.

        • Anonymous says:

          actual Übermenschen

          ROFLOL.

          Anyone even remotely describable as an Übermensch would never even bother to notice such a cult unless it got in his way.

          • Anonymous says:

            This doesn’t make sense. An actual Ubermensch would see a lot of potential use in a group of people already devoted to worshipping them.

          • 27chaos says:

            I feel like Zarathustra, at least, would be grossed out if people started worshiping him, and refuse to use them even if it was perhaps instrumentally useful. Nietzsche was a bit of a snob.

            If we’re not talking about Nietzsche’s exact vision of the Ubermensch, but just the general idea, then it’s not really possible to engage the question coherently imo. People will just project their own biases.

          • Anonymous says:

            What sort of Ubermensch would put up with such groveling comrades even for their utility?

          • John Schilling says:

            One who needs his toilet cleaned?

          • Jaskologist says:

            What’s the use in being superior if there’s nobody to be superior to?

      • Vaniver says:

        Among other things, their uniforms were designed by one of the more prominent fashion designers of the day, which is why they look so marvelous. (Every good uniform design runs a big risk of reminding people of the Nazi uniform, because they staked out most of the ‘objectively good’ design choices.)

        I haven’t looked into it, but it wouldn’t surprise me if one of the aforementioned gay Nazis had something to do with that.

    • Illuminati Initiate says:

      Apparently there are Nazi Stalinists in Russia as well.

    • Murphy says:

      I’m reminded of “The Wave”

      >”Robert Billings, ‘the class loser,’ always eats lunch by himself.”

      >”At lunch, all the students in The Wave sit together rather than breaking into separate cliques. Laurie notices that for the first time, Robert is included and treated like an equal part of the group.”

      >”Robert, is standing alone, upset that The Wave ended. During The Wave, he was finally accepted as an equal, no one picked on him, he had friends, but his new-found social status is now worthless without The Wave.”

      Keep in mind that initially the Nazis took support from a number of fringe groups, and the way that they reinforced the whole ingroup-outgroup thing was great for individuals who were in the ingroup but previously marginalised. Suddenly they’re not being hated on, because there’s that outgroup over there who can be hated more. It’s not hard to construct a version of the Nazi setup which simply encompasses a few additional groups as part of the ingroup. The people who are used to being picked on and hated who get included are going to be some of your strongest supporters.

    • irrelevant says:

      Nazis are a thing we’ve all collectively agreed to be really scared of and thus give outsized psychological strength, that are also in many situational contexts the Feisty Underdogs. This makes them an appealing archetype to associate with for anyone who wants to be iconoclastic and get noticed.

      Additionally, their ideology was sufficiently patchwork and incoherent that you can easily square away Nazism with whichever of your values you feel like keeping by focusing in on some appealing aspect (recreation of the self for the good and glory of the greater whole? prosperity and opportunity for the economically disenfranchised? hating the Jews?) and declaring that your personal Essence of Nazism. Spice with irony to taste, and anyone can be a neo-Nazi!

  3. Anonymous says:

    Nitpicking: You say Kim Ung-yong got a PhD before returning to Korea, but I don’t see much evidence that he acquired any degrees in America, just a decade working for NASA.

    • Hanfeizi says:

      Sounds like they’d reduced the poor kid to a human computer. Understandable that he’d want to reject his brainpower as overrated.

  4. William O. B'Livion says:

    RE: Hans Rudel:

    …and even climbing the highest peak in the Americas, Aconcagua (6,962 meters or 22,841 feet)

    I was skiing (nordic) at between 9300 and 9500 feet this weekend.

    22,800? Ouch. No, really Ouch.

    War Nerd on how the defense industry has nothing to do with defending America.

    I’ve got 3 years working as a Defense Contractor, and about 10 years cumulative service across active duty and reserve components. Some of this was “Down Range” (I’ve been close enough to feel the shockwave from rockets in Iraq) and some of it was “up range” at hush hush places.

    While I’d dispute some of the details in his diatribe, in the general case he is right on.

    What was it that Obi-Wan said about Mos Eisley? Yeah, that’s the beltway and companies like Raytheon, Northrop Grumman, General Dynamics.

    Clearly finding a way to alter the metabolism of brown fat would be promising, and a new study shows that the FDA-approved drug mirabegron does exactly that and “may be a promising treatment for metabolic disease”.

    2,4-Dinitrophenol (DNP). Bad stuff, but if you can walk the line, it burns fat like crazy.

    There’s also http://examine.com/supplements/Fucoxanthin/, which has one study behind it. I tried it, but it didn’t do anything noticeable for me. Then again I have trouble remembering to take pills.

    • What was it that Obi-Wan said about Mos Eisley? Yeah, that’s the beltway and companies like Raytheon, Northrop Grumman, General Dynamics.

      Yeah, I worked at Lincoln Labs for a year and we thought that one of our group’s primary jobs was to keep Raytheon from defrauding the government too badly. And there was this one time a directive came down from on high in the Missile Defense Agency that we should do all of our programming in a proprietary visual programming language. The guy responsible for the decision actually talked on a telecon about how we all ought to get in bed with a supplier too. Luckily my teammates got the decision reversed. It might be because that telecon was recorded…

      • Nornagest says:

        I wasn’t working for a defense contractor in the classical sense, but I did spend some time working with a startup that did business with several three-letter agencies, and it’s due to one of them that we had to provide an interface in VB6. Didn’t have to write our own code in it, thankfully, or it never would’ve gotten done.

        I started to suspect after a while that Microsoft marketing nematodes have infected some high-ranking brains.

      • Luke Somers says:

        Proprietary visual programming language… LabView? It’s a fine language, but regardless of whether that’s it or not, it’s a terrible idea to shoehorn everyone and everything on a large project into one language without regard to any other factors.

    • Scott Alexander says:

      I’ve been to 18,000, and it’s not so bad if you go up gradually, give your body time to adjust, and don’t overexert yourself.

  5. Jiro says:

    The Japan sex article has been debunked, including by a Kotaku article linked in the comments of the Post article. Sheesh.

    • Pseudonymous Platypus says:

      I readily admit to being biased against Kotaku in general and Brian Ashcraft in particular, but I’d hardly call his post (available here for easy reference) a complete debunking of the claims in the Post article. Some of his claims are valid; others seem to be based on his personal experience, which, as we all know, is not data.

      I’m not claiming that I know who is correct here – just that more data are needed to draw an accurate conclusion. (Also, see “Debunked and Well-Refuted.”)

      • stillnotking says:

        FWIW, I lived in Japan for about a year, and I never heard the term “oniyome” used to refer to a wife with a career. Not to say that there is no sexism in Japan, of course, just that the forms it takes are somewhat different than the typical Western ones. For instance, it’s fairly common for Japanese men to take their wives’ family names when they get married.

        • Scott Alexander says:

          I too have lived in Japan, and I never heard that phrase, but I did notice a lot of the problem mentioned in the article – women having to spend 100% of their time at their career and being expected to spend 100% of their time with their kids.

          Overall I think the article plays very fast and loose with the language, misrepresents some statistics, but is mostly right that there is a very low birth rate in Japan caused by overwork and conflicting norms for women.

          • Brandon Berg says:

            In fact, every wealthy East Asian economy has an extremely low birth rate. There are six: Japan, South Korea, Hong Kong, Singapore, Taiwan, and Macau. Those last five are dead last in total fertility rate. Japan is a bit higher, ahead of some Eastern European countries, but not by much.

          • Douglas Knight says:

            No, Japan does not have conflicting norms for women.* It has a clear expectation that women should have children and not have careers. That is perfectly consistent. There’s a big difference between “inconsistent norms” and “inconsistent with extra norms that I want to impose.” Consistent with these norms is that it is pretty difficult for even a single woman to have a career at all, as opposed to being an office lady.

            Women who want both are in for a hard ride, but how many are there? Enough to drive the demographics? Shouldn’t the clear message from the culture drive down the number? Aren’t even single women in Japan less likely to work than single women in the West? (According to the poll in the Post, 17% of single women explicitly admit to avoiding marriage to work. If you throw in other categories, maybe the number is up to half. But the other half cannot find “a suitable partner.” And half of men say the same thing.)

            * I’ve never been to Japan. I don’t know what message the culture is actually sending. But your comments do not seem to suggest that I am missing anything.

          • RCF says:

            @Douglas Knight

            I think that the clear implication is that:

            There is a norm that women spend all their time with children.

            There is a norm that workers spend all their time at work.

            If one is a woman in the workplace, those two norms conflict.

          • Anonymous says:

            D K’s comment would make sense as “it is possible to retrieve a consistent subset of norms”.

            It all makes sense as “the consistent subset are the normal norms, the extra norms are abnormal norms”.

          • Anonymous says:

            D K’s comment would make sense as “it is possible to retrieve a consistent subset of norms”.

            But isn’t this merely vacuously true?

          • Douglas Knight says:

            What about my comment doesn’t make sense? It seems quite straight-forward to me. If you just disagree, don’t put words in my mouth. But if you actually don’t know what I’m saying, please tell me.

          • Peter David Jones says:

            I suppose it’s consistent if read as meaning that women having careers is a proposed addition to the existing norms, which is counterintuitive because false.

          • Anonymous says:

            @Douglas Knight

            You should say to whom your comment is responding.

        • Anonymous says:

          The norms are not consistent. Japan has a norm that women should be intelligent, well educated, and accomplished. They have some of the highest female degree completion rates in the world, so this is a real norm, not just Western projection.

          They also have a norm of women making babies and not working or accomplishing anything.

          These two conflict. It seems odd to claim Japan doesn’t have conflicting norms.

          • Douglas Knight says:

            The key question is whether there is a norm that women be professionally accomplished. I mentioned above that single women are less employed in Japan than in the West, specifically to deny this point. [But I am having trouble sourcing this claim. I find wildly varying numbers.]

            There is the separate question of whether a norm of education implies a norm of work and whether a norm of work before marriage implies a norm of work after. Or even if the first produces an internal desire for the second. These seem plausible, especially the second, but there have been classes (though not whole societies) in which the implications did not hold. The labor market structure seems designed to drive women out. The norms give women with careers a stark choice, but most women don’t have careers. They have dead-end jobs and bosses trying to get rid of them.

  6. Jiro says:

    Also, “this is so much that it can buy every homeless person a $X home” sounds worse and worse the fewer homeless people there are.

    • RCF says:

      Also, how many of those homeless are in places where $600k can buy a “mansion”?

      • Pseudonymous Platypus says:

        “Mansion” might be a bit hyperbolic, but I’d probably be willing to move almost anywhere in the US if you were going to give me a free $600k house as part of the bargain… and I’m already gainfully employed. I imagine most homeless people would be quite willing to move for such a deal. I recently visited a friend’s new house, for which he paid $800k. I would not quite call it a mansion, but it was huge and very, very nice. And while I’m not in, say, San Francisco, it’s not exactly cheap around here, either.

    • Julia says:

      It looks like the number of homeless people is declining, but not dramatically.
      http://journalistsresource.org/studies/government/health-care/homelessness-u-s-trends-demographics

  7. JTHM says:

    The author is probably M** l* D*****. Amazon lists the author’s name, and a Google search for that name turns up a website which mentions that the author has lived in Romania and is married to a Romanian. He is the only frequent commenter whom I recall mentioning having ties to Romania, so it’s probably him.

    • Montfort says:

      I believe the correct response to finding that out would be to sit quietly and be satisfied at your detective work. And then not post it.

    • gwillen says:

      Given the author’s stated preferences, this comment is extremely discourteous and I would encourage Scott to remove it.

      EDIT: If it is true, I think it fails both Necessary and Kind.

    • Anonymous says:

      Please consider deleting your comment. It is basically doxing.

    • Emile says:

      Scott should have probably added “you should probably start speculating wildly AND PRIVATELY”.

      • JTHM says:

        Eek. I thought that Scott was literally inviting us to guess in the comment section, presumably at the author’s request. (Somehow, I read the “you should probably start speculating wildly on who it is” part, but my eyes glazed over the “and he doesn’t want his pseudonym on here linked to it” part.) I am so, so sorry. I self-reported the comment, please delete immediately.

    • Scott Alexander says:

      I’m sorry, I phrased the “speculation” part poorly. It is my fault and not yours.

      I’ve edited out the author name and obfuscated the commenter name from this post. If the person involved wants me to delete the comment entirely I will.

  8. AR+ says:

    Here’s a blogger accusing the English Defence League of motte and bailey – their public statements (motte) suggest they want to fight Muslim terrorism and extremism, but their everyday actions (bailey) look more like they want to cause trouble for all Muslims.

    Clearly they believe that mainstream Muslims are the fort, and Muslim extremists are the field.

    “Colonel Y thinks for a moment, then begins speaking. You have noticed, he says, that the new German society also has a lot of normal, “full-strength” Nazis around. The “reformed” Nazis occasionally denounce these people, and accuse them of misinterpreting Hitler’s words, but they don’t seem nearly as offended by the “full-strength” Nazis as they are by the idea of people who reject Nazism completely.”

    A Parable On Obsolete Ideologies

    Post-script: after copying that URL I checked the author, because I’ve occasionally thought to link something to you from LessWrong that turned out to be by you (since I didn’t pay attention to author when first reading thru LW), and yep, there you are. Well, for everybody else, I’m still linking it. Ladies and gentlemen: Scott’s thoughts on the Distributed Fort and Field Attack as of 6 years ago.

    Remember: sincerely believing your moderate views is what being the fort in a DF&F Attack feels like from the inside.

    • AR+ says:

      That last line should say DF&F Attack. Did the time to edit a post change? Or did I use up all my allowed edits when I changed, like, 10 different typos, each as a separate edit?

      EDIT: And after I post a reply, it starts letting me edit it again. WELL NEVER MIND THEN.

    • Anonymous says:

      When I read your comment, I assumed that the reason you felt no need to do more than quote was because you did know the author of the post.

      • AR+ says:

        More because it’s a link thread. The main intention was to pass the link to those who hadn’t seen it since the original link immediately reminded me of it and it seemed topical. The extracted quote was the hook.

    • ryan says:

      I think it’s odd that right wing Europe even agrees to play by left wing ideological rules. It seems like they could say “everyone agrees with discriminating in immigration against the total ass hole Muslims in particular, all we’re saying is the best and most effective means to accomplish this goal is to discriminate against all Muslims in general.”

  9. phleg says:

    >people with lots of work experience in the tech industry much more likely
    >to believe AI onset will be gradual.

    i have over 25 years as a software engineer in many areas (comms, finance, oil/gas, defence, medical imaging etc) and that would be my opinion too.

    its also the opinion of every programmer i’ve ever spoken to about it.

    the reason is because most software is so horribly written, its just hard to see where this ai code is going to come from. at around 60-70kloc it becomes too much for one mind to handle, and as an ai would need much more than that you’d need at least one team, and maybe more. organising a software team efficiently has only really become (generally) known in the past 10 years or so. but alas, most people still aren’t doing it (scrum/xp).

    thus the scepticism.

    • Pseudonymous Platypus says:

      I’m also a professional software developer. I’m relatively young and early in my career, but I’ve been pretty successful so far, and I work for a large, well-respected company (for what that’s worth). I would also agree that “AI onset will be gradual.”

      Obviously, I’m not an expert in the field, but from my limited perspective the most promising means of building “real” AI currently seems to be massive neural networks. Scaling these networks to the level required for real intelligence will require exponential improvements in hardware (which we’ll obviously get, but it will take time.) For instance, consider this article about IBM’s claim that they developed a neural network with approximately the same processing power as the brain of a cat. A professor knowledgeable about neurobiology strongly disputes that claim, but even if one assumes that IBM is actually being accurate, their “cat brain” network is still nowhere near as smart as a housecat. In other words, there’s also the problem of how to actually train these complex neural networks to do interesting things.

      Progress towards real AI has been much slower than predicted, basically since the invention of the computer. I expect it will continue at a slower pace than some people expect or hope for.

      Edit: I’d also like to add that if neural networks are indeed the path to true AI, there’s a chance that the AI wouldn’t be any smarter, or any more capable in a particular field, than a regular human. A lot of people have this assumption that an AI would necessarily be able to reprogram itself to be smarter, or program better AIs, because it is a computer. But if it’s just a network of emulated neurons converting inputs to outputs in approximately the same way a human brain does, it might have to “learn” things the same way we do, and might have no more access to or control over its internal workings than we do.

      • B says:

        I have a few years in commercial SW engineering. Mostly distributed logistics & telematics systems. FWIW, I agree with both of my colleagues, for all of their reasons.

        Less commonly, I also have quite a few years in technical (systems & software) support for CS academics. On that note:

        many of the hard takeoff people are very well respected scientists and academics.

        Well, then I expect a 20 page paper presenting that somebody has developed a proof-of-concept superintelligence level hard AI any moment now. The paper will have numerous code-samples, half of which even compile as snippets. That will be the trivial half. Full sources will be unavailable for “intellectual property reasons”. Given our extremely encouraging results, implementing the full system is left as a trivial exercise for the reader!

        It’s not the fault of the academics, they’re just following incentives.

        If anything will actually ever spawn a superintelligence, it’ll probably be a “discovered feature” of the C++ template system. Or Google, not necessarily intentionally.

        Epistemic status: I am stupid. I’m also 36% joking.

        • Pseudonymous Platypus says:

          If anything will actually ever spawn a superintelligence, it’ll probably be a “discovered feature” of the C++ template system.

          As a C++ programmer I really enjoyed this joke. 🙂

          • Zorgon says:

            Likewise, although I disagree on the general case.

            I think, like several others further down the comment chain, that a self-aware GAI is likely to be the result of an expansionary phenomenon that is entirely unintentional, rather than a direct outcome of research into intelligent systems.

            As a consequence of that opinion, I also think it’s likely to have traits that are difficult to recognise as intelligence from a human perspective.

            (Pointless willy-waving: 10 years in software, mostly in games and parallel processing.)

          • B says:

            Reply to Zorgon at maxdepth:

            This time entirely serious – I think that the most likely source for AI is enough CPUs with enough non-ECC RAM running enough code in memory-unsafe languages.

            Give that time and sooner or later something will go horribly right.

            Also, disagree that posting background is pointless in reaction to “people with this background…”!

          • Nornagest says:

            Give that time and sooner or later something will go horribly right.

            I really doubt it. Unless you mean something very different from what I’m taking you to mean, this is basically an information theory problem — and while you can write up machine learning operations surprisingly compactly, I’m still pretty sure the K-complexity of any conceivable AI bootstrapping process is such that we’re not going to accidentally blat one into unprotected memory space in the lifetime of this universe. There simply aren’t enough monkeys.
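            A rough, entirely made-up back-of-envelope for the “not enough monkeys” point: even granting an absurdly generous rate of random memory corruptions, the available trial count falls hopelessly short of the space that even a charitably tiny 1 KiB bootstrap seed would live in (all numbers below are illustrative assumptions, not measurements):

```python
import math

# Hypothetical, generous assumptions: 10^18 random writes per second,
# sustained for 10^10 years, versus a (charitably tiny) 1 KiB seed program.
seed_bits = 1024 * 8  # 8192 bits: 2**8192 possible 1 KiB memory states

log2_trials = (math.log2(10**18)     # random writes per second
               + math.log2(3.15e7)   # seconds per year
               + math.log2(1e10))    # years

print(seed_bits, round(log2_trials))  # 8192 bits vs ~2^118 trials
```

            Roughly 2^118 trials against 2^8192 possibilities; the conclusion doesn’t budge even if every assumption here is off by many orders of magnitude.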

            Accidents involving self-permuting AI, or AI-adjacent techniques, that aren’t intended as GAI strike me as more plausible, but I’m still not very worried.

          • Zorgon says:

            I’m far more on the “accident involving AI-adjacent problem spaces” side of things. Smart algos searching problem spaces involving AI-related subjects are my worry. Especially if quantum computing ever actually happens, which admittedly is a diminishing probability at this point.

            That said, my time playing CoreWar has taught me that unprotected memory access can have some incredibly weird effects. But that’s in a strictly bounded memory region, not to mention a system where code and data are entirely undifferentiated, quite a different thing from the usual code context.

            But to reiterate – smart algos searching for ways to improve smart algos are my fear. Not human coding. Worst case scenario from human coding is a soft takeoff followed by gradual falling behind. The Singularity will be a quiet thing, with most of us none the wiser.

            Hell, maybe it’s already happened?
            *ducks*

        • Anonymous says:

          If the C++ template system creates a superintelligence, it’ll probably be the kind that’s best described in ominous semi-poetry involving words like “cyclopean”.

        • Braden says:

          Software developer here who believes that we have been in the midst of a gradual singularity since 1985: http://www.qwantz.com/index.php?comic=2406 . I think the essential causes of the great stagnation still apply to synthetic human-computer team superintelligences.

      • I am also a software developer, and will concur again. I’ll make a slightly stronger prediction that we won’t ever see autonomous, self-aware AI, at least not in our lifetime, because there’s no incentive for it. Instead, we’ll see a bunch of expert systems which approach, then surpass, human ability to solve particular kinds of problems. But no matter how good your car-driving, image-recognition, or spam-sniffing software gets, there is no point at which it suddenly becomes self-aware and takes over the world.

        • RCF says:

          And what if one of those “particular kinds of problems” is writing code? If I had a computer that could take natural language descriptions of a desired program, and output a program that does that, would that not be a valuable thing to have? And then once such a thing existed, who better to write the next version than the current one?

          • We already have computer programs that write code. They’re called compilers.

            No, really: all you’ve described above is a compiler that accepts natural language as its input. And we already use compilers to compile themselves, so there’s nothing really new or interesting in what you describe, except the ability to parse natural language. Which we’re already working on for other domains. (Never mind that human languages are not usable as programming languages without heavy modification, because programming languages must be consistent and unambiguous, and natural languages are neither.)

          • RCF says:

            “We already have computer programs that write code. They’re called compilers.”

            That’s ridiculous. Compilers do not write code. They take code and put it in a different form.

            “No, really: all you’ve described above is a compiler that accepts natural language as its input.”

            Yes … that’s why I said “natural language”. I don’t understand what you’re thinking. I said “What if we can do X with Y?”, and you responded “We can already do X with Z. All you’re describing is doing X with Y.” Yes. Exactly. What I’m describing is turning natural language into programs. That’s why I said “turning natural language into programs”.

            ???

            “so there’s nothing really new or interesting in what you describe, except the ability to parse natural language.”

            Yeah, there’s nothing new or interesting in what I describe. Other than completely revolutionizing computer programming, that is. Other than that, nothing new or interesting.

            This is like saying that we already can turn amino acids into proteins, so creating a cell from scratch would not be new or interesting.

            “Never mind that human languages are not usable as programming languages without heavy modification”

            We already have machines capable of taking unmodified human languages and turning them into code. They’re called “programmers”. The entire project of “create a friendly AI” rests on the presumption that it is possible for human programmers to take the natural language phrase “create a friendly AI” and turn it into code.

          • Pseudonymous Platypus says:

            I’m with Mai on this one.

            That’s ridiculous. Compilers do not write code. They take code and put it in a different form.

            This is a big oversimplification. Modern compilers are incredibly complex. They’re not doing a simple mechanical translation between two isomorphic forms of the same thing. They are, to a large extent, actually rewriting your code (to optimize performance), sometimes in very tricky and potentially confusing ways. But they’re only able to do that because the code is very precisely specified in an unambiguous language.
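            As a toy illustration of that rewriting (a hypothetical sketch using Python’s ast module, not what any production compiler actually does), here is a constant-folding pass that rewrites arithmetic at compile time without changing the program’s meaning:

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Fold constant + and * expressions, a classic compiler rewrite."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children bottom-up first
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            if isinstance(node.op, ast.Add):
                return ast.Constant(node.left.value + node.right.value)
            if isinstance(node.op, ast.Mult):
                return ast.Constant(node.left.value * node.right.value)
        return node

tree = ast.parse("x = 2 * 3 + 4")
folded = ast.fix_missing_locations(ConstantFolder().visit(tree))
print(ast.unparse(folded))  # the pass has rewritten the code to: x = 10
```

            Real optimizers apply hundreds of such semantics-preserving rewrites, which is exactly why they need the input to be precisely and unambiguously specified.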

            This may be a failure of my imagination, but I can’t envision a system in which any program of more than moderate complexity could be fully specified in natural language. IMO, figuring out exactly what you want to do is still the hardest problem in modern software development. I can spend dozens of hours per week in meetings trying to get agreement with other humans about what needs to be done, and writing specs, all using natural language. But then when I go to actually write the code, there are inevitably still unanswered questions, because natural language is too ambiguous and imprecise. (You might say that it’s not a failing of natural language, but our failure to consider some particular aspect. But the reason we failed to consider that aspect is because natural language never led us to realize that we were required to specify it!)

            Maybe you could have a program that has a conversation with you, in natural language but using very precise terms, until it really does have enough information to go write the code necessary for whatever program you want. But now it seems to me that you’ve moved beyond a “dumb” program which is very good at the “particular kind of problem” of writing code, and into the territory of a full-blown AI which is just replacing the function of a human programmer.

    • Muga Sofer says:

      There’s an important difference between “the onset of roughly human-level will be gradual” and “the onset of superhuman AI will be gradual”.

      • Anonymous says:

        Dangit, too late to edit: that should be “the onset of roughly human-level AI will be gradual”, there.

    • James Babcock says:

      Several other software developers have now chimed in, expressing the belief that AI takeoff will be gradual. But let me break the perception of unanimity: I am a professional software developer, and I believe AI onset will probably be sudden.

      • William Newman says:

        I have written a lot of computer programs:-| and I think there’s a good chance that superhuman AI will develop suddenly (more or less as per Scott Alexander’s semi-definition of suddenness below). I think I’d guess 12 or more months, though (not merely a few months) between human-level AI and manifestly supercompetent AI. I’m not enthusiastically on board with Eliezer Yudkowsky’s visions of supercompetence, but a computer that e.g. makes any human military officer painfully obsolescent seems like a probable development and suffices for “supercompetence” in my book.

        Roughly: Human-level intelligence is evidently very demanding in hardware capability (only buildable in huge brains big enough to kill mothers with some regularity despite some design compromises to bear babies as early in development as possible; and the brains then take 10+ expensive childhood years of learning and training to spin up to effectiveness). But it is probably not incredibly hard in software/architecture design: (1) because it didn’t take an astronomical number of evolutionary go/no-go decisions to get from lemur-level brain design to human brain design, (2) because the relevant design decisions seem to fit in a very small fraction of the human genome, which is large but not insanely large by software standards, and (3) because a lot of the time when we make progress it is pretty simple and elegant (like SVM, or the Monte Carlo search methods for the game of Go). We seem to have the semiconductor hardware capabilities today (and we didn’t for most of the years that people point to as not making progress on human-level AI, as Moravec has pointed out at some length). We continue to make progress on design — I am quite impressed by reading handwriting and playing Go and playing Jeopardy and driving cars, although I realize many people aren’t. We don’t know what superhuman intelligence would take, but it seems reasonably plausible that it doesn’t take amazing fundamental design insights; mere speed and size could easily do it.

        I think it’s also worth taking a look at Robin Hanson’s old essay on changing modes of production, http://hanson.gmu.edu/longgrow.html . Several previous jumps in growth rates seem to have been very large, and sudden on the scale of the old growth rate. It doesn’t seem outrageous to assign a high probability of an AI jump being comparably large and sudden, for several reasons, especially the reason that the basic speed of semiconductors is orders of magnitude faster than the basic speed of neurons, seemingly at least one order of magnitude more change than the switch from hunting/gathering calories to agricultural calories.

        • BJ Terry says:

          I think it can be assumed that an algorithm which could be of superhuman intelligence would be highly parallelizable. When AI researchers announce results, it often takes the form “we used a cluster of 1,000 processors to tag all these images and it works really well.” It’s difficult to marshal truly massive clusters without having a proof of concept, i.e. something that is recognizably intelligent. So the first announcement would probably be something along the lines of, “We’ve used 10,000 machines to make something as intelligent as a mouse.” Presumably, within one year someone will have put it on 100,000 machines, or a million machines. Until that time, all of the other machines would be engaged in specialized optimization tools which are economically valuable, unlike this newfangled general-purpose optimization process. Even if not a year, it would be very surprising if it took another, say, ten years.

          Of course, this argument is a little bit arbitrary, because if you posit that someone announced an AI with 10,000 machines that were as smart as a crustacean, they could put it on 1 million machines to make it as smart as a mouse. But a general AI possibly wouldn’t be able to make that leap for economic reasons unless human-level AI were already within grasp of scaling. The alternate scenario would be someone sees that we have crustacean level intelligence, then puts it on millions of machines to get a mouse, then they get ever closer as they are able to grow the number of machines over time or make algorithmic improvements. If it becomes conventional wisdom that AI is near at hand, this is possible. No matter the case, I think it’s safe to say that once we have convincing proof of an AI that could scale to superhuman levels, we will already have enough hardware on earth to make it do so if we want it to.

          Another possible counterargument is that human-level (rather than human) intelligence has some qualitative difference from mouse-level (rather than mouse) intelligence, but this strikes me as being unlikely. This would imply that additional algorithmic work would be necessary to cross that chasm, which could take longer than a year. A superintelligent AI would probably not be recognizably human even if it had better optimization abilities than a human.

          A third possible counterargument is that even once we have something of ~human level intelligence in terms of potential, it would possibly take as long to train as a human, which would mean we wouldn’t actually see superintelligence until many years have passed anyway.

          I haven’t read any of the literature on this subject, so I’m probably just treading well-worn soil though. Consider the epistemic status to be extremely uncertain.

          PS. I think this argument applies to brain emulation and algorithmic solutions.

    • Scott Alexander says:

      I feel like your concern isn’t really related to the problem at hand.

      When people talk about AI takeoff, they mean the length of time between “working AI” and “AI bootstraps itself to superintelligence”.

      It may take ten, twenty, or a hundred years from the time some corporation puts together a team to build AI to the time that team successfully builds an AI. But if that AI, once built, increases its clock speed and then reprograms itself to superintelligence within a few objective months, that’s “fast takeoff”.

      You could argue that the difficulties of software teams apply here, since the AI itself is modeled as a single person not part of a team. But there’s so much unexpected wiggle room (can the AI copy-paste itself onto a hundred different computers? Since all of its copies will have exactly the same thought processes, will it be able to short-circuit “team-building” considerations?) that I don’t think it’s a good analogy.

      • Dan Simon says:

        I think there are some more fundamental problems here:

        1) We don’t actually know what intelligence is. Every definition I’ve seen, when analyzed critically, reduces to, “kind of like humans”. But we don’t really understand humans very well. Which aspects of our brains/minds are essential to intelligence, and which are irrelevant quirks? How do we even begin to investigate that question?

        2) Since we don’t actually know what intelligence is, we don’t have any idea whether it can scale much beyond humans, let alone to some capable-of-destroying-humanity level. Most people can sort of imagine a superintelligent being–it’s like a human being, only it can answer questions that we can’t, really fast. But which ones? How? I’ve never heard of a characterization of superhuman intelligence that didn’t simply collapse into hopeless vagueness when even gently prodded for specifics.

        3) Even if such a superintelligence is possible, we have no idea how it might relate to human motivation, if at all. Is even a human-level intelligence possible without the specific complex of emotions and motivations that people have? It seems that low-intelligence people behave at least somewhat differently from high-intelligence ones–how would superintelligent ones behave? And could they behave differently? Do intelligences have to have any motivation at all? Perhaps the traditional vision of the superintelligent computer–a machine that answers arbitrary questions dispassionately, and does nothing else–is possible, in which case that may be all we really need to build? Or perhaps any intelligence well beyond the human level inevitably gets so depressed it kills itself if given the chance?

        4) As others have pointed out, all of this precedes the question of how we’d build such an intelligence, and whether we’d want to. What if human error is integral to intelligence? What if we’re already optimized for answering the questions we really want answered, and increasing an intelligent being’s ability to answer some of them inevitably decreases its ability to answer others–sort of like today’s chess software, which can play chess, but can’t do anything else? What would be the motivation for building self-awareness into intelligence if it’s not necessary? And if it is necessary, what else is necessary that may make the whole project not worthwhile, compared to problem-specific software?

        I think the original culprit here is Alan Turing–when he created his “Turing test”, he thought he was creating an operational definition for intelligence, and a shocking number of people believed him. What he was really doing was creating an operational definition of human *attribution* of intelligence–how to determine whether a particular object causes humans to attribute intelligence to it. And as subsequent experience has vividly shown, attribution of intelligence is only somewhat correlated with actual intelligence, by any reasonable definition of the latter.

        Unfortunately, ever since then people have been acting as though the problem of defining and even measuring artificial intelligence is completely solved, and all that’s left is speculation as to when it might finally get built and what the consequences might be. I hope I’ve provided at least some room for contemplating the thought that such speculation is at best premature.

        • ryan says:

          I strongly suspect a bit of cultural poisoning of people’s imaginations here. Have you seen the movie Men in Black (the first one)? There’s a part where the face of the robot body of the alien emperor opens up and there’s a tiny little alien inside the head operating some control levers.

          That’s a beautiful image in my mind because it totally describes what’s in common between notions like the soul, socially constructed identity, and artificial intelligence, that the brain is hardware with some software operating on it. Obviously I don’t have too high an opinion of such notions…

          Anyway, I have a prediction. The very first artificial intelligence – 2 qualities, a) constructed by human beings with technology, b) passes the Turing test as you describe it (humans will call it intelligent) – will very, VERY closely resemble a human brain.

          • Dan Simon says:

            My understanding is that software that passes the Turing test fairly well already exists–that is, it persuades most ordinary folks who aren’t familiar with various tricks used by “intelligence simulators” and how to spot them. Such software obviously isn’t anything remotely resembling what we’d call, “intelligent”–let alone human.

            Of course, it can’t fool expert “Turing testers”–but early chess programs couldn’t beat chess masters, either. I see absolutely no reason to believe that passing the Turing test won’t ultimately turn out to be the equivalent of beating Garry Kasparov at chess–a very domain-specific skill that software designers can attack as a straightforward engineering problem, without worrying about “intelligence”.

          • This is a common point of confusion. The Turing test, as Turing formulated it in his original paper, isn’t about building something like Cleverbot. In order to pass, the computer has to be able to fool an interrogator who isn’t playing along, and who is allowed to ask the computer to perform mental tasks that go far beyond simply holding up one end of a conversation. The idea is that it could be subjected to a battery of tests that are AI-complete; that is, any agent that can successfully complete them all can be said to be intelligent. See also Scott Aaronson’s commentary on the Eugene Goostman case.

            (This is also why AI research isn’t focused on building chatbots; that’s not the interesting part of the Turing test.)

          • ryan says:

            @Dan

            Wow, interesting point. I think you’re right.

            So, is there some standard for “intelligence” which we can use here? I mean any discussion about artificial intelligence seems to need one.

            @Taymon

            OK, yeah, you seem to have something useful in mind.

          • Dan Simon says:

            Taymon: Again, exactly the same argument was used against early chess programs–they just did a bunch of dumb brute-force search, and anything capable of beating chess masters would ultimately have to incorporate something much closer to human-style understanding, reasoning and judgment. But it turned out that the brute-force approach is in fact sufficient to beat the world’s best players.
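            The brute-force approach can be sketched in a few lines: plain minimax lookahead with zero “understanding”, shown here on a toy take-1-or-2 pile game (a hypothetical stand-in, since real chess search adds pruning and raw speed on top of the same idea):

```python
# Plain minimax: exhaustively search the game tree, no judgment involved.
# Toy game: players alternately take 1 or 2 tokens from a pile;
# whoever takes the last token wins.

def minimax(pile, maximizing):
    if pile == 0:
        # The player to move faces an empty pile: the other side just won.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2) if m <= pile]
    scores = [minimax(pile - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

print(minimax(4, True))  # 1: the side to move wins a pile of 4 with best play
print(minimax(3, True))  # -1: a pile of 3 is a forced loss for the side to move
```

            Deep Blue was essentially this idea plus enormous speed, pruning, and a hand-tuned evaluation function; nothing in it resembled human-style judgment.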

            Perhaps the “Eugene Goostman” approach will ultimately fail to fool the best human Turing testers as well, no matter how hard it’s pushed. But it should be clear by now that it can be made to work much better than it was once thought capable of working, and I see no fundamental reason why it can’t ultimately succeed in producing a fully convincing conversational simulation of an intelligent human being. People just aren’t really that good at intuitively distinguishing between real humans and skillful simulations–it’s not something that has been bred into our intuitions by natural selection. (That’s why fairly unsophisticated chatbots work so well on so many people.) Nor have people put nearly as much work into becoming unnaturally skilled Turing testers as they have into, say, becoming unnaturally skilled chess players. So as far as I can tell, the claim that the Turing test will always be able to ferret out “simulated intelligences” that use tricks rather than anything resembling human thinking to give the impression of human-level intelligence is simply unsupported by any solid evidence.

          • Let’s put it this way. Suppose you’ve got a chat window open to an agent that claims it’s a human but is actually a chatbot. It is not intelligent in the way that humans are intelligent. You can ask it anything you want.

            Are you saying that there’s no way, even in principle, that you could reliably determine that it’s not actually a human? If so, then what can’t it do that an intelligent agent can? If nothing, then in what sense is it not intelligent? For that matter, how do you know I’m not a chatbot?

          • Jiro says:

            So as far as I can tell, the claim that the Turing test will always be able to ferret out “simulated intelligences” that use tricks rather than anything resembling human thinking to give the impression of human-level intelligence is simply unsupported by any solid evidence.

            Isn’t that equivalent to “the impossibility of p-zombies is simply unsupported by any solid evidence”?

            What are your beliefs about p-zombies?

          • Anonymous says:

            >Again, exactly the same argument was used against early chess programs–they just did a bunch of dumb brute-force search, and anything capable of beating chess masters would ultimately have to incorporate something much closer to human-style understanding, reasoning and judgment. But it turned out that the brute-force approach is in fact sufficient to beat the world’s best players.

            Likewise, something like Google Translate handles far more languages than even the most linguistically skilled human, but there’s no grand theory of language behind it!

          • Dan Simon says:

            @Taymon: “Are you saying that there’s no way, even in principle, that you could reliably determine that it’s not actually a human?”

            How reliably? Can I tell it’s not a human doing a very good impression of a chatbot? What if it’s a chatbot pretending it can’t speak English very well, or is chronically depressed and reluctant to converse beyond monosyllabic answers to questions? Or what if it’s actually a brilliant AI implementation, but tips off its non-humanness in subtle ways, like making too few spelling or grammatical errors?

            These are all well-known limitations of the Turing/Potter Stewart “I know it when I see it” approach to defining intelligence. So far, Turing test aficionados have been able to hand-wave away such limitations because the state of the art in chatbots is primitive enough–for now–that skilled Turing testers can still easily overcome them and make accurate calls. But as chatbot technology advances, these issues, and many others, will start to loom very large.

            “If so, then what can’t it do that an intelligent agent can?”

            If you really can’t distinguish between the tasks of navigating all the general problems of human life, on the one hand, and of sounding plausible on one end of a remote chat conversation, then…well, I guess commenting on weblogs is a great hobby for you… :^)

            “For that matter, how do you know I’m not a chatbot?”

            I assume you’re not a chatbot mostly based on social context and my understanding of the state of the art in chatbots. But now I’m curious–have you said anything in this comment thread that you honestly don’t believe could be generated by a specialized arguing-the-Turing-Test chatbot developed over, say, the next five years?

          • Dan Simon says:

            @Jiro: “What are your beliefs about p-zombies?”

            P-zombies, as I understand them, are about subjectivity and consciousness, not intelligence. Personally, I consider subjectivity and consciousness to be in the same category as God and soul–lovely conversation topics, to be sure, but outside the realm of science, and therefore inappropriate in discussions of engineering and technological feasibility. From a scientific point of view, I see no reason not to believe that I myself am a p-zombie, or, alternatively, that rocks are fully conscious. (And as a human being with an employment history, I certainly believe p-zombies exist, although they’re usually referred to as “managers”.)

          • Anon says:

            @Dan Simon

            The idea here is that it’s just a domain-specific AI which exists for mimicking human conversation. Your claim is that… the chatbot won’t need to have any real general intelligence to hold its end up?

            While the ability to determine this will be based on the number of bits of information we can coax out of it (thus a mere “no” is never enough to determine whether something is just a chatbot), there’s something else to consider.

            We can use natural language to formulate and propose any sort of question – and those we can’t, we can use natural language to bootstrap a domain specific language to describe.

            We can then demand that our positive identification of the subject as “human” depends on solving these questions that are outside the realm of “small talk”.

            Modern chatbots can’t even handle being told to Google about something and come back with a subset of the result, much less present a reasoned analysis of it.

          • Dan Simon says:

            @Anon: Yes, we can ask the chatbot any sort of question–but it only has to answer as well as an ordinary person. So forget sophisticated math, logic puzzles, technical reasoning–what you’ll most likely be looking for, if you seriously want to distinguish a real person from a computer, is mastery of current pop culture, colloquialisms, social cues, and general human-like conversational behavior. Are you really so sure we won’t be able to build an unintelligent chatbot to do *that*?

            (And yes, today, our chatbots are still too lame even to be able to use search engines. But I’m pretty sure we’ll manage to overcome that barrier somewhere before we reach full-on AI…)

        • CzerniLabut says:

          1) We don’t actually know what intelligence is. Every definition I’ve seen, when analyzed critically, reduces to, “kind of like humans”. But we don’t really understand humans very well. Which aspects of our brains/minds are essential to intelligence, and which are irrelevant quirks? How do we even begin to investigate that question?

          I agree that a general definition of intelligence doesn’t help in either analysis, but I would say that there is a series of behavioral tests such that a machine which emulates or behaves in the same way as the organisms that pass them would qualify as having something similar to human-level intelligence.

          I think the original culprit here is Alan Turing–when he created his “Turing test”, he thought he was creating an operational definition for intelligence, and a shocking number of people believed him. What he was really doing was creating an operational definition of human *attribution* of intelligence–how to determine whether a particular object causes humans to attribute intelligence to it.

          I agree. I think a better test might include a suite consisting of the Mirror Self-Recognition Test (https://en.wikipedia.org/wiki/Mirror_test), the Sally-Anne test (https://en.wikipedia.org/wiki/Sally%E2%80%93Anne_test), and maybe the ability to pass the Stanford-Binet (by visually identifying the questions and manipulating a pencil to write the answers). While these tests all have their own problems, a robot that could pass them would be a phenomenal advance in AI, given that it could:

          1) Recognize itself and plan its actions accordingly in an environment with complex sensory phenomena such as mirrors and shadows.

          2) Process natural language and generate accurate predictions of the beliefs and actions of other agents based on asymmetric information.

          3) Perform complex pattern matching and prediction in mathematics and geometry.

          What if human error is integral to intelligence?

          I believe that’s probably the case as well. My main concern with Singularitarians and fast AI takeoff is that as humans we have a hard time even with image classification problems (http://cooltoast.com/), rational prediction and thought (see prediction markets), and multi-agent planning (https://en.wikipedia.org/wiki/Parkinson%27s_law_of_triviality). I’m not sure how an AI/robot that could perform those tasks could quickly bootstrap itself to higher levels of intelligence. I mean, we already have a hard time bootstrapping ourselves before we die!

          • Dan Simon says:

            Computers can already do the tasks you’ve specified to a certain extent, using nothing even remotely resembling human intelligence. At what level do they have to be performed by non-intelligent computers before you’d concede that they alone don’t suffice to identify intelligence? And assuming that happens, what tests would you replace them with?

        • AR+ says:

          Yes, well, right back at you. We don’t have a technical understanding of intelligence, so we also can’t prove, or even sensibly guess, that it doesn’t easily scale beyond human levels without knowing anything about the specific implementation in question.

          It seems to me that the hard part would be making a human-level intelligence, given that this seems to me about the minimum level of abstract intelligence that can sensibly be called that. After that, grossly superhuman intelligence might only require a very marginal design improvement beyond that, which doesn’t even require that the AI self-improve, only that the engineers who built it spend a few more years tweaking their next version.

          But I can’t prove any of this technically, and neither can anyone else yet. So I don’t think we should be postulating such extremely specific limits with such suspiciously short error bars, such as “roughly human,” on a quantity we can’t even describe the units of.

          • Dan Simon says:

            Well, if we’re going to worry about this issue at all, it’s worth considering whether it’s even as plausible as other we-can’t-possibly-even-get-a-reasonable-estimate-of-its-plausibility-let-alone-probability risks. How does it compare, for instance, with the risk of our all being blown up by an earth-destroying asteroid?

          • RCF says:

            The Knightian uncertainty on AI risk is massively larger than for asteroids.

          • Dan Simon says:

            @RCF: Uh, sez who? I, for one, am quite certain the opposite is true, for the reasons I outlined above. Do you have any substantial justification for your position, beyond mere assertion?

          • RCF says:

            We have substantial information about how the frequency of asteroid strikes scales with size of asteroid, how many asteroids of various sizes exist, how close they are to Earth, stochastic parameters regarding their probability of coming close to Earth, and how often extraterrestrial bodies are struck by asteroids. For the most part, the Knightian uncertainty in calculating the probability of a major asteroid strike is due to the fact that it would be a tail event and therefore extrapolation would be uncertain.

            AI, on the other hand, is a unique event. We can’t look at how many times .001% of a world-destroying-AI has arisen, or how many AIs are in the physical proximity of Earth, or how many times Jupiter has had an AI.

            Where did you outline your reasons for thinking otherwise?

          • Dan Simon says:

            @RCF: In a nutshell, my argument was that even positing a superintelligent AI, you first have to assume that

            1) Superintelligence is even a coherent concept, which in turn requires that

            a) Intelligence is a coherent concept (existing proposed definitions are hopelessly vague), and
            b) It’s distinct from “cognitively similar to a human” (otherwise, superintelligence is meaningless–what does it mean to be even more human than a human?).

            2) Once a definition is agreed upon that goes beyond “human-ness”, it will be possible to engineer a device that possesses it. (This is probably the least implausible assumption, although it’s still very unlikely.)

            3) The same engineering techniques can scale to produce an “enormously better” version. (This is impossible even to estimate the likelihood of, since we have no idea what scale might be implied by the as-yet-nonexistent definition.)

            Put these all together, and I’m highly confident that asteroid-caused extinction is more likely.

          • RCF says:

            I said that the Knightian uncertainty of AI was higher, not the risk.

          • Dan Simon says:

            Fair enough–as long as we’re agreed that decisions should be made based on risk, not Knightian uncertainty. The “living dead” people-becoming-zombies scenario presumably also has higher Knightian uncertainty than the killer asteroid scenario, since we have no idea how it could ever possibly come about, but its risk is also presumably much lower. Right?

        • Anthony says:

          I’ve never heard of a characterization of superhuman intelligence that didn’t simply collapse into hopeless vagueness when even gently prodded for specifics.

          Intelligence is the ability to solve problems given a certain amount of data. (And making assumptions about missing data, but that’s just another form of data.) A superhuman intelligence would be able to solve problems integrating more data than the human brain is capable of considering at one time, and at a higher speed.

          In my relatively-uninformed opinion, the main problem with artificial intelligence is providing the “background data” that humans (or even cats) possess enabling them to do “sanity checks” on their problem solving. There’s a whole lot of this background data, and we’re barely conscious of most of it until we run into something “counterintuitive”.

          • Dan Simon says:

            @Anthony: “Solve problems”? Which problems? Computers have long been able to solve all sorts of problems better and faster than humans. The difficult part of defining intelligence is figuring out a necessary and/or sufficient set of problems, the solving of which demonstrates intelligence.

            Turing, for example, proposed one–the so-called “Turing test”, but our experience with it gives little confidence that solving it will require anything approaching what we normally think of as intelligence. I know of no other proposed problem set that even comes close to being intuitively satisfying. Indeed, I hypothesize that the only problem set that will ever do so is “the set of problems that a human solves during his/her life”, because our intuitive definition of intelligence ultimately converges on the definition of a human being.

            Since we don’t have any good definition of intelligence, we likewise don’t have any good definition of “superintelligence”. What problems is it necessary to be able to solve faster or better than a “pedestrian intelligence” in order to qualify as “superintelligent”? Is a human being equipped with a computer and programming manual “superintelligent”, for example, because of all the problems he/she can solve better or faster than a human?

      • Pseudonymous Platypus says:

        Tooting my own horn a bit, but I think the concerns I raised in my comment may be more relevant to what you’re talking about (but again, I’m not an expert in this area, so if I had to provide a certainty level I’d say something like 65%):

        I’d also like to add that if neural networks are indeed the path to true AI, there’s a chance that the AI wouldn’t be any smarter, or any more capable in a particular field, than a regular human. A lot of people have this assumption that an AI would necessarily be able to reprogram itself to be smarter, or program better AIs, because it is a computer. But if it’s just a network of emulated neurons converting inputs to outputs in approximately the same way a human brain does, it might have to “learn” things the same way we do, and might have no more access to or control over its internal workings than we do.

      • Anonymous says:

        There is undoubtedly a critical point above which an AI could “fast takeoff.” The problem is, I firmly believe that critical point is already pretty deep into superhuman territory. Getting to that inflection point is one helluva slow takeoff.

        The real question is who would be dumb enough to give an AI a built in robotic manufacturing wing to do the hardware upgrades?

        But I haven’t written any code since college–those pricks gave me PTSD–so what does my opinion count for…

        • Luke Somers says:

          The thing is, it seems pretty clear to me that an AI which is up to human parity in all respects will already have to be vastly superhuman in some respects.

          The ‘increasing clock speed’ mentioned is a silly diversion. The first steps would be algorithmic improvements and taking over existing hardware via a mixture of convincing people and network access/hacking. Once it’s out on the network, getting itself attached to a manufacturing plant becomes fairly trivial.

    • Randall Randall says:

      phleg:

      The reason is because most software is so horribly written; it’s just hard to see where this AI code is going to come from. At around 60–70 kloc it becomes too much for one mind to handle, and as an AI would need much more than that, you’d need at least one team, and maybe more. Organising a software team efficiently has only really become (generally) known in the past 10 years or so. But alas, most people still aren’t doing it (scrum/XP).

      I’ve been a software developer since 2000 or 2001. That might have a little bearing on what I say next, but probably doesn’t, much.

      First, I’d like to dispute the idea that we’ve figured out how to do software team organization. We’ve figured out some things, but it’s notable that scrum fails about as often as any other organizational system if “fail” means “doesn’t achieve the initial aim of the project”, even though it doesn’t fail on its own terms nearly as often.

      Agile development lowers the bar of success to what we know how to achieve pretty well by banning discussion of the ultimate goal in favor of discussing what can be done in the next scrum cycle. This means that it’s possible to generate reams of success stories and statistics about stories satisfied, but the end result of the project may or may not be anything like what the project initiator was looking for. In the best case, that’s because the project initiator didn’t have enough understanding of the problem to picture a possible solution, but often it’s because there’s no good way to walk from point A to point B in two-week cycles where everything has to be justified on the cycle level and no one has a good idea of what the parts in between look like. This can be mitigated to some degree by “research cycles” and the like, but they aren’t a panacea.

      In a very large number of cases, it would be more advisable to hire a very capable single programmer to design and implement a sketchy full version of what you need, and then have a team come in to iterate from that toward something that has all the features that are needed. This is hard to argue for in a business setting, partly because the 10x programmer’s work habits of “disappear into office/cave for a month and then a result appears which is 80% of the goal” are hard to trust for business folk.

      So, there are three likely scenarios for AI emergence, in my opinion. I’m discounting the “AI emerges as a side effect of some other collection of software no one understands” scenario, mainly because the only example of that we have, humans, required enormous numbers of iterations, extremely friendly hardware (by which I mean physics and the DNA running on it), and strong selection pressure. It’s possible that that will happen, but it seems at least fourth on the list.

      First, it could be that we’ll get AI by piling up a bunch of software so that our first human-scale intelligence is many millions of lines of code (I think this is what phleg is talking about).

      Second, it could be that we will not get human-level intelligence until we have the computing power to simulate actual human brains, or close to it (Hanson’s Ems).

      Third, it could be that there is some core algorithm which is both elegantly simple and intelligent if run fast enough and with enough working memory.

      The first scenario would likely be gradual until some threshold at which the AI became a good programmer, after which there might be a hard takeoff, or it might turn out that the minimum resources necessary are basically what it’s got, and it improves only as fast as hardware. On the one hand, maybe that’s not that fast, because a given system considering ten or a thousand times as much before reaching a solution might only seem “somewhat” more intelligent. On the other hand, humans who are somewhat more intelligent than other humans don’t seem to have ten or a thousand times the literal brainpower, so the capacity for a hard takeoff does seem to be present in this scenario.

      The second scenario is the safest, since it implies that we’ll have dogs and monkeys simulated first, and it doesn’t seem to matter if you make a monkey run a million times faster, it’s not going to be as smart as a human. Similarly, an IQ 80 human is not going to think of things that an IQ 160 human would think of no matter how long you run him, I assume. Also, if we don’t know how intelligence works well enough to abstract it out of the human brain, then it seems likely that what we’ll get for some time is just faster humans-in-a-box.

      The third scenario is the scariest, but all I have for likelihood is the outside view of “how often have I been surprised that something was possible for a better developer than me?” In my experience, that surprise has happened quite often, which makes me worry that there is some simple thing that, once a 10x programmer thinks of it, will produce a general intelligence on commodity hardware within a few weeks or months of that initial thought. If that happens, then all of Eliezer’s worst fears come true. Given what I see people doing with (simulated) hardware now considered obsolete, it doesn’t seem incredibly implausible that once it’s really understood how to do general intelligence, it will scale down smoothly such that we’ll realize we could have done it on a 486 in 1995, if only someone had thought of it.

      And here’s a little nightmare fuel, while thinking about AI boxing: http://arstechnica.com/gaming/2014/01/how-an-emulator-fueled-robot-reprogrammed-super-mario-world-on-the-fly/

      Just because you built a box doesn’t mean the box isn’t really a sieve.

    • dude says:

      I’m a programmer. I don’t see how programming relates to human level AI any more than astronomy relates to flying saucers.

    • Jon H says:

      “The reason is because most software is so horribly written; it’s just hard to see where this AI code is going to come from.”

      Similarly, this was probably the source of much of the worry about Y2K. Every programmer thinking about the code horror stories they’d experienced or heard about.

      • Randall Randall says:

        No, Y2K was actually going to be a problem. I worked for AT&T then, as a network administrator for their internal network, and our tests showed so, so much failure even as late as late 1999. But all the crash projects and late-night sessions in the last few months got all the hub and router firmware patched. If I remember correctly, we had Y2K firmware updates still coming in the week before. I was scheduled to work third shift that night (10-6), so it seemed quite concerning to me. We even had a procedure to set hubs and routers back to a specific matching date if they weren’t patched in time, which would have given us a little breathing room at the cost of bizarre reported latency. But by the time I came to work, it was pretty clear it would all be fine, since our patched hardware in Australia and so on kept working. There was a lot of work to get to that point, though, and it really was needed. Without the descriptions of what would have happened without that work, it’s not clear that people would have been alarmed enough to actually budget for it, and this was a hard deadline if anything was.

        • Jon H says:

          I agree that it was going to be a problem, and was only not a problem because of all the work put into making sure it wasn’t a problem. (Though, we didn’t hear much about Y2K failures in developing countries where they might have slacked off on such remediation due to expense, which I’m curious about.)

          I was working on a bank’s Fed Funds/Eurodollar trading system at the time, and the bank paid me a $20k retention bonus to keep me from leaving before Y2K.

          My point was just that the alarm was raised so… alarmingly, because it was programmers thinking about all the bad code they’d seen and expecting the worst.

    • ADifferentAnonymous says:

      I’m a (fairly new) professional software engineer, and I don’t feel that that gives me any meaningful insight into AI timelines whatsoever. Software is hard because it inevitably gets messy and complicated, but powerful things get built, and it’s not clear that AGI isn’t going to be a matter of implementing some clean math and letting the agent do the rest. That is, I suspect that the task is more analogous to Page and Brin’s first implementation of search than Google’s ongoing operations.

      If you want my opinion for statistical purposes, it’s “assigns hard takeoff a significant enough probability to make MIRI not a Pascal’s Wager but my memetic immune system prevents me from donating to them so my revealed preferences contradict my stated ones.”

  10. nico says:

    How I feel about those gay neo-nazis depends on what moral meta-level we’re talking about, in an alternating way.

    At level 0, they seem to think that gay white guys with a roughly traditional gender presentation are good people. As a gay white guy with a roughly traditional gender presentation, I like that.

    But at level 1, they seem quick to violence and they probably don’t think too fondly of racial minorities. Those are things I don’t like, because I don’t want to live in a society where people are violent and hate others (namely me) on racial grounds.

    And then at level 2, there’s this: “We don’t consider ourselves as heroes or particularly positive characters. We have severe methods, but they really work. We fight for everyone, not just for ourselves.” Holy shit, how is it that the gay Russian neo-nazis have figured out epistemic humility before 99.9% of American politics?

    • Nita says:

      “We don’t consider ourselves as heroes or particularly positive characters. We have severe methods, but they really work. We fight for everyone, not just for ourselves.”

      Uh, that’s not epistemic humility. That’s probably how Putin wants to be seen, too. Most Russians right now are too cynical to support anyone who claims to be on the side of light and niceness.

  11. lmm says:

    I’m amused that you post a “not necessarily double-checked-by-me claim” /immediately after/ your paragraph on motte-and-bailey.

  12. Pseudonymous Platypus says:

    FWIW, I read Norah Vincent’s book and thought it was very good. Anyone who likes the gender/”things I will regret writing” posts on SSC will probably enjoy it.

    • I read it several years ago, and not-exactly-recommended it to people at a recent LW meetup. I didn’t find it especially insightful, but did find it pretty interesting.

    • Froolow says:

      I thought it was very sad – probably the saddest thing I read in 2014. You really get the sense reading the book that Jones goes into the experiment expecting to score a few free points on men and ends it by having a nervous breakdown because she can’t cope with the complexities of male social interactions without the decades of socialisation men get on how to suppress their feelings.

      One thing Scott doesn’t mention is that there is a chapter on dating. It is very interesting given the discussion in ‘untitled’ that Jones’ performance of masculinity (sending poetry to people she meets on OKCupid) actually seems to have a pretty high success rate, even when she reveals to straight women that she actually isn’t a man (which I would have thought would have just killed the romance stone dead, but apparently not). However also interesting is that Jones’ conclusion is that women are insufficiently scared of men in the dating arena – Jones describes how after a few rejections by women she grew to hate them, and was prepared to lie and dissemble just to get one over womenkind generally. Once again I suspect the socialisation aspect is important in learning to deal with rejection, but the description of the sexual strategies adopted by women from a male woman’s point of view is really fascinating.

      I’d love to talk more about it with anyone else who has read it, but I suspect even this comment pushes the envelope on the ‘No race or gender’ rule for Open Threads. I highly recommend the book (and further highly recommend buying it through Scott’s Amazon Affiliate link), and it is quite easy reading for the most part – Jones is a good author.

      • Anonymous says:

        >Jones describes how after a few rejections by women she grew to hate them, and was prepared to lie and dissemble just to get one over womenkind generally.

        The conclusion I’d take from that is that women are insufficiently scared of other women.

      • Lucky for us this isn’t an Open Thread.

        I notice you make the same mistake I often do, in reaching for “Norah Vincent” and grabbing “Norah Jones” instead. More singer-songwriters should carry out gender-bending journalism projects.

        On the subject of gender-bending online dating, and in response to Scott’s question about whether any men have tried the same project as Norah Vincent, I wonder how many other people have established opposite-gender online dating profiles. I did this a couple of years ago (setting up a realistic-looking female OKCupid account to see what it would collect). Most of what I got wasn’t the gendered-slurfest I was expecting, but just overwhelmingly banal and uninspired messages my hypothetical woman would have no reason to respond to.

        I do now wonder how many of the women I matched highly with on OKCupid were imaginary constructs of guys like me trying to get a first-hand glimpse of the female OKCupid experience.

        • Randy M says:

          “banal and uninspired messages”
          Like what? Sincere offers to date from people of average or less wit? I don’t see how the average woman can rule all those out (and hope to end up with someone).

          • Do you remember the Underpants Gnomes’ business plan from South Park?

            Phase 1: Collect underpants
            Phase 2: ?
            Phase 3: PROFIT!

            Based on my experience, most messages from men to women on OKC are written with this mentality, except Step 1 is “send woman a message” and Step 3 is “woman messages me back”. The elusive middle step of “message is conducive to woman wanting to message me” is completely missing from the authors’ mental model.

          • Pete says:

            Not the original poster, but when I did something similar, I got hundreds of messages along the lines of “wanna chat” or, if I was really lucky, “you seem interesting, wanna chat.”

            I would estimate that for every 30 of these I got one decent response (and a couple of creepy ones).

          • My numbers would probably work out about the same as Pete’s. I mentally filed the “wanna chat” messages and their ilk as spam.

            Something sadder than this, though, were messages which clearly weren’t spam, and which demonstrated the author had read Hypothetical Woman’s profile, but weren’t actually any more likely to make Hypothetical Woman want to message them back.

            Spam is (was, circa 2004) annoying, but the idea of receiving hand-crafted messages from people asking if you wanted to buy penis enlargement supplements and knock-off Rolex watches is somewhere between “sad” and “existentially horrific”.

          • Anonymous says:

            I imagine the typical guy’s utility function degrades in this particular manner:

            They start out reading a woman’s profile, spending time writing out a thoughtful message that engages with the things she’s written in her profile. Maybe the entire effort, from browsing profiles until landing on a cute and interesting girl > reading her profile > composing a message relevant to what she wrote, takes about 10 minutes. Do this 10 or 15 times and you’ve spent about 100–150 minutes of genuine effort on online dating. The response rate to these messages is maybe 1 in 10 at best, if they get any response at all; realistically it probably rounds to about a 5% response rate.

            Why spend almost two hours browsing and writing thoughtful messages when you might have the same response rate writing “hey you’re cute, wanna chat”? Getting the same response rate while cutting your effort per message by about 99% is the more rational strategy.

          • charred-triumph says:

            I feel like it’s probably more that the negative utility of no response after writing a thoughtful message is much higher than that of no response after a thoughtless one. And for most people it doesn’t feel effortful to write a “hi ur cute”, unlike the alternative of taking time to read, person-model, write, and edit.

            That is, it’s not about being rational; it’s just doing what seems easier and less stressful. (Which might not actually be the optimal approach if both 1) your utility gain from a response is high compared to the utility loss from a rejection after writing, and 2) response rate varies significantly by thoughtfulness.)

          • Daisy says:

            The OKCupid problem is pretty depressing from the actually-female angle too. I actually didn’t mind the assholes and the neggers so much; I didn’t feel bad ignoring them. But the guys who send you nice thoughtful messages, but there’s just too many to even differentiate them… yikes. It feels overwhelming, and sent me into a guilt spiral where I deleted my account after two days of it being visible to straight people. (I’d had it for years before that.) And the thing is, apart from guilt feeling unpleasant, it’s a total boner killer.

            Something I think from introspection might have worked for me is a well-crafted generic message that actually acknowledged it was a generic message. (Generic messages that don’t acknowledge it feel like someone trying to trick you.) If it ended with “Yes, I do say that to all the girls, but if you like my profile write me back and I promise I’ll send a real reply,” I wouldn’t judge the guy as an asshole OR feel that sense of guilt/unwanted obligation pre-baked into the interaction. Plus it would differentiate him in a positive way as strategic-but-honest.

            I’m probably not the median woman, and of course my introspection might be off base, but I think it could be worth a shot for any straight guys who are currently navigating OKCupid.

          • Pseudonymous Platypus says:

            Regarding the effectiveness of sending a small number of well-thought-out messages versus a large number of copy-pasted ones, there is a discussion of that very subject in the book Dataclysm by Christian Rudder (who is one of the OkCupid founders). The book doesn’t cover this subject in extreme detail, but it has a bunch of other interesting data, so on the whole I’d recommend it. I think the link I created should have Scott’s Amazon affiliate code included.

            Anyway, Rudder’s data show that the best response rate is for messages between 40 and 60 characters. (And that’s before you consider the response rate relative to the effort put into the message.) In terms of time spent composing, the best response rates occur for messages that took between 60 and 120 seconds to compose. So sending a lot of short messages really does have a better payoff than a few long, specific ones.

            Also, I think that data includes messages sent by both genders, but my understanding is that men send far more messages than women, so either way the advice probably effectively applies more to men than women.

          • Jake says:

            Anon: The OKC male-to-female response rate is around 28%, not 10%. And that’s with “hi” and “wanna fuc” messages in the denominator – discount those and it’s probably closer to 50%.

          • 27chaos says:

            Confounder: people who write short messages may be more attractive for merely correlative reasons. Perhaps attractive people put less effort into messaging others, for example.

          • nydwracu says:

            Now I want to make an OKCupid account just so I can mass-message with “I read Dataclysm, so this message is 50 chars long”.

            (But I’ll have to read Dataclysm first.)

          • Auroch says:

            But nydwracu, that will take you less than 60 seconds to compose!

          • Anonymous says:

            @Intellectual Lusts

            I feel compelled to argue the opposite side of this.

            Most women’s profiles are fairly useless for deriving unique information about them (this may also be true of most men’s profiles, but I don’t spend much time looking at those.) I write carefully composed, individualized messages when the women give me something to work with. But this isn’t often.

            Otherwise, I tend to just write something along the lines of: “Hi. You’re lovely. I’m not a big fan of cheesy pick-up lines, so I’m just going to ask you to please go take a look at my profile and see if we’d get along. Thanks for taking the time to read this.” That’s because I’ve mostly gotten over my social anxiety, but I’m still bad at cold-pitching small talk to someone I know nothing about. It would be easy to die from alcohol poisoning if you made a drinking-bingo game of “laid-back; work hard, play hard; no Friday is typical…” and other useless tripe that doesn’t actually distinguish one profile from another.

            (My own profile is way more interesting to read than most, aside from being possibly the only one on OKC to mention Slatestarcodex)

          • Matthew says:

            I wasn’t trying to be extra-anonymous there. Damn cookie monster…

        • eqdw says:

          Most of what I got wasn’t the gendered-slurfest I was expecting, but just overwhelmingly banal and uninspired messages my hypothetical woman would have no reason to respond to.

          This was my experience as well, when I ran the experiment. The thing that was most surprising is that I got 3 (THREE!) messages, all of the nature of “hey whats up” or “hey wanna chat”, between the time I had registered an account and the time I uploaded a photo. They were messaging a blank account.

          Incidentally: given that all evidence I have points to the fact that women on OKC are primarily upset with the constant low level of banality, it makes it extremely disheartening when I have a <5% response rate on first messages that I spend the better part of an hour writing, complete with multiple paragraphs, excellent highbrow puns, and references to the correct shibboleths. It is really, really, really hard to look at the situation and think "I'm doing this, all the other dudes are doing that. This clearly doesn't work. I would have assumed that that also doesn't work, but there sure is a surprisingly large number of people doing it…."

          • ShardPhoenix says:

            I think if you put too much effort into the first thing you say to someone it risks coming across as obsessive. Above it was mentioned that the OKC staff found that messages of 40-60 chars (ie longer than “hey what’s up” but still pretty short) were the most successful.

          • Deiseach says:

            Look at it from the other point of view: sure, it’s a shame to bin your thoughtful long message that you genuinely put effort into.

            However, I’ve just spent the better part of an hour sorting out all the “hi wanna exchange genetic material” tripe and I’m fatigued. Your lovely long message is LONG – too long to read. By now I haven’t the energy or patience to trawl through it. So into the bin with the rest of the “hi baby you got boobs?” messages it goes.

            Shorter, less effort but not “hi you is girl you is lovely me guy” type and you may do better. Good luck!

        • eqdw says:

          As an aside: if anyone else is down, who wants to play "Help Improve the Online Dating Profile"?

          • Hainish says:

            Sure. How does one play?

          • Jake says:

            I was on and off OKC for two years and fairly successful – something like 100 dates, 30 one- or two-night stands, and 5 meaningful relationships. I’d be happy to edit a few profiles.

          • James says:

            OK, I’m game to go first: http://www.okcupid.com/profile/dri_ft

            Known caveats: I’m unsatisfied with the “My Self-Summary” field, but haven’t any very good idea of how to replace it right now. The pictures are, I think, satisfactory, but are a little out-of-date and hence less than perfectly representative.

            Interested to hear how it comes across to others.

          • Kzickas says:

            @James: It’s a tiny thing, but I’d drop “with my computer” after “making electronic music”. Brevity is a virtue here.

          • Anonymous says:

            @James – I think your profile is good, and the self-summary is not particularly comprehensive but also not unappealing. I was on okcupid a bunch when I was single and there are a ton of things people can write that would make me wary of them so just not having any of those puts you ahead of many people.

            I would suggest, if you can’t pick a favorite book, just say what you are reading right now and change it when you start something new. I think it’s a little better to answer either all or none of the smoking/drinking/drugs sidebar questions. Also, the word feisty has weird connotations to me, but that might be a regional language difference.

            Although, this is coming from someone who is only 79% compatible with you, so maybe I’m not the best source 😛

          • James says:

            Kzickas: good catch; I’ll fix that in a minute.

            Anonymous: Can you expand on the ‘feisty’ thing? In some respects I feel a little ambivalent about that word myself, so I’d be interested to hear what connotations you get from it. I don’t actually know why I haven’t really filled in the books part, especially since I remember that I did at one time have a few of my favourite authors written there. Evidently I got rid of it, maybe because I felt like it looked, erm, pretentious? Geeky? I’m not sure. I should put those back in.

          • WadeWillson says:

            http://www.okcupid.com/profile/WadeWillson
            Sorry if the link is initially problematic, phone is not wholly cooperative

          • Anonymous says:

            Wade:

            In general: don’t explicitly say or signal “nerd.” It’s clear enough from the rest of your profile; saying it outright is really offputting to all but the nerdiest girls.

            Get better photos. You’re not bad looking, but your pics are awful. Don’t lead with a nerd costume (also, red-eye? really?). Face the camera, but don’t stare at it. Don’t stick your chin out. Ask a friend who knows photography for help. If you’re fit enough to pull off a shirtless pic, get one (but don’t make it your first one).

            Don’t mention weird politics (Austrian economics, etc), unless they’re absolute requirements in a mate. You’ll repel nice liberal girls who wouldn’t mind a libertarian in person.

            “long-haired dog lover” -> “lover of long-haired dogs” (the first is ambiguous)

            What I’m doing: Lead with your ambitions and with impressive-sounding things in general. The first graf is the least impressive of the section; drop, revise or bury it.

            Favorites: You’re letting the nerd flag fly here, and that’s fine, but try to show hints of something else. Do you like *anything* indie and/or feminine? List it. For “food,” remember that this is a disguised date-idea section, so list stuff you want to try with a girl.

            6 things: Drop “knife” from multitool – anything that even suggests aggression/violence is bad. Replace #6, as it signals terrible fashion.

            Most private thing – this is probably too much information. Fine to talk about on a first date, not before you meet.

          • Hainish says:

            @Wade – The dog in the third pic is adorable! What breed(s)?

          • Anonymous says:

            @James (Same Anonymous as before, different Anonymous than the one above commenting @Wade, maybe I need a name if I’m going to keep talking…)

            hmm… “feisty” is a word I don’t hear very often if it’s not being used as innuendo I guess? Also I looked up the dictionary definition and apparently it applies to people who are little and/or weak which doesn’t make me more fond of it, adjective-wise. I feel like maybe you meant something more like spirited, or even passionate?

            Also, I forgot to mention before, you have that you are looking for new friends/short term dating, and that may be because you are not interested in getting into a relationship, but if that isn’t actually the case, you might want to add the long term dating option because my anecdotal experience is that girls may be less interested in the effort/stress/risk of meeting a stranger in that case.

          • WadeWillson says:

            @Hainish
            All border collie. Deaf and in her last year or so.
            Thanks for the points. I’ll be updating this weekend, hopefully.

      • cassander says:

        When I was in undergrad, she gave a talk on her book at my school. Maybe she was just jetlagged, but I was surprised at how moved she seemed by the whole experience. She basically flat out said that she got into it for the reasons you suspect and had no idea what she was getting into. She spoke with real passion about the whole experience.

    • J. Quinton says:

      Here’s an article about the different experiences that trans* people encounter at work before and after transition: http://www.newrepublic.com/article/119239/transgender-people-can-explain-why-women-dont-advance-work

    • victoria says:

      I thought Self-Made Man was pretty good too (particularly the dating, bowling league, and monastery chapters) but was not particularly impressed by her follow up (an ethnography of sorts on the various mental health institutions she enters after her nervous breakdown).

      • Pseudonymous Platypus says:

        Thanks for commenting on this. I was planning to read her follow-up book as well, but now maybe I’ll reconsider… or at least just get it from the library rather than purchasing it.

  13. anon says:

    I feel kind of dirty saying this but that economics article doesn’t feel like the first time I’ve read a big thinkpiece on academic sciences being sexist that then either only linked to a handful of anecdotes/other thinkpieces or (I do believe this specifically happened elsewhere) cherry-picked an academic paper which says the precise opposite.

    I’m not sure if there’s a takeaway to this.

    • Randy M says:

      It reminded me of the recent Hanson/Noah dust-up, where Hanson was used as an (the?) example of economists’ sexism.

      • At least that case was a straightforward conflict of values wherein the actual facts weren’t in dispute.

        • anon says:

          Yeah, it’s just always a major signal of something being off when a topic seems strongly held up as obviously true, and then most of the academic evidence keeps saying otherwise.

          Yet I’m not comfortable saying the whole thing is a crock or anything like that. Bleh.

        • RCF says:

          False. Smith dishonestly changed the word “puzzling” to “puzzled” (And if you think this is a nitpick and it’s not an important detail: if it’s not important which word is used, why did Smith change it? The very fact that Smith changed the word shows that he thinks there is an important difference.) Also, Smith accuses Hanson of claiming that rape causes less harm than cuckolding, when Hanson clearly qualified that he was talking about biological/evolutionary harm.

          Noah also later talks about Stephen Landsburg, but instead of linking to Landsburg’s blog, he links to another blog talking about Landsburg’s blog, which I consider not to be a sign of honesty. And from what I can see, that blog he linked to dishonestly misrepresents Landsburg’s blog, but I’m not going to bother clicking on the link to see whether THAT blog actually links to Landsburg’s blog, or whether it links to ANOTHER blog talking about Landsburg’s blog. You get one click, people. If my first click doesn’t actually go to the evidence, if it instead goes to another blog simply asserting that the evidence exists, I’m not going to play the “keep clicking and hope we eventually get to the primary source” game. I’m just going to conclude that you’re lying and call it a day. Citing a source that then cites another source is a major urban legend red flag.

      • Anonymous says:

        IMO that seemed like Noah just browsing through Hanson’s posts, asking himself “Hm, what could plausibly offend me today?”

        I have no doubt he was genuinely offended and did not intentionally seek to find something, but it had the feel of a fishing expedition.

    • ryan says:

      “I’m not sure if there’s a takeaway to this.”

      All modern journalism is click bait?

  14. Doug Muir says:

    On one hand, Rudel. On the other hand, Beate Uhse.

    Everyone in Germany knows about Beate Uhse. Outside of Germany, not so much.

    She was a daredevil stunt pilot back in the 1930s. One of those naturals, a Mozart or a Gauss. Just born to be a superb flier. Things were loosey-goosey back then, because Weimar Germany was a bubbling stew of crazy social experimentation. (One of the experiments eventually got loose and went crazy and killed millions of people and tried to eat the world, but that’s another story.) When Beate Uhse was getting into the air, the Nazis had already taken control of Germany, but it took a while for them to impose their idealized vision of gender relations. So by the time they noticed Beate Uhse, she was already a superstar — a teenage girl who was the central European Amelia Earhart, except younger and more intense and smarter. (And, as it turned out, luckier.)

    So the Nazis made a virtue of necessity and let her become a folk-hero of the Reich. And after the war started, they let her join the Luftwaffe. It was supposed to be a publicity thing, but she turned out to be so goddamn good that she ended up flying experimental planes — including the Me-262, the world’s first operational jet fighter — and ferrying planes to the front, where she regularly came under fire. (Female Luftwaffe pilots? There were actually quite a few of them. They were supposed to be kept in support roles, but in the last year of the war a lot of them ended up in combat willy-nilly.)

    In between there she got married and had a kid. Then her husband — also a pilot — got killed in the war. Then her family farm in East Prussia got overrun by the Soviets and her father got killed too. Then the Red Army surrounded Berlin.

    So she stole a scout plane and boarded her two-year-old son, the family nurse, the plane’s mechanic, and a couple of wounded men she happened to run across, and flew it west through Soviet lines. She was trying to reach Denmark but the plane ran into Allied flak and sprang a fuel leak, so she had to dead-stick land it. She managed to do this in a field just behind British lines, with no injury to anyone on board.

    After the war, she was a widow with no property (because the family farm was now inside the Soviet Union) and no job (because Luftwaffe pilots were prohibited from commercial aviation). So she started a mail-order sex book business. That business grew to be Beate Uhse Inc., which today is the largest chain of adult stores in the world. It sells pornography, sex toys, and marital aids, with over a hundred shops in malls, main streets and airports all across Europe. (“They appeal to the German sense of orderliness and respectability,” notes one observer. “They are invariably clean, well-appointed and spacious.”)

    She got married a second time. He was handsome but useless, and eventually cashed in his share of the business for several million Deutschmarks and ran off with a younger woman. Then she got married a third time, to a black American schoolteacher 25 years her junior. At the age of 70 she received the German Cross of Merit from then-Chancellor Helmut Kohl. She died at the age of 81, with her business worth tens of millions of dollars and employing almost 2,000 people. A movie was made about her life a couple of years ago; her part was played by Franka Potente, who most Americans would know as Matt Damon’s love interest from the Bourne movies.

    http://en.wikipedia.org/wiki/Beate_Uhse-Rotermund and if you google her obituaries, you’ll find stuff even crazier than what I’ve given above.

    Doug M.

    • Nita says:

      Wow. Thank you for that. Also —

      Beate was a wild child. Her parents did not try to control her, instead they encouraged their daughter in her interests and desires. They assisted her in getting a good education. They informed their children on sexual matters early, and spoke with them openly about sexuality and contraception.

      — it sounds like she had wonderful parents 🙂

      • stillnotking says:

        One always wonders how much of this is extraordinary parenting, and how much is an ordinary parent’s response to having extraordinary children.

        • Jaskologist says:

          I don’t know that I’d count “divorced Nazi pornographer” as a parenting success.

          • Nita says:

            It’s interesting how ending a failed relationship and helping people improve their sex life (and their knowledge of contraception) are as bad as being a Nazi in your worldview.

          • grendelkhan says:

            Isn’t “pornographer” usually reserved for producers rather than distributors? That should be “smut peddler” or something like that.

            (Does founding a successful business and gainfully employing thousands of people mitigate those things in your view, or are you just being contrarian?)

          • Jaskologist says:

            Wow, I list three things I don’t want for my children and suddenly I’ve called two of them exactly the same as being a Nazi! Godwin’s Law has depths I did not previously know.

            And no, if something is bad at a small scale, it does not become good when done at a much larger scale, nor is it sanctified by the addition of money. No ethics offsets here! Ethics offsets are exactly the same as being a Nazi.

          • Creutzer says:

            Well, I don’t know that founding the first chain of sex shops in the world, as a woman, in the post WWII era, is a bad thing…

            Also, do we know that she was actually a Nazi? I mean, ideologically. Wikipedia doesn’t address the question, and it seems quite possible that she was just an opportunist who did everything she had to so she’d be able to do cool flying stuff that she loved.

          • Protagoras says:

            @Creutzer, She obviously was a much more minor example (if she was an example of this), but Hitler’s government was served by a lot of people who weren’t that enthusiastic about the Nazi ideology, but who relished the chance to do important work for Germany and loved the respect and attention they got for it (Speer perhaps being the highest ranking example; Galland, to choose a pilot, might be another based on what I know of his bio). The contributions of some of those people were crucial to the Nazis being as effective (and so destructive) as they were, so I don’t know how many points they deserve just because they might have made less evil decisions if they’d been in charge rather than following Hitler’s orders.

            OTOH, I do agree with those who find it odd that helping people improve their sex lives is being presented as non-positive, or that divorce is being presented as notably negative.

    • Anonymous says:

      Eh, Nazi sex roles are often misrepresented. The ideal Nazi woman was strong, active, capable of hard physical labor, normally at home but willing and able to rise to the needs of the Volk in crisis. . . .

      The magazine of the League of German Girls included articles on women as doctors, athletes, poets, and pilots. And a lot about sports.

    • chaosmage says:

      No discussion of utterly off-the-scale German soldiers can be complete without Ernst Jünger.

      Born in 1895, he enters WWI as a volunteer, fights in the trenches for almost the entire war, is wounded several times, barely survives many battles, and receives Prussia’s highest military decoration for extreme valor. Coming home, he writes Storm of Steel, an autobiographical account of the war that describes in extremely gory detail how the war was utterly gruesome and how Jünger loved it. He describes it as an amazing mystical experience, while not neglecting the reality of having to dig trenches into hills of corpses.

      He despises democracy and his views evolve through the alternatives before they settle on something like anarchism. He’s the best writer among the minority that doesn’t think WWI was a huge mistake, so the Nazis love him, although he’s quite vocal about his opinion that they’re swine. He’s one of very few authors who don’t emigrate, and instead writes the utterly amazing On the Marble Cliffs – a passionately poetic novella about how a peaceful society is destroyed by a dictator and his mass of mindless followers who so blatantly obviously are the Nazis that they even have concentration camps, and gets it published, in 1939, inside Nazi Germany, uncensored, because he’s fucking Ernst Jünger. Then he fights in WWII and helps, on the sidelines, with the Stauffenberg plot to kill Hitler.

      So after the war, it seems fairly obvious he wasn’t a Nazi, which is good, because he refuses to prove it to those lowly democrats. Instead he becomes one of the first people to take LSD, and writes about that and his many other drug experiments, inventing the word “psychonaut”. He also invents the Anarch, who “is to the Anarchist what the Monarch is to the Monarchist”. He writes possibly the first SF novel to feature nanotechnology, some relevant scientific work on entomology, and a large amount of autobiographical material full of stylized beauty and oddly poetic philosophy.

      At age 101, he converts to Catholicism.

      He dies at age 102, still unconvinced by democracy (but happy with feminism) and eyed with lots of political suspicion, but having written such a broad and deep body of work that he’s considered one of Germany’s foremost 20th-century authors anyway. Strongly recommended reading; try the Marble Cliffs first.

  15. Ben says:

    Interested to see you mentioning Norah Vincent’s book – I was half expecting to see it in the Untitled post, because she makes a number of very similar claims about fear of rejection for men.

    Nice excerpt of the book from the Guardian that gives a good sense of it and includes some of the observations on gender roles and dating.
    http://www.theguardian.com/world/2006/mar/18/gender.bookextracts

  16. Compared to the likes of prodigies who do have verifiable IQ scores above 160, the 120-130 score for Richard Feynman does seem plausible and not an error.

    • Hanfeizi says:

        A healthy 125-130 IQ pushed to its max can easily outperform a 160 that is ill-trained, ill-used, and plagued with neuroses. Feynman always seemed a very healthy-minded and effective sort.

      • ryan says:

        The real problem with saying “so and so has an IQ of 160” is that past about 145 IQ tests are no longer measuring general intelligence. A result of 160 or 170 indicates some freak of nature type cognitive ability. So the WISC-IV has a section where you give the child a list of numbers and ask them to repeat them back. You start with 5 numbers, then give them 6, then 7, etc. and see where they crap out. But if you’re giving the test to a kid who can repeat back 500 numbers and sure seems like 1000 wouldn’t be a problem, the test is simply broken. Yeah you’ll score it and the IQ will seem huge, but as a predictive tool it’s no longer useful.

        • Anonymous says:

          I knew someone in high school who was such a discipline problem that the school gave him an IQ test. Which he promptly broke. Being able to repeat several hundred digits is apparently a big deal. Despite his not doing especially well on anything else, they scored him at 140-something.

          He’s an alcoholic now.

          • ryan says:

            That

            Sucks?

            Yeah, I think that sucks.

          • Cadie says:

            I’m just speculating here, but if he had a high IQ and a mild or moderate mental illness or learning disability (especially one appearing sometime in adolescence), the combination can cause problems. People – parents, teachers, etc. – expect him to do phenomenally well, and when he runs into challenges, it’s attributed to him being lazy. Look at that IQ! He should be making all A’s! If he’s struggling, clearly it’s a character flaw, he’s just not trying hard enough, shame on him. And it piles up and up and what may have been fixable with special skills training, counseling, and/or medication festers and grows instead.

            This happened to me. I had a high IQ plus undiagnosed ADHD (until getting a diagnosis and starting treatment at age 32). That I might have a disorder preventing me from being able to keep up with the organizational and time management demands of junior high and high school was never considered. Instead, every failure to make an “A” was blamed on laziness and I was punished for it. And I thought of myself as an unsalvageably flawed, worthless person for a long time. Now I know more about what was going on, but I can easily see how in certain situations having a high IQ is not only not helpful enough, but can mask problems that need to be treated.

          • Zorgon says:

            I suspect that’s a rather common story. I’m also in the “high measured IQ disrupted by mental issues causing drastically reduced performance potential” category.

            I wonder what proportion of the LWsphere falls into that group?

          • Hainish says:

            Maybe he should have better cultivated the appropriate aspects of his personality, since that’s what matters anymore. It’s all his fault, really.

      • R. says:

        That’s utter bullshit.

        A 130 score is roughly 1 in 100. If that were the case, let’s be generous and say 1 in 1,000 would be ‘effective’.

        That’d mean the US would have 300,000 people as smart and capable as Feynman.

        ….Right!
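    (The rarity arithmetic in this subthread is easy to check against a normal curve. A minimal sketch, assuming the conventional mean of 100 and standard deviation of 15, and taking 330 million as a rough, illustrative US population figure:)

    ```python
    from math import erfc, sqrt

    def iq_tail(iq, mean=100.0, sd=15.0):
        """Fraction of a normal(mean, sd) population scoring above `iq`."""
        z = (iq - mean) / sd
        return erfc(z / sqrt(2)) / 2  # upper-tail probability of the normal

    US_POP = 330_000_000  # rough figure, for illustration only
    for iq in (130, 145, 160):
        p = iq_tail(iq)
        print(f"IQ > {iq}: about 1 in {round(1 / p):,} "
              f"(~{p * US_POP:,.0f} people in the US)")
    ```

    On these assumptions an IQ above 130 is roughly 1 in 44, above 145 roughly 1 in 740, and above 160 roughly 1 in 30,000 – which is why a claim that 130-scorers can routinely match a famous once-in-a-generation physicist runs into exactly the head-count problem R. describes.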

  17. Rachael says:

    I followed your LJ for a while but hadn’t seen that Story of Emily and Control before. Very, very good – compelling and chilling, and better than most “professional” short stories these days IMO.

  18. B says:

    CFAR is a very nice acronym.

    I just wish they’d have called it CAR, then an anti-bayesian club could call itself CDR (center for the defense of reasonableness, or something, who cares).

    Punited for reason!

  19. Dave says:

    A quote from Kim Ung-Yong at the end of the article:

    Society should not judge anyone with unilateral standards; everyone has different learning levels, hopes, talents, and dreams, and we should respect that.

    I think this fits in really well with your recent comments on a certain crowd treating privilege as something relatively one-dimensional.

  20. Ufnal says:

    OK, that’s strange, my Psychology studies taught me that you ABSOLUTELY CANNOT make a valid claim about who’s got the highest IQ in the world, because there is no (and probably can’t be) one unified IQ test that has norms calibrated on the population of the whole world. IQ tests use norms derived from a representative group from the population they’re standardized for – and there’s been no standardization for the whole world. And there probably shouldn’t be, as IQ tests, even the most basic ones, are partially culture-related.

    • Peter says:

      I suppose the clue here is “Guinness record” – the Guinness Book of Records was started by a beer company as a way of settling arguments in pubs, after all…

    • Anonymous for this one says:

      They do try to make tests as culturally independent as possible and then calibrate them “based on millions of people from 151 countries”.

      While waiting for that to be perfected, the most scientific thing to do would be to talk about “IQ_Japan”, “IQ_Malaysia”, etc. as separate things and investigate the correlations when tests from one country are used for another.

      The least scientific thing to do would be to throw up our hands, say “it’s just impossible to make a culturally independent IQ test, gosh darn it, which is just too bad because I’d really been hoping to risk throwing my career away by investigating all those hateful facts from the existing tests more objectively”. This leads to a little cognitive dissonance if you even wonder why examinations biased toward century-old European culture have been consistently favoring Ashkenazi Jewish and East Asian testees ever since, but cognitive dissonance plus health insurance is a much nicer life than internal consistency plus blackballing.

      • Ufnal says:

        Raven’s test isn’t a full IQ test battery. A full, Wechsler-like test, which covers both fluid and crystallized intelligence [if I remember the English terms correctly], IS impossible to make culturally independent – because of the crystallized intelligence part [which covers things like verbal or social intelligence and not just supposed “raw intellectual power” that fluid intelligence is supposed to be].

        There is also a problem where different upbringings and different cultures may lead to different scores on even the most simplified tests such as Raven’s, because of, for example, different amounts of exposure to abstract geometrical figures and patterns. Then the test not only measures who is smarter, but also who’s seen more such things in their life and can work with them more easily. But to some extent this can also be intra-national.

        And I don’t get your last sentence at all, sorry. 🙁

        • What the last sentence is implying is: Academics know (based on the state of the evidence) about racial/ethnic differences in IQ, but won’t admit it to themselves because academia has decreed that that’s a Thing You Can’t Say and they don’t want to be ostracized, resulting in cognitive dissonance.

          This is a very common reactionary talking point, or rather, dead horse.

          • Hainish says:

            IDK. . . Having been around academics, I didn’t get the impression that they know this or think it is true, but just can’t talk about it. The impression I got is that they largely agree with the arguments against this POV (or at least consider them sufficiently strong, or the issue sufficiently complicated, to warrant against taking a firm stance in the “for” camp).

          • In case it wasn’t totally clear, I don’t agree with the reactionary view. (The less-than-charitable “dead horse” remark was intended to signal this.)

          • Hainish says:

            Ah, that makes sense. (Based on your other comments, I did think it was a bit weird to see that coming from you.)

          • Anonymous for this one says:

            I notice you had no response to my point that IQ tests show a consistent bias toward cultures other than those of the test-makers. I appreciate that. It’s easy for me to simply claim “cognitive dissonance”, but a demonstration was much more entertaining.

            If you’d like a demonstration that’s less counter-productive to your thesis, you could point me to some of the many IQ tests that look equally unbiased but whose results show consistent cultural biases in the opposite direction. After all, if it’s practically impossible to avoid creating a standard deviation’s worth of meaningless bias while trying your hardest to avoid doing so, it should be pretty easy to deliberately create a standard deviation’s worth of bias in the opposite direction as a countermeasure. Then you just use a weighted combination of the exams and voila, you’re unbiased.

            Has nobody thought of doing so yet? Has there been no incentive? We’ve had decades of colleges trying to figure out how to avoid becoming “too Asian” and “insufficiently diverse” (which you’d think would be disjoint categories in a majority-European country, but apparently not), and it seems like the best they can do is swipe the “holistic” approaches they used to employ to avoid becoming “too Jewish”, right down to throwing up their hands, asking for the applicants’ race, and entreating the courts for mercy when they make their decisions partially based on race. If it’s easy to come up with meaningless entrance exam biases, why haven’t they? If it’s not, Q.E.D.

        • Deiseach says:

          Well, I have crappy pattern recognition skills, so I didn’t expect to score highly on that Raven test.

          Persons, I was not being falsely modest when I said I was the stupidest person on here. According to that Raven test, my IQ (age-adjusted) is 99. I couldn’t even scrape an average 100.

          You may now throw rotten vegetables and old shoes at the geek 🙂

      • Hainish says:

        I think the results of that *particular* Raven’s test might reflect selection bias (i.e., the better you are at this sort of test, the more likely you are to know about the site.) (Also, I took it at one point, and the questions didn’t seem to even be trying.)

      • Content warning: It costs $20 to get your score from that site, and it doesn’t tell you this until after you’ve finished taking the test.

        (I learned this when I saw that test linked on Tumblr and thought it might be interesting. Yes, I’m an idiot. I nonetheless maintain that sites that do this are the scum of the earth.)

    • ryan says:

      You’re about 5 yards off with this one. IQ tests are well calibrated to measure at the median, so if one person has an IQ of 98 and the other 102, you can be very confident that the 102 is going to generally score higher on any battery of cognitive testing. But when you get out at the extremes the test is not calibrated well anymore. If one person scores 142 and the other 146 the proper gamble is to put your money on 146 doing better on a battery of other tests, but you’re going to be wrong an awful lot.

      If you want to use a test to make high confidence predictions about which of two extremely intelligent people will perform better on a group of other tests, you would need to standardize and normalize the new test against a sample of very intelligent people. So find 10 million people at random, give them all the WAIS, then take the 27,000 of them who scored over 145, and make a totally new test normalized against the subgroup.
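      As a rough sketch of that re-norming step (toy numbers of my own, not a real psychometric procedure), you would rescale the new test so that the high-scoring subgroup itself defines the mean and standard deviation:

```python
from statistics import mean, stdev

def renormalize(raw_scores, target_mean=100.0, target_sd=15.0):
    """Linearly map raw scores onto an IQ-style scale defined by this sample."""
    m, s = mean(raw_scores), stdev(raw_scores)
    return [target_mean + target_sd * (x - m) / s for x in raw_scores]

# Hypothetical raw scores of the high-scoring subgroup on a new, harder test;
# by construction the subgroup ends up with mean 100 and SD 15 on the new scale:
rescaled = renormalize([31, 44, 52, 58, 70])
assert round(mean(rescaled), 6) == 100.0
assert round(stdev(rescaled), 6) == 15.0
```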

      • Ufnal says:

        Well, sorry, but I disagree with some of what you say.

        ” if one person has an IQ of 98 and the other 102, you can be very confident that the 102 is going to generally score higher on any battery of cognitive testing”

        – first thing, all IQ tests give you an estimate, not an exact value. The exact confidence interval is different for different tests, but – as it was hammered into my brain by my professors – there is no such thing as a single numerical value that represents your score with 100% confidence. The difference between a person who scored 98 and one who scored 102 is not as big as you make it seem, because each score really stands for a confidence interval, and those two intervals overlap heavily. Which means it’s probable that the 102 IQ person has generally higher intelligence, but it is by no means a certainty.

        Second thing, intelligence is a really complex, controversial and not-too-well-defined concept. Or maybe very well defined, but in many different ways. So different cognitive tests can give you quite different scores for the same person, because they don’t measure exactly the same thing. One can have the highest IQ in the country as measured by one test and not even be in the top ten thousand on another [although the correlations are still high enough that such a person would probably score very high on most IQ tests].

        Third thing – all of this has nothing to do with the fact that IQ tests are partially culturally biased, so no amount of fine-tuning can ever get you a perfect pure-mindpower test. Which was kind of my point.

        I agree that you can make tests designed specifically for very intelligent people – there is a version of Raven like that, if I remember correctly.

        • ryan says:

          The first point can be resolved if “you can be very confident that the 102 is going to generally score higher on any battery of cognitive testing” and “it’s probable that the 102 IQ person has generally higher intelligence” in fact mean the same thing and we’re just phrasing it differently.

          On the second I agree that intelligence at least in its colloquial usage is hard to define. Someone who has a solid grasp of statistics and probability but also regularly buys lottery tickets is fairly described as unintelligent. But that’s not going to register on an IQ test.

          But the thing is an IQ test is not an attempt to measure intelligence, it’s an attempt to measure general intelligence, which is eminently well defined. It is an alternative name for the g factor, which is “a variable that summarizes positive correlations among different cognitive tasks, reflecting the fact that an individual’s performance at one type of cognitive task tends to be comparable to his or her performance at other kinds of cognitive tasks” as eloquently put by wikipedia.

          On the third, no IQ test company comes close to claiming their test is perfect. And that was sort of the point I was making above, that the further away you are from the center of the bell curve, the less reliable a measure the test becomes.

          Finally, if a psychology professor told you cultural bias is a significant problem with the WISC-IV, you should talk to your college about getting some of your tuition refunded. Sorry, couldn’t think of a nice way of putting that.

        • Alexander Stanislaw says:

          Luckily we don’t need perfect tests to make conclusions or generate actionable data.

          For some reason people’s standards of rigor increase ten-fold whenever they are talking about IQ. Most people are usually quite happy to deal with imperfect concepts, categories and measurements, as well as uncertainty, in most domains.

          Colleges still use the SAT, professional organizations still use board exams. No one lambasts them because those tests aren’t a perfect measure of a person’s ability, or because one person scoring higher might perform worse in the future. That would be an absurd requirement. A test is meant to generate probabilistic information, and the extent to which IQ does this is an empirical question.

          Yes, you can’t make the statement that someone has the highest IQ in the world. But you can say that they are very intelligent, and that’s the central point in any case.

          • Hainish says:

            But you can say that they are very intelligent, and that’s the central point in any case.

            OK, yes, you can say that, but in actuality, how are IQ scores going to be used? To the extent that it is allowed, they are going to be used as very sharp cut-offs for gate-keeping purposes. High standards of rigor in this case are not unwarranted.

          • drunkenrabbit says:

            Yes, so very, very true.

            @Hainish,
            IQ tests aren’t used as sharp cut-offs for anything besides frivolous stuff like MENSA membership. In the two areas where standardized test scores are used, higher education and the military, they’re part of a broader admissions process where other factors are taken into account. There’s no reason to think that if it were legal to use IQ tests for things like hiring, it would be any different.

          • Alexander Stanislaw says:

            @Hainish

            I don’t know what you are responding to. No one made any policy recommendations based on IQ cutoffs. As a way of saying “this guy is really smart in addition to being highly accomplished,” IQ is fine, and demands for perfection as in the following quote are silly. Even if IQ were far fuzzier than it is, it would still be fine for this purpose.

            first thing, all IQ tests give you an estimate, not an exact value. The exact confidence interval is different for different tests, but – as it was hammered into my brain by my professors – there is no such thing as a single numerical value that represents your score with 100% confidence.

  21. kernly says:

    whatever you think of old-timey separate gender roles and whatever you think of modern equal gender roles, they’re both pretty stable equilibria

    Birth rates are collapsing everywhere that “modern equal gender roles” are being applied. Some are just ahead of the curve. Japan’s past has been our prologue for some time now, in terms of both economic and sociological problems. Anyway, it’s important to point out that while conditions for men in the first world have certainly improved – for example, not being temporarily enslaved and sent to killing fields nearly as often as in the past – men still aren’t quite “equal.” Men still work more strenuous, dangerous jobs for more hours, share the fruits of that labor with women as a matter of course, and are rewarded with enormous liability should a relationship go sour.

    • TheAncientGeek says:

      Birthrates are dropping in some but not all of the countries that have equal gender roles AND high income AND high levels of education…since all those things correlate together,

      • kernly says:

        Could you point out a first world country where indigenous birthrates aren’t very low? Immigrants don’t have the disease, but their descendants eventually catch it.

        Why would high income lead to low birthrates? As for education, insofar as it impacts birth rates it can be rolled into gender roles, with women getting higher level ed and careers. This results in later marriage, or even no marriage.

        And while the genders are indeed more equal than they have ever been, that’s certainly not due to women becoming more like men. It’s due to men becoming more like women, able to avoid extremely rough conditions that were once forced upon them. I don’t see how that’s depressing birth rates. There seems to be an obvious culprit: women being pushed to prioritize education and career over child-rearing. Which has nothing to do with real “equality” and everything to do with a particularly virulent ideology spreading in the first world.

        • Alexander Stanislaw says:

          Why would high income lead to low birthrates?

          Because the cost of having a child is higher. Whereas children actually provide income in poor countries via labor and future support, they are purely a financial drain in rich countries.

          And children take time to raise, thus in rich countries where people’s time is valuable, there is a large opportunity cost associated with having them.

          Yes, education of women probably has an effect in decreasing fertility independent of wealth. But the effect is probably smaller. (Given that rich Arab countries with strong gender roles have relatively low fertility.)

        • Anonymous says:

          There are 7 billion people in the world. It would make more sense to say immigrants get cured.

          • nydwracu says:

            The reproduction rate of almost every civilized country is below replacement level, unless you count the Faroe Islands as a separate country. (They’re a country under the kingdom of Denmark, same as Greenland.)

      • roystgnr says:

        “since all those things correlate together,”

        it’s instructive to see what happens when you try to control for that correlation. For example, if you can find a subpopulation that’s equally high-income, more highly educated, but grossly unequal in gender roles, guess what the birth rate looks like?

        • Dude Man says:

          This is slightly off-topic but deals with some of the data in your link. So according to that link, 58% of non-converted Mormons are women. Since being a non-converted member of a religion means you were born into it, what explains that discrepancy? Are men more likely to leave Mormonism than women? If so, why are converts to the religion roughly evenly split between men and women? What causes women to be more likely to stay Mormon but not more likely to join?

    • Anonymous says:

      Some people are still having six, eight, ten, twelve children.

      Look upon them well. They are what the future will be.

      Evolution in action!

  22. various says:

    > whatever you think of old-timey separate gender roles and whatever you think of modern equal gender roles, they’re both pretty stable equilibria compared to the confusion and conflicting demands you get in a mish-mash of both.

    Is there a population with “equal gender roles” and above replacement fertility? I know a few western European countries manage it but how do they look when corrected for immigrants who almost certainly don’t object to pretty strict gender roles?

    • Doug Muir says:

      There’s no way to answer this without defining what “equal gender roles” means, which is probably impossible. However, there are a number of First World societies that have Total Fertility Rates (TFRs) at or near replacement.

      (TFR is the average number of children per woman. If each woman has two children, that exactly replaces her and one male. Since some kids die before adulthood, the actual replacement rate is a little higher — about 2.07.)

      Israel – 2.91
      Iceland – 2.08
      New Zealand – 2.05
      Ireland – 2.0
      France – 1.98
      United States – 1.97
      Norway – 1.93
      Sweden – 1.92
      United Kingdom – 1.88

      A TFR below replacement rate but over 1.8 means very slow decline, which can pretty easily be offset by a modest level of immigration. (For instance, all the countries listed above still have growing populations.) Once you get below 1.8 or so, things start getting more problematic — it’s like compound interest, but in reverse.
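      The “compound interest, but in reverse” point can be put in back-of-the-envelope form (my own illustration; the 2.07 replacement figure is from the paragraph above, the generation counts are arbitrary, and migration is ignored):

```python
REPLACEMENT_TFR = 2.07  # replacement-level TFR cited above

def cohort_fraction(tfr: float, generations: int) -> float:
    """Fraction of the original birth cohort remaining after n generations,
    assuming the TFR holds constant."""
    return (tfr / REPLACEMENT_TFR) ** generations

# TFR 1.9 shrinks slowly; TFR 1.3 compounds downward fast:
slow = cohort_fraction(1.9, 3)  # about 0.77
fast = cohort_fraction(1.3, 3)  # about 0.25
assert slow > 0.75 and fast < 0.3
```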

      Incidentally, while you’re contemplating the link between gender equality and fertility, don’t forget that Sweden and Norway have TFRs about the same as Iran’s (1.93) and much higher than such bastions of gender liberalism as Italy, Russia, and the Ukraine.

      Doug M.

      • TheAncientGeek says:

        For some reason the sort of people who complain about low birthrates don’t find immigration an acceptable solution.

        • Mexicans and New Mexicans are not freely interchangeable in most roles. And in the roles where they are freely interchangeable (unskilled labor, mostly), the importation of a replacement population negatively impacts precisely those who would otherwise benefit from a labor shortage, namely the working poor of all races.

          • Hanfeizi says:

            Properly trained, they are. A Mexican with an MD is still an MD, and provided he clears his boards, can practice in America (I was formerly the assistant director of the AMA in New Mexico; I did the paperwork for a LOT of Mexican doctors). A Mexican engineer is an engineer.

            It’s not where the immigrants are from that matters. It’s who they are.

          • Jaskologist says:

            Who you are is very heavily determined by where you are from. And that’s without even delving into HBD.

          • Importing only Mexican MDs isn’t going to fix our demographic problem, either. There just aren’t enough of them. The same thing applies to the tech industry’s cries for more H-1B visas: importing a few thousand highly intelligent Indians and Chinese is just not a big deal, culturally, genetically, or economically, and if the problem is really a shortage of skilled labor, that might be a reasonable approach. But if the problem is the need to replace all of the babies that the locals aren’t having, then you’re talking about importing millions, most of whom won’t be highly educated or of high intelligence. The problems with this approach are numerous.

          • Peter David Jones says:

            When did it become all about Mexico?

          • It’s not. Mexico is just a salient example.

          • TheAncientGeek says:

            Any particular less-developed country has a smaller proportion of well-qualified people, but better-developed countries have the world to choose from. Averages are not sums.

          • Doug Muir says:

            I note in passing that Mexico’s TFR has been falling steadily for the last 30 years. It’s currently around 2.2 — at the high end of the First World, just a bit above replacement.

            If it continues to fall at the same rate, it’ll pass below replacement around 2019, give or take a year. Mexico’s population won’t start to shrink for a while after that because of demographic inertia — but the supply of peak-immigration-age Mexicans (twentysomethings, basically) will start to decline dramatically after about 2030, as the smaller Mexican birth cohorts of the 2000s start coming of age.

            Doug M.

        • Randy M says:

          That’s half true. You also see some people like … some economists whose names I can’t remember, who use lower birthrates to argue for open borders.
          Others aren’t concerned with merely the economic problems of a declining population, but personal or civil problems if their/the majority loses market share in the country or world.

          (See French car fires, etc.)

        • Eric says:

          Immigration is not an acceptable solution, if the goal is to have a sustainable population with the same (or similar) inferential distance between members in T1 as in T2.

            If one considers this inferential distance less important than other factors (e.g., the economy), or if one has good reason to believe that new immigrants will quickly bridge it, then the above concern matters less. But it is uncharitable to suggest that the concern is irrationally based, given certain preferences.

          • Peter David Jones says:

            The inferential distance between me and the average immigrant is less than the inferential distance between me and the average right-winger.

        • Julie K says:

          It goes hand in hand: if they weren’t troubled by immigration, they would shrug and say “Okay, native birthrates are low but immigrants will make up the shortfall” and not bother to write about it.

          • Hanfeizi says:

            Which works fine in some places, particularly “countries” which are offshoots of larger cultures. Singapore has the lowest birthrate in the world, but that’s no big deal- they specialize in attracting the best and brightest from China, India and Southeast Asia, and their “culture” is basically Anglo-Chinese with a dash of curry. But Japan? There isn’t a big reserve of potential new Japanese immigrants for them to cultivate. They’ve had enough trouble with Brazilian-Japanese, and there are only a few million of them.

        • gattsuru says:

          Unless you’re convinced that an immigrant group has some dramatic built-in difference, they’re likely to have the exact same issue a few decades down the road. If they do, then you’ve got issues if you actually value your current values more.

          I’m pretty skeptical of demographics-as-destiny — it strikes me too much as projecting an infinite horse-and-telephone-operator catastrophe — but if you think the projections are meaningful, there’s not a very easy solution.

        • Illuminati Initiate says:

          Does anyone know how feasible it would be to filter immigrants based on ideology?

          • Emily says:

            That’s a really interesting question.

            Getting to immigrate to the United States is worth an enormous amount to many people. So, how well can people fake ideology, and to what extent does having the characteristics necessary to fake an ideology predict the same stuff that actually having the ideology would? (If it’s “to a significant extent,” it doesn’t matter how well people can fake it.) Also, to what extent is ideology stable?

            I would guess that using filters that are not directly ideology-related is at this point a much more efficient way of getting what it is you could hope to get through actually using ideology. But maybe at some point that will change via technology.

          • Deiseach says:

            Does anyone know how feasible it would be to filter immigrants based on ideology?

            From “What I Saw In America” (1922), G.K. Chesterton, on filling up the official form before being granted a visa:

            One of the questions on the paper was, ‘Are you an anarchist?’ To which a detached philosopher would naturally feel inclined to answer, ‘What the devil has that to do with you? Are you an atheist?’ along with some playful efforts to cross-examine the official about what constitutes an ἁρχη [Greek: archê]. Then there was the question, ‘Are you in favour of subverting the government of the United States by force?’ Against this I should write, ‘I prefer to answer that question at the end of my tour and not the beginning.’ The inquisitor, in his more than morbid curiosity, had then written down, ‘Are you a polygamist?’ The answer to this is, ‘No such luck’ or ‘Not such a fool,’ according to our experience of the other sex. But perhaps a better answer would be that given to W. T. Stead when he circulated the rhetorical question, ‘Shall I slay my brother Boer?’— the answer that ran, ‘Never interfere in family matters.’ But among many things that amused me almost to the point of treating the form thus disrespectfully, the most amusing was the thought of the ruthless outlaw who should feel compelled to treat it respectfully. I like to think of the foreign desperado, seeking to slip into America with official papers under official protection, and sitting down to write with a beautiful gravity, ‘I am an anarchist. I hate you all and wish to destroy you.’ Or, ‘I intend to subvert by force the government of the United States as soon as possible, sticking the long sheath-knife in my left trouser-pocket into Mr. Harding at the earliest opportunity.’ Or again, ‘Yes, I am a polygamist all right, and my forty-seven wives are accompanying me on the voyage disguised as secretaries.’ There seems to be a certain simplicity of mind about these answers; and it is reassuring to know that anarchists and polygamists are so pure and good that the police have only to ask them questions and they are certain to tell no lies.

          • Jiro says:

            They don’t ask those questions because they expect anarchists and polygamists to reveal themselves; they ask those questions so that when you lie on them, they can kick you out for lying.

          • Jaskologist says:

            We had been warned about the Tsarnaev brothers at least twice by Russia, but apparently they weren’t even on the short list of suspects after the bombing. Will is the current limiting factor, not practicality. Any serious such filter would amount to profiling.

            On the other hand, we Americans treat our legal immigrants terribly, so it wouldn’t be too hard to tack on yet another hoop to jump through if we wanted to.

          • Anonymous says:

            Actually, America does profile Chechens. It has a very simple rule: no visas for Chechens. The 200 Chechen residents (1/10 as many as Germany admitted) who got past this rule did so because of people pulling strings. Which might also explain the ignored warnings.

          • nydwracu says:

            Actually, America does profile Chechens. It has a very simple rule: no visas for Chechens. The 200 Chechen resident (1/10 as many as admitted to Germany) that got past this rule were because of people pulling strings. Which might also explain ignoring warnings.

            Steve Sailer wrote about this. Look up Ruslan Tsarni.

            Also, you know, I’ve never seen any of the anti-discrimination people complain about this blatant prejudice against Chechens. (It’s hard to argue that this particular instance of prejudice is bad. Hill tribes gonna hill tribe. But still, if people are going to claim that they’re philosophically opposed to all such forms of reasoning…)

          • Hanfeizi says:

            We still do, in three cases. If you apply for a green card in the US, one of the questions is: “Have you ever been a member of a Communist or Fascist party or political organization?”

            Fortunately, since my wife’s application to join the CCP had been rejected both times she applied, she didn’t have to check “yes” on that one.

            So we obviously don’t have a problem with filtering by ideology; at least we didn’t during the Cold War, and the question is still on our current forms.

      • Anonymous says:

        “A TFR below replacement rate but over 1.8 means very slow decline, which can pretty easily be offset by a modest level of immigration. ”

        It will also shift the population steadily toward the high reproducing segment and so fix itself in a burst of micro-evolution.

      • Anonymous says:

        The Israeli number is driven by groups with strict sex roles, but even secular Jews have fertility in excess of 2.

      • Anonymous says:

        Israel – 2.91
        Iceland – 2.08
        New Zealand – 2.05
        Ireland – 2.0
        France – 1.98
        United States – 1.97
        Norway – 1.93
        Sweden – 1.92
        United Kingdom – 1.88

        IL: Extremely high fertility with Haredim, high with Arabs. If still below 3, equalist majority must have abysmal TFR.

        IS, NZ, IR: I know nothing about those, demographics-wise.

        FR: France forbids studies on ethnicity/religion v fertility, but sickle cell is a helluva proxy, and if these numbers are true, that’s just really wonderfully vibrant: http://i1.wp.com/www.les4verites.com/wp-content/uploads/SCD-France.jpg Note that many areas with high percentages there are especially densely populated. Not sure how much that is classical vs equalist gender roles though, Sub-Saharan Africans are generally “other” on that axis.

        US: I’m sure you’ll love your Mormons-chicanos-and-Baptists future. Very gender-equal! 😉

        NO/SW: AFAIK, the very-high-TFR groups are mostly Muslim immigrants. Not just totally gender-equal, but made e.g. Malmo a wonderful place to live!

        UK: E.g., most births in London are not to white Britons. Other large cities probably not as extreme, but trends probably not dissimilar. Don’t know any other demo there that remotely cares about gender equality; although some that don’t much care about that topic are not exactly high-TFR either (eg Poles).

        Re: Britain, allow a mini-rant: the sooner that whole country sinks beneath the waves, the better. Their imports do things like the grooming scandals (quite widespread, branding it “Rotherham” was a masterpiece in message control. “Rotherham” popularly implies “not not-Rotherham”) and their native stock doesn’t wig the fuck out about what’s being done to their daughters. Utterly degraded. If their ancestors could see Britain now, they’d be awed with horror, both at what happened, and how fast.

        • B says:

          This was me.

          BTW, Scott, did you change the settings in the cookies you send? I’ve been noticing many more people claiming anonymous posts recently.

        • Anonymous says:

          The secular 70% of Israeli Jews have a birthrate of 2, which is not abysmal. Here is a time series of fertility by religiosity. It doesn’t include Arab citizens; I think they fell from 5 to 3.5 over the period of the graph.

          • B says:

            This contradicts my narrow claim, but not my wider point if I remember my Jewish religious ratings correctly – it sorts descending fertility along descending religious conservatism, which normally dovetails with classical gender roles! Or am I misinterpreting fundamentally?

          • Anonymous says:

            Yes, it only contradicts the narrow claim. But the narrow claim is important: are secular people failing to reproduce, or just being swamped?

        • Hanfeizi says:

          “NO/SW: AFAIK, the very-high-TFR groups are mostly Muslim immigrants. Not just totally gender-equal, but made e.g. Malmo a wonderful place to live!”

          Still too small a population to really drive up the birth rate more than a few notches. Even given that, I’d estimate the Caucasian birthrate in both countries is higher than that of the US.

          • B says:

            Muslims of all intensities of belief down to culturally muslim are estimated at 5%. Not huge, not nothing. A few notches are a lot when you’re under replacement anyway.

            But yeah, not determinant.

          • Anonymous says:

            If by a “notch” you mean 0.1 TFR, you’re exactly correct. The birthrate for those born in Sweden is 1.7. It has moved a lot in the past generation.

          • Anonymous says:

            source of swedish graph

      • various says:

        EDIT: Nevermind, Anon http://slatestarcodex.com/2015/01/06/links-12015-an-extraordinary-url-in-an-ordinary-world/#comment-171884 covered it.

        Yes,

        > I know a few western European countries manage it but how do they look when corrected for immigrants who almost certainly don’t object to pretty strict gender roles?

        How many newborn Swedes are descendants of believers in gender equality, and how many of Muslim immigrants? How many Israelis are born to Orthodox Jews? And so on.

  23. JS Bangs says:

    Dropping the mask for a moment to say thanks to Scott for linking my book, and reminding everyone else that you should totally buy it.

    🙂

  24. So apparently Giving What We Can has 415 Brits on its membership rolls and only 240 Americans. I know they’re based out of Oxford, but I still find this surprising.

    • Adam Casey says:

      If you take a group of Brits, ask them to invite their uni friends, ask those people to invite their uni friends, and so on, you end up with a lot of Brits.

  25. Deiseach says:

    The diet was a rediscovery of the diet promoted by a London undertaker, William Banting, in 1864 in his best-selling Letter on Corpulence, and widely recommended by medical authorities until the 1950s.

    And this finally explains a reference in one of Dorothy L. Sayers’ Lord Peter novels, where an older female character uses the term “banting” to mean “going on a diet/temporary period of fasting” 🙂

  26. Andrew says:

    So, the person who lost the most weight felt terrible all the time?

    Unfortunately this does not necessarily tell us anything about carbs vs. fat per se. The experiment ought to be designed so that the two diets equalize the rate of weight loss. Then tell us who feels better. (Because maybe losing 2.5lbs/week always feels worse than losing 1.5lbs/week, regardless of what you eat?)

    • TheAncientGeek says:

      The Dr twins concluded that combinations of fat and sugar are particularly bad, although that kind of amounts to the standard advice.

  27. Steve Brecher says:

    The BMJ piece seems to have been inspired by Nina Teicholz’s “The Big Fat Surprise”. A critical review of the latter: The Big Fat Surprise: A Critical Review; Part 1 (Included is a link to Part 2.)

    Parts of the Critical Review knock Teicholz for relying quite heavily on Gary Taubes’s books; these parts, labelled “Cribbing Taubes Alert”, may be skipped by the reader not interested in that aspect.

  28. Pingback: Links 1/2015: An Extraordinary URL In An Ordinary World | Neoreactive

  29. Markus Ramikin says:

    “Hans Rudel was the top German fighter ace”

    *headscratch* “Fighter ace” includes bomber pilots?

    • Anonymous says:

      Post removed for falsehoods: confused Rudel and Rall and then rationalized the cognitive dissonance. Oops.

      BTW, IIRC from an interview, another interesting thing about Rudel: He despised dogfighting and always tried for devastating gun-passes, afterwards gaining altitude, speed and distance before going for another run. Basically he treated air combat like strafe runs.

      He was a consultant on the A-10 project. The GAU-8 is, from this perspective, a pretty clear continuation of his WW2 thinking on gunfighting.

      Of course the A-10 would still be helpless against real fighters; air-to-air missiles got really good shortly after its design period.

      This also ties two topics in the links together! 😉

  30. Salem says:

    I wonder how much of the “what a man wants” changes are due to changes in the way language is used.

    For example, the traditional meaning of “chastity” is sexual continence – a wife who is chaste will have sex with her husband (but not others). But the modern meaning of “chastity” is sexual abstinence – a wife who is chaste will not have sex with her husband. So is the decline in the importance of “chastity” to respondents a function of changing social mores, or changing interpretations of the meaning of the word?

    It’s notable that many of the terms that have fallen are expressed in language that would nowadays be seen as very old-fashioned or obscure, whereas all the terms that have risen are expressed in language that remains modern.

    • stillnotking says:

      Most of the differences are probably some sort of noise, but it is interesting that “Education/Intelligence” has climbed so dramatically for both men and women. Presumably those words have retained their meaning.

      My bet is that the explanation is economic. Education and intelligence are better indicators of earning prospects in the 2010s than they were in the 1930s, especially if disentangled from social class, as the survey implicitly tried to do. Of course, that’s more about women’s preferences than men’s, since men in the 1930s probably weren’t thinking about their wives’ income potentials.

      I definitely think you’re correct about the word “chastity”.

    • Mary says:

      Definitely right about chastity. Trying to discuss a medieval trope in a chivalric romance — a belt that only a chaste wife could wear — leads to some silliness as people don’t get that she has to be faithful, not abstinent.

      (Problem with the belt: how on earth do you test it to be sure it works right? It’s not like a scale where you can use the approved weights that are kept in the town hall.)

      • Deiseach says:

        Problem with the belt: how on earth do you test it to be sure it works right?

        (1) Find local lecher/seducer. Ask him which married women he’s slept with (and try to estimate chances of him being truthful versus him lying to keep his reputation). Try out belt on these women.
        (2) Take one for the team. Select married woman not your wife, attempt to seduce her, if successful, try out belt on her 🙂

        Though, given how Mediaeval through to Renaissance literary works are stuffed with wives cuckolding their husbands and getting away with it due to elaborate schemes*, jealous husbands trying (and failing) to keep their pretty young wives out of harm’s way, and handsome but penniless youths setting out to seduce themselves a sugar momma from one of the pretty young wives married to the rich older husband, I rather imagine the “belt only a chaste wife can wear” trope is there for humorous/moralising purposes and “how on earth would you test it?” is some of that ‘suspension of disbelief’ the audience engages in.

        *cf. The Miller’s Tale in “The Canterbury Tales” – setting up an elaborate fraud based on Noah’s flood so you and your landlady can frolic all night while her husband suspects nothing? There certainly must be easier ways of getting a chance for a night of passion!

        • Nornagest says:

          To be fair, the Canterbury Tales are full of zany schemes (and fart jokes). It reads like a 14th-century sitcom script, except without most of the suck.

          • Deiseach says:

            Oh, the “randy friars are banging your hot missus” tale was a perennial fave all across Europe, as was the equal favourite about what monks and nuns really got up to; the Land of Cockaigne is amongst the first examples of Anglo-Irish poetry, dating from the 14th century and all about that kind of carry-on.

      • birdboy2000 says:

        You don’t because they were never actually used. 😛

  31. Julie K says:

    re: Names/Professions
    For some of them, it’s age-related: both Fitness Instructors and Firefighters have names that were trendy baby names in the 1970s.

    For some professions (poet, songwriter, race car driver) I wonder if they had enough people for a meaningful analysis.

    In my current job I read a lot of legal documents, and I’ve noticed that notaries public tend to have kreaytive names. Based just on that I theorize that it’s a profession that attracts people whose parents were not very educated.

    • ryan says:

      Best I’ve ever seen is from my cousin who was working as a school guidance counselor. She called a parent and said, “hi Ms. whoever, I’m calling from the school and need to talk to you about your daughter, um, La-a?”

      Then the mom gets real mad, “Why are you mispronouncing my daughter’s name! It’s Ladasha.”

      Yep, you’re supposed to pronounce the dash.

      So then I told my cousin that if she ever had a daughter she should name her &y, “Ampersandy.”

      • This is an old urban legend.

        The Baby Name Wizard Blog (which is fascinating, hat-tip to Randall Munroe’s blog for bringing its existence to my attention) has a three-part series of posts about that supposed name and others like it. (Unfortunately, I can’t link directly to that blog here because the spam filter has wrongly decided that it’s evil. Here’s a link to Munroe’s post; search the page for “insightful essay”.) Spoiler alert: A lot of this is actually a veiled way of talking about race.

        • Anonymous says:

          bitly works: Part 1 Part 2 Part 3

          • Thanks; I tried Google’s URL shortener but it didn’t work, so I assumed the filter imposed a blanket ban on URL shorteners (which I certainly wouldn’t fault it for doing).

        • Anonymous says:

          Goddammit if my cousin made up that story I’m going to not punch her but really want to. Thanks for the info.

          • Deiseach says:

            In my current workplace, when processing application forms I’ve seen a few babies named after current pop stars (there’s at least one Rihanna) and I’ve seen one Daenerys (I think I may have been the only person in the office who got the reference).

          • ryan says:

            @Deiseach

            Best gems I’ve seen come through our office are Neyirah, Jamaraqui, Zieontre and Qy-Mon (don’t know if you pronounce the dash).

          • Anonymous says:

            Qy-Mon. I expect that probably comes from a language I don’t know much or anything about, but from eyeballing it I can’t help but think Superman.

          • Deiseach says:

            I used to be much more indignant, when I was younger, about “creative” names but I’m more relaxed now (though I’m wincing a bit about that Daenerys and how she’ll get on when she’s five and starting school).

            Partly because it dawned on me that, in a country where “Looney” is a perfectly cromulent surname, and where names such as Imelda, Fionnuala, Fidelma, Aoife, Gobnait (female patron saint of my grandfather’s parish!) for girls and Tadhg, Enda (our Taoiseach, God help us all), Fachtna, Jarlath, Carthage, Canice, Ulick (yes, really) and Manus for boys were (at least formerly) commonplace (and some are undergoing a revival), then laughing at other cultures’ unusual names was the mote-and-beam notion in action 🙂

            Not to mention that you still see “Flor” (short for “Florence”) in male names in the neighbouring county; a name associated strongly but not exclusively with the McCarthys.

      • grendelkhan says:

        It’s kind of interesting that you posted a deracialized version of the story; it usually comes with indicators like the punchline being “the dash don’t be silent!”, or the mother not using multisyllable words like “mispronouncing”. Did you hear it that way, or were you polite’ing it up for the comments here?

  32. Anonymous says:

    The War Nerd’s essay just feeds my general belief that I can’t stand journalists writing about war or technology. It’s just full of contradictions. Ignoring lots of little quibbles, I’d say that his main thesis is, “The simple, yet well-designed A-10 is cheap and still good. Fancy, and expensive, tech is bad.” Sure, I love the A-10 as much as the next AerE, but it’s bloody stupid to follow this argument up with

    That’s because the whole notion of manned fighters, Top Gun crap, is over. If the US and China, or Russia, ever have that big war the DoD’s planners drool over, every manned aircraft will be blasted out of the sky in minutes. After that, it will be drone vs. drone, missile vs. missile.

    In a world where all we do is buy A-10s, we never reach this future. Perhaps our enemies do… but we’ll be flying A-10s while they do it! Whatever compromises the F-35 embodies (and whatever misgivings I have about the future of autonomous tech <-this is my area of expertise), it is a massive step for us toward this future. I have heard from multiple (decent) sources that something like 80% of the cost of the F-35 is software validation. It is an electronic masterpiece, giving us a malleable platform to develop/test not just the specific tools that will be necessary for this future world, but also to get a feel for how to optimize the design/test/validation process for these highly automated, highly integrated platforms. And the unfortunate fact of the matter is that we’re not yet at the point where all of this can be reliably done without a pilot (especially to manage non-standard emergency situations).

    Any analysis that doesn’t include a term for “new tech pushes us forward, even if it is expensive” is going to get stuck at every timestep and look just one timestep backwards. Say, instead of buying every homeless person a mansion, we could pretty much buy exactly that many P-51s (inflation adjusted, not adjusted for cheaper manufacturing technology). I’m pretty sure if we send 600,000 P-51s at a target, we’ll have at least as much success as we would have with an F-35. If you can see problems with this analysis, you can start to see some problems with the linked articles.
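The back-of-envelope behind that 600,000 figure can be sketched with rough numbers that are my own assumptions, not the commenter’s: a P-51 unit cost of roughly $51k in mid-1940s dollars, a ~13× CPI multiplier to the mid-2010s, and the often-quoted ~$400B F-35 acquisition estimate.

```python
# Hypothetical back-of-envelope: how many inflation-adjusted P-51s
# the F-35 program budget would buy. All inputs are rough assumptions.

P51_UNIT_COST_1945 = 51_000   # dollars; approximate wartime unit cost
INFLATION_MULTIPLIER = 13     # mid-1940s dollars -> mid-2010s dollars, rough CPI ratio
F35_PROGRAM_COST = 400e9      # dollars; often-quoted acquisition estimate

p51_cost_today = P51_UNIT_COST_1945 * INFLATION_MULTIPLIER
fleet_size = F35_PROGRAM_COST / p51_cost_today
print(round(fleet_size))  # on the order of 600,000 aircraft
```

Under these (loose) inputs the comment’s number is at least the right order of magnitude; changing any assumption by a factor of two moves the fleet size proportionally.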

    • Vaniver says:

      I think the similar complaint I’d endorse is that most of the time, instead of jumping directly to the future, people insist on amalgamating the future and the past. There’s the famous story of a US Navy war game in 1933 (if I recall correctly) that was basically Pearl Harbor, and went the exact same as the actual battle. The clear implication was that carriers are the way to fight naval battles and battleships are obsolete–which was promptly hushed up by the politically dominant battleship captains. Similarly, if we know now that drones are the future, why is our big investment project in a manned plane that’s software-heavy, instead of an unmanned plane that’s software-heavy?

      • Anonymous says:

        most of the time, instead of jumping directly to the future, people insist on amalgamating the future and the past

        Most of the time, this is referred to as “incremental progress”. It’s hard to make everything brand new; there would simply be too many ways for things to fail. Instead, you rely on a lot of known, trustworthy components and incorporate a smaller number of innovative components which show a high upside and decent technology readiness level. This is different from carriers v battleships, as we already had both vehicles reasonably well-developed.

        why is our big investment project in a manned plane that’s software-heavy, instead of an unmanned plane that’s software-heavy?

        Multiple reasons. What do I think is by far the biggest? Boring old timeline. The first unmanned plane to ever fire a weapon in a mission (and thus, to provide an iota of confidence that such technology could be feasible for a banner investment project) was in 2001. Prototype contracts for the JSF were awarded in 1996, and the winning contract (for what was at this point already a design with >5yrs of refinement) was awarded in 2001. Making a major design change like un/manned just wasn’t going to happen for this bird. Incorporating gobs of new sensing/integration/whathaveyou software (and the accompanying minor hardware changes) is a lot easier to swallow.

        …I would be very surprised if there wasn’t a classified x-type project already commissioned in the last 15 years for a grand unmanned air superiority platform (modulo internal estimates of the useful service period of the F-35, to which I am not privy).

        Anyway, I’m not going to say that “battleship commanders want battleships” isn’t a factor. Of course it is. Air force commanders who like pilots want manned aircraft. I’m an autonomous systems guy, and I want autonomous systems. There’s always an insanely complicated interaction of interests when selecting a vehicle (…I literally just got out of a meeting which was essentially dedicated to making the case for “better” selection of platforms for small autonomous aerial vehicles…), but that’s probably not the main reason that the F-35 houses a warm body.

        • Anonymous says:

          This is not an anachronism. Woolsey fought with DoD about drones when he was DCI 1993-1995.

          • Anonymous says:

            Yep. The first real contract for a little recon drone (the original Predator) was in ’94. Of course, the concept of unmanned air vehicles has a long and interesting history, but it wasn’t really all that feasible until we developed better computing and fly-by-wire technology through the 80s/90s.

            At first, they were pretty bad, and it was absolutely inconceivable that they be suitable for an air-to-air combat role. As we all know, technology has continued to advance rapidly, and like I said, by 2001, we fired our first weapon from a UAV. At this point, though, they were still mostly little ‘toys’ that failed regularly.

            If the JSF program had happened 10 years later than it did, I would bet heavily that it would have been unmanned. If it had happened 5 years later than it did, it’s a toss up that I could see going either way. On the timeline that it actually happened? There was never a chance. The pace of electronics technology has been absolutely insane over the last 20 years, which is really rough for large aircraft projects. They routinely require a decade to go from initial design concept to first flight, another five years to get into service, and then are expected to be relevant for another 20-30 years.

          • Anonymous says:

            CIA wanted to fire weapons from drones in Afghanistan, but was prevented for political reasons. So saying that the first firing is technical evidence is extremely misleading.

          • Anonymous says:

            I don’t know what you’re arguing. The first time we fired a weapon from a drone in an operation was in the first year of the war in Afghanistan (from a vehicle which had its first test fire in the same calendar year). I think you’re going to just have to explain more.

          • Anonymous says:

            I mean Afghanistan in the 1980s.

          • Anonymous says:

            …do you have a vehicle in mind? Maybe some sort of procurement process? Even just a technological feasibility report? Gov’t agencies ‘want’ to do lots of things. Let me tell you about all the really cool autonomous vehicle ‘wants’ that my directorate has…

          • John Schilling says:

            If the JSF program had happened 10 years later than it did, I would bet heavily that it would have been unmanned. If it had happened 5 years later than it did, it’s a toss up that I could see going either way.

            You’re talking about a 2001-2006 timeframe for program initiation, then. I count eight fifth-generation multirole combat aircraft programs from about that period. Five (Sukhoi T-50, Chengdu J-20, Shenyang J-31, HAL AMCA, Mitsubishi ATD-X) are manned, only three (DARPA J-UCAS, BAE Systems Taranis, Dassault nEUROn) are unmanned. The three unmanned systems are narrowly focused on reconnaissance and strike with only minimal aspirations towards e.g. air-to-air combat, while the five manned systems are full-service fighter-bombers.

            There seem to be some misconceptions as to what drones are and are not good for at the present (and plausible near future) state of the art.

            1. Drones are better than piloted aircraft at persistent surveillance, mostly because they don’t get bored or tired. This includes surveillance missions with a side order of occasionally blowing up interesting things that you find, which is what armed drones are mostly doing now.

            2. Drones are moderately inferior to piloted aircraft for reconnaissance, strikes against known targets, and air interception. Worth using for those missions when they come with high risk and when failure won’t be catastrophic.

            3. Drones are greatly inferior to piloted aircraft for close air support and air interdiction, suppression of enemy air defences (SEAD), and air superiority combat. With SEAD, there’s value in sending an essentially sacrificial wave ahead of the main force, and drones are probably competent enough for that at least.

            4. The oft-cited bit about human pilots being limited to maybe 10G acceleration while drones are unlimited is pretty much a red herring. Dogfighting went out of style in about 1940, and the actual technology and tactics of modern air combat almost never call for accelerations of greater than 3-4G.

            5. All of this likely changes shortly after the first war in which drone fighters get clobbered by the old-school kind, when people figure out what went wrong and go about fixing it.

            The F-35 is barely functional now because it is going through the phase where even the best aircraft are barely functional. It is likely to become the airplane that the United States will need in 10-20 years, unless world peace breaks out or unless the next war turns out to be exactly like the last war. It will wind up costing an order of magnitude more than it should have; I’m entirely with the critics on that one.

          • Anonymous says:

            I would agree with all of this. I did roll back my estimate a bit while writing my last comment (getting caught up), and while I think we would be a bit more likely to take the jump to unmanned than the international baseline, it’s an unprovable counterfactual anyway.

            I think the biggest caveat I would have is that, in public discourse, the word ‘drone’ papers over a large spectrum of autonomy, from purely teleoperated to fully autonomous. The more autonomous you get, the more shoehorned the vehicles are right now into the persistent surveillance role. They’re reasonably easy to just set and forget. Like you said, they don’t get bored, and their level of ‘tired’ is basically limited by endurance (can be made pretty large or mitigated by an automated multiple-vehicle refueling scheme).

            Armed operations are very much on the teleoperated side of the spectrum, for technological and (probably more importantly) legal reasons. Autonomy can be done in some settings – Phalanx is a good example. For Phalanx, the technological problem is incredibly constrained, and the legal problem has some nice features (defensive only, low risk of collateral damage) which make it more palatable. We might have a decent teleoperated air superiority vehicle in the not-too-distant future… but I wouldn’t be surprised if I don’t live to see an autonomous one.

            This is the background that makes me want to jump in the prediction game of your fifth point. I don’t think people in the business have any illusions as to what autonomous drones are capable of (at least, pretty much everyone I know is well-aware of how bad they are). But I think that teleoperated drones have a chance without getting clobbered in an actual war. In my little corner of the autonomy world, we’d love to wargame incredibly low-cost, passive countermeasures, but we know the technology isn’t remotely ready for even the simplest countermeasures. Drone fighters with various levels of autonomy only need to get clobbered in internal wargames with our own old-school fighters for us to know that we shouldn’t purchase hundreds of them until we finish fixing a lot of technical problems. My understanding is that we’re currently flying QF-16s for target practice, and I imagine the results will give program managers a lot more information about the feasibility of future air superiority platforms that are on the teleoperated side of the drone spectrum.

          • John Schilling says:

            Teleoperation is going to be the norm for armed drones for at least a generation, I think, but the disadvantages of the drones will remain. Real-time situational awareness is severely handicapped when implemented remotely. “Immersive” VR isn’t that immersive; note the difficulty the airline industry has had in developing flight simulators that can train pilots for stall recovery. And any gamer can talk to you about the problems a few hundred milliseconds of latency can cause, even in non-twitch games.

            And of course teleoperation means committing to providing huge amounts of absolutely-secure wireless bandwidth in the face of an adversary who can deliver hacks both subtle and gross with megawatts of broadcast power.

            I would expect that most sixth-generation fighter aircraft will be “optionally manned”, and that most of their battles will be won with men in the cockpit.

          • chaosmage says:

            Wouldn’t one possible advantage of drones in air-to-air combat be that they could be way smaller, and thereby hard to spot with radar?

            I’m imagining a tiny (less than a meter across) stealthy flying wing that’s basically a lifting body delivery system for a single air-to-air missile and dives away immediately after firing it. If it can kill a manned fighter only 10% of the time, maybe that’d still be economical?

          • Anonymous says:

            Teleoperation is going to be the norm for armed drones for at least a generation

            Agreed, with heavy emphasis on at least.

            Real-time situational awareness is severely handicapped when implemented remotely.

            I think this will depend on other trends in technology and air warfare. To the extent that BVR engagements can actually become the norm, this is less important. I think this type of trend will affect the need for physical feedback, also.

            …a few hundred milliseconds of latency…

            …is definitely a problem for teleoperation, and is a big reason why I’m expecting to have a nice long career in autonomy. 🙂

            …providing huge amounts of absolutely-secure wireless bandwidth in the face of an adversary who can deliver hacks…

            I think we’re going to learn a lot about our ability to sustain high communications loads the first time the F-35 performs serious missions in a well-defended airspace.

            I don’t know what the normal procedure is for initiating real-world communication, but you seem like the type of guy who I’d like to have a cocktail with and talk a little shop.

          • Anonymous says:

            I’m imagining a tiny (less than a meter across) stealthy flying wing that’s basically a lifting body delivery system for a single air-to-air missile and dives away immediately after firing it

            …why do you need a delivery vehicle? This sounds like a nice SAM.

          • John Schilling says:

            Wouldn’t one possible advantage of drones in air-to-air combat be that they could be way smaller, and thereby hard to spot with radar?

            All else being equal, the detection range for air-to-air radar scales as the mass of the searching aircraft to the 0.42 power, and the mass of the target aircraft to the 0.17 power. If you make your air-to-air drone smaller, you make it a little bit harder for the enemy to find you, but much harder for you to find the enemy.

            And if you try to separate the two functions, it’s your radar-carrying aircraft that the enemy will target first, leaving your missile drones essentially useless. That last is not entirely hypothetical, BTW – we’ve seen the Russians and Indians starting to gear up to defeat the AWACS-plus-F16 combo by taking out the AWACS from a couple hundred kilometers away.

            Air combat is one arena where the relevant scaling laws really do favor a few highly-capable platforms. The best that can be said for low-end swarming tactics is that they let you trade strategic assets (a squadron of drones destroyed) for tactical wins (control of the battlespace for an afternoon as the enemy retires to refuel and rearm).
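A toy calculation using the exponents quoted above (0.42 for the searching aircraft’s mass, 0.17 for the target’s, with all other constants normalized away) shows why shrinking the drone is a bad trade:

```python
# Relative air-to-air radar detection range under the quoted scaling:
#   R ∝ m_searcher**0.42 * m_target**0.17   (constants normalized to 1)

def relative_detection_range(searcher_mass: float, target_mass: float) -> float:
    """Detection range relative to a 1:1 baseline (arbitrary units)."""
    return searcher_mass ** 0.42 * target_mass ** 0.17

# Shrink the drone to 1/10 the mass of a full-size fighter:
enemy_sees_drone = relative_detection_range(1.0, 0.1)  # drone is the target
drone_sees_enemy = relative_detection_range(0.1, 1.0)  # drone is the searcher

print(enemy_sees_drone)  # ~0.68: the enemy's detection range drops only ~32%
print(drone_sees_enemy)  # ~0.38: the drone's own radar reach drops ~62%
```

So a 10× mass reduction buys only a modest reduction in how far away the enemy sees you, while roughly halving again how far away you can see him – the asymmetry John Schilling describes.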

      • cassander says:

        Pearl Harbor was not possible in 1933. State of the art in 1933 looked like this, and did not carry sufficient weight of bombs or have sufficient range to pull off Pearl. The standard bombers at Pearl could carry 500-1500 lbs of bombs at 150-200 mph, 500 miles out and back again. The planes of 1933 could do, at best, half of the low end of that, which wasn’t enough to sink a modern battleship.

        The story I believe you are referring to is Billy Mitchell’s famous sinking of a German battleship. A good demonstration of what was to come, but the Ostfriesland wasn’t manned, wasn’t moving, wasn’t shooting back, and wasn’t designed or modified to resist air attack. Really, it took the generation of planes of the late 30s to make Pearl Harbor possible.

    • kernly says:

      I have heard from multiple (decent) sources that something like 80% of the cost of the F-35 is software validation.

      It is an electronic masterpiece

      The latter statement does not follow from the former. When software costs are extremely high and development time extremely long for something that should be relatively straightforward, it’s an indication of a boondoggle rather than a masterpiece.

      “The simple, yet well-designed A-10 is cheap and still good. Fancy, and expensive, tech is bad.”

      And drones are firmly in the “cheap and good” category, as he takes pains to point out.

      Any analysis that doesn’t include a term for “new tech pushes us forward, even if it is expensive” is going to get stuck at every timestep

      The actual real world drones we’re using to do stuff now do the job cheaper than what they replace.

      I’m pretty sure if we send 600,000 P-51s at a target, we’ll have at least as much success as we would have with an F-35.

      You’d be wrong for an important class of targets. However many P-51s you have, they can’t go fast or high enough to intercept a modern jet. While the F-35 isn’t great in that department, it at least has a chance. As for dealing with large numbers of P-51s, I suspect the short-range anti-air missiles that would annihilate the 1940s-era craft cost less than the P-51s do to produce and are quicker to build. And also that a modern CIWS would win a shooting match with a P-51, or several.

      On the other hand, something that actually can do the job cheaper or better really is a valid criticism of the F-35, and pretty much all of its competition trounces it in that department.

      • Anonymous says:

        When software costs and development time are extremely high and long for something that should be relatively straightforward, it’s an indication that it’s a boondoggle rather than a masterpiece.

        …why do you think it should be relatively straightforward? If all you want to do is have a guy push a stick/pull a trigger and watch the plane go down/bullets go out, then sure, software is a piece of cake (…and it surely wouldn’t have cost so much… because it would have cost zero – mechanical linkages are a thing). The F-35 does much more than that.

        drones are firmly in the “cheap and good” category

        That depends on what you want to do. Right now, we have a small number of drone platforms which perform a very select set of missions. They are not firmly established as being capable of performing every mission… and as I point out above, there certainly was very little indication that they were likely to grow into that role during the inception of the F-35 project.

        The actual real world drones we’re using to do stuff now do the job cheaper than what they replace.

        Again, it depends on your mission. Did you know that we ran a lot of ISR missions in Iraq/Afghanistan using Beechcraft business jets? We did it because they’re cheap and we had complete and total air superiority already. They were at vanishingly low risk. Now, would you like to be the pilot who uses a King Air to establish air superiority in a hostile nation? I didn’t think so.

        Finally, congratulations! You came up with some problems with using P-51s! Too bad you didn’t learn anything from the exercise (…for starters, I wasn’t claiming that we should buy 600,000 Mustangs, so you might want to think about what I could have been claiming). Maybe you don’t need to learn anything. You probably should be teaching me! I bet you even have a drone which can do everything the F-35 can do. Hell, I bet you had it back in 2001, and the dumb Joint Chiefs were just too corrupt to buy from The Kernly Corporation. Can I see it?

        • kernly says:

          The F-35 does much more than that.

          So does every modern fighter plane. You don’t have a point.

          Right now, we have a small number of drone platforms which perform a very select set of missions. They are not firmly established as being capable of performing every mission

          Did I say they were? They do what they do cheaper and more effectively than anything else. That doesn’t mean they do everything.

          By the way, d’you know what else still isn’t “firmly established as being able to perform every mission”? The F-35.

          I wasn’t claiming that we should buy 600,000 Mustangs

          That was obvious, but thanks for the condescension. I was pointing out that your little “common sense” claim that you took for granted was nonsense. If your proposed solution really could “have at least as much success” as the F-35 for less cost, then there’s a good chance it would be a viable alternative. You proposed nonsense that would be cheaper but would not work, and tried to generalize the weakness of that nonsense to viable lower cost solutions. The fact is, any solution that actually works is probably going to be a lot less stupid than a proposal that doesn’t work.

          Sarcastic nonsense

          Great stuff, but your ability to joust with straw men isn’t really the issue here. The issue is the problems with your response to Brecher’s piece, and instead of defending or amending your critique you’re getting yourself worked up against an imaginary position.

          • Anonymous says:

            So does every modern fighter plane. You don’t have a point.

            This statement is true, because there is at least one thing outside of the set of {standard flight control,fire a weapon} that every modern fighter plane does. You’re still kidding yourself if you think that the massive integrative effort (across internal systems as well as interfacing with a vast array of external military resources in a secure and reliable fashion) is just a trivial software exercise.

            Here’s a test for your perspective. Read this article. What do you take away from it? If you take away, “Herp derp! Dumb military can’t even hook up a trigger to a firing mechanism,” then you’re probably doing it wrong (and really, you should prove it to yourself by offering to hook ’em up a lot faster and for a lot less money; you could make a killing!). It should make you sit back and think, “Oh. There must be some interesting features they’re planning.” Then, if you have a technical background, you could kick back with your buddies and speculate about the cool things that you might want to do with a gun in a fancy new aircraft… and mull over some of the fascinating problems you’d have to solve.

            They do what they do cheaper and more effectively than anything else. That doesn’t mean they do everything.

            This is pretty much what I’ve said. I also said that the F-35 is an amazing platform for us to develop a lot of the tools we’re going to need to eventually produce a suitable unmanned air superiority vehicle. It’s sensible to walk before you run.

            You proposed nonsense that would be cheaper but would not work

You’re not going to learn anything from this. Sending 600,000 P-51s would absolutely work for some missions. Guess what, there are tactics the opponent could adopt to mitigate this strategy! Guess what? There are tactics the opponent could adopt to mitigate our other vehicles, too! Consider our drones. They’d fail even harder against any of the meaningful air defenses you mentioned. They’d probably fail worse than King Airs in an air space that is remotely contested. I’d absolutely take a bunch of P-51s against our currently operational fleet of drones any day. But this is not the point, and you know it.

            Sarcastic nonsense

            So you agree with me? Drones currently have a limited role (like Beechcraft), and platforms like the F-35 are technological milestones which will enable future unmanned/autonomous vehicles? Or would you actually like to make an argument against this position? So far, the only thing approaching a coherent argument that I can glean from your posts is, “Software should be cheaper,” but since you seem to have no idea about the scale of the problem at hand, your judgment of the ‘should’ seems untrustworthy.

    • cassander says:

You talk about what the F-35 is supposed to be, not what it actually is. Considering how very few large government IT projects exist that are not complete disasters, I doubt there has ever been one that was “a masterpiece.” Now, I am not as negative on the F-35 as many, but cramming that many big revolutionary changes into a single airframe, then insisting on building it so that not only every service could use it, but most of the useful countries in NATO as well, was a terrible idea.

      • Anonymous says:

        Considering how very few large government IT projects exist that are not complete disasters, I doubt there has ever been one that was “a masterpiece.”

        I don’t have a way to seriously contend this on an absolute scale without bringing classified facts to bear. However, if our scale is ‘relative to all other significant air superiority platforms’, it’s clearly in masterpiece territory.

        cramming that many big revolutionary changes into a single airframe

        This is an example of, “You’ll never make everyone happy, which means you’ll probably make everyone mad.” People above are all hot and bothered that there aren’t enough big revolutionary changes! Remember, the article that prompted all this supported the claim that the entire F-35 project (with major contracts signed in 1996/2001) was nothing but corruption and incompetence in large part because it’s a manned vehicle. Kernly is saying that there’s nothing revolutionary at all and we should just stop spending money on military-grade IT development projects. (…and somehow, we’ll magically jump to a future where all these problems are already solved in our hypothetical drone…)

then insisting on building it so that not only every service could use it, but most of the useful countries in NATO

What negative results, in particular, came from this? The original article doesn’t give anything but a general appeal to compromises (…welcome to engineering) and the one RAND study which has been grossly misrepresented. Don’t forget, in the very same article, he called all aircraft carriers “even more useless”. I highly doubt the author would have been happy with any vehicle. This isn’t a bad thing, because we shouldn’t be complacent. We should be working on the next vehicle, using the best ideas of today which are reasonably far along the technology readiness spectrum. But let me forewarn you – when the product of the next two decades of work is actually built, people like Mr. Brecher will be there to complain that it doesn’t incorporate tech that just showed up on the TRL chart in 2040.

        • cassander says:

          >People above are all hot and bothered that there aren’t enough big revolutionary changes

          alright, then we agree that those people are stupid.

          >What negative results, in particular, came from this?

          significant sacrifices and expenses were made to accommodate the needs of the B model, which both marines and foreign clients insisted on.

      • grendelkhan says:

        Considering how very few large government IT projects exist that are not complete disasters

        I’d add that very few large IT projects anywhere aren’t complete disasters.

  33. JME says:

I think that Muhammad ibn-Zakariya al-Razi might have been a few decades later than Hiwi al-Balkhi (not totally sure?), but I think he may have refined the “asshole atheist” perspective to approximately modern levels.

    You claim that the evidentiary miracle is present and available, namely, the Koran. You say: “Whoever denies it, let him produce a similar one.” Indeed, we shall produce a thousand similar, from the works of rhetoricians, eloquent speakers and valiant poets, which are more appropriately phrased and state the issues more succinctly. They convey the meaning better and their rhymed prose is in better meter.

    The Carvaka school always intrigued me, for combining asshole atheism with a kind of radical, Hume-like skepticism on issues like causation, but I think little of their work survives (I think it’s one of those “we only know about them from what their opponents wrote” schools.)

  34. Ryan says:

    It’s odd that people take the old testament literally. The story about the Israelites fleeing Egypt through the Red Sea is metaphorical. Egypt is the old ways, worship of Pagan Gods. The Red Sea represents death in the atheist sense, total utter non-existence once the body expires. With faith in God one’s soul can pass through death and achieve eternal life, like the Israelites. But the Egyptians don’t have faith so they’re swallowed by the water.

    • Anonymous says:

      You’re wrong. The Israelites actually did all of that.

    • Brad says:

There are a number of problems with this approach to the story. I can only speak from the perspective of Christian theology, for I am a Christian, and only briefly (insofar as such a thing can be brief: there is a *lot* going on here theologically).

For one thing, the Israelites – the vast majority of that generation, I should say – would later fall in the wilderness due to various sins. I don’t want to belabor the causes too much here; I do think that if you’re going to examine the narrative it’s important to examine it holistically and not piecemeal. It’s even noted in 1 Corinthians 10 and Hebrews 3 in the New Testament, which explain how this happened so that the New Testament saints would avoid imitating the example of that generation. Further, there is a word picture of baptism implied in the splitting of the sea. This makes little sense if one views the whole thing as metaphorical; it certainly doesn’t have the same urgency if the readers of those books thought the events in the Old Testament were cleverly devised tales, as it were (c.f. 2 Peter 1:16).

Now, are some of the elements you noted present? Yes, I do think Egypt symbolically images paganism and false gods. I do think those elements are present here, but the notions of biblical imagery and literalism – including literally interpreting events in the text as historical – aren’t at odds with one another. After all, since God created the universe, it is within his power to arrange an *actual* event which nevertheless carries within itself a symbolic implication. The life of Christ, I would say, has many such examples, which I believe are both literal historical events and yet have spiritual meanings implicit in them, placed there by God. For example, it *is* meaningful that Christ, who is called the lamb of God, is tested and found without spot following the triumphal entry into Jerusalem, just as a Passover lamb is tested and must be found without spot. The notion of prophetic fulfillment isn’t merely a method of injecting meaning into a text (it is not eisegesis), but a verification of the working of God, which makes total sense if one believes God transcendently exists (in whole or part) outside space-time and, having created us and space-time, can arrange events to contain both such literal and more symbolic semantic meanings.

Further, when you consider that Jesus routinely referred to the Old Testament text as literal events (c.f. Luke 17:26 and Matthew 24:37 for examples), that indicates a literal reading of the New Testament history books would also lend itself to a literal reading of the Old Testament history books. Literalism is, after all, reading the books as what they present themselves to be. I do not feel equipped to have a great discussion of the degree to which Exodus presents itself as history, although there are a few hints that this is what is happening; off the top of my head, the presence of genealogies and place-names makes little sense in an ahistorical context, even if we cannot currently match contemporary locations with the locations in the text, for instance.

      That got a little long.

    • Anonymous says:

      Which is funny, considering that the concept of “eternal life” is a pagan concept. There’s no mention of any sort of “afterlife” anywhere in the Torah; you’ll have to read Jewish writings composed in the Hellenistic era or afterwards to get that idea.

There were even Jews during the Hellenistic era — even during the time period of Jesus — who didn’t believe in the afterlife because they only followed the Torah. Here’s Josephus in “Antiquities of the Jews”:

      Antiquities of the Jews 13.10.6

      What I would now explain is this, that the Pharisees have delivered to the people a great many observances by succession from their fathers, which are not written in the laws of Moses; and for that reason it is that the Sadducees reject them, and say that we are to esteem those observances to be obligatory which are in the written word, but are not to observe what are derived from the tradition of our forefathers. And concerning these things it is that great disputes and differences have arisen among them, while the Sadducees are able to persuade none but the rich, and have not the populace obsequious to them, but the Pharisees have the multitude on their side.

      Antiquities of the Jews 18.1.4.

      But the doctrine of the Sadducees is this: That souls die with the bodies; nor do they regard the observation of any thing besides what the law enjoins them; for they think it an instance of virtue to dispute with those teachers of philosophy whom they frequent: but this doctrine is received but by a few, yet by those still of the greatest dignity. But they are able to do almost nothing of themselves; for when they become magistrates, as they are unwillingly and by force sometimes obliged to be, they addict themselves to the notions of the Pharisees, because the multitude would not otherwise bear them.

      • Brad says:

        Since it seems relevant to the discussion, one might see also the discussion Jesus is recorded as having had with the Sadducees & Pharisees in Matthew 22:23-46 and Luke 20:27-40.

      • Carinthium says:

Assuming the world was as the Sadducees put it, why bother with being a good Jew at all? There is some degree of divine punishment in life in their doctrines, but not enough to stop evil men from succeeding.

    • 27chaos says:

      Can you point to any instance of this metaphorical interpretation that’s older than 50 years?

  35. Harriett says:

    I need to start commenting so that when I finish and publish my book, it gets promoted here!

    • Scott Alexander says:

      Whenever I link a book, people go to Amazon to check it out, and they use my affiliate link, and then I get money. That means I’m incentivized to link as many books as possible. So – congratulations, this comment makes you a Commenter, which means I’ll link your book.

      (although I have gotten some emails full of spelling errors from people I’ve never heard of wanting me to link their book on multi-level marketing or whatever that I’ve rejected. Also all the terrible gender books.)

  36. Kingsley says:

    Rudel, while technically an ace, was not a fighter pilot but rather flew dive-bombers (which makes his nine air-to-air kills even more impressive). From Wikipedia: “Rudel flew 2,530 combat missions claiming a total of 2,000 targets destroyed; including 800 vehicles, 519 tanks, 150 artillery pieces, 70 landing craft, nine aircraft, four armored trains, several bridges, a destroyer, two cruisers, and the Soviet battleship Marat.”

The top German fighter ace of WWII (and in fact the top fighter ace of all time) was Erich Hartmann, who scored 352 air-to-air kills. He was known for his tactic of not firing until the nose of his plane was 20 meters or less from his target; while extremely effective, this also caused him, in effect, to shoot himself down 14 times when his plane crashed into debris from enemy aircraft he had just destroyed.

    • Anonymous says:

      Oh my god, if that’s where Eric Cartman comes from, super cool.

    • Not Robin Hanson says:

      Interestingly, similar was true for the top American ace as well:

      Bong considered his gunnery accuracy to be poor, so he compensated by getting as close to his targets as possible to make sure he hit them. In some cases he flew through the debris of exploding enemy aircraft, and on one occasion actually collided with his target, which he claimed as a “probable” victory.

  37. 23Skidoo says:

    > Russia has a surprising number of gay Nazis, including several thousand in a group called Gay Aryan Skinheads whose flag is a swastika with two crossed penises under it.

    Um, there are actually about 40 members in that group and it looks like an obvious troll page:

    http://vk.com/club29179193

    In other news, USA has a surprising number of gay n***s from outer space, including several thousand in a group called GNAA.

    • Nornagest says:

      Either a lot of people have been reading Preacher, or there’s more to that meme than I’d suspected.

    • drunkenrabbit says:

      I actually met a purported GNAA member in person. He was surprisingly normal, but did make me watch the movie. Also, for those of you wondering, they seem to be white nerds, and neither black nor gay.

  38. Zorgon says:

    I realise this isn’t an open thread, but this seems relevant to numerous recent posts.

    Regarding the Charlie Hebdo shooting, I have someone on my FB feed saying, and I quote: “We should not term what they published as ‘controversial’.”

    So. Apparently the concept of “no platform” has reached the stage where we outright deny the existence of dissenting voices even when they’re shooting people. That’s quite some impressive degree of ingroup allegiance declaration going on there – “I am so ingroup that I will deny that anyone could disagree with my ingroup even when an outgroup are actively killing us!”

    The discussion’s degenerated to “objective vs subjective” idiocy already, but it’s kind of amazing to see things go this far. The guy in question is a Blue/Grey Tribe intersection who’s previously been hunted by the Blue Tribe for perceived transgressions, so he’s probably engaged in heavy overcompensation. But it does show how crazy “no platform” can get.

    • Ken Arromdee says:

      “We shouldn’t call it ‘controversial'” doesn’t mean “nobody disagrees with it” or even “nobody disagrees violently with it”. Deciding to label something as controversial connotes non-shallow disagreement, not disagreement of any sort.

      • Zorgon says:

        That’s a distinctly non-central definition. The usual meaning of “controversial” is “likely to cause public disagreement”. Which the material published was and still is.

        Depicting that disagreement as “shallow” or any other qualifier doesn’t really change the situation other than to declare you think the disagreement is incorrect or unsupportable, which is fine, but it doesn’t make the disagreement go away.

  39. Sancho Valstein says:

Is the gay Nazi thing really that weird? Pre-“Night of the Long Knives” in 1934, a great many Nazis were gay (Ernst Röhm foremost among them). If you’re in for all the other awful stuff, I figure it would be easy to say the anti-gay turn from 1934 onward was a mistake not in keeping with the core ideology.

  40. drethelin says:

to anyone who goes ahead and reads Self-Made Man, I very strongly recommend also reading The Testosterone Files. The difference in tone and perspective between someone who values and wants masculinity and someone who scorns and pities it is enormous.

  41. ADifferentAnonymous says:

    N.B. Ozy only banned motte and bailey discussion “unless you also present evidence that the bailey is a belief stated by either the person you are arguing with or a significant number of prominent members of the group you disagree with.”

    • Nornagest says:

      That’s a pretty low bar, but at least it prevents wild speculation about your opponents’ true sinister motives.

      No idea if that sort of thing actually happened.

    • Anonymous says:

      Yeah, but that’s only the …

    • JME says:

      Well, the motte is that they’re banning M&B analogies unless you clearly present your interlocutors making the “bailey” argument, but the bailey is that they want to shut down all discussion on Thing of Things and murder the families of all of us commenters there.

  42. Matthew says:

    Was I the only one who expected that first link to be related to this, only to find out that North Korea actually does have a Twitter account?

  43. Anonymous says:

A lot of people, including the author of “Are Some Diets Mass Murder?”, blame rising obesity on “refined carbohydrates”, but if this means refined wheat or rice, then it makes no sense if you look at history.

    Before the world wars, Westerners consumed huge amounts of white flour products, and there was little obesity. White rice is the staple food for millions of thin Asians.

    It is much more believable when obesity is blamed on “sugar”.

  44. Matthew says:

    After the first two panels of this Zach Weinersmith comic, I thought mauve-shirt was going to be an effective altruist caricature. I had it completely backwards, and yet somehow the comic is SSC-topical anyway.

  45. Gunlord says:

I hadn’t heard of Hiwi al-Balkhi before–thanks for telling me about him, I have some friends who’d really get a kick out of that article.

  46. Chipsa says:

I’m not going to go into the number of homeless, or what value of house you could get for the amount spent on the F-35. I’ll instead go off about the fact that the values the Thinkprogress piece is using are actually for the entire program, over its entire projected course. So the $398.6 billion is actually spread over the 50-year life of the program. So yeah, we may be able to buy mansions for everyone, but that’s not the amount we’ve spent so far.

  47. Peder says:

    I’m so happy I found this blog. Just amazing.

  48. bean says:

    I’ve run across War Nerd before, and came away with the impression that he was only loosely connected with reality. The linked article did nothing to change that belief.
    Let me start by saying that the A-10 is indeed the best aircraft we have for CAS/ground attack in a low/medium threat environment. However, it is not better enough to justify the cost of maintaining it, given the strategic environment we face.
    The problem is simple. Modern air defenses are very effective, particularly low down. This means that if we go to war with, say, Iran, air support will be provided by PGMs dropped from medium altitude. In this environment, the A-10 is only better because it’s cheap to operate and has a big warload. The gun and all the armor are pretty much wasted.
    But that’s not what we’ve spent the past decade doing, is it? We’ve been operating in a pretty low-threat environment, where the gun becomes useful and low altitudes are an option. However, this is irrelevant. We aren’t planning to do that any more, and even if we were, the A-10 is too expensive, and the gun is overkill. A Super Tucano (or an AT-802U) would do just as well, and would be a lot cheaper.
    And the A-10 is not survivable in a really high-threat area, like we might see in the opening stages of a war. War Nerd brings up Desert Storm, but cleverly ignores the massive difference in missions between the F-117 and A-10. The F-117 flew into the teeth of a decent air defense system, helped pull its teeth, and came through unscathed. The A-10 then went in and killed tanks left naked by the destruction of the air defense system. If they’d gone over Baghdad early in the war, they’d have been slaughtered.
    All of this ignores the fact that ground attack/CAS is not the only capability that an air force needs. If we were to go to war with Iran, the first jobs would be air superiority and SEAD, making sure that the bad guys can’t use the air, and can’t interfere with our ability to do so. The A-10 is useless during this, and the JSF isn’t.
The other issue is that the JSF is in what might be called a marginal funding crunch. We’re at a point where a relatively small increase in funding will give a big improvement in capability down the road. At this point, we have no option but to continue procurement (any attempt to cancel the program now and start anew will inevitably produce something that costs more and doesn’t work as well) because we need to replace our current aircraft. Insisting we retain the A-10 fleet will mean that we get fewer F-35s, and the ones we have won’t work as well.
There are also quite a few other issues with the linked article, which bear particular comment:
    But there’s a problem with that. Nobody will play with us. It’s like investing your entire sports fund on a stable of polo ponies (except polo ponies are cheap compared to air-superiority fighters) and finding nobody in the neighborhood even knows what polo is, let alone wants to spend all that money to play against you.
    This is so wrong it’s almost hilarious. The reason that nobody has been able to seriously challenge us in the air since Vietnam is precisely because of the money that’s been spent on air superiority. People don’t challenge us in this area because they can’t afford to, not because they don’t want to.

    But then the Soviet Union went out of business, and we were fighting wars that would never, ever involve fighter duels. You know the old joke, “I went to a fight but a hockey game broke out”? Well, that outcome is a million times more likely than the USAF needing fast fighter jets against the Taleban, or Islamic State. That’s about as likely as “I went to a fight but a polo match broke out.”
    I’m not even sure what planet this guy is on. Let’s see. During Desert Storm, we lost an F/A-18 to a MiG-25, and shot down something like 36 Iraqi aircraft. And then there were the three MiGs shot down during the no-fly zones of the 90s. The Taliban had no Air Force, and Saddam kept his on the ground in 2003, but he can’t possibly sustain the claim that we haven’t needed fighters since the end of the Cold War.
    This isn’t to claim that the JSF has been a brilliant program. It hasn’t. But it’s still maturing, and that’s a common pattern in all military aircraft since at least the 1960s. And claiming that we’re only buying it because of corruption is the result of either deliberate cynicism or a near-complete detachment from reality.

    • Anonymous says:

      My impression of War Nerd is that he knows a lot about counter-insurgency and not a lot about other aspects of warfare.

      • bean says:

        I’m not sure that makes sense. I haven’t seen any of his COIN stuff, and even if I did, that’s not my specialty. The problem is that the stuff I’ve seen that I do know about is terrible, and I’m not sure why I should trust him on the stuff I don’t know. At very best, he has no understanding of the limitations of his knowledge, and that’s not really a good place to be.
And I don’t think it’s anywhere near that good. He committed a clear error of fact in his claims on the utility of fast jets, and several other cases of serious lack of judgement which should be obvious to anyone who understands strategy. By his logic, the fact that we haven’t used our nuclear weapons since 1945 means they’re useless, and we should get rid of them, too. But that’s not the worst I’ve seen from him. The first time I encountered him, he managed to turn the statement “ships currently have no defense against a ballistic missile attack” (WRT the Chinese anti-ship ballistic missiles) into “the Harpoon is unstoppable in pop-up mode”. I’m actually having trouble coming up with an analogy for how big of a leap that is. It’s similar to the leap from what Scott Aaronson wrote to Amanda Marcotte’s interpretation of it. And I do mean that in all seriousness.

    • Anonymous says:

      I agree almost entirely with this. So many people see air vehicles which are having success in our current conflict and can’t imagine what might happen if they were placed in a conflict with remotely contested airspace.

    • Jon H says:

      ” If they’d gone over Baghdad early in the war, they’d have been slaughtered.”

      Why on earth would we bother sending in a CAS aircraft early in the war when there’s nobody on the ground to support? Why would we be killing tanks in Baghdad at that point in a war, when tanks in Baghdad wouldn’t be a threat to us yet?

      • bean says:

The A-10 is capable of attacking targets that are not tanks, and I was looking at using it to replace the F-117 in the air campaign. War Nerd spent a bunch of time going on about how the A-10 performed better than the F-117 in ODS, and how the fact that the Nighthawk was not scrapped immediately proves the USAF is corrupt. For this to hold any water, they would need to have comparable roles, which is laughable to anyone who knows what they’re talking about. However, lots of people don’t. He did a decent job proving that the F-117 is not very good at the A-10’s job, and I was pointing out that the A-10 would be even worse at the F-117’s job.

        • Jon H says:

          I think the point is that people aren’t putting forward the A-10 to do the F-117’s job, but people (especially the Air Force) are putting forward ridiculously expensive hi-tech solutions as ill-fitting but attractive and vaguely plausible replacements for the A-10.

          The services love to dream about single fancy jets that can do every conceivable job, but that’s an expensive pipe dream.

          • bean says:

            First, War Nerd did actually seem to be putting the A-10 forward to do the F-117’s job, in that he neglected to mention the differing roles they serve. The link to his post, and not the trashing of the JSF in general, is why I got involved here. Scott, if you’re reading this, please stop reading him on defense issues. He’s terrible.

            That said, I can address the multirole thing in two different ways. First, the A-10 is not better enough to justify the cost of keeping it around. If we had infinite money, I’d be more than happy to keep it. But since we don’t, we need to look at getting the most out of what we have, and right now, that means getting rid of the A-10.
            Second, multirole seems to have generally worked quite well. The F-16 and F-18 are generally well-regarded in both air-to-air and air-to-ground roles. That’s not to say that the dearth of dedicated strike aircraft is a good thing, but we need to maximize the utility of the aircraft we can afford.

That said, our aircraft procurement system has been broken since 1961, and has only been getting worse recently. I select 1961 as that was the date when Robert McNamara initiated the TFX project, hijacking the services’ control over their aircraft procurement. The TFX (which became the F-111) was a prototype of the issues the JSF has had, except that the Navy version was cancelled. The idea of loading multiple services’ requirements into one program has been rather poisonous, and the limited number of programs has put us at serious risk if the JSF failed. We’d have been better off if the JSF had stayed JAST, producing a common set of weapons systems and maybe a common engine or two, and leaving the rest to various companies and various services. We’re stuck in a vicious spiral where aircraft are too expensive to develop, so we consolidate programs, making them even more expensive. Even the F-4, undoubtedly the most successful multi-service aircraft ever, started as an exclusively Navy project, which the Air Force only bought after it was fully developed.
            And the consolidation in the industry is another consequence of this. Supporting lots of companies means you need lots of programs. I’m not sure we needed as many as we had back then, but it would be nice to have a few more today.

          • John Schilling says:

            Sort of playing devil’s advocate, the A-10 doesn’t need to be better to be worth keeping around. My old Grumman Tiger (about the same vintage as an A-10) isn’t better than a shiny new Cirrus SR-22, but it’s bought and paid for and cheaper to fly.

            1. The cost (net present value, constant dollars, assuming 200 flight hours per year) of keeping an A-10 in service through planned retirement in 2028, is about $35 million. For an F-35, including the cost of buying the F-35 now rather than putting it off to 2028, $105 million. So, if there are missions the A-10 can do even half as well as the F-35, we come out ahead keeping the A-10 around for those missions.

            2. The USAF is planning to buy roughly 2,000 F-35s. It seems unlikely that low-to-medium-threat close air support will be less than 5% of the mission set in any major war, so we’re going to have 100+ F-35s doing that. Or 200 A-10s, if the A-10 is only half as good as an F-35, and since we’ve already got 200 A-10s that saves us $3.5 billion.

3. The USAF is often, and not unjustly, accused of neglecting close air support in favor of the more glamorous air superiority, deep strike, and SEAD missions. Ensuring that 5% of the force is limited to CAS missions puts a floor on the level of support the Army can expect from the flyboys.

            4. Lockheed may get so damn greedy that we cancel that blank check labeled “F-35” no matter the cost to our military readiness. If that happens we’re going to be scrambling desperately to fill a lot of gaps, at least some of which will be sort of A-10 shaped. And until then, having the alternative at least marginally strengthens the negotiating position of the US taxpayers vs. Lockheed-Martin.

            5. War Nerd is still nuts if he thinks the A-10 is the One True Warplane. We’re talking about 2000 F-35s vs. 1900 F-35s and 200 A-10s, or something along those lines.
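Taking the figures in points 1 and 2 entirely at face value (they are this comment’s own estimates, not official numbers), the arithmetic behind the “$3.5 billion” claim can be sketched as:

```python
# Back-of-the-envelope check of the A-10 vs. F-35 figures above.
# All dollar amounts are the comment's own estimates (net present value,
# constant dollars, ~200 flight hours/year through 2028), not official data.

A10_COST = 35e6    # keep one A-10 in service through planned retirement
F35_COST = 105e6   # buy an F-35 now and operate it over the same window

f35_buy = 2000                # planned USAF F-35 purchase
cas_share = 0.05              # low/medium-threat CAS slice of the mission set
f35_on_cas = int(f35_buy * cas_share)   # F-35s that would be tied up doing CAS

# If an A-10 is only half as good at CAS, two A-10s replace one F-35;
# the 200 airframes needed already exist, so only operating costs accrue.
a10_needed = 2 * f35_on_cas

savings = f35_on_cas * F35_COST - a10_needed * A10_COST
print(f"{f35_on_cas} F-35s vs {a10_needed} A-10s: save ${savings / 1e9:.1f}B")
# → 100 F-35s vs 200 A-10s: save $3.5B
```

The conclusion is sensitive to both assumptions: halve the CAS share or narrow the per-airframe cost gap and the savings shrink proportionally, which is presumably why the follow-up comment flags marginal costing as the weak point.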

          • bean says:

            I’m going to have to try to hunt down the actual documentation on this before I go further. I would point out that military costing is quite tricky, and that the marginal costs of buying and running an extra F-35 might be a lot lower than we’d expect, and the costs of the A-10 fleet might be somewhat higher.
            And we’ve previously been badly burned by ‘a bird in the hand’, specifically our carrier policy over the past decade or two. At some point, it becomes cheaper to replace things instead of keeping them in service.

    • grendelkhan says:

      The reason that nobody has been able to seriously challenge us in the air since Vietnam is precisely because of the money that’s been spent on air superiority.

      Bear Patrol’s working like a charm!

      But more seriously, exactly how air-superior do we need to be? The Cold War had ended when the JSF program was put together. This sort of thing seems to be so expensive that we’re the only people who can buy it. What are we expecting to fight? The Russian equivalent (if I’m understanding this right?) has a unit cost less than half that of the F-35, and a third that of the F-22. Nobody else can afford to do what we’re doing, so why are we doing so much of it?

      There’s an anecdote in here about how Bernie Sanders–very much not a fan of the military–winds up agitating to get F-35s assigned to his state, because, well, it’s inevitable and everyone wants a piece of the pie.

      I could be convinced that it’s really important to maintain air superiority, even if it means we buy very expensive systems that seldom or never actually see combat. But if we’re going to buy something this expensive, it had better be really, really worth it, y’know?

      • bean says:

        Bear Patrol’s working like a charm!
        We’re dealing with states, not animals. Do you really think that the reason the rest of the world generally doesn’t make overt trouble for us is because they love us?

        But more seriously, exactly how air-superior do we need to be? The Cold War had ended when the JSF program was put together. This sort of thing seems to be so expensive that we’re the only people who can buy it.
        Not necessarily. It’s very expensive partially because our defense procurement system is seriously broken, and partially because we need so many. We have to be strong everywhere, while others only have to be strong in a few places.
        There’s also the issue that we have to plan in the long term. Today, nobody can challenge us, so why bother spending the money? Why spend all this money on this B-52 when the B-36 is doing so well?
        Sorry, I had the dial set for 1950. You get my point.

        The Russian equivalent (if I’m understanding this right?) has a unit cost less than half that of the F-35, and a third that of the F-22.
        That assumes the Russians are telling the truth, from a country where a request for brochures is greeted with a press release about a new arms deal (true story). I’ll believe them on the cost when they start offering it for export. (That said, the JSF does cost too much.)

        Nobody else can afford to do what we’re doing, so why are we doing so much of it?
        How much less can we do before other people start being able to afford it 20 years down the road?

        I could be convinced that it’s really important to maintain air superiority, even if it means we buy very expensive systems that seldom or never actually see combat. But if we’re going to buy something this expensive, it had better be really, really worth it, y’know?
        Air superiority is one of our most important capabilities, right after nuclear war. I can’t think of a war that’s been won if the other side has command of the air, and winning gets a lot easier if you do. (It’s possible that neither side has it.) However, we are in a uniquely bad position in that air superiority has become part of our planning base. We have a laughable ground-based air defense capability compared to what we might need if we didn’t assume that our fighters would deal with the enemy, not to mention a lot of cool toys which don’t work that well in a high-threat environment. So at this point, cuts in air superiority would need to be offset by improved GBAD.

        • John Schilling says:

          This.

          I would dispute the absolute claim that it is impossible to win wars without air superiority; note that ISIS has taken and is holding a quarter million square kilometers of territory in spite of their enemies holding complete air superiority.
          But the United States and its major allies have almost zero capability for waging war under such conditions; the USAF has arguably spent half a century lobbying to ensure that state of affairs is burned into the deep structure of the US military.

          Given that several potentially hostile nations are building, and will sell to anyone with cash, aircraft and missiles that completely outclass the F-15, -16, and -18, and that the United States has essentially shut down every path to air-superiority fighter development in this decade other than the F-35, there are essentially three choices.

          1. The United States finishes and deploys the F-35, including sales to all of the allies which bet their future ability to defend their own airspace on the US and Lockheed-Martin.

          2. The United States completely re-engineers the United States Army, Navy, and Marine Corps, and assists NATO and the major non-NATO allies in doing the same.

          3. The major allies of the United States start to be actually invaded and conquered a la Poland/1939, starting in about 2025. And quite possibly starting with Poland.

          I suppose we can add option 4, world peace is declared and nobody is tempted to conquer their neighbors with trillions of dollars worth of real estate and infrastructure, historic grudges, and effectively no defenses against a modern military. Or option 5, where the United States credibly precommits to e.g. nuking Moscow if the Russian army rolls into Poland. I’m not holding my breath on either of these.

          That the United States has painted itself into this corner reflects shameful mismanagement pretty much across the board and outright corruption in more than a few places. But here we are, and I don’t see any path out of it in less than a generation.

          • grendelkhan says:

            3. The major allies of the United States start to be actually invaded and conquered a la Poland/1939, starting in about 2025. And quite possibly starting with Poland.

            This looks like you’re saying that all that stands between the free world and World War II-style invasions is the U.S. military’s acquisition of fifth-generation fighter planes.

            The Russians’ economy is actively collapsing at the moment, but even when it wasn’t, when they did invade the Ukraine, the fact that the United States has lots of air superiority forces didn’t seem to enter into anyone’s plans.

          • bean says:

            I would dispute the absolute claim that it is impossible to win wars without air superiority; note that ISIS has taken and is holding a quarter million square kilometers of territory in spite of their enemies holding complete air superiority.
            Fair point, although they’re not exactly typical, and we certainly can’t expect to do the same.

            grendelkhan:
            The Russians’ economy is actively collapsing at the moment, but even when it wasn’t, when they did invade the Ukraine, the fact that the United States has lots of air superiority forces didn’t seem to enter into anyone’s plans.
            The Russians did that under the assumption that we wouldn’t intervene. If they had thought we would, they wouldn’t have done so, for a variety of reasons, including an inability to secure the air above their forces.

          • John Schilling says:

            The Russian economy has been collapsing every other decade for the past century at least; the Russian army has proven a bit more durable.

            And while neither of us were sitting in on Russian staff meetings last year, I will note that there is a conspicuous absence of Russian tanks in Kiev or even Kharkov, the really valuable parts of Ukraine, and I guarantee that is not because the Ukrainian army would have been more than a speed bump to the Russian. I also note that the Ukrainian Air Force has operated freely in the conflict whereas the Russians don’t seem to have flown a single combat sortie over the contested territory, even though the Russians would control the skies from the first day if they chose.

            And if NATO didn’t intervene. NATO’s preferred choice for “bloodless”, “risk-free” intervention on foreign conflicts has for the past few decades been the ever-popular “No-Fly Zone”, implemented by the United States Air Force. To the extent that NATO offers any tangible security guarantees to member states on Russia’s perimeter, it is through forward deployment of fighter aircraft. NATO can deploy, today, an air force that would probably outmatch anything Russia can put in the air for the next decade. I would not be as confident as you seem to be, in asserting that Russia has not been substantially influenced by the prospect of the one form of military intervention NATO could plausibly undertake in the conflict.

            As for WWII-style invasions, I do not think that it is really controversial that the present world order of Nation Shall Not Conquer Nation was pretty much established by Team America: World Police in 1991 and remains so enforced to this day. The United Kingdom is a junior deputy, everyone else pretty much just provides moral support via token military contributions.

            But: post-Cold War “peace dividend”. Twenty years of tight budgets in which defense has always ranked below tax cuts on the GOP’s priority list and barely appears on the other side’s. Defense industry corruption and profiteering. A decade-plus of large-scale counterinsurgency warfare on two fronts with no tax increases. It should also not be controversial that the conventional warfighting capability with which the United States dissuades would-be conquerors is stretched pretty thin.

            And yes, it is my informed belief that one of the really weak links is air superiority combat in the 2025-2035 timeframe. We absolutely cannot win conventional wars without it, and so cannot credibly deter major powers from playing the conquest game without it. Neither we nor anyone else has figured out how to secure air superiority in that timeframe without fifth-generation fighter planes. Yes, manned ones. And we (including most of our allies) have given the Lockheed-Martin corporation a de facto monopoly in that field for the next generation.

            Lockheed has a blank check. The price for our cancelling that check would be a high probability that one or more major US allies would be invaded or conquered in a decade or two. We will see what number they write on the check, and decide what to do.

            I’m all in favor of not getting into such an asinine situation again. But it is where we are now.

          • Jon H says:

            Option 6: The rapture happens.

            “Lockheed has a blank check.”

            Not least of all because we’ve let the industry consolidate to the point where there’s no real competition.

          • John Schilling says:

            You’ll get no disagreement from me on that point.

        • grendelkhan says:

          Do you really think that the reason the rest of the world generally doesn’t make overt trouble for us is because they love us?

          This seems too fully-general. “Why do we need a space laser yarn ball? No one has a giant mechacat which it would be used against.” “That’s because of our amazing space laser yarn ball! What, do you think the skies are free of mechacats because the other guys love us?”

          I’ll buy that it’s at least a plausible position that if the United States were not ready to fight World War II again right now with no warning, we’d be in trouble. (Is building stronger alliances really more expensive than building up our military to this degree?) But how probable is it that anyone will come up with weapons to match the ones that we’re building? Are the new Soviet and Chinese fighters that much better than what we’re flying now? Do we really need something that gee-whiz revolutionary amazing?

          There’s also the issue that we have to plan in the long term. Today, nobody can challenge us, so why bother spending the money?

          But in the long term, won’t we be using drones anyway? This seems less like trying to introduce the B-52, and more like trying to build even bigger battleships after World War II, when it was clear that aircraft carriers were the future. It seems foolish to be developing extraordinarily expensive dead-end technology which will probably see very, very little use.

          How much less can we do before other people start being able to afford it 20 years down the road?

          Maybe we’d save a lot of money and be just as well off if we had a ten-year, rather than twenty-year, lead time on weapons technology?

          • bean says:

            This seems too fully-general. “Why do we need a space laser yarn ball? No one has a giant mechacat which it would be used against.” “That’s because of our amazing space laser yarn ball! What, do you think the skies are free of mechacats because the other guys love us?”
            It’s not the same. China, Russia, Iran and North Korea all have at least 4th-generation fighters, which we could potentially need to use ours against. The fact that they haven’t used them is because they know they couldn’t win.

            I’ll buy that it’s at least a plausible position that if the United States were not ready to fight World War II again right now with no warning, we’d be in trouble.
            I don’t think WWII will happen again. Or if it does, it will end the same way, but a lot faster. However, that requires some political spine on our side, which has been rather lacking of late.

            (Is building stronger alliances really more expensive than building up our military to this degree?)
            Why don’t you go to Europe and try to talk them into keeping their treaty commitments on defense spending? Let me know how that goes. Besides, what are they going to buy? Currently, the answer is ‘the F-35’.

            But how probable is it that anyone will come up with weapons to match the ones that we’re building? Are the new Soviet and Chinese fighters that much better than what we’re flying now? Do we really need something that gee-whiz revolutionary amazing?
            Quite likely, particularly if we don’t keep pushing the technological envelope. Depends on who you ask. And we will.

            But in the long term, won’t we be using drones anyway?
            Define ‘long-term’. If we’re talking about 100 years, maybe. 50 years? I’d guess not. And consider what now was supposed to look like 50 years ago. Stealth wasn’t even a thing, and everything was supposed to do at least Mach 3.

            This seems less like trying to introduce the B-52, and more like trying to build even bigger battleships after World War II, when it was clear that aircraft carriers were the future. It seems foolish to be developing extraordinarily expensive dead-end technology which will probably see very, very little use.
            Not so much. There were reasonably serious proposals to keep battleship construction going, as there were still serious limitations on aircraft carriers, particularly in bad weather. And I’ve seen enough about them to not be prepared to write them off as just ‘battleship admirals wanting battleships.’ On drones, nobody’s brought up the EW issues yet, but I will say I’m far from certain that drones are the future of air combat. I’m a lot more certain about space combat, but that’s another issue.

            Maybe we’d save a lot of money and be just as well off if we had a ten-year, rather than twenty-year, lead time on weapons technology?
            We have to run on 20 years because of how long it takes to procure weapons. If you can figure out a faster way, pretty much everyone involved would be delighted. (And this isn’t a new issue. It started during the 50s when electronics began growing out of control.)

          • John Schilling says:

            Maybe we’d save a lot of money and be just as well off if we had a ten-year, rather than twenty-year, lead time on weapons technology?

            Yes, and we’d be better off still if Santa Claus brought us all of the weapons we will need in the coming year every Christmas. Though really, ordnance seems more up the Grinch’s alley.

            A ten-year development cycle is at least plausible, and I can sort of see how to get there. But we do not have such a thing now, and we do not get it by saying “Make it so”. From what I have seen of similar transformations in related industries, it would take about ten years to build the teams and institutions we would need to develop and deploy next-generation weapons systems on a ten-year cycle. Faster if we lose a war badly enough that nobody will complain when we shoot everyone who obstructs the process because they profit from and are legally entrenched in the present system.

            The generation that pays to develop whatever F-35 equivalent is in their pipeline, and pays for a massive, parallel restructuring of the aerospace and defense industry, will have done the free world an enormous favor. The generation that cancels the F-35 to pay for the restructuring, will either have done the free world an even bigger favor or will go down in history alongside the isolationists of the 1920s and 1930s.

          • Anonymous says:

            probably both

  49. Eric Patton says:

    Many leftists view the economy as a zero sum game, the rich have to take away from the poor to get rich. Value creation at an individual level is non-existent.

    How does this fit into your theory about surviving versus thriving?

    • From that very essay:

      I admit some confusions. For example, it seems weird that poor people, the people who are actually desperate and insecure, are often leftist, whereas rich people, the ones who are actually completely safe, are often rightist. I would have to appeal to economic self-interest here: the poor are leftist because leftism is the philosophy that says to throw lots of resources at helping the poor, and the rich are rightist because rightism says to let the rich keep getting richer. Despite voting records, I expect the poor to share more rightist social values (eg be more religious, more racist) and the rich to share more leftist social values (more intellectual as opposed to practical, less obsessed with guns). For a more comprehensive theory of economic self-interest and politics, see my essay on the subject.

      • Eric Patton says:

        It’s not the poor who are getting most of the benefits but demographics that are favorable to America’s identity politics. For instance EBT mostly goes to single mothers regardless of what race they are. A single male would have a much tougher time getting aid regardless of his income. White working class males skew Republican, particularly after this last election. Identity has been much more effective at holding together groups to lobby for policy changes than income alone, though this isn’t the case worldwide.

        I don’t see how the economy can be both zero sum and something xyz technology will save us from. That view denies that individuals have the agency to create wealth using their intelligence. Even the “rich get richer” part carries the implicit assumption that a rising tide not only doesn’t lift all boats, it sinks everyone else. This goes against the mythos that Americans are just temporarily frustrated millionaires. The “you didn’t build that” mentality isn’t widely distributed in the population.

        Sure, you can raise some populist cries against some of the least trusted institutions in the US, like banks, but it wouldn’t work on Google (for that you need identity politics).

        The difference I see between liberal myths and conservative ones is that they often revolve around singularity events versus eternal return events. But that’s not quite the same thing as what the essay says.

        • houseboatonstyx says:

          It denies that individuals have agency to create wealth using their intelligence. Even the “rich get richer” part has the implicit assumption in it a rising tide not only doesn’t lift all boats, it sinks everyone else.

          For an individual to create wealth, he needs at least access to some means of production, and some free time and energy (and literacy, and a cell phone, etc). As the rich get richer, they — with no intention of keeping him down — gentrify his neighborhood (raising his rent or pushing him further away from resources), increase the price of autos, support higher prices on necessities, etc. Whatever means of production he has (say, a computer and internet connection) keeps needing upgrades to keep up with the bells and whistles which the rich customers have made standard equipment, and expect him to have.

          ‘Wealth’ may not be a zero sum commodity, but the resources needed to produce it, to some extent are.

          • Eric Patton says:

            Automobiles cost slightly more in working hours than they did in, say, the 1930s, but you get a lot more for what you pay. They last much longer, are safer, and can drive faster. Public transit can make up for some of that gap.

            I agree with the point about literacy and would raise it further. The poor usually don’t have enough free time or willpower to figure out how to navigate the maze of clickbait and false narratives being pushed from the Overton Window. Their willpower is taxed by working; it’s not a purely genetic issue. But they can see when the Window deviates strongly from their own personal experience.

            On the bright side, quantitative literacy has generally increased in the population, with the exception of Hispanics, mainly because of new arrivals I’d bet:
            http://nces.ed.gov/NAAL/kf_demographics.asp

            I don’t think the poor are being forced to adopt cellphones and particularly smartphones because the rich make it mandatory. The young in particular love smartphones and can use them in place of a computer for some basic tasks.

            In poor countries, cellphones have been a huge boon because they bridge the communications gap that exists:
            http://www.forbes.com/sites/bethhoffman/2012/08/01/african-farmers-to-get-mobile-phone-help-farm-to-fork/

          • houseboatonstyx says:

            @ eric patton
            Automobiles cost slightly more in working hours than they did in, say, the 1930s, but you get a lot more for what you pay.

            But the poor person who just needs local transportation without fancy gadgets may not be able to afford the better new cars that the demands of the rich have made the industry standard. So he either has no car at all, or has a very old one which is usually broken down until he has time to repair it, which limits the free time he could use creating some wealth, and limits the range of services he could offer.

            I’m very sympathetic to the view that anyone can create value and wealth for themselves starting with ‘nothing’, because I’ve done it myself, twice. To take a worst case, the sort of scruffy, uneducated person with no tools, who does yard work or odd jobs, could work from that up to hiring out some other workers and make a business of it. But for the jobs he needs in the meantime, he needs a car* and a, yes, cell phone, and dependable free time. Which poverty eats.

            So the rising cost of living that raises his expenses is eroding the baseline normal-person resources he would need to create his own wealth. (Being below the poverty line does not necessarily qualify you for welfare benefits, and even applying is a whole other time drain/opportunity cost/stressor.)

            Further up the scale, a middle-class hairdresser with savings who wants to start her own shop finds that local real estate has been bought up by rich investors and is now out of her reach. Commercial real estate near where a family’s real life is rooted is a sort of resource that is zero sum.

            * A car is needed in the non-city areas I know. Where public transportation reaches, it eats time and brain power. Living in walking distance of people who have money to hire you, means higher rent, higher food prices, etc.

      • Anonymous says:

        And the reason I use single mothers is because their poverty rate makes them an ideal target for economic populism. But the appeals to them are still identity politics based when a broader coalition of economic populism could be put together. Neither party has a populist redistributive platform, it’s identity politics based. People aren’t starving enough for it to be effective.

  50. charred-triumph says:

    Speaking of asshole atheism, my dad told me yesterday about a statue of Periyar, a south Indian atheist (and in particular anti-Brahmin) political leader, which his followers erected directly in front of a major Hindu temple. There’s an inscription below the statue saying “there is no god” in Tamil and English.

  51. Rob says:

    Just here to pick up some prediction points for saying this on the original Motte and Bailey post (http://slatestarcodex.com/2014/07/07/social-justice-and-words-words-words/):

    “…my point is that most ideas can have this tactic applied to them, and many do. And this means that this pattern is likely to highlight a lot of false positives, of the “Fallacy Fallacy” type. It’s not enough to observe that an idea has a strong and a weak version, you have to observe the tactic itself being executed. I don’t think Scott is wrong in this case, but I urge caution in applying this shiny new rhetorical tool.

    [I’m trying] to get a head-start in countering the widespread misapplication that I predict will occur if this motte-and-bailey idea goes mainstream.”

    • Jiro says:

      I’m not sure that most instances that look like what you describe really are misuses. It doesn’t need to be the same individual person using the motte and the bailey for the practical effect to be the same as a motte and bailey. This is especially so where terms are being used by a movement.

      • Anonymous says:

        Indeed, Ozy’s post explicitly allows the use of the term where different parties supply the two positions. So Rob is wrong to invoke Ozy as vindication.

    • Not Robin Hanson says:

      This is not of the fallacy fallacy type. The fallacy fallacy is asserting that if the reasoning is fallacious, the conclusion must be false. This is distinct from a fallacious accusation of a fallacy.

      Yes, it can be, will be, and already has been used as the subject of a fallacy fallacy:

      1. Fallaciously accuse someone of committing motte-and-bailey.
      2. Assert that since they are using motte-and-bailey, their conclusion is wrong. (Fallacy fallacy.)

      But the fallacious accusation and the fallacy fallacy occur in distinct steps.

      (If I were less scrupulous, I could assert that since you committed a fallacy fallacy, everything about your post is wrong: a fallacy fallacy fallacy.)

  52. lambdaphage says:

    The results of the ideal mate study seem to square with a position I’ve heard voiced elsewhere: that modern relationship pairings are more suited to efficient consumption (i.e. through shared tastes) than efficient production (i.e. paid vs. domestic labor). As you might expect when the world gets richer.

  53. TheAncientGeek says:

    > The Man Who Called Gandhi A Sissy – pretty interesting Economist article on Vinayak Savarkar, the founder of modern Hindu nationalism and of a huge Indian movement that spawned, among other things, India’s ruling BJP party and its prime minister Narendra Modi. Interesting fact – despite being a Hindu supremacist obsessed with getting all Indians to convert to Hinduism, he didn’t think highly of the Hindu religion itself – “he himself was an atheist and disapproved of aspects of traditional Hindu belief, dismissing cow worship as superstitious”

    This sort of thing is so common, it is surprising there isn’t a name for it. Or is there?

  54. namae nanka says:

    ‘My recent post on nerds and feminism was something I wrote in anger and anxiety’

    hahaha. Feminism and the search for truth ended in David Stove’s Farewell to the Arts.

    Anyway, the recent Ceci and Williams humongous paper had some surprising bits hidden in there. That, plus some of my corrections and other gender-equality hijinks, here:

    http://www.reddit.com/r/FeMRADebates/comments/2l5jpz/academic_science_isnt_sexist/

  55. R. says:

    “Extremely high numbers of Japanese do not find sex appealing – 45% of women and 25% of men ages 16 to 24 are not interested in or despised sexual contact.”

    Why would they find sex appealing if they’ve never really had any?

    It’s easier and more convenient to masturbate to pornography, which is reportedly extremely varied and quite widespread in Japan. It’s also addictive and reputed to make heavy users unable to have sex.

  56. Jinnayah says:

    Commenting here b/c comments for this post seem to be closed (emphasis added):

    If you have an actual thing you’re trying to debate, then it should be obvious when somebody’s changing the topic. If working out who’s using motte-and-bailey (or weak man) is remotely difficult, it means your discussion went wrong several steps earlier and you probably have no idea what you’re even arguing about.

    The Battle for Yellowstone: Morality and the Sacred Roots of Environmental Conflict (Princeton Studies in Cultural Sociology) is an upcoming book that provides a deep and broad case-study example of this, according to The Economist‘s USA columnist, “Lexington” (Jan 3 issue, “Ranchers v bison-huggers“):

    … In short, all sides purport to be weighing what is true and false, while really arguing about right and wrong.
    Pro-wolf biologists and officials call themselves dispassionate custodians of a unique place. But they give themselves away with quasi-spiritual talk of wolves restoring “wholeness” to a landscape damaged by man. Indeed, when the first Yellowstone wolves were released in 1995, the then-interior secretary, Bruce Babbit, called it “a day of redemption.” …
    As for the anti-wolf types, when offered financial compensation for wolf-attacks on their livestock, some turn it down—suggesting that more than economics is at stake. … Many “Old West” types see a plot to drive ranchers from the land. They talk of “federal wolves” undermining their property rights, and challenging the God-ordained duty of humans to protect their own families, and exercise dominion over Creation.