NꙮW WITH MꙮRE MULTIꙮCULAR ꙮ

Links 5/17: Rip Van Linkle

Multiocular O, “the rarest Cyrillic letter”, used only to describe the eyes of seraphim. Kind of sounds like something out of a Borges book.

More on Low-Trust Russia: Do Russian Who Wants To Be A Millionaire contestants avoid asking the audience because they expect audience members to deliberately mislead them?

Xenocrypt on the math of economic geography: “‘A party’s voters should get more or less seats based on the shape of the monotonic curve with integral one they can be arranged in’ might sound like a very silly belief, but it is equivalent to the common mantra that you deserve to lose if your voters are ‘too clustered’”

Please stop trying to “buy Congress’ Internet history” to “punish” them for “ending Internet privacy”. Please stop donating to crowdfunding campaigns promising to do this. Please stop claiming that now anyone can learn what you read on the Internet in a personally identifiable way. And please remember that the sense in which they “ended Internet privacy” was “they repealed a less-than-one-year-old regulation that hadn’t come into effect yet, changing literally nothing” (though see here for counterargument)

Facebook plans to launch GoFundMe-style fundraising tool. Seems like a good business move, though a little bit monopoly-ish.

Amber A’Lee Frost on attending Left Forum. “At its best, Left Forum remains a reassuring beacon of camaraderie and ambition…at its worst, however, Left Forum is Comic Con for Marxists — Commie Con, if you will—and an absolute shitshow of nerds and social rejects.”

Contra stereotypes, at least one study shows autistic children are more likely to share.

A combination men’s business suit / onesie is a thing that exists and that you can pay $378 for. The company involved being called “Betabrand” might be a little too on the mark, though.

Largest ever study on sex differences in the brain finds the usual – sex differences definitely exist and are significant, but there are nevertheless large areas of overlap between sexes in pretty much everything.

Okay, look, I went way too long between writing up links posts this time, so you’re getting completely dated obsolete stuff like Actually, Neil Gorsuch Is A Champion Of The Little Guy. But aside from the Gorsuch reference this is actually pretty timeless – basically an argument for strict constructionism on the grounds that “a flexible, living, bendable law will always tend to be bent in the direction of the powerful.”

Epidemiology buffs, is this true? US life expectancy, long believed to be one of the worst in the developed world, is actually the best in the developed world if you correct for our very high violent death rate. [EDIT: This CDC paper investigates fewer causes of violent death but might get proportionally similar results]

The Kernel Project is an in-planning rationalist group house and community center in Manchester, UK.

Chicago mayor Rahm Emanuel proposes denying diplomas to students leaving high school unless they can provide a “plan for their future” – acceptance to college or some kind of trade. Current Affairs has pretty much the right take with Rahm Emanuel’s College Proposal Is Everything Wrong With Democratic Education Policy, although I might have used the words “tulip subsidies” a few more times.

The company that makes Taser is offering free body cameras to every police officer, although this might just be part of a plot to get police locked into their system so they can jack up prices later.

Reductress: Are You Dating, Or Just Friends Who Have Sex And See Each Other Five Times A Week? This is even more confusing when you’re poly.

Otium: Are Adult Developmental Stages Real? Looks at Kohlberg, Kegan, etc.

Edge asks “What do you consider the most interesting recent scientific news”. Evo psych founder John Tooby answers: the race between genetic meltdown and germline engineering.

FDA agrees to let 23andMe start telling people their genetic disease risk again. Seems to be less of a Trump pivot than a carefully-considered decision that, whatever point they were trying to make by randomly impeding technological growth and preventing people from getting important health information, they had apparently finished making it.

Beeminder adds a feature to automatically beemind your writing by tracking word count.

I mentioned the debate over 5-HTTLPR, a gene supposedly linked to various mental health outcomes, in my review of pharmacogenomics. Now a very complete meta-analysis finds that a lot of the hype around it isn’t true. This is pretty impressive since there are dozens of papers claiming otherwise, and maybe the most striking example yet of how a finding can appear well-replicated and still fail to pan out.

Rootclaim describes itself as a crowd-sourced argument mapper. See for example its page on who launched the chemical attack in Syria.

Apparently if you just kill off all the cells that are growing too old, you can partly reverse organisms’ aging (paper, popular article)

Pope John XIX ruled from 1024–1032; Pope John XXI ruled from 1276–1277. It wasn’t until years later that the Catholic Church realized they had gotten confused and accidentally skipped over having a Pope John XX.

[Small brain]: Attachment style toward parents
[Bigger brain]: Attachment style toward peers
[Giant glowy brain]: Attachment style toward God

Overcoming Bias on the role of jargon and mythology: “Similarly, religions often expose children to a mass of details, as in religious stories. Smart children can be especially engaged by these details because they like to show off their ability to remember and understand detail. Later on, such people can show off their ability to interpret these details in many ways, and to identify awkward and conflicting elements. Even if the conflicts they find are so severe as to reasonably call into question the entire thing, by that time such people have invested so much in learning details of their religion that they’d lose a lot of ability to show off if they just left and never talked about it again. Some become vocally against their old religion, which lets them keep talking and showing off about it. But even in opposition, they are still then mostly defined by that religion.” Of course, I wouldn’t know anything about that.

From Garrett Jones on Twitter: no correlation between a country’s change in education and its change in growth rate.

List Of Greek And Roman Architectural Records. Did you know Constantine’s bridge across the Danube was over a mile long?

The American Federation Of Teachers, one of the US’ largest teachers unions, comes out in favor of bombing Syria. I feel like this is some sort of reductio ad absurdum of unnecessary politicization of stuff.

Some past studies that I took somewhat seriously suggested that antidepressant use during the first trimester of pregnancy could slightly raise autism risk. The latest very large study fails to replicate this result and finds only a slightly increased risk of preterm birth.

The person who put together the list of vindicated scientific mavericks responded to my criticism here; I responded to the response here.

The Politics Of The Gene: “Contrary to expectations, however, we find little evidence that it is more common for whites, the socioeconomically advantaged, or political conservatives to believe that genetics are important for health and social outcomes.”

Related: the hereditarian left. This seems like as close to a useful self-identifier as I’m going to get.

White House refuses to give Exxon Mobil special waiver to drill in sanctioned Russia. I want to emphasize how proud I am of (some parts of) America right now. Our Secretary of State is the former CEO of Exxon Mobil, our President is widely suspected of having benefitted from Russian interference in his election, but the government is still able to rule against Exxon and Russia when it needs to. Given how corrupt half of what we do is, it’s nice to know we have some weird hidden talent at not-being-corrupt that we can pull out sometimes.

More interesting techniques for surveying scientists and sounding out consensus: “As level of expertise in climate science grew, so too did the level of agreement on anthropogenic causation…the respondents’ quantitative estimate of the [greenhouse gas] contribution appeared to strongly depend on their judgment or knowledge of the cooling effect of aerosols.” Also: “Respondents who characterized human influence on climate as insignificant, reported having had the most frequent media coverage regarding their views on climate change.”

Siberian Fox linked me to two studies that somewhat contradicted my minimalist interpretation of childhood trauma here: Alemany on psychosis and Turkheimer on harsh punishment.

Deep learning system is able to generate new poems on arbitrary topics. See page 6 for its poem about bipolar disorder, which passes the Emo Teenage Girl Turing Test with flying colors.

A certain population in Bosnia is found to be the tallest in the world, likely for genetic reasons (study, popular article). This sort of thing drives me berserk; everyone can talk about between-populations genetic variation in height as if it’s so obvious it doesn’t even need defending, and then as soon as someone mentions between-populations genetic variation in cognitive abilities, it’s “Haven’t you heard? Scientists proved race is a social construct!” People should either be frantically trying to debunk all of these height-related claims, or else shrugging and saying “yeah, that’s a plausible minor extension of the existing literature” when they read cognition-related claims.

More evidence linking BDNF to depression: it appears to be a good biomarker for antidepressant treatment response. Usually my eyes start rolling when I see “psychiatry” and “biomarker” in the same paper, but with an n = 6000, d = 1.3, and p = 4.4e-7, I am grudgingly prepared to take note. Extra neat – it’s serum rather than CSF, so we might actually be able to use it in real life.

Pictures of big data dot tumblr dot com

Matthew Yglesias changes my mind and convinces me that Obama accepting a $400,000 Wall Street speaking fee is bad. Basic argument: as long as corporations can offer politicians lucrative deals after they retire, they can reward pro-corporate decisions with plausible deniability, which incentivizes politicians to be pro-corporate. If you’re anti-corporate, this is directly bad; if you’re pro-corporate, this makes it impossible to convince people that you’re really making well-considered decisions in their best interests and not just being corrupt.

There have been a lot of hot takes that the March For Science was bad in some vague way (see eg Slate’s here), but despite sharing their intuition of discomfort none of them really rang true to me. One thing that did strike me was this tweet about the focus on funny signs and who had the best costume. It seems to me that if we were protesting something genuinely awful (like a genocide abroad), we wouldn’t wear silly costumes and funny signs. Does that mean that a decision to go ahead with the signs and costumes reflects some kind of subconscious feeling that this isn’t really that bad, or a motivation springing from something other than true outrage?

Lyrebird is an AI project which, if fed samples of a person’s voice, can read off any text you want in the same voice. See their demo with Obama, Trump, and Hillary (I find them instantly recognizable but not at all Turing-passing). They say making this available is ethical because it raises awareness of the potential risk, which a Facebook friend compared to “selling nukes to ISIS in order to raise awareness of the risk of someone selling nukes to ISIS.”

Rod Dreher’s Monastic Vision. I had always thought of Rod Dreher as some sort of crotchety conservative blogger who was deeply concerned about The Gays. Apparently he is actually a tragic figure resembling an Old Testament prophet come to life. I regret the error.

Current Affairs on the back-stabbing, infighting, and comical errors of Hillary Clinton’s campaign. Although of course if a handful of Rust Belters had voted differently, we’d be praising every one of these people as geniuses right now.

The magazine The American Interest recently published a print version of my essay Considerations On Cost Disease. And here’s the editor’s commentary and proposed explanation.

FHI’s April Fools’ joke – a paper On The Impossibility Of Supersized Machines. Size isn’t even a well-defined natural concept, so how could machines ever become “larger” than humans?

The Myth Of Superhuman AI is yet another poorly thought-out repetition of the same anti-AI claims, and in some cases uses exactly the arguments the article above is parodying. But I link it because it’s the first article that explicitly claims that the “scientific consensus” is in favor of superintelligence, saying things like “a panel of nine of the most informed gurus on AI all agreed this superhuman intelligence was inevitable and not far away”, and that it wants to distinguish itself from the “orthodoxy”. I’m not sure that’s quite right, but it’s nice to see the criticism shift from “stupid crackpot idea that no sane person believes” to “entrenched scientific orthodoxy that must be challenged”, even if I do wish we’d been able to spend at least a little time as “plausible idea that should be approached with interest and curiosity”.

Related: Siberian Fox – “Before thermometers, people mocked the idea of temperature ever being measurable, with all its nuance, complexity, and subjectivity.”

Freddie deBoer gives lots of evidence that there is no shortage of qualified STEM workers relative to other fields and the industry is actually pretty saturated. But the Wall Street Journal seems to think they have evidence for the opposite? Curious what all of the tech workers here think.

Also, I can’t remember if I’ve recommended Freddie deBoer’s new education science blog ANOVA on here yet, but you should definitely read it. He’s one of the most engaging writers I know, plus also one of the few people I really trust to report on scientific research accurately, plus also has a rare gift to write about politics without making me want to scream at my computer. See also: his Patreon.

80,000 Hours presents what they recommend as a rare actually-evidence-based self-help career guide. I am a little skeptical of the billing – the “evidence” is mostly along the lines of “a popular book written by science-y sounding person recommended this”, and there are actually ten million different self-help guides that do that kind of thing. But it’s not bad advice and if you’re looking for self-help you could probably do worse.

Scott Sumner: How Can There Be A Shortage Of Construction Workers? That is, is it at all plausible that (as help wanted ads would suggest) there are areas where construction companies can’t find unskilled laborers willing to work for $90,000/year? Sumner splits this question in two – first, an economics question of why an efficient market wouldn’t cause salaries to rise to a level that guarantees all jobs get filled. And second, a political question of how this could happen in a country where we’re constantly told that unskilled men are desperate because there are no job opportunities for them anymore. The answers seem to be “there’s a neat but complicated economics reason for the apparent inefficiency” and “the $90,000 number is really misleading but there may still be okay-paying construction jobs going unfilled and that’s still pretty strange”.

Roscoe Arbuckle, one of the most famous silent movie actors, had his career destroyed by a Trial-Of-The-Century-style rape scandal that sounds like a 1920s version of the UVA Rolling Stone case. Key quote: “The jury began deliberations April 12, and took only six minutes to return with a unanimous not guilty verdict — five of those minutes were spent writing a formal statement of apology to Arbuckle for putting him through the ordeal…After the reading of the apology statement, the jury foreman personally handed the statement to Arbuckle who kept it as a treasured memento for the rest of his life. Then, one by one, the entire 12-person jury plus the two jury alternates walked up to Arbuckle’s defense table where they shook his hand and/or embraced and personally apologized to him”. Also a good example of how it doesn’t matter what the justice system finds as long as an industry is controlled by people happy to blacklist you for being unpopular. Also, trigger warning for…fatphobia? That wasn’t the trigger warning I was expecting to have to give, but it’s definitely needed here.

US Supreme Court rejects the argument that states can keep certain suspects’ money even after they are found innocent. This seems like a kind of niche situation, but the article correctly points out that it establishes a strong precedent that might be applied later to rein in civil forfeiture, which is anything but a niche problem.

Also, Alyssa Vance on Facebook on law: the test cases that set Fourth Amendment precedent will inevitably be ones where defendants are clearly guilty, biasing judges in favor of expanding police search powers.

Study which is so delightfully contrarian I choose to reblog it before reading it all the way through: mandatory class attendance policies in college decrease grades by preventing students from making rational decisions about when and how to study.

You’ve probably heard of Vantablack, the “world’s blackest pigment”, and seen the creepy pictures. But it’s proprietary, requires special equipment to apply, and you can’t have it. Now artist Stuart Semple has released an open-access version that anyone can use – except, presumably, Anish Kapoor.


915 Responses to Links 5/17: Rip Van Linkle

  1. cassander says:

    > I want to emphasize how proud I am of (some parts of) America right now.

    “Pride in yourself is a vein emotion,” Tom said sanguinely.

    • liz says:

      This is brilliant. I can’t believe no one threw you an attaboy…
      So I’ll do it.
      Well done! 🙂

  2. Anonymous Bosch says:

    > This sort of thing drives me berserk; everyone can talk about between-populations genetic variation in height as if it’s so obvious it doesn’t even need defending, and then as soon as someone mentions between-populations genetic variation in cognitive abilities, it’s “Haven’t you heard? Scientists proved race is a social construct!” People should either be frantically trying to debunk all of these height-related claims, or else shrugging and saying “yeah, that’s a trivial extension of the existing literature” when they read Nicholas Wade.

    And this sort of thing drives me berserk; people getting so high on their own meta-contrarian virtue that they engage in weakman/steelman distortions while calling for a more honest debate. As if “race is a social construct” is the most sophisticated criticism of the hereditarian position, or as if Wade’s sweeping and entirely speculative just-so stories about how, say, the Chinese have an inherited disposition towards authoritarianism are “a trivial extension” of “between-populations genetic variation in cognitive abilities.”

    • Anonymous Bosch says:

      I don’t know what the most sophisticated criticism is, but an obvious and common one is that the science is pretty goddamn far from settled, and premature, conclusory speculation about it, especially from a position of authority, has the potential for massive harm.

      I’m not saying it’s Roko’s basilisk, but I am saying maybe don’t be the guy who writes a book explaining why Jared Diamond was a pussy for ignoring the Western Civilization Gene explanation for European dominance. Or for that matter the other guy who writes a book saying “if the reader is now convinced either the genetic or environmental explanation has won out, we haven’t done a good job, by the way here’s two whole chapters on why affirmative action doesn’t work for all those badbrains!”

      Or, and this is the kicker, the guy who praises them for being honest and breaking taboos without seeming fully cognizant of the directions they take it in when getting prescriptive. I don’t think the hereditarian left is necessarily a bad idea. Hereditarian ideas are at least plausible and could well turn out to be proven true beyond Galton’s wildest fantasies. But until then, and even after then, I would suggest being a little bit more critical from the “left” perspective when evaluating the work of those who only agree with you on “hereditarian.”

      • Anonymous Bosch says:

        We agree on that and I don’t condone Middlebury-style assaults on people like Charles Murray.

        But epistemic humility is a two-way street, and free speech or not I’ll reserve the right to get intensely irritated at him, Wade, and other pundits and non-scientists who affect a sort of sweeping certainty about issues (like the genetic component of inter-population IQ gaps) which still radically divide scientists at the bleeding edge of research.

        My primary hope for a hereditarian left would be that they can open up this debate because they won’t simply be rushing headlong to justify the same regressive bullshit that kicked around when it was skull shapes and not SNPs. Someone like Freddie deBoer can make this case to people with a humility and rigor that the hereditarian right simply can’t, because their motives for talking up genes are (rightly) suspect.

        EDIT: I also don’t condone banning books or any of that other stuff you edited in, and yeah I read the parable about The Brother And The Jazz Piano. I’m making an intellectual argument about the care with which ideas should be approached, not a legal argument about what speech should and shouldn’t be permitted. Legally, sure, knock yourself out, write all the microwaved Lothrop Stoddard you can churn. Just don’t tell me it’s science and not science-flavored trolling.

      • reasoned argumentation says:

        > Also, I hate to say this, but everyone always uses skull shape as a “look how terrible this debate used to be” gotcha, but as far as I can tell most of what people said about skull shape (different groups have different cranial capacity, which correlates with intelligence) is basically true. I’m not sure how that’s knockdown proof that everyone in this field used to be terrible.

        Well there was that intentionally falsified data about skull volume that was made up for political purposes:

        http://www.nytimes.com/2011/06/14/science/14skull.html

        > Dr. Gould did not measure any of the skulls himself but merely did a paper reanalysis of Morton’s results. He accused Morton of various subterfuges, like leaving out subgroups to manipulate a group’s overall score. When these errors were corrected, Dr. Gould said, “there are no differences to speak of among Morton’s races.”

        > But Dr. Gould himself omitted subgroups in his own reanalysis, and made various errors in his calculations. When these are corrected, the differences between the racial categories recognized by Morton are as he assigned them. “Ironically, Gould’s own analysis of Morton is likely the stronger example of a bias influencing results,” the Pennsylvania team writes.

        Side note – you can leave these sorts of arguments to men like me in the comments rather than making them yourself if you feel uncomfortable.

    • Scott Alexander says:

      It was dumb of me to mention Wade, who I thought would be a good inoffensive reference point but who, as you say, was mostly wrong. I’ve edited the post a bit. I’ve also deleted the rest of my replies to this thread since we seem to agree on most things and I should probably enforce my own rules on myself.

      • HeelBearCub says:

        > I’ve also deleted the rest of my replies to this thread

        This seems … bad?

        You should be willing to be wrong and be called out and have that evidence available. If nothing else because it serves as a model of what being wrong and admitting it is like. And because it shows bad arguments being corrected.

      • J Mann says:

        Scott, if you’re taking suggestions, I’d love to see either some editing notes at the bottom or just a strike-through and updated text with a note about why. I appreciate your willingness to reconsider, but sometimes the process by which you change your mind is the most interesting thing, IMHO.

        Also, without the comments, I’m wondering what AB is referring to by more sophisticated critiques.

        Anonymous Bosch – can’t we at least get rid of the stupid arguments so we can hear the more sophisticated critiques? Any discussion of cognitive distribution gets an immediate “don’t you know race is a social construct” response, often from very smart people. If you’re interested in balancing the arguments, it wastes a lot of time sorting through them for the sophisticated ones.

        • Bugmaster says:

          You may find these arguments stupid, but I still think deleting them is a bad idea (unless they are also abusive somehow). Otherwise, you’re on a very real slippery slope from deleting anything with the words “social construct” in it, to deleting anything you personally disagree with just because it sounds “stupid” to you.

          • Anonymous Bosch says:

            To be clear, that isn’t why this reply chain was deleted. It got way off track and Scott and I didn’t disagree as much as either of us thought and I probably got a little too het up. I’d rather it was still here but it’s not because of some sinister thoughtcrime on my part.

          • J Mann says:

            This is my day for expressing myself even less clearly than usual! What I meant to say was:

            1) Scott – I’d love it if you preserved a note of what you changed and why, because I think that would be very informative.

            2) Anonymous Bosch – maybe we could try to stamp out “race is a social construct” as a rejoinder to the Jensen/Sailer/whomever hypothesis, because it’s stupid and then we could focus on the more thoughtful arguments. By stamp out, I don’t mean “cast into the memory hole” but just tell people why it’s stupid and what arguments are better.

      • kronopath says:

        The reason why people freak out against any mention of “between-population genetic cognitive differences” is that usually the people who argue in favour of ideas like these follow it up with “and therefore low-IQ races should be ruled over by high-IQ ones” or “you can make assumptions about an individual’s intelligence by looking at their skin colour and where they’re descended from” or “you shouldn’t date or marry that person, they’re an X and Xs are genetically predisposed to crime!”. Give that kind of stimulus often enough and people end up reacting like Pavlov’s bell to any mention of the idea of cognitive population differences. Combine that with the fact that it’s always easier to say “no, fuck off” than it is to say “you’re partially right, but your conclusions are both wrong and awful”, and that the latter feels like ceding ground to an opponent, and you are setting yourself up for a very hard argument if you start talking in support of cognitive population genetics.

        • > is that usually the people who argue in favour of ideas like these follow it up with “and therefore low-IQ races should be ruled over by high-IQ ones” or “you can make assumptions about an individual’s intelligence by looking at their skin colour and where they’re descended from” or “you shouldn’t date or marry that person, they’re an X and Xs are genetically predisposed to crime!”.

          Oddly enough, I do not believe I have ever heard anyone make any of those arguments, possibly because none of those conclusions follow from the evidence on different distributions of abilities. If we switch from the racial version of the question to the gender version, your final point only works if the audience is gay.

          So far as the idea that skin color determines intelligence, Thomas Sowell is very black, very smart, and popular with conservatives–the very people who, if your account were true, would assume he had to be stupid.

          Would you like to offer some evidence for your claim? Charles Murray is routinely charged with believing there are relevant black/white differences. Can you find anything he has written that supports any of what you are saying?

          • Moldbug.

            > Let us work up from order to Carlyle’s theory of slavery. If you can understand slavery through Carlyle’s eyes – and he is one of the few theoretical defenders of slavery in the last two centuries, the only other I can think of offhand being George Fitzhugh – nothing in Carlyle will shock you, unless you are unaware of current results in [redacted].

          • nimim.k.m. says:

            The example you want is not Charles Murray (I assume he is a reputable person).

            The example you do want is your local neighborhood variant of the stereotypical racist uncle. Consider yourself lucky if you’ve never met one. I have a couple in the extended family.

          • John Schilling says:

            How do you know that David Friedman is particularly lucky in that regard, as opposed to your being particularly unlucky?

          • nimim.k.m. says:

            Well, I was not thinking in terms of probability distributions. If only very few are unlucky, from their viewpoint the lucky should still remember that they are lucky.

          • Brad says:

            I don’t think the stereotypical racist uncle is talking about “between-population genetic cognitive differences”. For better or worse, on a volume adjusted basis we are looking at a relatively small group of mostly young, mostly white, mostly men, mostly upper middle class+, mostly online.

            Some no doubt ended up with this topic in the lottery of fascinations, but an awful lot of them have seemingly become obsessed with the subject because they want ammunition for their preexisting edgy political positions.

          • kronopath says:

            I’m not talking about people like Charles Murray. Most people don’t know or care who he is. I’m talking about the kinds of people who spew false racist talking points like “The richest black community in the US has more crime than the poorest white community,” implying that black people are predisposed towards crime due to reasons inherent to blackness. I’m talking about those historical “scientific” racist comparisons of the skulls of other races to monkeys, which imply that those races are stupider.

            The problem isn’t so much with Charles Murray (though I am admittedly unfamiliar with his work) as it is with the people who look at his arguments, extrapolate them to extremes, and use them to justify pre-existing hatred. That makes it a lot harder for anyone like Murray to have a good-faith discussion, because people reflexively lump him in with the worst of the hate-spewers.

            It also doesn’t help that people and communities trying to talk about this end up falling prey to that one phenomenon that Scott has written about recently.

          • > The problem … it is with the people who look at his arguments, extrapolate them to extremes, and use them to justify pre-existing hatred.

            1. If the hatred already exists, they don’t need arguments to justify it. They can always invent facts as needed–as you describe them doing.

            2. If they look at his arguments, more generally at the data, they will observe that even if averages are different, the distributions heavily overlap, which means that they do not justify the conclusions we are discussing.

            3. One consequence of treating the topic as taboo is that people with such beliefs don’t look at the data, which makes it easier for them to hold the beliefs you describe–easier still if they ask why the topic is taboo and reach the obvious conclusion.

            4. If the topic is not taboo, then one of the facts people will discover is that East Asians have a higher average IQ than whites, which makes it less likely that they will be comfortable with arguments claiming that higher IQ races should rule.

            I can see two significant effects of treating the topic as taboo. One is to reduce the amount we know about it. The other is to defend incorrect arguments about discrimination and policies that follow from those arguments.

            Which suggests that those may be the results desired by the people who want to treat Murray, or anyone else who publishes research on the subject, as pariahs.

    • Conrad Honcho says:

      To be honest, I don’t know what the good arguments against the hereditarian model of intelligence are. Or rather, the strong arguments for an alternative model. All I usually see is either the “social construct” meme, or Stephen Jay Gould-esque handwavery that amounts to isolated demands for rigor. So, something like “IQ doesn’t really exist” (but then we have studies that show it pretty much does, and predicts lots of things) or “the tests are culturally biased” (and so researchers have gone to great lengths to make unbiased tests…that pretty much show the same things as the biased tests).

      It seems to me that if one believes IQ tests don’t really measure intelligence and wants to blow racists out of the water for all time, it would be extremely productive to create an accurate model of intelligence and prove that every ethnic group has a similar distribution.

      • dndnrsn says:

        @Conrad Honcho

        You begin by talking about the hereditarian model of intelligence, but then shift to discussing the idea of genetically-based intelligence (no hereditarian says genetics are 100%, after all) varying by racial or ethnic group. These are two different things.

        If discussing the latter, there are arguments ranging from the surprising and sudden narrowing of the Irish-English IQ gap, to the fact that there are groups once stereotyped as stupid that are now stereotyped as intelligent (100 years ago, was the stereotype in North America of the Chinese that they were intelligent?) or not stereotyped at all (the Irish, again, or the Italians, are two examples). I think there is a great deal of environmental “damage” (in the form of the effects of poor nutrition in the womb or childhood, exposure to toxins, etc.) to people’s IQ, which some (for various reasons) count as a mark against their innate potential.

        EDIT: As to why these arguments get made less often than one might expect, there are a few reasons. One is that there are elements of the left (mostly the academia-overlapping bits) that really like social constructionism, and would like to be able to say that most things (or everything) are social constructs. (I think this is bad, because it leads to not being able to perceive reality correctly; the combination of the left taking over much of academia and the influence of social constructionism is a bad one.) Second, hereditarian models of intelligence have been used to justify some awful things (racially-based models or not), so there is a genuine moral motivation there. Third, the sort of people who sit down and think about intelligence tend to be intelligent themselves, and a decent chunk of them seem to like the idea that intellect, rather than being outside their control, reflects well on them. Imagine if people were praised as much for being able to reach high shelves as they are for doing well in school; doubtless some tall people would like the sound of the idea that their height is a virtue they worked for.

        • Steve Sailer says:

          “(100 years ago, was the stereotype in North America of the Chinese that they were intelligent?)”

          Yes.

          About 150 years ago Francis Galton suggested that the Chinese would make far more out of Africa economically than Africans would.

          • JulieK says:

            That only tells us that he thought they were more intelligent than Africans. How did he think they compared to Europeans?

          • dndnrsn says:

            Francis Galton thinking Chinese smarter than Africans is not the same thing as the North American stereotype of Chinese 100 years ago.

        • Careless says:

          As I understand it, the 19th-early 20th century stereotype about the Chinese has always been that they’ll take the jobs of white people. Which was not something that white Americans thought about blacks or Latinos.

      • INH5 says:

        Personally, I think the strongest arguments against the hereditarian model of intelligence, or at least the “black people are less intelligent than white people because natives of Sub-Saharan Africa are genetically less intelligent than natives of Europe” claim that is traditionally associated with it, are the sadly underpublicized arguments of Chanda Chisala.

        Granted, I’m not an expert in the field, but in my view Chisala has presented some very strong evidence that the traditional hereditarian model of intelligence fails to predict the cognitive abilities of native Sub-Saharan Africans after environmental factors are taken out of the equation, while hereditarian commenters haven’t presented anything nearly as strong in response to his arguments.

        The usual hereditarian response to, for example, the high performance of black African and Caribbean immigrants to Western countries is to say that this is the result of high immigrant selection for intelligence, but Chisala has provided a decent amount of evidence against this. He has examined the performance of the children of these immigrants, which hereditarian models predict would regress towards the mean and perform much less well than their parents, and finds little evidence of any significant regression. He has looked at the performance of highly unselected immigrant populations like Somali and Ethiopian refugees and found that they too outperform native US blacks.

        Most convincingly, in my view, Chisala has compared the performance of black African immigrant groups to immigrant groups of other races that are presumably just as selected for intelligence. He finds that some African immigrant groups do just as well as or even better than other non-black immigrant groups. For example, one study in the UK found that Yoruba students from Nigeria score higher on standardized tests than Chinese-speaking students (compare the “5+ A*C incl EM” column in this graph to this graph) – quite impressive when you consider that hereditarian models typically predict that East Asians are genetically smarter than Europeans.

        And this isn’t even touching on the issue that hereditarian models predict that American blacks should be more intelligent than black Africans, because the former have on average ~20% white ancestry.

        More recently, Chisala has taken a different approach, examining the world rankings of cognitively demanding games like Scrabble and draughts. In his research, he found a significant number of black African world champions of these games. Hereditarian intelligence models predict that black Africans should be vanishingly rare at the highest levels of competition, for the same reason that people of West African descent dominate Olympic sprinting competitions: even a modest difference in means produces a huge difference at the tails. The evidence here isn’t totally airtight – for example, there aren’t many black African world champions in chess, though Chisala claims that this is explained by the fact that succeeding at high levels of chess competition requires extensive study of chess theory, which for obvious reasons is much harder to accomplish in Sub-Saharan Africa than in, say, Russia. Still, I think this is a significant challenge to traditional hereditarian models of intelligence, and I find the responses by hereditarians in the comments to be mostly weak and hand-wavy.

        Of course, this doesn’t falsify hereditarian models of group differences in intelligence in general. It isn’t impossible to come up with reasons why native US blacks could be genetically less intelligent than native black Africans. Chisala himself has proposed such a hypothesis, involving the partial white ancestry of American blacks coming from lower-quality stock (personally I highly doubt this, because for obvious reasons most white ancestors of American blacks were slave owners, and since slaves weren’t cheap those definitely weren’t the dregs of society). But the evidence is sufficient to punch a large hole in existing hereditarian models, and I think it is a testament to the poor quality of modern discourse that evidence like this is largely ignored in favor of much weaker claims like “race doesn’t really exist.”

        • Douglas Knight says:

          For example, one study in the UK found that Yoruba students from Nigeria score higher on standardized tests than Chinese-speaking students

          That’s not what the chart says. First of all, you are comparing Chinese EAL to all Yoruba. Second, Chinese score higher (388) than Yoruba (365). The only way the Yoruba beat the Chinese is on the threshold measure of passing 5 tests at a high grade, including English. But of course the EAL students do badly on the English test.

          • INH5 says:

            The second chart is separated into “EAL” and “English First Language.” The first chart separates the various black African groups by “First Language.” So if I’m reading them right, everyone in the “Yoruba” group in the first chart would fall under the “EAL” category. Any Yoruba who had English as their first language would presumably fall under the “English” row of the first chart.

            However, I suppose it is possible that Nigerian immigrants tend to speak and read English better than Chinese immigrants, so this is a valid objection.

          • Douglas Knight says:

            No, you aren’t reading it right. You can check my interpretation by looking at N or computing weighted averages.

            Added: also, EAL/ESL doesn’t mean that English wasn’t the first language. It means that the student was specifically given additional resources for being bad at English, so it is conditioning on an English test.

          • Jiro says:

            And black Africans are a very heterogeneous group, even if they all have dark skin color. It’s perfectly possible that some groups of them are high intelligence and others aren’t.

          • INH5 says:

            @Douglas Knight: Are you sure? Because the numbers for the “English” row in the first chart are exactly the same as the numbers in the “English First Language” columns in the “Black African” row in the second chart. Best 8 Mean is 356.2, 5 test passing rate is 69.1%.

            @Jiro: True, but some of the high-performing African ethnic groups that Chisala talks about also made up a large portion of the African slaves that were sent to the New World. For example, a lot of his examples belong to the Igbo ethnic group, which by some estimates made up around 15% of all African slaves shipped across the Atlantic Ocean. In Virginia in particular, somewhere around 30% of all slaves may have been Igbo.

          • Douglas Knight says:

            Oops, I missed the English line. But my other point remains: Chinese score higher, but pass lower. It is probably because of the required English test.

        • Enkidum says:

          These are really very good links. Are there any (good) extensive rebuttals to his articles? As you say, the responses in the comments are generally not great (at least to his first article).

          • Wrong Species says:

            Jay Man has a response. Make of that what you will.

          • Enkidum says:

            Yeah, I found that later. It’s… not particularly convincing, to say the least. Speaking as someone who is very much an outsider, ignorant of this field, reading something like Jay Man’s rebuttal makes me feel like I’m getting less informed, not more so.

            (That being said, the specific article he’s responding to is the least convincing of the Chisala articles I’ve skimmed over.)

        • Desertopa says:

          I think it’s a mistake to identify what Chisala is arguing against as “the hereditarian model of intelligence.” The popular discourse which most people are familiar with regarding hereditarianism is probably about whether certain races are more or less genetically predisposed to high intelligence, but this discussion is largely speculation built on the more scientifically mainstream discussion of hereditary intelligence within, rather than between, populations. I think Chisala is better understood, not as anti-hereditarian relative to the scientific mainstream, but as representing a different position within that scientific mainstream regarding how much measured differences between populations reflect innate genetic differences.

          The notion that there can’t possibly be any group differences in intelligence is almost certainly ideologically motivated, but that doesn’t mean the opposite notion, that the entirety of measured differences between populations is genetic, isn’t ideologically motivated as well. And scientists in the mainstream of heredity research generally don’t hold either position (not that ideologically motivated reasoning is unknown among them).

          • Enkidum says:

            To be fair, Chisala refers to it as the hereditarian model in at least one of those papers.

        • Conrad Honcho says:

          Fascinating, thank you! I’ve bookmarked those for reading later.

        • Squirrel of Doom says:

          This seems like an obvious argument, but I’ve never seen it made:

          One of the dominant factors of life in West Africa for a few centuries was the American slave trade. It seems natural that those who ended up getting captured and sold were on average less intelligent than those who avoided that fate.

          If that’s true, you’d expect a lower IQ among the slaves’ descendants in America, and a higher one among the descendants of the remaining population in Africa.

        • suntzuanime says:

          He has examined the performance of the children of these immigrants, which hereditarian models predict would regress towards the mean and perform much less well than their parents, and finds little evidence of any significant regression.

          Would the hereditarian model predict this? It seems to me that the selection takes place before the immigrants enter the United States, since the filter is whether they immigrate at all. So measurements taken afterwards, of performance of the immigrants once in the United States, aren’t being selected on and so shouldn’t create a regression towards the mean effect in any model, unless I’m missing something. You would only expect a regression towards the mean if you compared performance in the home country to performance after arriving, not comparing performance after arriving to children’s performance.

          • InferentialDistance says:

            No, it’s worse. You’d only expect regression to the mean if IQ is random centered on some sort of racially determined value. Hereditary intellect predicts the children of intelligent immigrants will also be intelligent, because the intelligent immigrants are intelligent because of their genes, and the children will inherit those genes and the corresponding traits, such as intelligence.

            What’s being asserted as the hereditary position is some nonsensical blacks-are-intrinsically-dumb-so-smart-blacks-are-an-outlier thing that doesn’t actually follow from genetics. Basically that smart blacks don’t have smart genes so their intelligence is a random quirk and their children will regress to the average of their (not smart) genes.

          • INH5 says:

            The hereditarian model of group differences in intelligence predicts this because we see exactly this sort of thing in other polygenic traits like height. From Wikipedia:

            The concept of regression comes from genetics and was popularized by Sir Francis Galton during the late 19th century with the publication of Regression towards mediocrity in hereditary stature. Galton observed that extreme characteristics (e.g., height) in parents are not passed on completely to their offspring. Rather, the characteristics in the offspring regress towards a mediocre point (a point which has since been identified as the mean). By measuring the heights of hundreds of people, he was able to quantify regression to the mean, and estimate the size of the effect. Galton wrote that, “the average regression of the offspring is a constant fraction of their respective mid-parental deviations”. This means that the difference between a child and its parents for some characteristic is proportional to its parents’ deviation from typical people in the population. If its parents are each two inches taller than the averages for men and women, on average, it will be shorter than its parents by some factor (which, today, we would call one minus the regression coefficient) times two inches. For height, Galton estimated this coefficient to be about 2/3: the height of an individual will measure around a midpoint that is two thirds of the parents’ deviation from the population average.

            (Emphasis added.)
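            Galton’s “constant fraction” rule is easy to make concrete. A minimal sketch in Python, using his ~2/3 estimate for height; the population mean and the parents’ heights below are invented for illustration, not data from the quote:

            ```python
            # Galton's regression rule for height: a child's expected deviation
            # from the population mean is a constant fraction (~2/3, by his
            # estimate) of the mid-parental deviation. Numbers are illustrative.
            POP_MEAN = 69.0   # assumed population mean height, in inches
            COEFF = 2.0 / 3   # Galton's estimated regression fraction for height

            def expected_child_height(parent_a, parent_b):
                midparent = (parent_a + parent_b) / 2.0
                return POP_MEAN + COEFF * (midparent - POP_MEAN)

            # Parents each 2 inches above average: the child is expected to be
            # taller than average, but shorter than the parents.
            print(expected_child_height(71.0, 71.0))  # → 70.333...
            ```

            The same interpolation applies to any polygenic trait: the child’s expected value sits between the parents’ value and the population mean.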

          • suntzuanime says:

            This means that the difference between a child and its parents for some characteristic is proportional to its parents’ deviation from typical people in the population.

            But the relevant population in this case is immigrants, because that’s the population being measured. So we expect to see the children of the best-scoring immigrants scoring worse than their parents and the children of the worst-scoring immigrants scoring better than their parents, for no net change population-wide.

            Regression towards the mean isn’t a real genetic effect, it’s a pure statistical artifact. Which means it’s very sensitive to how you measure and categorize things, and you need to be careful how you apply it.

          • INH5 says:

            But the relevant population in this case is immigrants, because that’s the population being measured. So we expect to see the children of the best-scoring immigrants scoring worse than their parents and the children of the worst-scoring immigrants scoring better than their parents, for no net change population-wide.

            No, the relevant population is the source population that the immigrants come from.

            The claim that Chisala is responding to is that African immigrants are highly selected for intelligence and thus only the highest IQ outliers in most African countries are able to immigrate to Western countries. He argues that if high-achieving African immigrants really are outliers then we should expect their children to regress towards a lower mean just like the children of outliers would in their home country. A plane ride doesn’t change genes.

            Now things do change if you get past the second generation. From what I understand, the grandchildren of the original immigrants won’t regress any further towards the mean of the source population, assuming endogamous mating in the second generation. But third and further generation African immigrants aren’t the subject of the discussion.

            Regression towards the mean isn’t a real genetic effect, it’s a pure statistical artifact. Which means it’s very sensitive to how you measure and categorize things, and you need to be careful how you apply it.

            I think we may be talking about different kinds of regression to the mean. I (and Chisala) am discussing genetic regression to the mean, which is different from statistical regression to the mean. The children of exceptionally tall people being shorter than either of their parents isn’t a statistical artifact.

            Wikipedia again:

            In sharp contrast to this population genetic phenomenon of regression to the mean, which is best thought of as a combination of a binomially distributed process of inheritance (plus normally distributed environmental influences), the term “regression to the mean” is now often used to describe completely different phenomena in which an initial sampling bias may disappear as new, repeated, or larger samples display sample means that are closer to the true underlying population mean.

          • Douglas Knight says:

            There is only one regression to the mean. Genetic regression is statistical regression as Galton figured out, late in life, after mangling it the first time. Wikipedia is wrong.

          • InferentialDistance says:

            From said Wikipedia article:

            Galton’s explanation for the regression phenomenon he observed is now known to be incorrect. He stated: “A child inherits partly from his parents, partly from his ancestors. Speaking generally, the further his genealogy goes back, the more numerous and varied will his ancestry become, until they cease to differ from any equally numerous sample taken at haphazard from the race at large.”[9] This is incorrect, since a child receives its genetic makeup exclusively from its parents. There is no generation-skipping in genetic material: any genetic material from earlier ancestors than the parents must have passed through the parents, but it may not have been expressed in them. The phenomenon is better understood if we assume that the inherited trait (e.g., height) is controlled by a large number of recessive genes. Exceptionally tall individuals must be homozygous for increased height mutations on a large proportion of these loci. But the loci which carry these mutations are not necessarily shared between two tall individuals, and if these individuals mate, their offspring will be on average homozygous for “tall” mutations on fewer loci than either of their parents. In addition, height is not entirely genetically determined, but also subject to environmental influences during development, which make offspring of exceptional parents even more likely to be closer to the average than their parents.

            Emphasis mine.

          • suntzuanime says:

            The children of exceptionally tall people being shorter than their parents is a statistical artifact. It’s caused by you looking specifically at the exceptionally tall people. But I think I can sort of see the distinction the wikipedia page is making…

            So basically, the idea is that if immigration acts as a filter on IQ, we’d expect to see the people who pass the filter to have some combination of heritable contributions to IQ, nonheritable but individually persistent contributions to IQ, and circumstantial nonpersistent luck in passing the filter. Looking at the immigrant population filters out the circumstantial luck, and so you end up with a population selected for high heritable and nonheritable contributions to IQ. And then their children are only selected for high heritable contributions to IQ, so you expect to see a regression to a mean based on the fraction of the contribution to IQ which is heritable, relative to the fraction which is persistent but nonheritable. Which is high, but not 100%.
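            That decomposition can be sketched numerically. A toy simulation, with all variance shares invented rather than taken from any study: each score is a heritable part plus a persistent nonheritable part plus one-off luck, a “filter” keeps the top 5% on the total, and the children keep only the heritable part of the selection:

            ```python
            # Toy model: score = heritable + persistent-nonheritable + luck.
            # Selecting on the total selects on all three components, but the
            # children inherit only the heritable part, so they regress partway
            # back. Variance shares H, E, LUCK are assumptions, not estimates.
            import random

            random.seed(0)
            N = 200_000
            H, E, LUCK = 0.6, 0.2, 0.2  # invented variance shares, sum to 1

            people = [(random.gauss(0, H ** 0.5), random.gauss(0, E ** 0.5),
                       random.gauss(0, LUCK ** 0.5)) for _ in range(N)]

            # "Immigration filter": keep the top 5% by total score.
            cutoff = sorted(h + e + l for h, e, l in people)[int(0.95 * N)]
            selected = [p for p in people if sum(p) >= cutoff]

            parent_mean = sum(sum(p) for p in selected) / len(selected)
            # Children's expected mean is the parents' mean heritable component
            # (fresh nonheritable draws and luck average to zero).
            child_mean = sum(h for h, _, _ in selected) / len(selected)

            print(round(parent_mean, 2), round(child_mean, 2))
            # Children regress toward 0 but stay well above the population mean.
            ```

            With these made-up shares the children land at roughly 60% of the parents’ deviation: well below the parents, but nowhere near the overall mean.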

            It doesn’t seem like that’s the standard the person in your link is using, though. The person says:

            The problem is not that the black immigrant children were not regressing to the point of equaling their source population mean IQ (that’s also not what hereditarians predict either), but that they were clearly not even moving (or being pulled) towards that extremely low IQ, as hereditarians predict.

            The children shouldn’t be pulled towards the extremely low IQ, they should be pulled toward the much higher IQ based on the fraction of the IQ in the population mean that’s heritable. If you select the top 1% (or w/e) of the population on any heritable metric, their children are not going to be dragged down to the mean of the whole population, they’re going to fit a much higher mean.

          • INH5 says:

            @suntzuanime: I don’t think that you and Chisala are actually disagreeing with each other. You say that “their children are not going to be dragged down to the mean of the whole population, they’re going to fit a much higher mean,” but Chisala admits that in the section that you quoted. The argument is that even this higher mean should be below the white mean barring implausibly high levels of selection, and at the very least the children of African immigrants shouldn’t be academically outperforming the children of elite American blacks.

            One of Chisala’s main pieces of evidence is statistics like this that show that even the children of very wealthy American blacks score worse than average (or by some measures, even poor) white children. The argument is that if this achievement gap is due to racial genetics, then the achievement gap for native Africans should be even worse since most of them don’t have partial white ancestry like most American blacks do.

          • suntzuanime says:

            I think we are talking about different things. The person keeps mentioning the source population IQ like it’s supposed to be relevant. And I definitely don’t see how American blacks are supposed to be relevant, since they’re a different population from either the selected immigrant population or the source population.

            I guess the question is, is immigrating from Nigeria a harsher filter than earning $200,000 a year once in America? What does that table look like for the African immigrant population? The article seems short on African-immigrant-specific stats, mostly anecdotes.

          • INH5 says:

            The source population IQ is relevant because that helps us determine what the expected mean IQ of the children should be. As Razib Khan describes here, the expected deviation from the source population mean will be lower than the average deviation of the parents due to the influence of non-heritable factors. So the expected mean IQ of the children will be somewhere between the mean IQ of the parents and the mean IQ of the source population. The children will not regress to the mean of the source population, but they will regress towards it.
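            That relationship is one line of arithmetic: the children’s expected mean deviates from the source-population mean by roughly the heritability times the parents’ deviation. A sketch with purely hypothetical numbers, chosen only to show the shape of the prediction:

            ```python
            # Simple additive-heritability sketch: children regress toward the
            # SOURCE population mean, not all the way to it. The source mean,
            # parent mean, and h2 below are invented illustrations.
            def expected_child_mean(source_mean, parent_mean, h2):
                return source_mean + h2 * (parent_mean - source_mean)

            # Selected parents averaging 30 points above a hypothetical source
            # mean of 90, with an assumed heritability of 0.6:
            print(expected_child_mean(90, 120, 0.6))  # → 108.0, between the two
            ```

            The disputed empirical question is then whether the observed children’s scores are consistent with any plausible combination of source mean and selection strength.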

            Black Americans are relevant because the achievement gap between black and white Americans is one of the most commonly cited pieces of evidence by those arguing for genetic differences in intelligence between racial groups, and in particular between people of European descent and people of Sub-Saharan African descent.

            I guess the question is, is immigrating from Nigeria a harsher filter than earning $200,000 a year once in America? What does that table look like for the African immigrant population? The article seems short on African-immigrant-specific stats, mostly anecdotes.

            You’re right, that is the question. And I agree that including more statistics about African immigrants specifically would have helped Chisala’s case. However, this article does include some stats. Apparently at the time the article was written, Nigerian-Americans had an average yearly income of $57,000. Now it’s around $62,000, which is still lower than a lot of other immigrant groups. He also presents this chart of the occupation classification of ethnic groups in the UK, though unfortunately that lumps all African immigrant groups into the category of “Black African.”

          • suntzuanime says:

            Yeah there’s a point made in Razib Khan’s article that seems to rather undercut Chanda Chisala’s argument:

            And that is why there is a flip side: even though the offspring of exceptional individuals are likely to regress back toward the mean, they are also much more likely to be even more exceptional than the parents than any random individual off the street!

            Just like height, IQ is mostly hereditary; the non-hereditary portion is relatively small. So Chanda Chisala being able to point to a couple of anecdotes about high-performing children of African immigrants is exactly what we’d expect, and not remotely the knockdown argument he seems to think it is.

          • INH5 says:

            In hindsight, I probably should have posted a direct link to Chisala’s original article, which contains most of his original data about the performance of the children of African immigrants. It isn’t just a few anecdotes of highly achieving individuals, it’s that children of African and Caribbean immigrants are over-represented relative to American blacks in places like highly selective elite schools, as well as high mean performance of certain African immigrant groups in the UK.

          • Jiro says:

            Apparently at the time the article was written, Nigerian-Americans had an average yearly income of $57,000. Now it’s around $62,000, which is still lower than a lot of other immigrant groups.

            On the other hand, it is the highest African ethnicity on the list. It’s entirely plausible that the “Nigerian” category has a high-IQ component.

        • Steve Sailer says:

          According to Gregory Clark’s surname analysis book, the highest average achieving black surname in the U.S. is Appiah.

          The most famous Appiah in the U.S. is the philosopher Kwame Anthony Appiah, whose white grandfather was Sir Stafford Cripps, Chancellor of the Exchequer to King George VI. I come across other smart folks named Appiah frequently.

          One question that is pretty opaque to Westerners is whether or not parts of Africa had minorities promoting high achievement in intelligence-intensive fields, the way the Puritans and Jews did in Europe.

      • Yosarian2 says:

        I think the strongest arguments I’ve heard against it are:

        1. Access to an “enriched environment” when young may have a significant impact on IQ. People who argue for this point of view often point to the Flynn effect as evidence (although of course there are other explanations for that).

        2. I know Scott is very dubious about stereotype threat research, but if there is something there it could have an impact on IQ tests taken by different groups in the US.

        3. There are other kinds of cultural issues that can have an impact even in tests without a cultural bias. For example there may be a feature in some East Asian cultures where students are willing to spend more time thinking about a question before trying to answer it.

      • Worley says:

        My approach would be a warning that the research on inheritance of intelligence has been mostly in populations that are a very narrow subset of the human race. E.g., if you study Americans, you find that adult height is very heritable. But from that, you can’t conclude that the Japanese circa 1940 had a strong genetic tendency toward short stature.

        In the case of IQ, I’ve seen references to studies that say that among low socioeconomic status people, the measured heritability of IQ is considerably lower than the standard values, and that the shared environment between siblings effect is considerably higher. It’s possible that low-income households differ from each other in ways that middle-income households do not.

        Getting to more politically sensitive cases, I can think of three effects that could be important in poor countries (especially in regard to the popular “Africa is genetically doomed” argument). One is the Flynn effect — over the last 150 years or so in advanced countries, something about the environment has raised IQ values by something like 30 points. It appears to only be in “crystallized intelligence” measures, so it may have something to do with modern life giving people continuous cognitive exercise leading to better performance.

        Another is childhood malnutrition and disease (especially persistent parasitic infections), which are essentially absent in advanced countries but very common in poor countries and are known to stunt mental development.

        A third is inbreeding. I’ve seen claims that in many preindustrial countries, the preferred marriage pattern is usually between cousins within a clan, leading to a fairly high level of inbreeding. If I remember the numbers correctly, this is estimated to reduce IQ by 15 to 30 points. (This is a peculiar effect, because for the individual it is inherited. But for society as a whole, it is entirely cultural — changing marriage to random mating will greatly reduce homozygosity without changing the “gene pool” at all.)

  3. IvanFyodorovich says:

    “Epidemiology buffs, is this true? US life expectancy, long believed one of the worst in the developed world, is actually the best in the developed world if you correct for our very high violent death rate – exculpating the US health system and suggesting our health care policies are doing something right.”

    Not true. Here’s a paper in JAMA where authors calculated life expectancy gaps with other countries with and without violent deaths (suicide, accident, drug OD etc). In a violence-free world, we are still behind most first world countries. See chart on page 2.

    • shakeddown says:

      What happens if you also adjust for race (assuming the difference in lifespan between races is genetic rather than a consequence of discrimination of some sort)?

      • IvanFyodorovich says:

        White life expectancy in 2012 (same year as the data in the JAMA paper) was only 0.3 years longer than the national average in the U.S., so it doesn’t explain how we are worse off than Europe. Source

        • What if you adjust for obesity?

          More generally, has anyone made a serious attempt to tease out the effect of medical care as distinct from all other effects on life expectancy? I’m not sure how one would do it.

          • Besserwisser says:

            Obesity rates aren’t that much lower in other First World countries. At least, the differences in life expectancy are larger than the differences in obesity rates would suggest. I think Scott mentioned some studies showing that differences in health care systems explained barely any difference in life expectancy, but I could be wrong.

        • Careless says:

          You’re aware that about a quarter of the country now comes from longer-lived races than whites, right?

    • Scott Alexander says:

      Thanks, I’ve added that to the link.

    • Nebfocus says:

      There is also a rural/urban divide – people in rural areas suffer from the lack of nearby hospitals, resulting in some number of unnecessary deaths.

    • Douglas Knight says:

      The paper makes the weird decision to only consider a few violent causes. This doesn’t seem to be a limit of the data, which uses ICD codes. You might think that if they considered more causes, violent deaths would account for more of the gap. For example, they fail to correct for the half of homicides that aren’t by firearm.

      But, actually, I think that their gerrymandering exaggerates how much of the gap is due to violence. In particular, they seem to say that 20% of the gap is due to firearm suicide that doesn’t occur elsewhere. But if they compared all suicides to all suicides, there would probably be a much smaller discrepancy to attribute to suicide.

      • jessriedel says:

        > For example, they fail to correct for the half of homicides that aren’t by firearm.

        Strongly agree on the importance of the overall point you’re making for interpreting the CDC study, but this FBI data says firearms are 2/3 of all homicides, not 1/2.

    • Jules says:

      I suspect that the use of the phrase “simply normalizing” means that the process does not account for the typology of the victims of violence.
      Those would tend to be poorer and less healthy. Controlling for violence might therefore disproportionately inflate your health results?

    • Janet says:

      I wonder about the infant and perinatal mortality data skewing things also– I remember talking to a British woman who was HIGHLY upset that her daughter was classified as a “miscarriage” because she was born at 23 weeks 5 days, and they told her that the policy was not to treat preemies under age 24 weeks and instead to count them as a miscarriage.* Since the baby was breathing (in fact, survived for several hours unassisted), in the US the baby would have been given a birth certificate (counted as a live birth), and then given a death certificate (counted in the infant mortality rates, and counted as a 4-hour lifespan towards the average lifespan calculations). It wouldn’t take too many outliers like that to skew the average noticeably.

      * I don’t know if this was/is actually British NHS policy or not, but that’s what they told her at the time.

      • bean says:

        My understanding is that this is also a major driver of the high US infant mortality. Ironically, our better treatment of preemies is making us look worse.

      • Douglas Knight says:

        Infant mortality reduces American life expectancy by 3 months compared to Japan and half that compared to Britain.

    • jessriedel says:

      I’m really glad IvanFyodorovich pointed out this CDC paper, but I’m looking at it and I don’t think it conflicts with the AEI paper. The CDC authors only account/renormalize life expectancy for three types of injuries: drug poisonings, firearm-related injuries, and motor vehicle crashes. This is not the same as the original study, which accounted for all violent and accidental deaths. Obviously firearms and motor vehicles are huge contributors. But just those three things do, in fact, cut the life expectancy gap in half. Furthermore, a third of homicides aren’t by firearm, and a quarter of accidental deaths are from falling, not motor vehicles or poisoning.

      The talking point — that the relatively low life expectancy in the US is an argument for healthcare changes — is still very clearly undermined, and I think the AEI study could be fully compatible with the CDC claims. Really wish we had a clear statement on this!

      • IvanFyodorovich says:

        Okay a number of points here:

        1. The AEI analysis does NOT simply discount all causes of violent deaths. It conducts a complex regression which factors in things like mean GDP. I don’t entirely understand what they did and I don’t have three hours to spend on this, but the most obvious way they put their finger on the scale was using data from 1980-1999, which helps the US because our gap with other countries was lowest in 1980, even though we had much higher homicide and traffic accident rates than today (source). Furthermore, when called out on this (unfortunately this link is paywalled), Ohsfeldt retreated into his motte and more or less said that they were trying to open minds rather than actually make the claim that American life expectancy is on top when you factor out violent deaths. Which is swell, but their study has bounced around the conservative press for years with the oversimplified descriptor.

        2. The CDC paper admittedly excludes some causes of death (non-gun homicides, non-gun suicides, drownings, accidental falls, etc). However, as Douglas Knight points out, that probably increases the estimated effect of violence in the US, because we have a relatively low non-gun suicide rate compared to other countries, and suicides kill a lot more people than homicides. Furthermore, it includes drug ODs as “violent”, which is something the U.S. does especially badly in. The CDC paper probably overestimates rather than underestimates the violent death effect.

        3. Japanese people live 4.3 years longer than we do. An American’s lifetime risk of dying of any sort of common accident or gun homicide is just above 1.6% (source). It is not mathematically possible for things that kill 1.6% of people to account for a 4.3-year difference (and before you say it, Japan has more suicides than we do). I wish I could give you cleaner numbers than the CDC, but the Ohsfeldt/Schneider result is transparently wrong.

        • Steve Sailer says:

          “3. Japanese people live 4.3 years longer than we do.”

          Comedienne Ali Wong has some observations on how long Asian-American women, like her mother, live.

        • Jayson Virissimo says:

          According to the Kaiser Family Foundation, Asian-American life expectancy at birth is 86.5 years, but for White-Americans it is only 78.9 years.

          • Steve Sailer says:

            Hispanic-American life expectancy is also quite long.

            Mexico has almost caught up to the U.S. in life expectancy, despite a high homicide rate and obesity that is perhaps even worse than in the U.S.

          • IvanFyodorovich says:

            I agree that Japan’s longer life expectancy could reflect differences in diet and even genetics so comparing us might not be fair. That said, if AEI wants to claim we are ahead of them when violence is discounted, I’m allowed to point out that we are not.

            Fairer is to compare white Americans to Canadians and Britons. Assuming white Americans are 0.3 years above the American average (see above), the U.K. is 1.6 years ahead of us and Canada is 2.6. Violence accounts for less than a year of that difference (see below), obesity rates are only a little higher here, and we spend twice as much as they do. Hard to see what we get for that money.

            Mexico is an interesting case in that it showcases the diminishing returns associated with medical spending. I wouldn’t accept Mexican levels of care for my family, but they can achieve near-American life expectancy with one eighth the expenditure.

          • cassander says:

            @IvanFyodorovich

            Last I checked, Americans of Japanese descent live longer than Japanese people in Japan.

        • jessriedel says:

          Thanks for the thoughtful analysis!

          To me, the key statement is just that the AEI paper is doing a regression, since now it’s no longer a matter of one paper refuting another but a fight over the fairness of their assumptions (as always, unfortunately). Glad to agree that summarizing the article as “merely correcting for violence eliminates the gap” is basically false.

          I’m not sure how the high accident rate in the ’80s makes America look unfairly better in non-violent/non-accident deaths. The whole point of the study is to remove contributions from violence/accidents. It’s only unfair help to America if the non-violent/non-accident gap has widened since then (e.g., for actual healthcare reasons, which might very well be true), but that’s a separate question.

          • IvanFyodorovich says:

            My point is that in 1980, the U.S. did much better in international rankings than we did in 2000, and violence was a larger factor. The statement “we’re at the top if violence is discounted” wasn’t true then either (at least based on the 1970 data from the Beltrán-Sánchez paper I cite below), but it was much closer to being true. The decision by the AEI people to use data going back 20 years, rather than just using the most recent year for which data was available, looks to me like a conscious effort to skew the data. I suspect there are other things wrong with their regression (this guy also has criticisms), but that was the one I could notice as a layperson.

            Even if they had an innocent reason for going back to 1980 (like wanting points for their regression), it still means that conservative news sources in 2017 are trying to make claims based on data from 1980. I mean, if we just focus on the 1945 data, we totally do better than Japan and Europe.

            Beyond this, the whole idea of doing a regression was weird; the correct approach is to recalculate life expectancy without violent deaths, as Beltrán-Sánchez did.

      • IvanFyodorovich says:

        You know what, I am so tired of seeing the Ohsfeldt AEI analysis cited that I will disprove it myself, even though I have work I should be doing. From this source, we can estimate that the lifetime risk of dying in any major accident (including car, plane, drowning, fire, gun discharge, falls, floods and dog bites) is 1.33% (sum the fractions in the column, excluding homicide). They estimate a 0.29% chance of dying by firearm homicide; let’s overestimate that half of all homicides are non-firearm, so we get 0.58%. That’s 1.91% total. So that nobody can complain that it doesn’t include bear attacks and champagne cork mishaps, I will round that up to 2%. Based on 44,193 suicides a year, we can calculate a 1.1% lifetime risk of suicide. So that’s 3.1% of Americans dying of suicide, homicide or accident.

        Now I will assume that all of these deaths knock 80 years off the victim’s life expectancy. This is obviously an overestimate, since very few babies commit suicide, but I am trying to be as generous as possible here. 0.031 × 80 = 2.5 years of life.

        Now if we take the American life expectancy of 79.3 years, add 2.5, we get 81.8 years. That still puts us behind 14 other countries. And that’s without even factoring in that people in other countries die of homicide/suicide/accident at some frequency too and that my 2.5 years is a gross exaggeration.
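        The arithmetic above can be checked in a few lines (a sketch: the lifetime risks, the 80-year assumption, and the 79.3-year baseline are all the commenter's own estimates, not independently sourced):

```python
# Back-of-envelope check of the bound above, using the comment's own figures.
accident = 0.0133                # lifetime risk of dying in any major accident
firearm_homicide = 0.0029        # lifetime risk of firearm homicide
homicide = firearm_homicide * 2  # overestimate: assume half of homicides are non-firearm
violent = accident + homicide    # 0.0191
violent = 0.02                   # round up to cover bear attacks and champagne corks
suicide = 0.011                  # from ~44,193 suicides per year
total = violent + suicide        # 0.031

years_lost = total * 80          # generous: every such death costs a full 80 years
print(round(years_lost, 1))      # 2.5
print(round(79.3 + years_lost, 1))  # 81.8, still behind 14 other countries
```

        Even with every input deliberately inflated, the adjustment tops out around 2.5 years, well short of the 4.3-year gap with Japan.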

        The AEI Ohsfeldt/Schneider result simply cannot be right, the same way a bowling ball cannot be bigger than Jupiter. And yet it has been cited by Krauthammer, Forbes, the Wall Street Journal, etc. Please Scott, do not propagate that evil thing.

        • Steve Sailer says:

          White people in the U.S. in 2017 have distressingly high mortality rates.

          I can recall first noticing around 2011 that European whites were pulling away from American whites in life expectancy, but the concept didn’t become a Thing until Angus Deaton published a paper on high white mortality rates in November 2015, just after winning the (quasi-)Nobel in Econ.

      • IvanFyodorovich says:

        Okay, after hours of digging I finally found it: a paper that defines violence as all homicide, suicide or accident and calculates how many years of life are lost to it in the United States. In the year 2000, violence took 1.2 years off the American life expectancy. If we assume this number holds true today, then violence-free America has a life expectancy of 80.5 years, which would put us tied for 28th in the world. Factor in that violence happens at some level in other countries too, and it contributes less than a year of the gap with other countries, which is in line with the CDC paper.

        The violence definition is in the last paragraph of page 1329 (9th page of the PDF) of the Beltrán-Sánchez paper. See Table 4 on page 1335 for the 1.20 number. Also, this paper estimates days of life lost from almost all violent death causes and comes to an extremely similar number to Beltrán-Sánchez’s (sum the entries in Table 3).

  4. Bugmaster says:

    The Myth Of Superhuman AI is basically “poorly thought repetition of the same anti-AI claims #2895552”, and in many cases uses exactly the arguments the article above is parodying.

    Uh-huh. Do you have a FAQ available somewhere (similar to your anti-certain-political-ideology FAQ, perhaps) that debunks all of these anti-AI claims? Otherwise, it just sounds like argument from assertion.

    By the way, if you told me, “Machines are getting exponentially larger, so one day soon they will become larger than the Sun, the Earth, and our entire Galaxy; and this will happen faster than you could blink”, I wouldn’t believe you for the same reasons that I don’t believe the statement “Machines are getting exponentially smarter, so…”

    Hey, wait a minute. Doesn’t the Singularity hypothesis actually propose that machines will get arbitrarily large as well as arbitrarily smart? Hmm…

    • Scott Alexander says:

      I have some of it collected here and other parts scattered throughout Part III here, though I’m not sure it’s really got everything I would want to respond to that article. Very quickly:

      Their first claim is that “Intelligence is not a single dimension, so ‘smarter than humans’ is a meaningless concept.” I think the size article correctly mocks that as an isolated demand for rigor – after all, size isn’t a single dimension, so “larger than humans” is a meaningless concept! Yet we use it all the time, and anyone who claimed that things could never be “larger than humans” would be dumb. You might be able to sketch out a more formal argument against it from http://slatestarcodex.com/2013/05/05/ambijectivity/

      Their second claim is that “Humans do not have general purpose minds, and neither will AIs.” There is some truth to this, though Eliezer was writing about it back in 2008 which suggests it’s not completely shocking to AI researchers. But there’s certainly an effect where minds that evolved to help monkeys have sex are capable of understanding quantum mechanics, which suggests it’s not 100% specificity all the way down. I think arguing “perfectly general” vs. “perfectly specific” is dumb – it’s a continuum, but it’s one that AI scientists will try to solve and probably will solve eventually, for the same reasons it got solved in humans. Also, it’s dumb to argue that because no single mind can “maximize all forms of thinking”, that nothing can ever be smarter than human. No single giant battlemech could maximize all combat parameters (relative to other equally expensive battlemechs), but all of them could defeat a human in single combat.

      “Emulation of human thinking in other media will be constrained by cost” – yeah, just like having computers that could run Unreal Tournament was constrained by cost for a couple of years. They say “it will be cheaper to just make a human”. Yeah, if you could clone Johann von Neumann, that would be pretty nice. If you can’t, you’re either arguing that computers can be as smart as regular humans but not as smart as von Neumann for some reason, or that there’s no extra benefit to having von Neumann vs. popping out a random baby.

      “Dimensions of intelligence are not infinite”. AGAIN, JOHANN VON NEUMANN EXISTED. I seriously can’t imagine the level of confusion it takes to think that it’s impossible to be smarter than a normal human when every history book gives examples of humans who are smarter than normal humans. And since we know that existing humans are constrained by brain size and mutational load, there’s no reason whatsoever to think that Johann von Neumann is anywhere near an upper bound. This is like saying “There’s no reason to believe that the universe is infinite and height can just keep increasing forever, therefore we’ll probably never build machines taller than a human”. Aside from the weirdness of guessing that hey, for all we know space probably ends a few inches above your head, the person saying this has never seen an elephant.

      “Another unchallenged belief of a super AI takeover, with little evidence, is that a super, near-infinite intelligence can quickly solve our major unsolved problems…No amount of ‘thinkism’ will discover how the cell ages, or how telomeres fall off.” NOW YOU’RE JUST TROLLING ME. Anyway, Eliezer’s written a bunch about this, see eg here and here, but what I really want to link to is this.

      And throughout all of this, he’s saying things like “this unexamined dogma…” or “this thing people just assume without thinking about it…” about all the stuff we’ve been debating for over a decade, on which there’s a pretty big literature that he’s clearly never read. It’s so annoying and I don’t know what to do about it.

      • reasoned argumentation says:

        I think arguing “perfectly general” vs. “perfectly specific” is dumb – it’s a continuum, but it’s one that AI scientists will try to solve and probably will solve eventually, for the same reasons it got solved in humans.

        Why though? The reason that it got solved in humans is that it’s useful for the owner of the brain – which is the body in which the brain is housed (or the genes that code for the brain and body depending on what level you’re looking at). Why would someone go through the effort of making their visual pattern recognition AI (frex) have its own set of goals that it can take actions to fulfill?

        • Scott Alexander says:

          I don’t think a visual pattern recognizer is going to magically discover general intelligence.

          But there are a bunch of teams who are specifically trying to develop general intelligence, and it seems like maybe one of them will succeed.

          I think all we need is for every form of intelligence not to be completely different, the sort of thing where nobody can ever make a general reasoner because if they don’t include the meteorology module then it will never be able to learn meteorology.

          If we accept that some kind of general reasoner is possible, then people will try to build it, and maybe some of them will succeed.

          • Bugmaster says:

            Well, in the trivial sense, building a general reasoner is definitely possible: all it takes is a lack of birth control and nine months of gestation. But I am not 100% convinced that it’s possible to build a general reasoner that will be better at every task — or even, in the same ballpark — than every specialized reasoner. If we assume that there are some non-trivial physical limits on computation, then this could be a huge obstacle in the path of the Singularity.

            But then, when I say “not convinced”, I really mean that; I don’t mean to say, “no, this is obviously impossible”. Rather, I am open to being convinced.

          • Kevin C. says:

            @Bugmaster

            Well, in the trivial sense, building a general reasoner is definitely possible: all it takes is a lack of birth control and nine months of gestation.

            But is a human being a fully “general reasoner”? Has this really been solidly established?

            But I am not 100% convinced that it’s possible to build a general reasoner that will be better at every task — or even, in the same ballpark — than every specialized reasoner.

            I heartily second this.

          • Scott Alexander says:

            “But I am not 100% convinced that it’s possible to build a general reasoner that will be better at every task — or even, in the same ballpark — than every specialized reasoner. If we assume that there are some non-trivial physical limits on computation, then this could be a huge obstacle in the path of the Singularity. ”

            But I think “better than every specialized reasoner” is moving the goalpost. Sure, if you build a Jupiter-sized AI general reasoner out of 31st century technology, it might be worse at number theory than a Jupiter-sized AI specialized number theory agent made from 31st century technology. But it might still be vastly better than a human. In fact, it would be really surprising if it wasn’t.

            This is what I was trying to say with my battlemech analogy above. Am I missing something?

          • Bugmaster says:

            @Scott Alexander:
            I think you are confusing the claim, “generalized reasoners will never be better than modern humans” (which is absurd, I agree, given that even ye olde humans are getting smarter all the time), with the claim, “generalized reasoners will never be so much smarter than humans that they can become godlike”. The second claim is what I’m arguing against.

            To put it another way, there’s a difference between saying, “there’s an upper limit on what raw computational power and/or intelligence can achieve, and this limit is not as high as you think”; and saying, “not only does such a limit exist, but modern humans are actually it”.

            If you need to screw in an ordinary Phillips-head screw, then your Swiss Army multitool will probably do the job. If you want to fix the tiny screw in your glasses; or a huge rusted bolt; or a tricky hex-bolt all the way inside your engine; then you need a special screwdriver. Several of them, in fact. They will all be about as big as that multitool, but they will be way better at their specific jobs.

          • ilkarnal says:

            If we accept that some kind of general reasoner is possible, then people will try to build it, and maybe some of them will succeed.

            What is the significance of that success? I think it is large, but finite – and much smaller than you seem to think it is. I think there are diminishing returns to information processing ability generally, and that’s why we don’t have three-foot-diameter heads. Your fundamental appraisal of the value of intelligence is out of whack, in my opinion.

            Copying over part of a comment I made on Otium a couple weeks ago:

            …I believe it is part of a more general problem, an overestimation of the importance of intelligence, compounded by the smearing of the concept of ‘intelligence’ into all sorts of places where it doesn’t belong. Ender Wiggin crushes the opposition at fistfights and computer games and war. In the book, that’s because he’s smart. In real life the computer games part is because he is smart, the war part is partly because he is smart and partly coincidence, and the fistfight part has nothing to do with him being smart. We have a good idea of what intelligence is. There is a single factor that predicts your ability at basically all ‘intellectual’ pursuits. Besides that, it doesn’t do jack – it doesn’t ‘bleed over.’ There’s some correlation with other stuff (say, lifespan) because of overlap between stuff that fucks with your intelligence and stuff that fucks with your everything – but there’s no more than we’d expect from that alone.

            That’s strange, because you would think that being smaht would lead to you being a better fighter, a better hunter, a better socializer – etc. Certainly it *feels* like all those benefits should accrue. In books they do accrue, and thinking about how to fight makes you a much better fighter. The power of reason manifests, the hero thinks – when he punches like this I shall move like that, then my arm will reach up like this – and kaPOW! Yet in real life – no. There’s a reason why hunter-gatherers, and for that matter wolves, are spectacular hunters, fighters, and socializers, but don’t have a very high IQ. *General* intelligence is *specific* to intellectual pursuits.

            There’s a huge blind spot, an underestimation of the older systems that underlie general intelligence. I think it’s accurate to say that general intelligence is a slow and clumsy tool whose purpose is to make itself obsolete – that is, I figure something out so that I can somehow embody that knowledge and not have to think of it anymore. If I constantly have to think about what to do next, I am slow, and that’s not where the payoff is. The payoff is when I understand sufficiently to put the matter ‘in the back of my mind’ and move quickly.

            I think the hysteria about artificial intelligence owes itself in large part to people not understanding this.

            You could say – well, we would have three-foot-diameter heads if we had time to evolve to fit our current circumstances, where intelligence is so much much much more valuable than it ever was.

            I think that’s wrong. Intelligence isn’t actually that much more valuable, and its value – while still huge – is getting smaller all the time as the body of knowledge we acquire about the universe grows. Moreover, I would expect our brains, and the resources we devote to information more generally, to go in the other direction if they were to change significantly. There’s a reason why general intelligence is rare in the animal kingdom – because specific instinct, once it has time to evolve, is vastly superior in speed and efficiency.

            We need general intelligence because of the great novelty of our environment. One might assume the environment will continue to change the way it has for the past few hundred years indefinitely, an almost infinite explosion in technological capability.

            I think that’s wrong and we’ll hit a soft ceiling, where the marginal improvement in technology becomes infinitesimal, very very soon. At that point the environment won’t stop changing, but the rate of change will be incomparably slow.

            Fundamentally I look at technology as exploitation of new prime movers. First we have our muscles, then we have beasts of burden that can produce an order of magnitude more power. Then we have combustion engines that can go a few orders of magnitude above that. The final stage will be using fusion energy to move things around, and we’ve already got a viable way of producing fusion energy. Project Ulam envisioned a spaceship several times larger than the largest container ships, blasted into space by thermonuclear explosions. I think that’s more or less where the story ends – you figure out how to move stuff around efficiently with thermonuclear energy, and then there are no more fundamental leaps like there were from foot to carriage, or carriage to train.

            At that point we’ll be in a new world as different from this one as a modern cityscape is from the savanna – but unlike the savanna dwellers we won’t have a new more powerful horse to tie our carriage to. There will be no more revolutions like moving from horse cavalry to tanks – just gradual progress, like the current crawl forward of tank to slightly better tank. That crawl is significant, but it’s nothing like the gap in worlds a member of an uncontacted tribe experiences looking up at a plane.

            Being super-smart can only allow you to exploit new reservoirs of power if there are new reservoirs of power to exploit. The geniuses behind the Manhattan project very quickly moved from nothing to nuclear bomb to thermonuclear bomb to miniaturized thermonuclear bomb to…. Slightly more miniaturized and streamlined thermonuclear bomb. First we exploited the power that keeps the earth molten beneath our feet – then the power that keeps the sun burning in our sky. There doesn’t seem to be anywhere to go from there. Looking around the universe we have a pretty good idea of what makes things usefully ‘go,’ and that’s nuclear interactions. There’s a lot of energetic gravitational stuff happening, but it isn’t relevant in the same way – much like there’s a lot of dark matter and dark energy, but by its very nature we can’t ever use it to make a vehicle move or a cannon fire. Thinking we’ll figure out how to make a ‘dark energy reactor’ or something is fundamentally misunderstanding dark energy, and thinking we’ll make a ‘gravitic drive’ or ‘gravity wave drive’ is a fundamental misunderstanding of gravity and gravitational waves.

            The universe is pretty well characterized. There are mysteries, but the mysteries are smaller and less promising than ever before – because our theories are more powerful and resilient than ever before. Whatever happens, quantum electrodynamics will retain its incredible predictive power, much as Newton’s laws still retain theirs within those scales that define everyday situations. That’s a very fundamental constraint on what we might imagine is possible, and it represents a fundamental reduction in the utility of general intelligence moving out from here.

          • ilkarnal says:

            Project Ulam envisioned a spaceship several times larger than the largest container ships, blasted into space by thermonuclear explosions.

            Correction: Project Orion. Stanislaw Ulam was the guy who first proposed nuclear propulsion and made preliminary calculations, but the project wasn’t named after him.

          • JulieK says:

            Pity. Here I thought it came from the Hebrew word “ulam” meaning very large room, banquet hall, etc.

          • Elijah says:

            Well, in the trivial sense, building a general reasoner is definitely possible: all it takes is a lack of birth control and nine months of gestation. But I am not 100% convinced that it’s possible to build a general reasoner that will be better at every task — or even in the same ballpark — than every specialized reasoner. If we assume that there are some non-trivial physical limits on computation, then this could be a huge obstacle in the path of the Singularity.

            Wouldn’t a network of thousands of specialized reasoners (including SRs whose specialty was coordinating and utilizing the other SRs) be only trivially distinct from a general reasoner at parity with those SRs in all their respective specialities?
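            As a toy sketch (purely illustrative, all names made up, not from the thread): a “general” reasoner assembled from specialized reasoners plus a coordinator whose own specialty is routing tasks to the right specialist.

```python
# Hypothetical sketch: a "general" reasoner assembled from specialized
# reasoners plus a coordinator whose specialty is routing tasks to them.
specialists = {
    "arithmetic": lambda task: eval(task, {"__builtins__": {}}),  # toy only
    "reverse":    lambda task: task[::-1],
}

def coordinator(kind, task):
    # From the outside, the network behaves like a single general reasoner.
    return specialists[kind](task)

print(coordinator("arithmetic", "2 + 3"))  # 5
print(coordinator("reverse", "abc"))       # cba
```

            Whether such a network “counts” as a general reasoner, or merely impersonates one, is exactly the distinction being questioned above.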

          • Understand that there’s one world, one pie, and what’s going on is partly a wrestling match between different virtues trying to claw in as much of the pie as possible. You can imagine some transcendent manifestation of one virtue clawing in everything, but in fact the beings that claw in the most have a mix of virtues. Beings with only a lot of intelligence, in actual practice, aren’t getting very much and never got very much.

            A machine can be a lot better at information processing. Also it can develop a lot more physical power. Which matters more? Would you rather have a super powerful tank or a super smart computer?

            What if the super smart computer has a tank?
            If you stipulatively define pure intelligence as not having any mechanics, any “body”, then you might be able to defeat it with a tank. The argument is valid but uninteresting, because what we are concerned with is the safety of a computer, and an ASI that is inadequately boxed has the opportunity to wreak havoc by taking over automated weapons systems, and in many other ways.

          • The universe is pretty well characterized.

            Isn’t that what people used to think in the 1890s?

          • But is a human being a fully “general reasoner”? Has this really been solidly established?

            There seems to be an ambiguity between “general” qua “universal” and “general” qua “whatever a human can do”.

          • Bugmaster says:

            @TheAncientGeekAKA1Z says:

            What if the super smart computer has a tank?

            I think the point here is that having a tank is much more important than being smart (as long as you’re just smart enough to drive it, I will grant you). If I am a dummy with a tank, and you’re a genius with a tricycle, then you lose every time.

            That is to say, intelligence can help you achieve your goals (and defeat your enemies), but it is not sufficient by itself. In many cases, it’s not even the major resource you need to possess in order to accomplish your goals.

          • ilkarnal says:

            Isn’t that what people used to think in the 1890s?

            They were right about the stuff they knew they were right about. You can’t compare, say, the idea of luminiferous aether and Newton’s laws. That would be like comparing many-worlds theory to quantum electrodynamics. On the one hand you have basically spitballing, trying to put your finger on something that isn’t very well understood and that you know you don’t understand very well; on the other hand you have solid characterizations of phenomena that can be tested extensively, are tested extensively, and pass. Newton’s laws aren’t wrong – Newton’s laws will never be wrong. They are not perfect – footnotes need to be added about very small and very large energy scales. Footnotes may need to be added to quantum electrodynamics and the theory of relativity. But all they ever will be are footnotes, because the excellent records of predictability within the bounds of what they try to describe stand, and can never be erased by any new discovery.

            We know about a lot more stuff than in 1890. In particular we know what lights up the universe. We don’t know much about a lot of ‘dark’ mechanisms, but by their nature we know we won’t be able to use them to fire a gun or propel a spaceship. What I mean by this is that we know that ‘dark’ energy and ‘dark’ matter are dark – they don’t do much photonic interaction. This results in very diffuse matter, because photonic interactions drive the clustering of matter into stars and planets. We’re interested in dense stuff because we’re very very dense. We’re interested in stuff we can touch and manipulate, not stuff we can only pull with the extremely weak string of gravity.

            We understand pretty well how the dense stuff that we care about works. The very dense stuff – the stuff of the beginning of the universe, the stuff of neutron stars – we don’t understand so well. We could conceivably profit greatly by greater understanding there, like if we figured out how to explode neutron stars so we could harvest the resultant scattered high-Z material. But fundamentally that would just be another way of mining high-Z material, not something radically new. We’d be able to use it because we bring it out into conditions that give rise to the laws that govern our scale of existence, the stuff we care about. We know that we don’t care about neutron stars if we can’t do that, because they annihilate the conditions necessary for our chemistry on contact.

            The point is that the set of stuff we care about is relatively limited, and we have a foundational understanding of it. The jump from nothing to here is unimaginably larger than the jump from here to anywhere else. There’s gonna be no woo-woo bullshit. No dark energy reactors, no antigravity drive, no faster than light travel. We know what the rules of the game are at the scales we care about. The reason why technology looks like magic is that you don’t see what makes it go. A car is magic when you don’t see how burning hydrocarbons makes it go. A nuke or nuclear reactor is magic when you don’t see how neutron-driven chain reactions make them go.

            There’s no more magic coming.

          • Joe says:

            @Scott

            Critics of the ‘minds as modular systems’ hypothesis often give this argument. They say things like, “Human minds can’t be modular, because we can do quantum physics, yet we clearly didn’t evolve to have a Quantum Physics Module”.

            I think this fundamentally misinterprets the concept that modularity proponents are thinking of. The idea isn’t that our minds have one module to do each task, it’s that they are modular in the same way software is modular. In a big complex program that can do many real-world tasks, typically there are many, many small subsystems, some of which are used for almost all tasks, some only for a few tasks, others in-between. A real task will involve some combination of many modules interacting with one another, each doing a small part of the work.

            Microsoft Excel does not have a totally isolated self-contained ‘do your taxes module’, separate from its ‘household budget module’ and its ‘revenue calculations module’. That’s just entirely the wrong conception of modularity. Yes, modularity exists in this kind of ‘horizontal’ form, where modules are used for some high-level tasks but not others. But it also exists as ‘vertical’ modularity — one high-level task performed by a stack of many layered modules, with each taking its input from the previous module or modules, performing some small local subtask or transformation, and sending output to the next module(s), with the end result of all this being the execution of the high-level task. (And of course tasks will have both kinds of modularity.)
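            The two kinds of modularity can be sketched in a few lines (a minimal illustration, all function names hypothetical): tokenize() is a low-level module reused by two unrelated high-level tasks (“horizontal” modularity), while summarize() is a “vertical” stack of small modules, each feeding its output to the next.

```python
def tokenize(text):
    # Low-level module, reused by several high-level tasks ("horizontal").
    return text.lower().split()

def count_words(tokens):
    # Mid-level module: one small transformation in a larger stack.
    counts = {}
    for t in tokens:
        counts[t] = counts.get(t, 0) + 1
    return counts

def most_common(counts):
    return max(counts, key=counts.get)

def summarize(text):
    # High-level task: a "vertical" stack of small modules.
    return most_common(count_words(tokenize(text)))

def vocabulary_size(text):
    # A different high-level task reusing the same low-level module.
    return len(set(tokenize(text)))

print(summarize("the cat sat on the mat"))        # the
print(vocabulary_size("the cat sat on the mat"))  # 5
```

            Neither high-level task has its own dedicated module; each is an arrangement of shared small parts.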

            Whether or not you think minds are a special case, simple and general unlike other software systems which are usually complex and modular, is a different question. But I at least want to be clear on what’s being argued, and I’m reasonably sure ‘meteorology modules’ is not it.

          • I think the point here is that having a tank is much more important than being smart (as long as you’re just smart enough to drive it, I will grant you). If I am a dummy with a tank, and you’re a genius with a tricycle, then you lose every time.

            That is to say, intelligence can help you achieve your goals (and defeat your enemies), but it is not sufficient by itself. In many cases, it’s not even the major resource you need to possess in order to accomplish your goals.

            That’s a reiteration of your original point, and it still has the same problem: it fails to imply that a superintelligent AI will be safe, since, in a world where everything is on the net or has a processor embedded in it, getting the tank will be easy.

          • Bugmaster says:

            @TheAncientGeekAKA1Z says:
            Well, given that I am not at all convinced that a superintelligent AI — of the “it can hack anything anywhere” kind — is even possible, I don’t think that it makes sense to think up ways of making it safer.

            That said, of course an AI could be dangerous. In fact, we are experiencing some of those dangers right now, e.g. with the Mirai IoT botnet. But it’s not some sort of an imminent apocalyptic threat; especially not compared to things like nuclear weapons and Ebola.

            The same mechanisms that are currently in place to prevent rogue human actors from killing everyone with nukes or Ebola (though I’d love some of those mechanisms to be greatly enhanced, no doubt about that) would also work on rogue agents who want to kill everyone with AI — even if those agents are themselves AIs. Naturally, these safety mechanisms might fail when confronted with a nearly instantaneous godlike AI FOOM, but since I don’t believe such a thing is possible, I’m not too worried.

          • I don’t place a high probability on fast takeoff or super-high intelligence. I do put a reasonable probability on the hypothetical “if superintelligent, then can get a body”. It’s helpful to state your real objection.

          • John Schilling says:

            in a world where everything is on the net or has a processor embedded in it, getting the tank will be easy.

            A world in which getting tanks is easy will not exist. Even for “…if you’re a Superintelligent AI” levels of easy, because before Superintelligent AI can exist, Very Capable Not-Quite-AI will exist and will be used as a tool by Black Hats.

            Either the tanks will not be connected to the net, or the net will be rendered secure even against AI-level hacking, or merely human net wars bleeding into meatspace will render civilization incapable of maintaining the net. Place your bets.

          • Trofim_Lysenko says:

            I’ll put down $100 on “locomotive/weapons systems, whether they have autonomous modes or not, air-gapped from the rest of C4I for manned vehicles”, though I predict that we won’t implement that until AFTER we’ve been burned at least a few times.

            I’m not sure if some American UAV video feeds are still broadcast omnidirectionally in the clear, but that was the case for years after it was known to be a horrible idea. If it’s been fixed since, it was only AFTER it became clear, and was publicized in the news, that insurgents were knocking together remote video terminals using COTS components and watching the feeds.

          • @ilkarnal

            > They were right about the stuff they knew they were right about

            They didn’t know how much they didn’t know, and you don’t either. Your problem is unknown unknowns — things you don’t know and don’t know that you don’t know. You can’t figure out how many there are based on what you do know. There’s a basic logical problem there.

          • A world in which getting tanks is easy will not exist. Even for “…if you’re a Superintelligent AI” levels of easy, because before Superintelligent AI can exist, Very Capable Not-Quite-AI will exist and will be used as a tool by Black Hats.

            That’s just lowering the bar on how smart the AI needs to be.

          • random832 says:

            @ilkarnal

            There’s no more magic coming.

            I think if it manages to actually execute an NP algorithm in (smallish-power) polynomial time, quantum computers will seem pretty magic even if we still don’t know whether P = NP (and even if PostBQP != NP but contains problems outside of P or whose specific power complexity is different, for those who just can’t wait to point out my misconception)

          • Bugmaster says:

            I think if it manages to actually execute an NP algorithm in (smallish-power) polynomial time…

            Well yeah, and if it manages to build an FTL engine, an inertialess drive, a perpetual motion machine, or some gray goo nanotechnology, that would be pretty cool, too. Except that all of those things are “miracles”, and I agree that they probably — very probably — aren’t in the cards.

      • Bugmaster says:

        First of all, thanks for the reply, I really do appreciate it (despite my abrasive demeanor). And now, on to more abrasion:

        “Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.”

        I think this is the weakest point in the original article. I agree that “intelligence” is a useful concept, despite the “multiple dimensions” objection (which, AFAIK, was invented by people whose opposition to this concept was mainly political).

        However, as per your “ambijectivity” article, I think the usefulness of the single-dimensional concept breaks down as you start talking about outliers — Mozart and Beethoven being such. We can all agree that Mozart, Beethoven, and even Jimi Hendrix are better than your 3-year-old upstairs neighbour, but who is better: Jimi Hendrix or Beethoven? Jimi Hendrix or Slayer? It is no longer so easy to decide.

        The problem with machine intelligence is that it’s mostly outliers already. Who is smarter: an average human, Watson, or the latest version of Excel? Well, it depends on what you are trying to achieve; however, all of them are absolutely smarter than a rock.
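        One way to make this precise (a sketch, with made-up numbers) is Pareto dominance over ability vectors: A dominates B if A is at least as good on every dimension and strictly better on at least one. Everyone dominates the rock; the outliers dominate no one among themselves.

```python
def dominates(a, b):
    # a dominates b: >= on every dimension, > on at least one.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Dimensions: (technique, composition, improvisation) -- illustrative only.
toddler   = (1, 1, 1)
hendrix   = (9, 6, 10)
beethoven = (8, 10, 7)

print(dominates(hendrix, toddler))    # True: better on every axis
print(dominates(hendrix, beethoven))  # False
print(dominates(beethoven, hendrix))  # False: the outliers are incomparable
```

        On this framing, “smarter than humans along all dimensions” is a claim of dominance, which stays meaningful even when comparisons among outliers do not.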

        Their second claim is that “Humans do not have general purpose minds, and neither will AIs.” … I think arguing “perfectly general” vs. “perfectly specific” is dumb – it’s a continuum … No single giant battlemech could maximize all combat parameters (relative to other equally expensive battlemechs), but all of them could defeat a human in single combat.

        I agree with you about generality being a continuum; however, I am not convinced that the right end of the continuum can be extended indefinitely toward perfect generality. The battlemech can crush a human every time, but it may not be able to boil the perfect egg (or even any egg for that matter) or write a sonnet. I am not convinced that the same tools (both mental and physical) that are useful for crushing humans are also useful for writing sonnets, and vice versa. Certainly, humans can do both, but they do so rather poorly.

        “Emulation of human thinking in other media will be constrained by cost”

        Yeah, this is the second weakest argument in the article, IMO, probably due to poor phrasing. Financial costs decrease with economies of scale, obviously; however, there is a physical limit to how much computation you can stuff into 1cc of physical volume, and I am far from convinced that this limit is either arbitrarily large, or even “merely” billions of times larger than our current technology allows. The claim that computing costs will always keep dropping exponentially is similar to the claim that computing power, or machine size, or, I don’t know, ponytail length will keep rising exponentially. There’s no law that says that the middle part of a curve has to look exactly like the entire curve.
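        The “middle of the curve” point can be illustrated numerically (a sketch, with an arbitrary ceiling): a logistic curve is nearly indistinguishable from an exponential early on, then saturates.

```python
import math

def exponential(t):
    return math.exp(t)

def logistic(t, cap=1000.0):
    # Same early growth as the exponential, but with a hard ceiling at cap.
    return cap / (1.0 + (cap - 1.0) * math.exp(-t))

for t in (1, 5, 12):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
# At t=1 the two are nearly equal; by t=12 the exponential is over 100x the
# logistic, which has flattened out near its ceiling.
```

        An observer sitting at t=1 has no way to tell, from the curve alone, which regime they are in.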

        That said, I don’t particularly care about the distinction of “emulation of human thinking” vs. “artificial intelligence”; to me they sound like implementation details, not two different categories.

        I seriously can’t imagine the level of confusion it takes to think that it’s impossible to be smarter than a normal human when every history book gives examples of humans who are smarter than normal humans. And … there’s no reason whatsoever to think that Johann von Neumann is anywhere near an upper bound.

        That’s like saying, “chickens keep getting bigger, so it’s absurd to deny that future chickens will be 1000 meters tall, or in fact infinitely tall”. Smarter than a normal human? Sure. Smarter than the smartest human who ever lived? Well, maybe, assuming that this claim is even coherent (as per above). Infinitely smart? No, most probably not. There are physical limits involved.

        “Another unchallenged belief of a super AI takeover, with little evidence, is that a super, near-infinite intelligence can quickly solve our major unsolved problems…” NOW YOU’RE JUST TROLLING ME. Eliezer’s written a bunch about this…

        Yes, and I was unimpressed with his articles (although I must admit my weakness to the Avatar meme). Imagine that you live in Ancient Greece. You are crazy smart. Smarter than Von Neumann. Smarter than ten Von Neumanns. Without ever stepping outside of your bathtub, would you be able to think your way toward correctly predicting black holes? Or semiconductors? Or even cell theory? I would argue that you could not, for two reasons. First of all, even posing the question “how do semiconductors work?” requires entire concepts that you simply do not possess, and you will never possess them until you notice something about the world… and the world inside your bathtub is kinda limited. Secondly, even if you did possess those concepts, there is a wide range of perfectly internally consistent and elegant models that can explain them; most of them are wrong, and you will never find out which is which until you actually get your hands on some germanium. The entire field of modern science is based on abandoning the elegant philosophical notion that one can think one’s way out of every problem, and on getting one’s hands dirty instead.

        I suppose you could argue that, by today, we have collected all the useful data that we’ll ever need, and all that remains is to analyze it, but AFAIK that’s not what you meant.

        Furthermore, the stronger version of this claim is IMO even less defensible: that the AI would somehow be able to solve problems that, according to our current understanding of science, and all of that wonderful data, are impossible to solve even in theory. I’m talking about things like FTL travel, perpetuum mobile, and possibly “gray goo” nanotechnology. But, again, I’m not sure if this is a claim that you’d be willing to make.

        It’s so annoying and I don’t know what to do about it.

        You already know my answer: write a FAQ! I promise you I’ll read it… Not that this means much, but still 🙂

        EDIT: trying to add missing linebreaks

        • Scott Alexander says:

          I agree that it’s wrong to compare human intelligence to Watson. I think the whole point of a superintelligence is that we’re talking about a general purpose reasoner that’s smarter than humans along all dimensions in the same obvious way that Mt. Everest is larger than humans along all dimensions. I think whether that’s possible is part of my disagreement with his latter points, so I won’t go into that here.

          I think it’s plausible that some early AIs might be better than humans in some areas but worse in others. Again, the degenerate case is Watson, which is great at Jeopardy but awful at everything else. I think at some point, someone will get enough things right that the AI is human-level at most scientific/technological tasks, after which it can figure out how to replicate other human abilities at its leisure. This seems like a strong answer to the contingent argument. I worry the original article was making a necessary argument, that if something is good at one thing it must be worse at something else. That seems completely wrong to me; humans are smarter than bacteria along pretty much every dimension, no tradeoffs required.

          Part of me wants to argue that it would be very strange if the maximum computation per unit area were anywhere near human scale, but I feel like maybe we should just avoid that entire argument. Fine, maybe cramming computational power into a small area is hard. So make a bigger computer! Come on now!

          Agreed that there may be some bound saying nothing will be “infinitely smart”. But we know why humans aren’t smarter than they are (head size capacity and mutational load), so if they also happened to be near the cosmic maximum for smartness that would be a crazy coincidence. I also think that “just” having von Neumann implemented on a machine that had Wikipedia integrated into its memory banks, could call on Mathematica as a “native app”, could call on Napoleon and Shakespeare as subroutines, and was running at 1000x human speed would be pretty darned superintelligent for all practical purposes. But this doesn’t require the theoretical limit to intelligence to be any higher than what’s been observed already.

          I don’t think we’ve “collected all the data we ever need” today, but number one, for the reasons mentioned in Eliezer’s posts I think a smart person can take data further than most people do now (if some theoretical biologist, looking over existing papers, came up with a coherent theory of aging, would we be shocked and say this was impossible?) and number two, the computer can just do the tests! Sure, the computer thinks up a good theory of cancer but isn’t sure about it. Why can’t it test it the same way any human scientist would, except probably better since it’s much smarter? This is like saying “A computer might be very smart, but it could never fry an egg, because only stoves can do that.” You’re going to look very silly when the Terminator just buys a stove.

          • Bugmaster says:

            I think at some point, someone will get enough things right that the AI is human-level at most scientific/technological tasks, after which it can figure out how to replicate other human abilities at its leisure.

            Ok, and I’d love to see some evidence for why you believe that. I’m not even sure if you can lump all “scientific/technological tasks” into the same category (though maybe you could; again, I’d like to see your reasoning).

            I worry the original article was making a necessary argument, that if something is good at one thing it must be worse at something else.

            Well, I wouldn’t say “necessarily worse”, but if your resources are non-trivially limited, then being able to do everything must come at the expense of being super-amazing at just one thing. This is why e.g. FPGAs outperform conventional microcontrollers at some basic image processing tasks, given the same size/power/cost constraints.

            Fine, maybe cramming computational power into a small area is hard. So make a bigger computer! Come on now!

            Really? Ok, how much bigger? Arbitrarily big? Bigger than our galaxy, maybe? This is how we got into this whole argument in the first place. Furthermore, I am far from convinced that computation can be scaled even linearly with volume without running into some pretty serious diminishing returns.

            But we know why humans aren’t smarter than they are (head size capacity and mutational load), so if they also happened to be near the cosmic maximum for smartness that would be a crazy coincidence.

            I never argued that humans are “the pinnacle of smartness”; rather, I think it’s epistemically reckless to assume that, just because current humans are not the pinnacle of smartness, then the pinnacle either does not exist, or is arbitrarily high, or even merely so high that intellects who reach that pinnacle would be quasi-godlike compared to humans.

            By analogy, chickens circa 1952 are not the pinnacle of fatness; it would’ve been foolish for someone living in 1952 to assume that chickens will never get bigger — but it would be equally foolish to assume that they’d become mountain-sized one day.

            I also think that “just” having von Neumann implemented on a machine that had Wikipedia integrated into its memory banks, could call on Mathematica as a “native app”, could call on Napoleon and Shakespeare as subroutines…

            Isn’t that basically what modern human society already is?

            …and was running at 1000x human speed…

            Ok, so it’s not that, but a) I’m not convinced that this is possible (though I’ll gladly grant you, say, 2x speed), or that problem-solving ability scales linearly with speed (for some of the reasons I’d already mentioned), and b) see below.

            but number one, for the reasons mentioned in Eliezer’s posts I think a smart person can take data further than most people do now

            Agreed, though I am not sure how much further. For example, genome-wide association studies were supposed to usher in a new era of biological understanding, but so far, they’ve kinda done the opposite.

            and number two, the computer can just do the tests!

            This is where we run into a problem, because tests take time — and they take the same amount of time regardless of how smart you are. If you want to grow a crop of rice (or, I don’t know, cows) to test your latest genetic prediction, you have to wait a couple of months (or however long it is that cows take to gestate). Sure, if you’re super-smart, then maybe you can eliminate some unnecessary rice-growing; but the bottom line is, you have to wait. And if you want to confirm the Higgs Boson, you need to build a supercollider. This will take longer than growing some rice. And if you want to land on Alpha Centauri Bb… hoo boy.

            This problem leads to two immediate consequences. First of all, all that waiting eliminates some of the AI’s speed and intelligence advantages. Secondly, it prevents it from going “FOOM” in the blink of an eye, as most Singularity proponents (though maybe not you, I’m not sure) tend to believe it would — thus negating most of its proposed dangers.

            Furthermore, things like supercolliders are incredibly expensive; meaning, they consume a significant portion of resources that are available to us. The AI would have to compete with other actors (notably, humans) to acquire these resources. The usual objection to that is, “duh, a superintelligent AI can outcompete any human”, but you can’t convince me that a Singularity is possible by assuming that it already happened and then reasoning about what it would do with its time.

          • Scott Alexander says:

            “Ok, and I’d love to see some evidence for why you believe that. I’m not even sure if you can lump all ‘scientific/technological tasks’ into the same category (though maybe you could; again, I’d like to see your reasoning).”

            Just to be clear, are you doubting that even human-level AI is possible?

            “Well, I wouldn’t say “necessarily worse”, but if your resources are non-trivially limited, then being able to do everything must come at the expense of being super-amazing at just one thing. This is why e.g. FPGAs outperform conventional microcontrollers at some basic image processing tasks, given the same size/power/cost constraints.”

            Again, I’m saying that this is only true within technology levels. A 2017 computer is better at everything than a 1987 computer. I don’t know what century’s computers human brains “correspond to”, but I bet there’s a century that comes after it.

            “Really ? Ok, how much bigger ? Arbitrarily big ? Bigger than our galaxy, maybe ? This is how we got into this whole argument in the first place. Furthermore, I am far from convinced that computation can be scaled even linearly with volume without running into some pretty serious diminishing returns.”

            I want to step back and ask whether you feel like you’re saying things that you think are true, or you’re objecting “but there’s still a chance, right?”. I agree it might be possible that for some reason the human brain is the densest possible computer, and that if you try to build a computer bigger than the human brain for some reason it can’t scale, but all of this sounds kind of like you’re making up an ad hoc system of physics specifically to limit intelligence at around human-level. I agree we can imagine a system of physics that limits intelligence at around this point. But it seems a lot like saying “maybe there are weird constraints we don’t know about that ensure an F-35 is the best possible fighter, so if the military tries to prepare for a further generation of fighter jets after that they’re just being pointlessly alarmist.”

            “I never argued that humans are “the pinnacle of smartness”; rather, I think it’s epistemically reckless to assume that, just because current humans are not the pinnacle of smartness, then the pinnacle either does not exist, or is arbitrarily high, or even merely so high that intellects who reach that pinnacle would be quasi-godlike compared to humans.”

            Again, I admit it’s possible that there are weird limits at exactly the place it would be necessary for your case to make sense, I just don’t see any evidence for this. Remember, my thesis isn’t “superintelligence will be created with 100% certainty”, it’s “superintelligence might be possible and is something we should think about”. I feel like this might be our most fundamental disagreement.

            “Isn’t that basically what modern human society already is?”

            http://slatestarcodex.com/2015/12/27/things-that-are-not-superintelligences/

            “This is where we run into a problem, because tests take time — and they take the same amount of time regardless of how smart you are. If you want to grow a crop of rice (or, I don’t know, cows) to test your latest genetic prediction, you have to wait a couple of months (or however long it is that cows take to gestate). Sure, if you’re super-smart, then maybe you can eliminate some unnecessary rice-growing; but the bottom line is, you have to wait.”

            Smart people have an advantage in knowing what tests to do, and knowing how to design the tests well. Just to give an example, if I knew all of modern science (including the experiments that had been done to prove it) and my only job was to replicate all of those experiments and confirm that they still worked, I could probably do most of it in a few months to a few years. Building a supercollider would admittedly be the hard part, but not if I was super-rich and had the resources of an entire civilization, and there might be ways to avoid using supercolliders if I were smart enough to think of them.

            But Napoleon conquered Europe without needing to do any tests, and Einstein discovered relativity without making a supercollider. There are probably lots of things you can do before “building supercolliders” is an impassable bottleneck.

          • ilkarnal says:

            We have real life humans now who are much much much smarter than their peers. Did John von Neumann take over the world? Could he have? Was he even as formidable an adversary, if you could choose between the two, as a young, strong, low FTO thug? In the universe of Ender’s Game and the universe of HPMOR intelligent people are terrifying! They are the best fighters, the best manipulators, they win all the time at everything. They win so much they get tired of winning.

            You might put a bit more weight on what actually happens in the real world. Intelligence, practically speaking, is one factor determining how efficiently you use the resources at your disposal. You’re still limited by the extent of the resources, and by other factors that may reduce how efficiently you can utilize them.

            You can use intelligence to gather more resources – true. You can use strength to gather more resources – true. You can use beauty to gather more resources – true. You can use social acumen to gather more resources – true. You can use resources to gather more resources – true.

            None of these differ cardinally in that regard. If you have a lot of money you can invest it relatively safely and make a decent return. If you are very strong you can win fights and contests and make a very significant amount of money, which you can then invest – etc. If you are very beautiful, fast, socially adept – all the same. Intelligence isn’t special.

            If I’m SUUUUUPER intelligent I can crush everyone in the stock market and become a bazillionaire! Sure, I guess. If you’re SUUUUPER beautiful you can have everyone falling head over heels in love with you. If you’re SUUUUPER rich you can own everything. If you’re SUUUUPER strong you can be super rich, super admired and intimidate anyone who is standing next to you. If you’re SUUUUPER socially adept you can get people to do whatever you want.

            The thing is that while we live in a world of contests, and the theoretical reward for being able to win at some domain constantly is basically infinite, pointing that out ignores the also-infinite theoretical exploitation of other traits. Can you imagine a being so intelligent that it foresees everything, makes all the right decisions, takes over the world? Sure. Can you imagine a master manipulator so expert that they get everything they want out of their victims? Sure. Can you imagine a girl so beautiful everyone she meets falls in love with her and wants to please her? Sure.

            Understand that there’s one world, one pie, and what’s going on is partly a wrestling match between different virtues trying to claw in as much of the pie as possible. You can imagine some transcendent manifestation of one virtue clawing in everything, but in fact the beings that claw in the most have a mix of virtues. Beings with only a lot of intelligence, in actual practice, aren’t getting very much and never got very much.

            A machine can be a lot better at information processing. Also it can develop a lot more physical power. Which matters more? Would you rather have a super powerful tank or a super smart computer?

            I can use my tank to blow up your computer. Oh – but you can use your computer to make money on the stock market and buy lots of tanks! But I can use my super-tank to intimidate people into paying me tribute and buy still more tanks. Maybe more tanks than any amount of strategic genius on your part can counterbalance.

            This whole thing is nonsensical because it imagines an incredible manifestation of a virtue and how it could win at everything but doesn’t imagine incredible manifestations of other virtues in turn. Why do you think it’s more likely someone will make a super-intelligent computer and take over the world, than it is that someone will make a super-strong tank army and take over the world? I tell you, I’d bet on the people whose plan is making the largest possible number of the best possible tanks, if I had to choose.

          • Deiseach says:

            a machine that …could call on Napoleon and Shakespeare as subroutines

            And then decided that nuking Elba and St Helena into dust was the smartest thing it could do in order to preserve itself from danger 🙂

          • Bugmaster says:

            @Scott Alexander:

            Just to be clear, are you doubting that even human-level AI is possible?

            Not at all, with two caveats.

            First of all, as per our previous discussion, judging what is “human-level” is a bit tough; by some metrics, certain AIs today are already superhuman at some tasks. That said, I’d be ok with saying, “an AI that can reliably pass for a human in all day-to-day communications” (normally I’d say “passes the Turing Test”, but I don’t want to get bogged down in semantics).

            And secondly, I would grant you “possible” and even “inevitable” (modulo humanity surviving long enough to build it), but I am not at all certain about “real soon now”.

            A 2017 computer is better at everything than a 1987 computer. I don’t know what century’s computers human brains “correspond to”, but I bet there’s a century that comes after it.

            I think we are arguing about matters of degree, not of principle. Yes, of course computers will keep improving — but I am not willing to believe that a). they will keep improving exponentially forever, or b). they will improve too quickly for anyone to notice, or c). they will improve to the point where a relatively compact machine can become a quasi-god.

            In other words, I’ll totally give you Von Neumann; I’ll even grant you uber-Neumann with Napoleon addons; but I’m not willing to grant you instant Shodan. At least, not without some additional evidence. I think that, as the original article says, you are vastly overestimating what can be accomplished with intelligence alone, as well as how far general human-style intelligence can be increased by conventional means (as contrasted with, say, Dyson spheres and such).

            but all of this sounds kind of like you’re making up an ad hoc system of physics specifically to limit intelligence at around human-level.

            I think that the problem is that you have just two categories in your mental model of intelligence: “the current human level” and “Singularity”. So when I say, “I doubt the Singularity is possible”, what you hear is, “I doubt computers can get smarter than the current human level”. But I disagree with this model; I think there is a spectrum between our current human level and the Singularity.

            True, I doubt anything will ever reach the far end of the spectrum, but that’s not the same as saying that our current place on the spectrum is already the far end. That said, I am denying the claim that you can move across this spectrum in the blink of an eye.

            Remember, my thesis isn’t “superintelligence will be created with 100% certainty”, it’s “superintelligence might be possible and is something we should think about”.

            Well, yes, I agree that it is possible, but I think it’s so unlikely that it’s not worth talking about (at least, not for the next 500 years or so). Again, this is not the same as saying, “nothing will ever be as smart as present-day humans”; rather, I’m saying “sure, even humans will get significantly smarter, and AIs may get smarter yet, but not to the point where the Singularity becomes remotely realistic”.

            By analogy, I think that commercial airplanes will keep getting faster. I think that the idea of a supersonic commercial plane might even be revived at some point (though I’m not an aviation buff, I could be wrong). However, I would disagree that, because of this fact, we should worry about people using commercial planes as light-speed projectiles.

            [Things that are not superintelligences]

            My point was, that in our current world today we already have half of the thing you described: smart people who specialize at some task, and collaborate with other smart people who specialize at different tasks. I did grant you that they do so at ordinary human speeds, not 1000x human speed. In the article you link, you grant your hypothetical AI many other powers besides these. I admit that we don’t have those, but these are new things that you are bringing up, so my reply to your original statement still stands.

            Smart people have an advantage in knowing what tests to do, and knowing how to design the tests well.

            True, and I said as much, but eventually you still have to grow the rice.

            Just to give an example, if I knew all of modern science (including the experiments that had been done to prove it) and my only job was to replicate all of those experiments and confirm that they still worked, I could probably do most of it in a few months to a few years.

            What, you personally ? Not some sort of a super-AI running on a Dyson Sphere, but you, Scott Alexander the human ? I disbelieve it. Our science took hundreds of years (at least !) to get to the point where we are today; are you saying that 99.5% of it has been a waste of time ? I mean, it will take you longer than that just to build one little particle accelerator, and that’s assuming you can cheat your way out of grinding lenses for your telescope !

            Building a supercollider would admittedly be the hard part, but not if I was super-rich and had the resources of an entire civilization…

            As I said in my previous post, if a superintelligent Singularity-grade AI already existed, then it could totally have all that. But seeing as it needs to have such resources in order to exist in the first place, this looks an awful lot like begging the question.

            and there might be ways to avoid using supercolliders if I were smart enough to think of them.

            Like what ? How will you think of them without already knowing the answers that supercolliders would’ve given you ? Forget bosons, how will you make rice (or cows) grow faster ?

          • Bugmaster says:

            @Scott Alexander:
            Sorry, forgot to add one more thing:

            But Napoleon conquered Europe without needing to do any tests…

            This is obviously false. Napoleon performed plenty of tests. He sent out scouts, organized logistics, and even fought real battles and learned from the results. He didn’t just wave his hand and say, “ok, I’ve thought of everything and Europe is mine now”, only to be proven 100% correct in the next instant. Napoleon may have been a military genius, but at the end of the day, he still had to fight in the real, physical world.

          • John Schilling says:

            But Napoleon conquered Europe without needing to do any tests

            Napoleon rather famously failed to conquer Europe, got himself boxed on Elba, escaped the box, failed to conquer Europe again in spite of the full-scale test data from his first experience, got securely boxed on St. Helena, and never escaped that one.

            ETA: as Bugmaster notes, even that first failed attempt incorporated the results of many smaller-scale tests, dynamically conducted over the course of the attempted conquest.

          • lupis42 says:

            I want to step back and ask whether you feel like you’re saying things that you think are true, or you’re objecting “but there’s still a chance, right?”. I agree it might be possible that for some reason the human brain is the densest possible computer, and that if you try to build a computer bigger than the human brain for some reason it can’t scale, but all of this sounds kind of like you’re making up an ad hoc system of physics specifically to limit intelligence at around human-level. I agree we can imagine a system of physics that limits intelligence at around this point.

            Again, I admit it’s possible that there are weird limits at exactly the place it would be necessary for your case to make sense, I just don’t see any evidence for this.

            Scott, I feel like you’re taking on the weakest conceivable version of this argument, and I’d like to see you try to steelman it a little bit more. In general, we expect linear improvements in any given area to have an exponential cost curve in terms of effort/resources/etc required to solve them. I would expect the ‘mutation load’ for cosmic rays and quantum effects to place very strange limits on the ability of data processing systems to increase in density arbitrarily, and energy input and thermal dissipation provide other sources of ‘suddenly on the steep part of an exponential curve’.
            Even when these are problems that we solve through ‘intelligence’, the solutions generally aren’t backwards compatible in any way that would be useful for a general AI trying to upgrade itself, unless it’s willing to ‘upgrade’ by building new bodies with wholly different architectures and then migrating, which doesn’t seem like a ‘foom’ problem.

          • sohois says:

            @Bugmaster

            You appear to be making two separate arguments in your post: first, that superintelligence is basically not possible within the medium term, and second, that superintelligence, even if it were achieved, would not necessarily pose a great danger due to the limits of what a mere intelligence can do.

            For your first point you say this:

            Yes, of course computers will keep improving — but I am not willing to believe that a). they will keep improving exponentially forever, or b). they will improve too quickly for anyone to notice, or c). they will improve to the point where a relatively compact machine can become a quasi-god.

            However, none of those three statements are required for superintelligence. I don’t know that anyone posits infinite, unbounded intelligence; as long as there is sufficient space beyond human, a superintelligence will be possible. B) addresses the idea of a hard takeoff, to use Bostrom’s terminology. On this it should be noted that, IIRC, the timescale for a hard takeoff is up to one year, not so fast that nobody notices. And even that point is not accepted by everyone; there will still be those who argue for a soft or medium takeoff. No matter how fast superintelligence develops, the problem of unfriendly AI would still exist; the slower cases just give more time to prepare. As for C), as has already been pointed out, it is not required that the size of a computer be limited in some way; moreover, given that we already know human-level intelligence can be achieved within the size of a brain, it does not seem incorrect to assume that future AI will not be supermassive, given the advantages of silicon over neurons.

            Then you appear to make two competing claims. First you state that you believe a spectrum of intelligence will have quite a distance between human, post-human and full superintelligence. Fair enough. However, you then state that you don’t believe AI will ever be able to reach the end of the spectrum, but contradict yourself by saying “it won’t be achieved in the blink of an eye” and that it isn’t worth worrying about for 500 years. Which is it? Do you believe that something recognized as superintelligence can exist but humans will not achieve it, or that it will simply take a really long time? The former I would find somewhat incoherent, so if it is the latter I would say that the burden of evidence for this rests with you, not with Scott or other supporters. Since dangerous superintelligence has apparently now become the establishment view, I would ask what evidence you have that the majority of AI experts are wrong in their assumptions (and point you to Bostrom’s 2013 survey for direct evidence of the expert view).

            Then you move onto claims that superintelligence could not achieve much without resources and tests. I think this is a much more defensible claim against superintelligence. I would use the following arguments against it: first, I’d say it is very likely that humanity will simply give a superintelligence access to some resources. Even with the admitted danger, people are still not going to want to build a superintelligence and then simply lock it away, what would be the point? (and perhaps it could trick people to let it out anyway). So it will probably have access to a lot of human knowledge, and not need to replicate vast amounts of science. It will probably have a way of physically interacting with the world like cameras and robotic arms. It may even get access to the internet, and subsequently be able to hack into a lot of more secure resources.

            Secondly, I would suggest that simulation brings a lot of risk. Even if the AI is very heavily supervised in its interactions with the outside world, what is to stop it from creating an accurate physical simulation and running experiments within it? Knowledge of physics and superintelligence would probably be enough to devise an extremely accurate world simulation that could be relied upon to perform experiments and then take actions in the real world.

          • Iain says:

            Even if the AI is very heavily supervised in its interactions with the outside world, what is to stop it from creating an accurate physical simulation and running experiments within it? Knowledge of physics and superintelligence would probably be enough to devise an extremely accurate world simulation that could be relied upon to perform experiments and then take actions in the real world.

            First: if your goal is to figure out the laws of physics, then it is pretty hard to experiment in a simulated reality, because you don’t know what the rules of your simulation should be.

            Second: physical simulation is incredibly expensive computationally. Huge supercomputers crank away for days to simulate a second or two of a nuclear explosion. That’s why the Lawrence Livermore National Lab keeps buying such fancy machines.

          • Bugmaster says:

            @sohois:
            Ok, let me clarify some of my claims.

            First of all, I agree with Scott that modern-age humans are not the pinnacle of intelligence. Future humans may be smarter, and AIs may be smarter still.

            However, I disagree that AIs will one day get so smart as to become godlike. I’m not claiming this is impossible in principle, just that it’s incredibly unlikely — so unlikely that it’s not worth worrying about for the next 500 years (at least).

            I think one problem we are having is that of semantics: the word “superintelligence” can mean, “something that is at least marginally smarter than the smartest human”, but also, “something so smart that it could unilaterally take over the world and convert it all to computronium via nanotechnology”. The former is interesting, but not really all that dangerous (at least, not by itself); I’m strictly concerned with the latter.

            As you have correctly pointed out, I also disbelieve in the “hard takeoff” claim. Regardless of how smart AIs can practically get, I doubt that they will be able to do so incredibly quickly.

            Since dangerous superintelligence has apparently now become the establishment view, i would ask what evidence you have that the majority of AI experts are wrong in their assumptions?

            That’s like saying, “since most people believe in God, what is your evidence that He doesn’t exist ?” I can’t prove a negative. That said, I’ve already mentioned some of the reasons I do not believe in the Singularity; in fact, the article that started this whole thread articulates some of these reasons better than I could. Still, here are a few of them:

            * I am not convinced that it makes sense to speak of “intelligence” as though it’s a simple linear quantity. Surely, both you and Google are smarter than a rock; but is Google smarter than you ? Well, in the domain of performing searches, it’s superintelligent; but in the domain of arguing with people on SSC, maybe not… Scott also wrote an article about this.

            * Similarly, while a human-like intelligence is certainly possible (at least in principle), I’m not convinced that it is going to be terribly useful in practice. When I want to add up numbers very quickly, I use Excel, despite the fact that modern humans outsmart Excel by a massive margin. If you insist on making your AI as general as possible (or perhaps as human-like as possible), you may be placing significant limits on what it can achieve in any specific area.

            * I think people tend to vastly overestimate how much computing power can be packed into a single CC of space; and they tend to underestimate how quickly diminishing returns kick in once you start networking computers together. Of course, if your AI has access to nanotechnology and Dyson spheres, it doesn’t have such problems — but if it already has to be superintelligent in order to become superintelligent enough to develop all that nanotech, then you’re begging the question.

            * Intelligence is not the only thing that is required to achieve real-world goals, such as e.g. building an even smarter computer. In many cases, it’s not even the most important thing. You also need physical resources, labor, and, above all, time. Combined with the point above, this puts some pretty harsh limits on what an AI could reasonably achieve.

            * I am not convinced that all the powers commonly attributed to AIs — nanotechnology, functional omniscience, total mental domination of humans, etc. — are possible to achieve even in principle. Compare these powers with FTL travel: the smarter we get, and the more we learn, the more certain we become that FTL is impossible.

            what is to stop it from creating an accurate physical simulation and running experiments within it?

            The whole point of running experiments is to figure out the rules that govern the world, so that you can program them in your simulation. To put it another way, there are a ton of beautiful, internally consistent hypotheses about the true nature of those little dots of light in the night sky; but you won’t know which (if any) of them are correct until you build a real, physical telescope.

          • Bugmaster says:

            I guess I should also point out that, despite all of my objections to the Singularity, I do agree that AI can be dangerous — just like nuclear power, fossil fuels, mass production, the Internet, and even fire. But the key to avoiding these dangers is to approach them realistically, and IMO spending time and money on trying to avert the Singularity/UFAI/FOOM/etc. is kind of the opposite of that.

          • Marshayne Lonehand says:

            Scott Alexander postulates “I think the whole point of a superintelligence is that we’re talking about a general purpose reasoner that’s smarter than humans along all dimensions in the same obvious way that Mt. Everest is larger than humans along all dimensions.”

            Well, there’s the problem, right there! 🙂

            Let us suppose that we understand “reasoning” sensu stricto, as something like “a ratiocinative process by which (Bayesian) chains of inferences are constructed from sets of observations”.

            We reflect that 20th century AI projects to achieve human-level intelligence — or surpass it — by purely ratiocinative processes have failed utterly.

            We reflect further that an emerging consensus of mathematicians appreciates that ratiocination has mainly to do with the social process(es) by which mathematicians convince one another of theorems, yet little to do with the essentially human cognitive processes by which mathematicians conceive theorems, and perceive beauty in them, and share them.

            Consonantly, in the present-day AI Golden Age, human-level cognitive performance is exhibited by algorithms that have little or no ratiocinative component whatsoever.

            Moreover, in psychiatric practice, “generalized ratiocinative deficiency” is not a diagnostic category, whereas hypertrophic ratiocination (“overthinking”) is characteristic of many personality disorders.

            In a nutshell, therefore, shouldn’t we anticipate that a coming generation of “supercognitive” machines (as they might be called) will exhibit, not ratiocinative “superintelligence”, but rather “superempathy” and “superintuition” — cognitive capacities that are (literally) “superhuman”?

            Are we not led to foresee a generation of superempathic and superintuitive AIs that look and talk rather like Fred Rogers — and are comparably self-aware to Fred Rogers — AIs whose intuitive and empathic capacities are to those of adult humans, rather as Fred Rogers’ cognition is to that of preschool children?

            The above multiple reasons lead us to reasonably foresee that a generation from now (more-or-less), every psychiatric therapy group will include a few superempathic, superintuitive, “Rogers-class” AIs … and isn’t it the case, that AI-capacities already are advancing — rapidly and irretrievably — to fill this supercognitive human / social / therapeutic need?

            To borrow a phrase from Young Frankenstein (1974), “They — meaning Rogers-class AIs — are going to be very popular!” 🙂

          • pontifex says:

            I think the biggest weakness in the Kurzweil or LessWrong-style “AI foom” argument is the assumption that powerful AIs will easily be able to create still more powerful ones. We don’t really know that to be true. Maybe making better AIs is exponentially hard, in which case AI foom still won’t happen, just gradual improvement.

          • I guess I should also point out that, despite all of my objections to the Singularity, I do agree that AI can be dangerous — just like nuclear power, fossil fuels, mass production, the Internet, and even

            It’s important to distinguish X-risk from risk, as it is important to distinguish AGI from ASI.

          • sohois says:

            @Iain

            My first point leads into the second about simulation; I would expect any superintelligence to be at least given some knowledge and resources, or else it would be entirely useless, and from that I assume it would not need to do physical experiments to gain the knowledge required for an accurate simulation.

            Also, on the point about computing power, I should point out that even the most optimistic forecasts (Kurzweil aside) don’t anticipate such AI for another 30 years, when simulation could be considerably more efficient.

            @Bug
            I certainly don’t expect you to prove a negative, but seeing where we are posting I’m going to assume we can all reason in a roughly Bayesian manner and move confidence levels up or down without needing to ‘prove’ something. So regarding your points:

            On intelligence as a linear quantity, I agree that there is probably a lot to human intelligence, and even with something like IQ you might witness a lot of variance in certain domains between two individuals with the same value. But I fail to see how this makes the risk of superintelligence less. It doesn’t matter if Excel 2081 is better at spreadsheets than the AI, since the AI can just make use of these software tools itself. It doesn’t matter if human beings are still much better at getting votes in a democracy than an AI, since there are hundreds of routes to danger that don’t involve the AI becoming an elected leader.

            Your third point I think is strong and I don’t have a good counterargument, so will bear that in mind.

            Your final two points, I believe, are arguments against some utopian vision of superintelligence but do not reduce its dangers. The point you and pontifex make, that scaling up intelligence into ever more powerful computers may take time, is valid; but even so, at some point you’re still going to need to solve the friendliness problem, and given that the risk of a hard takeoff remains, it would be foolhardy to just hope for a slower development period. And yes, not having access to things like nanotechnology certainly reduces a lot of potential apocalypses, but there are still plenty that won’t require that kind of advanced technology.

          • random832 says:

            @Scott Alexander

            Again, I’m saying that this is only true within technology levels. A 2017 computer is better at everything than a 1987 computer. I don’t know what century’s computers human brains “correspond to”, but I bet there’s a century that comes after it.

            I’m going to look foolish for posting this so soon after posting something optimistic about quantum computers, but I don’t think it’s 100% certain there will be a century after this one (in the sense that we will ever have computers that are as far ahead of today’s on any measurable axis as today’s are ahead of 1987, let alone three times over).

          • Bugmaster says:

            @sohois:
            My first two points were meant to attack the idea of an exponential growing intelligence. Today, we have AIs that can compose poetry, recognize images, plot routes, play Go, etc. Some of them (e.g. the AI that scans your envelopes at the post office, or AlphaGo) do so much better than any human ever could. However, they do so in a way that is very different from how humans, with their more general intelligence, approach the same tasks. It may very well be the case that general intelligence is simply not the right tool for the job.

            You are right in saying that an AGI could just buy an Excel license, but then, so can anyone else. The claim is not merely that AGI is dangerous — any technology is dangerous, after all — but that it is orders of magnitude more dangerous than humans, because it can improve itself exponentially and nearly instantaneously. By contrast, if you had an intelligence that developed at the same rate as humans do, then it would be only as dangerous as the average human, and we already know how to deal with those (more or less).

            But I believe that the failure of AI-FOOM is overdetermined at this point.

            If the very notion of exponential growth does not apply to intelligence because it is not a single quantity, then there’s no AI-FOOM, since the concept doesn’t even make sense. It’s tempting to say, “but that’s easy, the AI will be smarter than an army of Von Neumanns !”, but those are just words, and it’s not clear (at least, to me) what they mean. I admit that this is probably the weakest of my arguments.

            If general intelligence is insufficient for rapid exponential growth, then there won’t be any AI-FOOM either, since narrowly focused intelligences will just do their narrowly focused things. And slow exponential growth is no danger; or, at least, not any more dangerous than the growth we humans have been experiencing throughout history (which is pretty dangerous, admittedly).

            If there are strict limits on how much computing power can be packed into a unit of space, then there may not be any AI-FOOM either, since there’s simply not enough space where we would put it (and the situation gets even worse when you consider diminishing returns due to networking).

            If the AI has to perform real actions in the physical world in order to acquire superhuman powers, then it has to do so at real-world speeds, and there’s no FOOM. The real world is super slow; for example, it takes a whole year just to observe all the four seasons. Same problem occurs when the AI wants to build something — only it’s even worse, since now it will run into many more limitations, such as “sorry, no one has that much lithium available for sale at the moment”.

            And finally, even if the AI could somehow become superintelligent, it is only dangerous if it can actually do something with that intelligence. If it can’t invent nanotechnology, achieve global mind control, hack the planet, or become omniscient by will alone — because none of these things are physically possible — then its intelligence won’t count for much. And if it needs to acquire these powers in order to become superintelligent in the first place, then the whole idea is a bust.

        • Speaker To Animals says:

          Smarter than ten Von Neumanns

          Are ten Von Neumanns smarter than one Von Neumann? I don’t think it works that way – otherwise you could just throw more people at a problem and, if there were enough of them, the problem would be solved.

          A bunch of dumb people aren’t smarter than a smart person.

          • kboon says:

            Granting that a bunch of dumb people aren’t smarter than a smart person, they might still be able to produce more work, faster. Ten Von Neumanns could do about ten times as much work as one Von Neumann, barring overhead. They could explore ten alternatives in parallel, rather than in series.

      • Johann von Neumann says:
    • MawBTS says:

      Hey, wait a minute. Doesn’t the Singularity hypothesis actually propose that machines will get arbitrarily large as well as arbitrarily smart ? Hmm…

      I don’t think so. The Singularity is a point of sudden runaway technological growth initiated by an artificial intelligence. We cannot predict what will happen in such an event.

      • The Singularity is a point of sudden runaway technological growth initiated by an artificial intelligence.

        I don’t think that’s a correct description of the singularity as Vinge hypothesized it. It’s runaway change, but it doesn’t have to be due to an artificial intelligence. It could, for example, be a result of technologies that raised human intelligence, with that increased intelligence used to improve those technologies, with those improved technologies used … .

        • Bugmaster says:

          Doesn’t that all end up with Dyson Spheres made of computronium at some point ?

  5. Mitch Lindgren says:

    Software developer here. I don’t have actual data to dispute deBoer’s claim, but I recall a post Scott made a while ago in which he described how anti-depressants and other medications might be both overprescribed and underprescribed because our methods for diagnosis are imperfect. As a result, some people who should be medicated aren’t, and others who shouldn’t be are.

    I wonder if the computer science graduate shortage could be a sort of paradox like that, where there is a shortage of truly highly skilled candidates for employment, but an excess of people who are credentialed but lack the actual skills required. I can’t speak for STEM in general, but CS is a very difficult field and graduating from a 4-year CS degree is by no means a guarantee that one can write good code. Anecdotally, many employers will tell you that a huge number of applicants can’t answer even extremely basic programming questions. This, combined with the fact that a bad programmer can actually have negative net productivity, makes it completely unsurprising to me that some people with CS degrees are unable to find jobs. 5% unemployment among CS graduates doesn’t seem surprising. When I think about the bottom 5% of people I took CS classes with, there is absolutely no way I would hire any of them for any project I considered remotely important.

    Of course, this is just my two cents. I would like to see more thorough studies that try to quantify these effects rather than comparing raw unemployment numbers.

    • Bugmaster says:

      Am programmer, can confirm (or rather, add a data point). At best, maybe 15–20% of the candidates with CS degrees we’d interviewed could answer a basic question about writing a simple 4-line loop. The most common failure modes, in order of decreasing frequency:

      * Randomly throwing out vaguely CS-sounding terms in hope that something sticks.
      * Proclaiming that such low-level problems are simply beneath a programmer of the applicant’s stature. Yes, really.
      * Describing (inevitably, incorrectly) how one would use Excel, Oracle, or some esoteric tool that no one’d ever heard of to obtain the answer.
      * Making a good show of it, but failing to include the correct termination condition, the counters, or the inputs.
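      For concreteness, here is a hypothetical stand-in for the kind of simple-loop question described (the actual question isn’t given in the thread): find the largest value in a non-empty list. A passing answer, in Python, is just the loop with its inputs, comparison, and termination in place:

```python
# Hypothetical stand-in for the kind of four-line-loop interview
# question described above (the real question isn't given):
# find the largest value in a non-empty list.
def largest(values):
    best = values[0]       # the input / starting value
    for v in values[1:]:   # the loop
        if v > best:       # the comparison
            best = v
    return best            # terminates when the list is exhausted
```

      A failing answer, per the list above, typically drops the comparison, mangles the loop bounds, or never returns anything.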

      • In interviewing, you’re getting a biased sample: horrible programmers stick around applying to jobs, but excellent programmers take the first one they want. And good programmers frequently get jobs via connections and never show up in resume piles at all. Which is to say, the level of incompetence is way less than 80% in the programmer population, but it’s still quite significant.

        • Forlorn Hopes says:

          Have you read Joel on Software?

          • ThirteenthLetter says:

            Shame about what happened to that guy. Really good writer on tech issues, but the brain eater got him a few years ago.

          • Bugmaster says:

            Er… what happened to him ? I used to read his articles back in the day, but eventually I stopped — partially because I moved on to other things, and partially because he started repeating himself…

          • https://blog.codinghorror.com/has-joel-spolsky-jumped-the-shark/

            I sympathise: one of the worst experiences I had was trying to make a small change to something written in an in-house language. Oh, and yes, you can extend Ruby with C, and that is the right way to write a dynamic application with a small critical section.

          • Brad says:

            I agree with everything in that blog post. Wasabi was a crazy idea.

            But it’s worth noting that since it was written the blog post author and Joel co-founded Stack Overflow. Subsequently Joel’s company launched Trello, which was eventually spun off. It recently sold to Atlassian for $425MM.

            If that’s what happens after jumping the shark, someone get me a pair of water skis.

      • Enkidum says:

        Out of curiosity, what was the question?

        • Bugmaster says:

          It was similar to the FizzBuzz problem, only perhaps a little easier (if you can believe that). We didn’t use the FizzBuzz problem itself because we figured people would’ve memorized the answer by now.

          • Jaskologist says:

            Nope, they still fail it.

          • Bugmaster says:

            Yeah, I’ve read the article. It’s refreshing to know that even our most pessimistic models of CS candidates were way too optimistic 🙁

          • Brad says:

            Memorizing fizzbuzz at least shows some ambition.

          • suntzuanime says:

            It’s weird, because I’ve worked on lots of group projects with CS-degree-holders-to-be, and while some of them were what I would call incompetent, any of them could have handled something like FizzBuzz no sweat. Is this just a matter of all CS degrees not being created equal and the candidates people complain about coming from diploma mills? It’s not like I went to a top school even, just a random state university. Can employers really not distinguish the diploma mills from the legit schools?

          • gbdub says:

            Similar thoughts to suntzuanime, and I’m not even a software engineer.

            Does “Failed FizzBuzz” mean:
            1) totally stumped, can’t even come up with messy pseudo code?
            2) had serious language-independent flaws in their implementation (forgot to use an exit criterion for the for loop, didn’t use if – else if – else structure)?
            3) made code that would work, but inefficiently?
            4) had a gap in their language-specific knowledge (e.g. forgot an “#include ” line or forgot the symbol for the modulus operator in C++)?
            5) made a “typo” or other minor language specific flaw, despite otherwise knowing what they were doing?

            I can believe 99% of CS majors might fail at level 4 or 5 when put on the spot, but level 1?

            And it might seem like an excuse, but depending on your experience, starting a C++ or whatever program from a blank slate may not be something you’ve done since college (instead starting from existing code or templates), so you could have a bit of the “familiar with calculus, can’t do long division by hand anymore” problem.
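            For reference, a working FizzBuzz is only a few lines; a level-1 failure means being unable to produce even the rough shape of this (a minimal Python sketch):

```python
def fizzbuzz(n):
    """Return the FizzBuzz output for a single number n."""
    if n % 15 == 0:        # divisible by both 3 and 5
        return "FizzBuzz"
    elif n % 3 == 0:
        return "Fizz"
    elif n % 5 == 0:
        return "Buzz"
    return str(n)

# The classic statement of the problem: print the results for 1..100.
for i in range(1, 101):
    print(fizzbuzz(i))
```

            Failures at levels 2–5 would be bugs within this shape: wrong branch order, a missing else, a forgotten termination condition, or fumbled modulus syntax.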

          • rlms says:

            I can’t find the source where I read this, but apparently many interviewees don’t know what the modulus operator is (which obviously makes FizzBuzz a bit harder).

          • johnmcg says:

            And it might seem like an excuse, but depending on your experience, starting a C++ or whatever program from a blank slate may not be something you’ve done since college (instead starting from existing code or templates), so you could have a bit of the “familiar with calculus, can’t do long division by hand anymore” problem.

            For experienced candidates, this can often be the case.

            But, it can still be a red flag if you’re looking for someone for a technical team. Getting so out of touch with actual coding that you can’t even whip up a solution to FizzBuzz or a similar problem indicates someone who’s spent more time waving his hands at whiteboards than actually producing working code.

          • Enkidum says:

            Huh. I’m not a CS grad, so I’d never heard of Fizzbuzz, but I basically program for a living, and after googling it I immediately thought of how I’d solve it. I assumed my solution was inefficient / stupid / whatever, but apparently it’s sensible enough. Looks like I can get a job at your firm.

          • johnmcg says:

            Huh. I’m not a CS grad, so I’d never heard of Fizzbuzz, but I basically program for a living, and after googling it I immediately thought of how I’d solve it. I assumed my solution was inefficient / stupid / whatever, but apparently it’s sensible enough. Looks like I can get a job at your firm.

            The thing about it is that it’s not a computer science problem; and there’s really not a clever/efficient way to do it.

            It’s pretty much a pure “Can this person code in the language he claims he knows?” test.

          • Enkidum says:

            I like that link, thanks. I started thinking about how to solve the birth/death/population problem and then realized I have actual work to do.

          • Bugmaster says:

            @gbdub:
            Our version of the problem did not need the modulo operator, and most people still failed for reasons #1 and #2.

          • Mary says:

            It would be easier to be more clever and efficient at FizzBuzz than some code I’ve actually seen in use.

            Though I was thinking it would depend on the language whether it’s more efficient to use modulus or to keep counters up to 3 and 5, which of course would use only addition.
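            The counter idea can be sketched like this (a Python illustration of the approach, using only addition and comparison, no modulus operator):

```python
def fizzbuzz_no_modulus(limit):
    """FizzBuzz using reset counters instead of the modulus operator."""
    out = []
    threes, fives = 0, 0
    for i in range(1, limit + 1):
        threes += 1            # count up to 3, then reset
        fives += 1             # count up to 5, then reset
        if threes == 3 and fives == 5:
            out.append("FizzBuzz")
        elif threes == 3:
            out.append("Fizz")
        elif fives == 5:
            out.append("Buzz")
        else:
            out.append(str(i))
        if threes == 3:
            threes = 0
        if fives == 5:
            fives = 0
    return out
```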

    • Scott Alexander says:

      Thanks. That was my guess too – see my replies to Freddie here.

    • lycotic says:

      There’s certainly a difference of perspective here.

      deBoer’s numbers for college majors show many CS grads don’t get jobs in the field.

      $BIG_TECH_CO is desperately trying to hire programmers, using ~10% of its programmers’ time on interviewing and writing up notes. Think Google’s 20% time is weird? Try the colossal time spent on mostly failed job applications: endless hours sitting in rooms watching people fail to code. At least a TA gets to feel like they’re helping them along toward something.

      deBoer casts this as being a tech “star”, like a rock star, but that’s horribly misleading, Silicon Valley puffery aside. There are quite a number of very fine programmers worth hiring, many more than there are legit rock stars, but companies’ ability to find them is wanting, and so when they come across one, foreign or domestic, they move heaven and earth to get them hired.

      So back to those CS grads deBoer is talking about. BigValleyCo isn’t going to hire them, not because they’d like to pay foreign workers less money, but because they’d contribute negative progress toward the organization. BigValleyCo, unlike some places, isn’t built to handle low-output workers. If Trump toasted the H1-Bs entirely, Wipro might melt, but BigValleyCo would just grow more slowly.

      (Now I looked it up and discovered Wipro’s offices are in Mountain View. I… didn’t know that. I don’t mean them, or Oracle, much of which is more like Wipro than they’d like to admit.)

      • Scott Alexander says:

        What do you think of http://danluu.com/programmer-moneyball/ arguing that companies are terrible at identifying good programmers even when it should be obvious?

        • lycotic says:

          I’m not sure about “obvious”. I stopped looking at resumes for the candidates I interview — it was too depressing. There *are* signals that can be acquired from resumes, but they’re not in the straightforward bits, and they lean uncomfortably toward unfair heuristics like “if it says ‘Enterprise’ on it anywhere, drop it on the floor.” The recruiters look for us, but I can’t claim that their methods are any fairer.

          As for the given complaints:

          1. Tech experience is in irrelevant tech

          And his specific complaint here is that Microsoft tech isn’t cool and thus likely to be dropped. I think this is a typical stereotype thing — see my comment about ‘Enterprise’ above; the Microsoft stack attracted a large pool of mediocre programmers in the late ’90s. So if you see it on a resume, then… yeah, stereotypes.

          2. “Experience is too random, with payments, mobile, data analytics, and UX.”

          I hope our recruiters don’t do that. That’s… the opposite of a problem.

          3. Contractors are generally not the strongest technically

          Bias against the hordes of Oracle certified engineers?

          Google has a hard time hiring the right people, and so mostly deals with a horrible precision/recall tradeoff by selecting the low recall end of it. (There’s a joke that goes around that only 60% of Googlers would succeed in making it through the process again.) Only a few very privileged companies can afford to do that.
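          The precision/recall tradeoff can be made concrete with a toy calculation (all numbers invented for illustration, not actual hiring data):

```python
# Toy model of the hiring precision/recall tradeoff (numbers invented).
# base_rate: fraction of applicants who are competent.
# sensitivity: chance a competent applicant passes the bar (recall).
# false_positive_rate: chance an incompetent applicant passes anyway.
def hiring_precision(base_rate, sensitivity, false_positive_rate):
    hired_good = base_rate * sensitivity
    hired_bad = (1 - base_rate) * false_positive_rate
    return hired_good / (hired_good + hired_bad)

# Strict bar: rejects half the good candidates (low recall),
# but the people it does hire are mostly competent.
strict = hiring_precision(0.20, 0.50, 0.02)    # ~0.86 precision

# Lenient bar: catches nearly all good candidates, but lets
# enough bad ones through that precision collapses.
lenient = hiring_precision(0.20, 0.95, 0.30)   # ~0.44 precision
```

          “Selecting the low recall end” means living with the strict-bar numbers: you turn away many people who would have worked out, in exchange for rarely hiring someone who won’t.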

          Smaller companies do have the advantage that their hiring practices can be more fluid, since they’re less likely to be directly gamed. But they don’t get to draw quite the same caliber of applicants.

          Where does that leave us?

          * “Experience” only matters if you actually *learn* something from it, and it is all too clear that many haven’t.
          * What’s left in the resume are more subtle signals (‘EJB? ewww.’) that likely unfairly bias us against some, but not all candidates.

          If that sounds like a machine learning problem, well then there you go. It may not be any worse than what we have now, but isn’t likely to be much better. (‘EJB? -0.95’).

          FWIW, I haven’t been tracking it closely, but the caliber of people I’m interviewing has gone gradually up (to the point where the vast majority *can* code a nested loop), and they’re still keeping us plenty busy doing it, so that suggests that the first step has been lifting the P/R line a good bit. I’m not sure how, and I probably couldn’t tell you if I knew.

          EDIT: Sorry, I didn’t answer the question directly. In short, the answer is “Yes, but it’s rarely obvious from the company’s end.”

        • Bugmaster says:

          I’m not an analyst or even any kind of a smart person, but in my experience, this is an inevitable side effect of scaling.

          If you are a small software startup, you can afford to individually interview every applicant, observe how they solve real-world problems, and maybe even mentor them. If you are the kind of company who hires 1000 people per day, you need some sort of an optimization, otherwise you’ll never get anything done.

          So, you create metrics, intended to act as a filter to remove, say, 80% of unqualified people. These filters have to be super-efficient (otherwise you just get bogged down again); in CS terms, you are willing to accept some loss of correctness in exchange for execution speed.

          But the reason you had to go that far is that you’re a large company with a ton of money; and the easiest way to get at some of that money (both internally and externally!) is to game your filters. Now whole industries spring up around evading your filters, selling you better filters, evading those better filters, and so on, until all is consumed in the fiery maw of Moloch.

        • sketerpot says:

          It looks like he’s mostly complaining about the earlier stages of the hiring funnel. Which, yeah, tend to be done pretty poorly — but there are structural reasons why it’s tricky to do better.

          Hiring for programmers usually comes in several stages, with later stages considering fewer people but expending more effort per person. By far the biggest reduction in people-per-stage happens in the resume screening phase, where you take a stack of resumes (or LinkedIn pages, or whatever) and decide which ones look most interesting to you. This is typically done by hand, and manages to be both labor-intensive and surprisingly difficult: people’s qualifications on paper are a remarkably poor predictor of their qualifications in reality. Because it’s so time-consuming, this part is usually done by recruiters who don’t actually understand what they’re looking at, and could probably be replaced by a simple linear scoring function: “Stanford” gets +10 points, “MUMPS” gets -5 points, etc. The article does a good job enumerating the woes, so I won’t repeat them.
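          A linear scoring function of that kind is trivial to write; the hard part is that whatever weights you pick encode exactly the stereotypes discussed upthread. A toy sketch (all keywords and weights invented for illustration):

```python
# Toy resume scorer: sum the weights of keywords found in the text.
# Keywords and weights are invented for illustration; note that they
# encode exactly the kind of biases the thread is complaining about.
WEIGHTS = {
    "stanford": 10,
    "mumps": -5,
    "enterprise": -3,   # "if it says 'Enterprise' on it anywhere..."
    "ejb": -2,
}

def score_resume(text):
    """Return the total keyword score for a resume's text."""
    text = text.lower()
    return sum(w for kw, w in WEIGHTS.items() if kw in text)

score_resume("BS Stanford; built Enterprise EJB apps")  # 10 - 3 - 2 = 5
```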

          You might try to get people who know what they’re doing to handle the resume screening — but those people will probably cost a lot more per hour, and this stuff takes a lot of hours. And they still probably wouldn’t be that good at it, because there’s so little signal in the noise. You could invest more in training people, but that still leaves the problem of identifying the people who are worth training — and that was most of the problem in the first place! You could try to automate as much of this process as possible and have people who are good at statistics try to optimize things rationally — and I’m actually pretty hopeful about this one, since it’s the kind of Hard that we might actually be able to solve technologically. The folks over at Triplebyte are making a valiant attempt at it. Crucially, it looks like they’re doing more than just the equivalent of resume screening: they also have some kind of standardized test component, which is what you really need to do adequately here.

          So, bringing it back to the topic of the rest of the thread: it’s true that the early stages of the hiring funnel are often done in a capricious, buzzword-driven, poorly thought out way, because it would be difficult and expensive to do better. It’s also true that most of the applicant pool isn’t actually competent, and once you get to the later stages, where people have to actually show off some basic skills in front of other engineers, this becomes painfully obvious.

        • Brad says:

          I do think there’s a moneyball-type opportunity created by the optimize-for-minimal-false-positives methods the BigTech companies use. But it’s a tricky thing to capitalize on. If you invest a bunch of money in separating out the wheat from the chaff and then do training on top of that, as Dan Luu suggests, you may well find that you’ve done a bang-up job of recruiting for BigTech. Billy Beane had ironclad employment contracts; those don’t exist in the tech industry.

        • Iain says:

          Thomas Ptacek has good thoughts on hiring for tech. Quick summary: current technical interviews are a terrible way to identify competent candidates, and should be replaced with lower-pressure coding exercises. The process should be standardized as much as possible. (But read the whole thing.)

        • Alex Zavoluk says:

          Hiring is fundamentally an anti-inductive problem, which you’ve written about before. My team is desperately trying to hire dozens of people, but we have to interview people with “3 years Python experience” on their resume who can barely write a function or use a dictionary. Yesterday a coworker interviewed someone with a master’s in comp sci (and apparently going for their PhD) who could not figure out a dead-simple algorithm or properly differentiate between or use lists, dictionaries, or maps (in Python).

        • Edward Scizorhands says:

          #1. Every developer has their private ideas of what it means “to be a good developer.” This nearly always means “people like me.”

          #2. Each developer has debated the concept of “what really matters” in assessing candidates.

          #3. When any developer gets into a role of hiring people, he can retroactively win all his flame wars from #2 by enforcing his rules from #1.

          #4. Note that it doesn’t matter if the rules from #1 were any good. They were likely completely random or even bad. By the time anyone figures out that the hiring process sucks, there is a new batch of developers coming through with their own biases.

        • The Nybbler says:

          It’s true that companies are terrible at identifying good programmers. But it’s _never_ obvious from a resume or an interview. Let a good or great programmer work on a project with another programmer, and they’ll figure out if the latter is good, great, mediocre, or bad pretty quickly, but it takes much longer than an interview. And mediocre programmers can’t do this; someone struggling to do their own job may mistake competence for idiocy.

          The pre-interview steps are probably often worse than useless (that is, you might have a higher proportion of better programmers among your culls than your picks). They’re often done by non-technical people based on buzzword bingo and random HR-type considerations (Gap in work history! Burn the witch!).

    • hnau says:

      +1 to this. (Should’ve read the previous comments before posting my own… I’ve been scooped.)

      To Scott’s / lycotic’s point: from my (limited) experience, tech companies tend to be pretty darn good at finding people who will be productive at that company. The company I’m currently at does a good job of avoiding rejection reasons (1) and (2) from Scott’s link – we do care about relevant experience, and we’re open-minded as to background. But I’ve interviewed and rejected several candidates who fell under reason (3). Attitude / communication / flexibility / being able to work with people / “culture fit” is a big part of productivity, and a big part of what we interview for. The contractors I’ve interviewed tend to be bossy, set in their ways, and not actually that good at reasoning about an unfamiliar system (which is the other thing a whiteboard interview does a good job of testing). New grads, on the other hand, are appealing to recruit: they’re wired to learn, the “culture fit” is easy to train into them, and they haven’t had time to develop big egos or bad habits.

    • Anonymous says:

      Freddie deBoer gives lots of evidence that there is no shortage of qualified STEM workers relative to other fields and the industry is actually pretty saturated. But Wall Street Journal seems to think they have evidence for the opposite? Curious what all of the tech workers here think.

      IME, there’s like 100–200 applicants for every advertised position in tech. Even if half of them were pure hopefuls who can’t pass FizzBuzz, the other 50–100 are at least marginally useful techies. However, I never got the impression that aptitude with the technical skills was the main reason for rejection. The impression I got was that they were rejecting people mostly on grounds of mismatched personality.

      Pretty much all job ads I’ve seen have had these fanciful personality requirement sections in addition to the technical requirement sections. And to at least some degree, those are actually relevant. If you hire someone who likes being handed clearly defined tasks to a position that requires initiative – like one-man projects – he’s going to have a bad time, and you’re going to have bad results. If you hire someone who is bustling with initiative to a position that consists of quality-assuring automated data conversion by hand, he’s going to have a bad time and you’re going to have bad results too.

      My estimation of the situation is that companies are extremely picky, and complain that the market doesn’t provide a custom-tailored worker for their exact use case.

      • lycotic says:

        In all the companies that I worked, rejection of candidates due to culture/personality issues has been *very* rare, and most of the cases have been extreme.

        • Anonymous says:

          Whereas I’ve never been subjected to an actual technical test*, even for the jobs that I actually got. All the interviews I’ve been through could have been rather easily faked by someone with above-average verbal intelligence and some background in IT.

          * During an interview. A couple of times I was asked to send in some basic stuff beforehand – easily faked by someone with access to a few bucks and a small jobs board.

          • Nornagest says:

            Funny, I’m not sure I’ve ever interviewed somewhere that didn’t ask me to cut some code.

            Usually hilariously easy code, but still.

          • Anonymous says:

            The only time my ability was actually properly tested was one time that I was asked to work for free for a week. But I got that far without any actual checks except references.

      • AnthonyC says:

        This matches my experience. I work at a company of just over 100 people. We’re a professional services/advisory firm, so our “research” team needs a STEM background but doesn’t do research. We read research, interview startups/professors/big companies, things like that, and advise clients about who is or isn’t a good partner for their needs. Communication ability is at least as important to this as actual scientific competency, and the scientific capability we do need is less “specific knowledge from textbooks” and more “able to process information logically, think about things in new contexts.” Which, really, should be true at any R&D organization too – you need to be able to understand why corporate is running a project, what aspects of your work are important, and how to communicate that to your boss(es).

        Because we don’t need deep technical capabilities, we hire BAs and MAs as often as PhDs. We get hundreds to thousands of resumes for every position. At most a handful are capable of holding enough of a conversation or putting together a good enough writing sample (the latter on their own before the interview, not under time pressure) for us to even believe we can train them well enough to put in front of a client.

        I do think part of the problem is turnover, too. Most people don’t expect to stay in a job for more than a few years. That + impatient employers (either start-ups with no choice but to move quickly, or public companies beholden to quarterly results) means you can’t afford to spend half a year or more training up a new hire.

        I also think there is simultaneous over- and under- supply of STEM workers. Plenty of people with STEM credentials, few with STEM aptitude and competency.

        • wintermute92 says:

          I also think there is simultaneous over- and under- supply of STEM workers. Plenty of people with STEM credentials, few with STEM aptitude and competency.

          I’m strongly convinced that this is the answer. STEM is a ludicrously ill-defined field. People talking ‘surplus’ often include associate’s degrees in IT, while people talking ‘shortage’ often restrict the term to uncommonly skilled programmers. It turns out the whole metric is sloppy and absurd, so the two claims are actually consistent.

          Both groups tend to have an incentive for their view – lots of ‘surplus’ claimants are pushing against government STEM funding, while ‘shortage’ claimants are often pushing for visas and scholarships to train excess programmers and keep wages low. (N.b. tech giants want a massive surplus of programmers, not a merely adequate number.)

          https://www.bls.gov/opub/mlr/2015/article/stem-crisis-or-stem-surplus-yes-and-yes.htm

      • Alex Zavoluk says:

        I do interviews at a technology company. The vast majority of the time, we’re rejecting based on skills (can’t explain X, can’t code Y). Rejecting based on culture for technical positions is quite rare, and likely means the person said something pretty egregiously out there (like “we could get sued if this person worked here” out there).

        • Anonymous says:

          And such things weren’t even routinely asked during interviews. Actually, what I said about never being tested was wrong. I had forgotten one interview in which the interviewer was a techie and did some basic probing into whether I remembered anything at all about RIPv2 from a high school CISCO certification. Never happened again before or after, though.

          • Alex Zavoluk says:

            I guess we have different experiences, but it seems based on other comments here and on the subreddit that most technology companies have to impose basic checks on programming competence in order to weed out completely unqualified candidates.

    • Emanuel Rylke says:

      Anecdotal evidence: in 2015 everything (formal education, employment history, mental health…) except my ability to develop software signaled “do not hire”, yet I was hired for the third job I applied for. So if I wasn’t just incredibly lucky there is a shortage of tech workers.

      • Alex Zavoluk says:

        Alternative explanation: someone’s system worked properly and ID’d you as being able to develop software in spite of things like formal education.

    • Besserwisser says:

      How difficult were your CS classes? I’m fairly certain the bottom 5% in mine didn’t make it through the course. Probably neither did the 5% above them, or the 5% above those. Even I didn’t make it, and I find the questions in the link insultingly easy. How far is this comic from the truth, and how different do you assume this is in other countries?

      • Mitch Lindgren says:

        To clarify, when I say I wouldn’t hire the bottom 5%, I mean the bottom 5% who actually graduated. There were plenty of people who dropped out during first and second year, and obviously I wouldn’t hire them, but neither would I hire the people who managed to squeak by with a C- average.

        I’m not sure how to quantify the relative difficulty of my CS courses, but I have to assume they were about average for a CS program. Which is to say, not exceedingly difficult for competent students, but still requiring strong logic and math skills and a willingness to put in long hours on assignments in addition to lectures and labs.

        For the record, I went to school in Canada, but I now work in the US. I assume that Canadian universities are generally comparable to US universities. I think there’s some truth to that comic, but it’s obviously exaggerated. You can’t entirely fake your way through a four year degree at any respectable institution, but you can rely heavily on your more competent peers for help and/or spend twice as long on every assignment as you should have to.

        At the end of my degree I felt like I’d carried enough dead weight through group projects to last a lifetime; fortunately I now work at a company where the vast majority of developers are not just competent but in fact excellent, to the point that they often make me feel like the under-performer. 🙂

        (PS: Sorry if this gets posted twice. The comment system seems to be having some trouble.)

    • Deiseach says:

      I wonder if part of it may be that “become a computer programmer!” is the advice being pushed by government when it comes to “and where will people find jobs?” as we’re in the throes of converting from the industrial to the service to the knowledge economy. So the idea is out there “there are lots of high-paying jobs in computer industries, do some kind of computer science course, you’ll be snapped up by a big firm” and people who have little to no aptitude are being funnelled into the education path for STEM fields.

      And then they come out with a qualification and the companies all want, as is pointed out, the stars: the really talented, the really highly-productive. The average guys may get a job somewhere, but for the Big Firms with Big Salaries, they need to be Big Talents and of course they’re not; only a certain percentage are Big Talents.

      So that the panacea of “become a computer programmer, industry and business is crying out for them” is both true and untrue – business is crying out for the top (taking this at random, I have no actual idea) thirty to forty percent, or someone who has the particular mix of ability/training/experience that their particular company wants, but those below that 40% cut-off are going to find it just as hard to find a job in the field of IT as any other employment area nowadays.

      • Nornagest says:

        That might also provide an answer to the question of a CS degree’s value given that academic CS has almost nothing to do with the software engineering trade. CS doesn’t teach you to code in the ways that’re most valuable in industry (it will, however, teach you to solve FizzBuzz if you’re paying any amount of attention), but you can’t get a CS degree without passing a number of fairly hard math classes that’ll weed out most of the credential-seekers.
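        For reference, FizzBuzz, the screening problem mentioned above, is small enough to state in full. A minimal sketch in Python; the function name and structure are just one conventional solution, not anything from the thread:

```python
# FizzBuzz: print 1..100, but "Fizz" for multiples of 3, "Buzz" for
# multiples of 5, and "FizzBuzz" for multiples of both.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:      # divisible by both 3 and 5
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

if __name__ == "__main__":
    for i in range(1, 101):
        print(fizzbuzz(i))
```

        The joke, of course, is that this is roughly the floor of programming competence, yet it reliably filters a surprising fraction of credentialed applicants.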

    • Snailprincess says:

      I’m a software developer living in Silicon Valley and there is definitely still an extremely high demand for good developers. I do think it’s probably true that there is a huge demand for the top x% of developers (by some metric) while demand is a lot softer for less skilled or competent developers.

      I also think the focus on CS degrees vs. CS jobs might be slightly misleading. I feel like Silicon valley and software engineering in general may be one of the last places where your college degree doesn’t matter that much. It’s helpful if you’re just out of college but mostly what people care about is can you do the work. I don’t have a computer science degree (mine is in physics) and I know a lot of similarly situated people who either have a non-CS degree or no degree at all who are working as software developers.

    • Jacob says:

      “These companies are all trying to get the same 50 students.” This, more than anything, may be the source of the persistent STEM shortage myth: the inarguable value of being a star in a STEM field.

      This has been my experience. The superstars are in such ridiculously high demand that people mistakenly think that’s typical.

      I’ve worked in STEM for 10 years and never been unemployed, except for the year I spent in graduate school. After that I thought it would be easy for me to find my dream job. It wasn’t. I ended up taking an awesome job (pays well, roughly in my field, easy commute) that doesn’t happen to be as EA-oriented as I would like. C’est la vie. I’m certainly not hurting for opportunities to just make a living.

    • spinystellate says:

      I see two competing explanations for the alleged STEM worker shortage.
      1) The specific skills of the workers rarely match the specific skills required by the employer.
      2) Most of the workers just aren’t very good.

      Both of these can be true, and I think both are almost uniquely modern phenomena.

      (1) is uniquely modern in that, possibly for the first time in history, a huge sector of the population has skills so specialized that they only apply to a very narrow range of job openings. This isn’t even a glorification of high-tech skills over the trades; trades that require high intelligence and a lot of experience still leave tradesmen well equipped to do a lot of different tasks in their trade. A plumber with ten years of experience may or may not be good at plumbing, but the good ones are probably able to handle just about any plumbing task that might arise. In contrast, a good node.js developer isn’t in a position to, say, program embedded hardware, at least not on day 1 of a new job.

      (2) Overproduction of college graduates, coming from too many colleges, low admissions selectivity, grade inflation, and low course difficulty, is probably a specifically modern development as well. People who majored in computer science or electrical engineering in 1985 were probably better equipped to do basic job tasks when they graduated than the same people today. Evidence against this would be the numbers showing that the number of STEM graduates now is about the same as it was in 1985, and that overproduction of college graduates is entirely accounted for by non-STEM majors.

      Then there are places like academia, where (1) and (2) are probably both true, and on top of that there is a major supply/demand imbalance.

      • Jiro says:

        The flaw is the assumption that the specialized skills are required to perform the job, rather than just stated by the employer out of incompetence, or out of a desire not to hire anyone combined with a requirement to advertise the position.

        • bean says:

          How much is it worth to the employer to have someone who knows those things, as opposed to someone who they have to train in them? It might take a couple of months to bring someone up to speed on a new set of tools/languages/whatever, and that’s time when the employee is not being productive.

          • cassander says:

            Speaking as someone who currently hires people based heavily on a job knowledge test that amounts to a trivia test… a lot. Getting subject matter experts up to speed takes months; without that expertise it takes even longer. And it’s not just the time they theoretically need to learn the stuff in question, it’s the time it takes me to teach them that stuff, plus the time I have to spend checking their work to make sure they haven’t missed something obvious that someone with a lot of knowledge would know.

        • spinystellate says:

          I’m familiar with the “advertise for a job that you deliberately hope not to fill” strategy, but I’ve tried to hire for jobs that I *did* hope to fill and still couldn’t really find anyone with the right qualifications, at least at the salary my organization was willing to offer.

          Also, with so much work being done in teams, or requiring some sort of extended onboarding process that takes a lot of time investment, it can be pretty easy for an applicant’s aptitude or personality to render them zero-marginal-product (ZMP) or negative-marginal-product, so it may not be worth “settling” for a candidate who isn’t really worth hiring at any wage.

    • JayT says:

      I’m a programmer in the Bay Area, and my experience is pretty much the same as everyone else’s. There is no shortage of applicants, but there is a shortage of qualified applicants.

      I have STEM degrees (CS and Math), and I had some difficulty finding my first job (this was in the middle of the Dot Com Bust), but since then I haven’t been without work unless I chose to be.

    • Alex Zavoluk says:

      I can confirm that many nominal CS graduates are incapable of answering the most elementary questions.

    • poipoipoi says:

      Since this seems to be the big long STEM thread…

      How much of this is the coastal housing shortage?

      I mean, I took a 50% paycut to flee the Bay Area in order to move to NYC, and once my knee finishes blowing out (Read: I no longer live in tiny apartments and drop thousands of dollars going bucket list hiking in a desperate race against my knee blowing out), there’s a lot to be said for taking another 50% paycut to move to Cleveland and buy a house that costs less than my annual salary and actually has working plumbing.

      • Alex Zavoluk says:

        We have just as much difficulty hiring for a much cheaper, growing city as we do for New York.

    • SamChevre says:

      Another comment on STEM “shortages.” Very specific requirements may be in short supply; however, I can point you to a couple dozen programmers that I have worked with, who were very competent but have been completely unable to find work and have been looking for a year or more.

      (Key note–all of them are over 50).

      There’s not a shortage of general technical professionals; there’s a shortage of some very narrow skill sets, and a lot of people who would be competent programmers, network admins, database admins, etc but who aren’t as cheap or flexible as a 25 year old with an H1-B and so can’t find any job in tech at all.

      • John Schilling says:

        (Key note–all of them are over 50).

        That, I think, is unique to “Tech” and not to STEM generally. Well, OK, not sure about Math, but Science and Engineering generally place high value on experience.

        • SamChevre says:

          Interesting; that had not been my anecdotal impression.

          My impression was that the same dynamic was very prominent for engineers–20 years experience as a working engineer (as opposed to as a manager) meant that losing a job was a disaster.

          • The Nybbler says:

            My impression was that the same dynamic was very prominent for engineers–20 years experience as a working engineer (as opposed to as a manager) meant that losing a job was a disaster.

            Anecdote to the contrary: Last year I found another software engineering job (actually 2 competing offers) within a couple of months; this is after 24 years in the business. This is likely largely because I have a trendy tech company on the resume. But then again, I was recruited by them when I had 18 years in the business.

  6. reasoned argumentation says:

    Current Affairs on the back-stabbing, infighting, and comical errors of Hillary Clinton’s campaign. Although of course if a few Michiganders had voted differently, we’d be praising every one of these people as geniuses right now.

    Because if she lost 290-248 (the EC result if Michigan flipped to her) they’d be geniuses but since she lost 306-232 (the actual outcome) they’re not? There were no close states that could have flipped to give Hillary a victory. If she flipped Texas she’d still have been behind 271-270.

    • Scott Alexander says:

      You know what I mean, but fine, I’ll edit it to the less snappy “if a collection of people across a range of electorally close states had voted differently…”, just for you.

      • Conrad Honcho says:

        In the aftermath of the election I’ve seen tons of takedowns of Hillary’s campaign but almost nothing about the success of Trump’s. You’d think his extremely cost-efficient campaign would merit more think pieces but I don’t think I’ve seen any. I can only imagine the vituperation against every aspect of Trump’s campaign and the Republican party apparatus that allowed him to run that we’d have seen had he lost, however.

        • Mary says:

          You expect them to treat anything he did as showing any signs of skill?

          • Conrad Honcho says:

            I suppose not. Trump must be the luckiest guy in the world. Apparently a complete incompetent who just trips over a small loan of a million dollars one day and ends up with billions, drools his way through 14 seasons of a top-rated TV show and then stumbles into the most powerful office in the world by random chance.

          • I think you are overstating the case a little. He got quite a lot of money from his father, and it isn’t clear whether he did better or worse with it than an ordinary market return.

            On the other hand, if he was as incompetent as a lot of the critics seem to believe, one would expect that he would have lost money, not made it.

          • Anonymous Bosch says:

            This group interview with Trump biographers I think addresses this point pretty starkly:

            Barrett: So Fred was on top of every loose dollar or possible subsidy, and he was devouring it.

            But, you know, this debate that Marco Rubio stirred, about whether or not Fred bequeathed $200 million to Donald, I think this is the whole point. I don’t believe it’s true, but I think it misses the point, and I think it’s a point that almost all of our books make, is that all of the original deals—Fred had to come in and sign the bank documents. None of them could have been done without Fred’s signature.

            O’Brien: The Grand Hyatt [a New York hotel Donald Trump bought and refurbished in the 1970s] was co-signed.

            Barrett: Yeah. I tell the tale about how Fred has to come to the closing in Atlantic City, and he’s against Donald going into Atlantic City. But he goes to the closing, they sit up there and sign all the documents with all the mob guys, you know, to buy all the leaseholds. And Fred and Donald leave and they go down to the limo, and somebody upstairs realizes that Fred missed one document. And they call out the window for Fred to come back, because they’re not going to do a deal with Donald.

            I mean, I had his tax returns at that time. We got them—probably Tim got them—from the [New Jersey] Division of Gaming Enforcement, and Donald was worth nothing. He was worth nothing. Even the $35 million credit line that they started with for Trump Tower was signed by Fred.

            O’Brien: So this whole notion that he’s said a lot—that, “Oh, I got a million dollars from my father”—that’s just pure hokum. His father’s political connections and his financial connections launched him, kept him supported. His father bought $3.5 million worth of chips at Trump Castle [the Atlantic City hotel and casino] when the bonds were coming due, to keep him afloat so he could make a bond payment. He inherited, probably conservatively, over $150 million from Fred, so that’s more than $1 million, just for the record.

          • Conrad Honcho says:

            Fred Trump died in 1999. Donald was already a billionaire by the time he inherited the $200 million. Even a $35 million line of credit doesn’t mean much. Very few people with $35 million turn that into billions.

            Yes, starting from nothing and making billions is admirable, but it’s also a different skill set than taking something that’s already good and making it great. Which skill set is more useful for running an already established (and pretty phenomenal) nation?

            “Can be handed something really nice and not wreck it or perhaps make it even better” sounds like a good skill set for a President.

          • MugaSofer says:

            Which skill set is more useful for running an already established (and pretty phenomenal) nation?

            This isn’t about which skill set is more useful (a question which was discussed to death before the election.)

            We’re discussing whether it’s possible for a rich kid to remain rich while being a total idiot, to which the answer is obviously yes.

          • Nornagest says:

            We’re discussing whether it’s possible for a rich kid to remain rich while being a total idiot, to which the answer is obviously yes.

            For some values of “rich”. I can’t dig up the reference right now, but I remember reading somewhere that it usually takes about three generations to wreck a fortune.

            Wealth management isn’t as hard as creating the wealth in the first place, but it’s definitely a skill.

          • Edward Scizorhands says:

            That “three generations” thing includes the generation that earned the wealth.

            “First generation creates, second generation maintains, third generation squanders” is one way it goes. I have had friends from Asian countries say they’ve heard similar rules, always with three generations.

            It’s not hard to see why. Some guy works super hard; his kids see him work super hard and gain his work ethic. But they make sure to take time to raise their own kids, and all those kids see is that they’re rich and dad works normal hours, so they figure, “I bet I can get that too.”

        • esrogs says:

          > tons of takedowns of Hillary’s campaign but almost nothing about the success of Trump’s

          Here was one: https://www.forbes.com/sites/stevenbertoni/2016/11/22/exclusive-interview-how-jared-kushner-won-trump-the-white-house

  7. hnau says:

    On the STEM issue, deBoer’s analysis seems more or less correct. (I couldn’t see the full WSJ article, because paywall.) In most industries, being qualified for a job is a step function, or close to it: either you have the necessary skills to be a doctor or a plumber or a teacher, or you don’t. Maybe there’s some variation in how well the qualified people do their jobs, but it’s much less than an order of magnitude and in many cases employability will be driven by other factors.

    Contrast that with software engineering, where some people are the proverbial “10x” employees who get work done that it might otherwise take 3, 8, or infinity people the same amount of time to do. I work in the industry and while I think “10x” is usually an exaggeration, I can confirm that (1) productivity varies exponentially among programmers, and (2) it is relatively easy for companies to estimate (at least compared to other industries).

    So what I think’s happening is that more people are “in STEM”, “STEM jobs” are in high demand, and people “in STEM” are having trouble finding jobs. With a large, exponentially distributed talent pool, all of these can be true at the same time. Companies are heavily incentivized to compete for the “10x” STEM workers as the most efficient way to boost their productivity. Offering $200K to lure a top programmer away is more cost-effective (in productivity terms) than paying, say, $70K to three minimally-qualified programmers. In fact the talent pool is even more distributed than this: if a project *can* be done on schedule by minimally-qualified programmers, it probably makes more sense to hire a team of 10 programmers in China and lay off the moderately-productive Americans they’re replacing. (I know several people who’ve had this happen to their teams.)
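    The salary arithmetic above can be made explicit. These are the comment's own illustrative figures, not real salary data; a quick sketch:

```python
# Illustrative numbers from the comment above (not real salary data):
# one "10x" engineer at $200K vs. three minimally-qualified engineers
# at $70K each, compared on cost per unit of output.
star_cost, star_output = 200_000, 10.0      # one star engineer
junior_cost, junior_output = 70_000, 1.0    # one baseline engineer

cost_per_unit_star = star_cost / star_output                     # 20000.0
cost_per_unit_juniors = (3 * junior_cost) / (3 * junior_output)  # 70000.0

# On these made-up numbers the star costs $20K per unit of output and
# the junior team $70K per unit: the star is 3.5x more cost-effective,
# even at nearly triple the individual salary.
print(cost_per_unit_star, cost_per_unit_juniors)
```
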

    TL;DR: It’s possible and even likely that we have a large and growing pool of STEM workers, but that companies are still unsatisfied and most of the benefits are going to the most qualified people.

    • hnau says:

      Two postscripts…

      First, a good employment sector to compare STEM to at this point might be pro sports or entertainment. There you have the same situation of a big talent pool, big money being made at the top, disproportionate productivity from top employees (due to scaling), employers that are desperate to find and hire top talent, and a talent pool that’s mostly desperate to be hired.

      Second, one reason why a tech company might outsource rather than hire minimally-qualified local workers is cost-of-living. In some parts of the Bay Area, $70K is about the minimum you could pay employees if they’re trying to find decent (by middle-class US standards) local housing within the standard “25%-ish of gross income” budget. Pay any less and it starts to look like it’s not worth moving there. I wonder if it would be good business for a big tech company to set up a campus in Oklahoma City or something, hire a bunch of minimally-qualified programmers, and start building business apps and login pages for the entire US.

      • Kevin C. says:

        First, a good employment sector to compare STEM to at this point might be pro sports or entertainment. There you have the same situation of a big talent pool, big money being made at the top, disproportionate productivity from top employees (due to scaling), employers that are desperate to find and hire top talent, and a talent pool that’s mostly desperate to be hired.

        This sounds like what Brynjolfsson & McAfee, in The Second Machine Age, called “talent-biased technical change”, whereby a job market becomes more “winner-take-all” and proportionally-greater remuneration goes to the “superstars”. Music from pre-Edison to now is one of the key go-to examples, which fits under your entertainment comparison.

        I wonder if it would be good business for a big tech company to set up a campus in Oklahoma City or something, hire a bunch of minimally-qualified programmers, and start building business apps and login pages for the entire US.

        Yes, the geographic concentration of tech companies into high cost-of-living areas does look like it would be open to the sort of idea you have above, but if it really were that simple, wouldn’t somebody have done it by now? Despite computing in particular seeming to be the sort of thing “you can do anywhere”, I note that there is, indeed, plenty of clustering in physical space. I suspect there are “meatspace” network effects in play here behind the geographic concentration which would undermine the above proposal. Plus, if you’re going to move somewhere else and hire “minimally-qualified programmers”, then, as hnau says, why not go as far as China, which has even cheaper cost of living, and cheaper programmers, than Oklahoma City?

        • poipoipoi says:

          A good question to ask is “Where’s my next next job”?

          I’d love to move back to the Midwest, but outside of Chicago, there’s nothing.

          So there might be that one company in Clearwater, Florida, but if they’re the only company in Clearwater, Florida… where’s your next next job?

          Hence why the Bay Area and NYC and Chicago and Austin.

    • Z says:

      I work in the industry and while I think “10x” is usually an exaggeration

      It could just be that you and your peers are clustered in the same area of the scale. There are many studies on programmer productivity, and the data seems pretty conclusive…If there are newer studies with contradicting findings, please share.

  8. manwhoisthursday says:

    It’s a bad idea to associate science with left wing ideology and SJWism, which the March for Science did, to an extent. Fortunately, the public wasn’t paying much attention.

    • Deiseach says:

      It’s a bad idea to associate science with left wing ideology and SJWism, which the March for Science did, to an extent.

      While I was very cool on the whole March for Science thing (and we had corresponding marches in Europe why, exactly? There was a lot of virtue signalling about being anti-Trump going on there which was ludicrous), I have some sympathy for them on this: it does look like the original idea was “science funding is being cut all over the place, we are going to march to protest this” then it became “and in order to get the public behind us we will march for science being generally cool’n’groovy and it does all this neat stuff which you get the benefit of!” and then the Usual Gang jumped in to demand queer feminist glaciers in science should be represented and it all went to pot.

      But the people who like to go on protests and put up selfies of themselves being achingly politically correct with their clever placards got their day out, and at least no windows were smashed in the process (not that I heard of, anyway) so that was nice.

      Though on the other hand – too uncool to be infiltrated by the Black Bloc? Oh, the ignominy! 🙂

      • Jacob says:

        Some antifas set up a table at the march I went to. Not an official table, they were not sponsored by the organizers, they were basically just squatters. They didn’t cause any trouble though.

        Violence always gets more coverage. I wonder what the equivalence curve is; how many peaceful protesters do I have to assemble to get the same media coverage as punching one nazi?

        • Conrad Honcho says:

          Violence always gets more coverage. I wonder what the equivalence curve is; how many peaceful protesters do I have to assemble to get the same media coverage as punching one nazi?

            I wonder if protesting has essentially become a cargo cult. It wasn’t the marching itself that got people to come around to Dr. King’s way of thinking; it was the fact that respectable-looking people got dogs and fire hoses turned on them for simply marching.

          I think leftists attacking Trump supporters during the primaries did more for Trump than women and science fans marching unmolested against him did after the election.

          • vV_Vv says:

            > I wonder if protesting has essentially become a cargo cult.

            Kinda.

            Do the protesters really expect (in an epistemic sense) to achieve some political or social change with their activism, or is their activism something they do mainly to fulfill a psychological and ingroup social need (the infamous “virtue signalling”)?

            What about actual cargo cultists? Did they really expect airplanes full of cargo to land as a result of their rituals, or was it more like a standard religious thing, where they gathered together and had a good time performing some rituals, showing off to their peers how pious they were, without actually expecting anything unusual to happen? I guess we’ll never know.

          • dndnrsn says:

            Protest can function by more than just “look at how awful the guys beating us with truncheons are” – it can be a show of power, of defiance, etc.

          • Iain says:

            I feel like a broken record on this, but: protests aren’t always about convincing the other side. They are about building up energy on your own side: “Look at all the people who are with me!”

            The Democrats have been doing ridiculously well in fund-raising lately:

            In about 24 hours, the political advocacy group Swing Left raised more than $800,000 for Democratic House candidates, with at least $100,000 going to challenge Rep. Darrell Issa (R-CA) alone. Daily Kos beat its single-day fundraising total by pulling in around $900,000 in just one day.

            “When we first endorsed Ossoff, we raised $400,000 in the first week — and that totally blew us away,” said Nir, of Daily Kos. “That broke the all-time prior record of raising $400,000 for [Sen. Elizabeth] Warren over the course of an entire election cycle. In less than a day [over AHCA], we will raise more than we raised for Ossoff in a week — which is more than we raised for Warren in a year.”

            Obviously that’s not all due to protests, but I’d be extremely surprised if protests weren’t a factor.

          • nimim.k.m. says:

            There have been a couple of examples of major political changes (a change in government) caused by something that could be described as “enough people protesting on the streets”. The Russian revolution of 1905, after the Russo-Japanese War, might be the ur-example. Or the German revolution of 1918.

            However, these weren’t simply “large demonstrations”; they involved enough organized people that when a “general strike” was called, significant portions of society stopped working. Mass civil disobedience, or worse. Faced with such protests, a government would be dead serious: not just canceling the leave of police officers because they might be needed for riot control duty for a day or two, but pondering whether the military would stand with them if given the order (which regiments? what orders?) and whether they could crush the unrest quickly and successfully (where “not successfully” would mean civil war).

            Very different from “a day out marching to show how angry we are, and then back to business as normal”. But that history might still be the background influence on the culture in which people think “marching on the streets is effective”.

            On the other hand, all the examples I can think of date from before modern mass media, and before statistics and opinion polling. I’m a bit fuzzy on whether there’s a causal relationship.

            (A mass gathering in the market square might be of far more importance if the only way you can know what happens there is to be present, instead of observing via TV. A very important unrest might involve a truly significant proportion of the local populace. And on the other hand, the people present would not be thinking of a silent majority staying at home listening to news reports about their signs. It would feel more like it truly was a demonstration of the will of the people rising against the government, and the individuals staying at home might even agree.)

          • Aapje says:

            @nimim.k.m.

            Serbia, Ukraine?

          • nimim.k.m. says:

            I don’t remember what you are referring to with Serbia.

            I thought about mentioning Maidan / Ukraine, but that was simply a bit too weird. Somebody shot at the protestors at Maidan, and then suddenly the then-president was giving speeches from abroad about how he was the lawful president.

            Egypt / the Arab Spring might actually be a stronger case of the kind of revolution I’m talking about, so maybe modern media technology isn’t the reason at all. Or, as a more recent inverted example of a vaguely similar thing: lots of Turkish people ran to the streets to show their support for Erdogan, and it was part of why the coup against him collapsed in hours.

          • Aapje says:

            People forget so easily 🙂

            Milošević was ousted after a protest by Otpor!

  9. Emma the trust fund baby says:

    In the link about the scientific mavericks at http://marginalrevolution.com/marginalrevolution/2017/04/wednesday-assorted-links-102.html#comment-159623218 you said,

    “Sorry about the language. By “top ten” I literally meant “the ten at the top of the page” and didn’t realize it could be interpreted as “ten best”.

    I think a third party observer would see that as quite a weak excuse. Usually you would say “first ten”, and you’re known as a good writer.

  10. Virbie says:

    > “they repealed a less-than-one-year-old regulation that hadn’t come into effect yet, changing literally nothing”.

    This is true only in an unhelpfully narrow sense. The gov’t had been pretty noncommittal about what ISPs were allowed to do with customer data, and even before the rule was passed, ISPs had been taking a wait-and-see approach before diving in and having to deal with the horrible press and legal headaches of the gov’t deciding to take action. This was resolved in the direction of not allowing them to do it, and now it’s been resolved in the other direction. The steelman of the claim that this changes nothing is something like: “this will take us back to the earlier regulatory environment, when somehow ISPs weren’t selling user data despite being unregulated”, but it misses the fact that regulatory environments consist of more than the law on the books, and now there’s a clear signal of what the government’s views are on this issue.

    EDIT: I should note that I feel the same way as you as far as misinformation and hysteria about Internet privacy go, but this is one of the first issues in that arena that has me at least a little concerned. The main difference to me between ISPs and every other service is that those others are generally discretionary in a way that ISPs are very much not. Alternatives like DDG are good enough for plenty of people (as half of HN will rush to tell you every time Google comes up) and I personally know a couple dozen people who chose not to use Facebook and have very active social lives. Voting with your feet is a lot harder when it comes to a market that’s far more critical to a normal life and where most Americans deal with only one or two provider choices. That’s what worries me a little.

    • Scott Alexander says:

      Thanks, this is interesting and I’ve added it to the original post.

    • pelebro says:

      One can use Tor, I suppose (I’m using it now). Though it can sometimes be uncooperative, e.g. I’ve had trouble commenting here before using Tor; maybe it triggered some sort of spam filter in WordPress or something, and several websites blacklist Tor exit nodes. Though if enough people feel their privacy is violated enough to use Tor, one can hope more websites will allow its use.

      • Virbie says:

        You’d be surprised at how low a barrier utterly defeats people when it comes to technology. I can’t relate to this, but I’ve come into contact with it enough times that I’m very aware of it. Tor is definitely above that bar, basically for the reasons you’re describing. And since standards-oriented things like the Web are always multi-party, a lot of these issues are an intractable coordination problem: making every website someone might want to use Tor-friendly requires action from a billion different entities.

    • MostlyCredibleHulk says:

      when somehow ISPs weren’t selling user data despite being unregulated

      But that’s completely untrue. ISPs weren’t unregulated before this particular regulation; they were regulated by volumes of other regulations pertaining to communications, privacy, contract law, data ownership and so on. You can say there wasn’t regulation for this particular narrow application of the law specifically (I have no idea if it’s true or not, but at least it would be a more solid argument), but it is a routine thing to extend existing regulations to new things that look like old things but slightly (or even significantly) different. Pretending as if without this particular one there’s a complete legal void and anarchy makes no sense. If you argue “without this regulation ISPs will do X”, you have to explain why they didn’t do X for the previous 20 years, and an argument of “they were afraid of uncertainty” does not fly here – there are tons of areas where uncertainty exists and people still try all kinds of things. A much better answer would be “because all the other regulations already prevent this, and the ISPs know that, but political activists bent on pushing their favorite regulation don’t care”.

      Alternatives like DDG are good enough for plenty of people

      So you’re saying if you don’t like a major near-monopoly search engine, you can go to a smaller competitor which suits your particular taste better. But with ISPs you can’t go to a smaller competitor which suits your needs better because…?

      most Americans deal with only one or two provider choices

      OK, let’s say it is true; then it’s a problem, privacy or not. Even if we had only one choice which never thought about violating privacy and so on, it’d still be a problem. And then we need to look into why that happens. Does it have any relation to how heavily the communications market is regulated, and how hard it is to establish a new company in the space, which creates such high entry barriers and compliance costs that only the most deep-pocketed and well-connected companies dare to enter it? Maybe instead of piling more regulations on top of it, it’s worth considering some strategy that would make it easier to expand the choice? Focusing on that instead of imagining horror scenarios of ISPs stealing one’s cat pictures would be a much better target for efforts, IMO.

      • Virbie says:

        I don’t actually disagree with most of your points, probably because your “rebuttals” mostly consist of disingenuously setting up strawmen and then courageously knocking them down.

        >> when somehow ISPs weren’t selling user data despite being unregulated
        > But that’s completely untrue. ISPs weren’t unregulated before this particular regulation, they were regulated by volumes of other regulations, pertaining to communications, privacy, contractual law, data ownership and so on. You can say there wasn’t regulation for this particular narrow application of the law specifically (I have no idea if it’s true or not but at least it would be a more solid argument), but it is a routine thing to extend existing regulations to new things that look like old things but slightly (or even significantly) different

        Well, duh. I assumed no one here was a literal alien from outer space, which is the only way one could interpret my statement as “ISPs are literally completely free from any regulation” as opposed to “ISPs weren’t directly prohibited by regulation from doing what we’re talking about in this thread”.

        > Pretending as if without this particular one there’s complete legal void and anarchy makes no sense.

        If you care to actually read my comment, you’ll notice that the meat of it is about _precisely_ this lack of void in the absence of direct regulation. Part of figuring out what the gov’t legal & enforcement position will be is looking at explicit signals from them like the creation of this rule under Obama and its revocation under Trump. I’m really just re-stating portions of my comment here on the off chance that you’re mistaken instead of dishonest.

        > So you’re saying if you don’t like a major near-monopoly search engine, you can go to a smaller competitor which suits your particular taste better. But with ISPs you can’t go to a smaller competitor which suits your needs better because…?

        Because….of the remainder of my fucking comment? I was taking your response point-by-point because I was assuming good faith but it’s clear to me by this point that you’re just a troll. Most of what you’re complaining about are positions I didn’t express and that I don’t hold regarding ISP regulation. The only point I did make is that “literally nothing changed” in the regulatory environment isn’t quite true, a descriptive fact that people can interpret in their own prescriptive framework.

        Just because you’re too simple-minded to conceive of more than two possible sets of policy positions on gov’t’s relationship to ISPs doesn’t mean everyone else is.

        • Paul Brinkley says:

          The tone of this post is making me really not want to read your prior one. Reconsider.

  11. Anonymous Bosch says:

    EDIT: Oh, it was deleted. I honestly would rather you hadn’t but it was gonna be my last reply before turning in anyway so no big loss.

    • Scott Alexander says:

      Yeah, sorry, that was unfair to you but I wanted to enforce my own rule at least on myself. If you want to talk about it more feel free to shoot me an email.

      • Bugmaster says:

        I think it would be a lot clearer if you replaced the original post with the word “DELETED” instead of totally deleting it… Or maybe that’s exactly what you did, but the comment system is bugged?

  12. Peter Gerdes says:

    I think the reason to be uncomfortable with the march for science, in the form it took, is that it risks making support for science into an issue of partisan divide. There are plenty of examples of issues that are in principle not partisan but, because they get associated with one side of the aisle, the other side starts reacting very negatively.

    Thus I’m separately in favor of clearly non-partisan marches for general scientific funding and of marches specifically on climate change, issues related to reproduction, etc. I just think there is very little benefit for combining them and a grave potential risk. This march made liberal points about science very visible while not doing so for any conservative issues (pro-GMO, studies of the benefits of trade, etc.). Even if you think that is because conservatives are more guilty of ignoring science, it still risks creating a perception of partisanship and a backlash against science support/funding.

    • Kevin C. says:

      I just think there is very little benefit for combining them and a grave potential risk.

      I recall liberal essayists making similar criticisms about the failure of Occupy Wall Street. Namely, that they had a cause — opposition to “crony capitalism” and evasion of consequences by the wealthy and connected — that could get broad support across political lines, and they “squandered” it by tying it indelibly to the usual constellation of partisan issues.

      I’d note that this seems to be a recurring “failure mode” of modern “march”-type protests. It seems to me as if it occurs more commonly on the left than the right, but that may just be that (again, from what I’ve seen) there are more and larger such protests and marches in general from the left than the right in the present US.* It might be interesting to examine why this sort of “combining” keeps happening, and how counterproductive it actually is.

      *The go-to explanation in my neck of the woods for this latter disparity is that of course the liberals protest more: they have more time and resources to do so, while we “conservatives” have jobs and mortgages and children and such (‘unlike those dirty hippies’ being the implication). In short, “bourgeois values”.

      • Peter Gerdes says:

        It’s just what you would expect from team/cheering based politics. People aren’t motivated by abstract policy benefits but by cheering their team on. There has been no shortage of nonpartisan calls to support greater science funding in the past or greater use of science in policy but those are never going to get anyone but a few scientists out on the street to protest. Even if a cause starts nonpartisan (as this one may have) people will try to bring in their favorite partisan issues and make the march partisan. As organization of such marches tend to spread through social networks it will be hard to avoid attracting far more of one party than another…and as that organization leans towards including other issues they care about it will further exaggerate that partisan leaning.

        As for why protests more frequently come from the left, I don’t think it’s about time/resources. It’s a combination of a positive historical association with liberal protests/marches (civil rights, 60s anti-war, etc.), the fact that liberals are currently associated with disobedience/disorder/anti-government, and the fact that liberal causes are more slogan/soundbite/image friendly. One could imagine a realignment of the political spectrum in which the big-government and racial/gender/ethnic interest group party attracted those who believe in order and discipline and have low tolerance for non-conformity, while the small-government party allied with non-conformists and anti-government libertarians, in which case much of this would change. In other words, conservatives are still the law-and-order party and liberals the anti-authoritarian party.

        Most important, however, is that conservative causes tend to be more pessimistic and complex and don’t pull at the heartstrings. Liberals can point to people suffering right now, while conservatives generally point to abstract rights violations or the dangers of government overreach or the risk of tyranny. As a result (I believe) liberals also tend to skew more towards the age range that is likely to go to marches to have fun, hang out and get dates.

        • I think the general issue is the public good problem in changing the world. If I spend time, effort and money on making the world better, based on my views, whether that’s campaigning for socialism or against it, I am, or believe I am, producing a public good, making the world better for everyone. I get a tiny share of that, and I know my efforts have a tiny effect, so the payoff to me of my efforts is unlikely to justify the cost.

          We solve that problem by linking the activity that is intended to change the world to indirect private benefits, most obviously an opportunity to socialize with people who have a lot in common with you, but also various sorts of rewards in fun, status, and the like. But then the activity tends to optimize for those purposes at some cost to its ultimate objective.

        • Conrad Honcho says:

          the fact that liberals are currently associated with disobedience/disorder/anti-government

          The liberals are anti-government? I thought they generally wanted more and bigger government to do stuff to the other citizens they don’t like. That is, they’re out there protesting for 50 Stalins.

          • MugaSofer says:

            Right, they’re (mostly) in favour of the idea of government but (mostly) opposed to the specific people who are in government right now.

            Meanwhile, the conservatives who control the government are theoretically anti-government and in favour of limiting their own power, although mysteriously this fails to materialize in practice.

    • Trofim_Lysenko says:

      1) We’ve had a Democrat in office since 2008. This saps the popular support from left/liberal protests very effectively, hollowing them out to a core of what I tend to think of as career/professional protesters (look at the way the anti-war protests evaporated long before the actual drawdown of troops, and didn’t make a comeback even when there were redeployments or an increase in drone strikes). That is, people for whom organizing and showing up at protests is not just something they do sometimes, but a core part of their lifestyle for financial reasons (career lobbyist/activist), personal/emotional reasons (it’s their primary social/recreational hobby, their identity-affirming activity instead of church membership, etc.), or both. This means that in order to draw enough people for a good-sized, noticeable protest that won’t get dismissed, you’re almost going to HAVE to bundle issues.

      2) A lot of the longer term left-wing protest/activist groups embrace ideologies where issues AREN’T separate, but are inextricably linked. For example, I don’t think the biggest anti-Iraq War protest coalition called itself “Act Now to Stop War and End Racism” solely because ANSWER is a snappier acronym. There was sincere belief that racism and war reinforce and enable each other, and that you can’t fight against one without addressing both. I think that same pattern plays out with OWS, the March For Science, etc.

      • suntzuanime says:

        To what extent is 2 the result of 1 do you suppose? Ideology is flexible enough to make a virtue out of necessity a lot of the time.

    • shenanigans24 says:

      It’d be a lot more useful if, instead of marching to get government funding, they just held fundraisers. I suspect being a marcher is more important than the cause, though.

  13. Sniffnoy says:

    Broken link patrol: Childhood trauma link is broken due to a missing “http://”.

    Edit: OK, fine, it actually works due to some fancy redirect thingy, but it’s still wrong. 😛

  14. Stationary Feast says:

    Does that mean that a decision to go ahead with the signs and costumes reflects some kind of subconscious feeling that this isn’t really that bad, or a motivation springing from something other than true outrage?

    As far as I can tell, going to marches is a fine way to meet like-minded people, especially if you don’t have the old standby of going to church on Sunday. Sounds like the people who go don’t purchase their fuzzies and utilons separately, for some expansive definition of “fuzzy” and “utilon”.

  15. sixo says:

    Even if the conflicts they find are so severe as to reasonably call into question the entire thing, by that time such people have invested so much in learning details of their religion that they’d lose a lot of ability to show off if they just left and never talked about it again.

    From experience and observation, this is similar to highly addictive video games like League of Legends, World of Warcraft, etc. There’s a large, mostly-flat space of game knowledge which can be picked up ~linearly in time, allowing new players to improve easily and keep seeing results over a long period of time.

  16. jsmp says:

    I haven’t read all of the mandatory class attendance study either (it’s really long!), but I don’t see how it could possibly work. The samples of students will be self-selected into groups of those who can score high enough to avoid mandatory classes and those who can’t. Of course the latter group will have worse outcomes. This doesn’t imply any causation by the mandatory classes themselves.

    • MawBTS says:

      It looks like they used the first year results as a control for the second year results, which was when the mandatory classes kicked in. See Figure 4. You’d expect a student who got a bad score in the 1st year to get about an equally bad score in the 2nd, after controlling for confounders. Instead, they got worse, and the paper says the mandatory classes were responsible.

  17. sov says:

    Something tells me they’ll be skipping over the 30th Pope John as well.

  18. Anatoly says:

    >“Before thermometers, people mocked the idea of temperature ever being measurable, with all its nuance, complexity, and subjectivity.”

    The exact quotation (the above is a paraphrase) in the tweet goes “The idea that anything as subtle and complex as all the manifestations of changes in temperature could be measured and quantified on a single numerical scale was scoffed at as impossible, even by the leading philosophers of the sixteenth century.” Googling reveals that this comes from Arthur Jensen’s book on IQ, Bias in Mental Testing, is unsourced in the book, and that the book’s bibliography doesn’t mention temperature or Galileo. So… is this really true?

    • The original Mr. X says:

      So… is this really true?

      I’m going to guess not, both because it’s uncited and because these “har har, people in the olden days were so backwards and stupid” stories almost always turn out to be false.

      • J Mann says:

        As I said downthread, this book is pretty fascinating. It doesn’t support people “mocking” the idea of temperature, but accurately measuring temperature was initially a lot harder than people think, in part because it was really hard to establish a fixed boiling point of water or another similar fixed point for calibration.

    • MawBTS says:

      Yeah, can someone back this up?

      It sounds suspiciously neat and pat, and I can’t find a source.

    • Marshayne Lonehand says:

      Scholarly references for gaining insight into the measurability (or not) of temperature — references that include plenty of snappy quotes and polemical rants!  — are Clifford Truesdell’s textbooks The Tragicomical History of Thermodynamics, 1822-1854 (1980) and Rational Thermodynamics (1984), together with Truesdell’s essay “The computer: ruin of science and threat to mankind” (1982), read side-by-side with Philip Lervig’s critique of Truesdell’s approach “What Is Heat? Truesdell’s View of Thermodynamics. A Critical Discussion” (Centaurus, 1982).

      These works show Clifford Truesdell to be a conservative-minded polymath who was possessed of extraordinary intelligence, broad learning, indefatigable scholarly energy, and a wonderfully facile pen.

      Despite Truesdell’s many virtues, the verdict of modern thermodynamicists is that Truesdell got it mostly wrong, specifically in respect to Truesdell’s vehement (yet entirely wrong-headed) opposition to the methods of Lars Onsager, combined with Truesdell’s equally vehement (and equally wrong-headed) opposition to computational simulation as a primary tool for research in thermodynamics and statistical mechanics.

      The anti-Truesdell verdict of modern science is seen most clearly in that every student of thermodynamics, statistical mechanics, and computational simulation learns to apply Onsager’s methods, while scarcely any students (nowadays) learn Truesdell’s methods.

      The cautionary lesson of Truesdell (as I appreciate it) is that conservative-minded investigators into human cognition — investigators themselves possessed of extraordinary intelligence, broad learning, indefatigable scholarly energy, and wonderfully facile pens — can nonetheless be mostly wrong about the nature of human intelligence, and specifically wrong in respect to their appreciation of the measurability (or not) of human intelligence.

      How can such failures in intelligence-research come about? To learn how, just study the failure-modes — the marvelously illuminating failure-modes — of Clifford Truesdell’s appreciation of thermodynamics. 🙂

      • Bugmaster says:

        “The computer: ruin of science and threat to mankind” (1982)

        Oh man, that sounds amazing. Is there a non-paywalled copy available somewhere? I want to find out if the essay is satire 🙂

        • Marshayne Lonehand says:

          Lol … although I am no fan of Truesdell’s hidebound thermodynamic formalisms, I am a YUGE fan of Truesdell’s uniquely “peppery and to the point” prose — prose that is Truesdell’s scholarly remediation of Twain’s immortal critique “mush-and-milk [academic writing] gives me the fan-tods.”

          Regrettably I know of no on-line version of Truesdell’s essay “The computer: ruin of science and threat to mankind”; still it’s no waste of time to seek out the Truesdell collection An Idiot’s Fugitive Essays on Science: Methods, Criticism, Training, Circumstances (1984) in which that essay appears.

          Definitely, the essay is NOT satire — Truesdell was constitutionally incapable of even the mildest forms of social indirection — hence the vigor of his scholarly prose and the refreshing acerbity of his scientific assessments! 🙂

    • J Mann says:

      Googling the question is tough because you get all the climate science chatter.

      Here’s a potential clue I found, which you should read primarily because it’s super-interesting, and secondarily because it might point out where Jensen got the idea. (And if so, that he’s kind of wrong in his interpretation).

      Hasok Chang tells the history of temperature measurement beginning with the invention of the thermometer – the devices were accepted fairly quickly, but then scientists spent a surprising amount of time trying to decide on fixed points to use to standardize thermometers so that you could compare measurements. The boiling and freezing points of water at sea-level air pressure seem pretty easy to modern audiences, but at the time it was pretty hard. Isaac Newton argued for human body heat (technically “blood heat”); one guy proposed the melting point of butter. The main problem seems to be that, based on the measurements coming back from the early thermometers, the boiling point of water didn’t seem any more fixed than human blood temperature.

      I suppose if we’re being charitable to Jensen, and if this is the history he’s discussing, then he might argue that just because scientists couldn’t figure out how to measure the boiling point of water immediately, that didn’t mean it was impossible to measure. Of course, that didn’t mean it was possible to measure either …

      • Douglas Knight says:

        the devices were accepted fairly quickly

        Does he say that? Could you give a precise citation? Or is it that he restricts his attention to those who accepted them?

        • J Mann says:

          You got me – I was imprecisely recalling and summarizing the following statements.

          “Galileo and his contemporaries were already using thermometers around 1600. By the late seventeenth century, thermometers were very fashionable but still notoriously unstandardized”

          I can’t actually say how quickly after Galileo thermometers were accepted – any thoughts?

          • Douglas Knight says:

            That quote matches Jensen, that it took a century. Have you considered the possibility that he knows what he is talking about, rather than mangling what is to you easy to find under the streetlight?

      • Marshayne Lonehand says:

        A crucial advance in thermometry was the articulation of a principle that (in 1935) came to be called “The Zeroth Law of Thermodynamics“, viz., “If two thermodynamic systems are each in thermal equilibrium with a third, then they are in thermal equilibrium with each other”.

        An early, notably clear articulation of the Zeroth Law — which however would not receive that name for another 128 years — begins in the section “On the Distribution of Heat”, which may be read in its entirety (thanks, Google!), beginning on page 73 of Joseph Black’s Lectures on the Elements of Chemistry (1807)

        This [thermal] equilibrium is somewhat curious … no previous acquaintance with the peculiar relation of [temperature] to heat could have assured us of this [the Zeroth Law], and we owe the discovery entirely to the thermometer.

        Applying the lessons of thermodynamical history to neural science, should we expect that a clarified appreciation of cognition awaits a more nearly synoptic microscopic understanding of neural dynamics and anatomy? Yah, sure, you betcha! 🙂

        Perhaps it’s asking too much of 20th century behaviorists (like Arthur Jensen) for them to show much enthusiasm for the 21st century’s anatomy-centric/dynamics-centric path toward cognitive understanding.

      • drossbucket says:

        I also really like Chang’s book, and I’m sure that if people were literally mocking the idea of temperature ever being measurable at all he’d have mentioned it, so I’m also skeptical of that quote being accurate.

      • Anatoly says:

        Thanks, that’s a great find. Another book focusing on the early history of thermometers is
        here: https://en.wikisource.org/wiki/Index:Evolution_of_the_thermometer.djvu
        It quotes many different scientists from the 17th century; sometimes they gripe about not having a uniform measurement standard, but any surprise, incredulity or mocking of the whole idea of measuring temperature doesn’t seem to be there.

    • Deiseach says:

      But we still don’t have a single numerical scale for temperature, do we? I’ve seen enough “Fahrenheit versus Celsius” arguments by those who use one system commonly (usually for weather temperatures) arguing with those who use the other to be wary of this, and then we get to the people who want to talk about the Kelvin scale, and then somebody pops up with yet another even more obscure specialised scale.

      So we really can’t say we all agree that (for instance) room temperature is 21 degrees. You’ll have someone saying “you mean in Celsius, but what’s that in Fahrenheit?” and those who say “I have no idea if an American temperature in the 60s is hot or cold, I’m accustomed to Celsius”.

      • Marshayne Lonehand says:

        There’s a marvelously effective international treaty-organization, the Bureau international des poids et mesures (BIPM), that maintains the International System of Units (SI), under the aegis of the longstanding (since 1875) Convention du Mètre.

        This is the venue in which all systems of measurement (Celsius vs Fahrenheit, feet vs meters, etc.) are amicably and consonantly reconciled.

        The BIPM/SI ranks (as it seems to me) among humanity’s most outstandingly effective, peaceful, and universal enterprises.

        Perhaps 21st century medical practices will evolve to become more comparably effective, peaceful, and universal to the BIPM/SI? We can hope so, anyway! 🙂

    • Marshayne Lonehand says:

      The consensus assessment of SSC commenters seems to be, that remarks in respect to thermometry by Siberian Fox/Arthur Jensen — as provided in the OP — are well-founded neither in science nor in history.

      How shall we adjust our Bayesian confidence in Arthur Jensen’s overall competence in respect to measurement science?

  19. hnau says:

    I had always thought of Rod Dreher as some sort of crotchety conservative blogger who was deeply concerned about The Gays. Apparently he is actually a tragic figure resembling an Old Testament prophet come to life. I regret the error.

    It really says something about your attitude toward religious traditionalists when reading the New Yorker gives you a better opinion of them. And even then you’ve got it abysmally wrong.

    • manwhoisthursday says:

      Yeah, most secularists have no idea what we are like, or what motivates us. Which is why you get crazy fever dreams like The Handmaid’s Tale out there.

      • Besserwisser says:

        Isn’t The Handmaid’s Tale based on the Quiverfull sect, an actual organization which acts, or wants to act, as in the book? I have no idea how accurate it is, but seeing a book that describes a specific kind of religious extremism and concluding “secularists just don’t understand it” seems overly defensive.

        • hls2003 says:

          Not in any reality-based sense. If she does think they are related, that is persuasive evidence of the general lack of comprehension cited above.

        • gbdub says:

          I’m pretty sure the people calling the Netflix adaptation “especially timely” are not referring to the Quiverfull movement.

          • Nornagest says:

            I’m pretty sure most of those people would not recognize any difference between the Quiverfulls and bog-standard conservative Republicans.

        • herbert herberson says:

          Handmaid’s Tale was written in 1985. According to Wiki, the first book to make Quiverfull-type arguments actually came out the same year, but I doubt Atwood had heard of it, and it certainly didn’t seem to be a popular or well-known thing until well into the 00s.

          The important thing to understand about Handmaid’s Tale is that the starting point is widespread infertility. The message isn’t “this is what Christians secretly want,” it’s “if our society ever were presented with a huge infertility problem, these are the tools it might reach for.” After all: the religion in Gilead barely even qualifies as Christianity. They almost never even say the name of Jesus, there’s pretty much no discussion of salvation or, really, any afterlife at all, and the story of Sarah and Hagar is given a weird central place that no real Christians (including Quiverfulls) put it in. It’s at least as much about the importance of a healthy environment to human society (a theme that runs through everything else I’ve ever read by her) as it is about fundamentalism (a topic that I don’t believe she’s ever touched on elsewhere).

          • Deiseach says:

            The message isn’t “this is what Christians secretly want,” it’s “if our society ever were presented with a huge infertility problem, these are the tools it might reach for.”

            No, that novel would be P.D. James’ The Children of Men. I am rather huffy about Atwood and her attitude to science fiction (very happy to use the tropes, very unhappy to be lumped in with those grubby genre authors, she writes literary speculative fiction doncha know!) so I’m the wrong person to critique this book.

            But basically, I think if you hold the same attitudes Atwood does (as in that Freddie de Boer piece about “The conservatives were right about social changes and it’s great”) then you’ll be nodding along in agreement as you read/watch; if you think it’s the biggest collection of strawmen outside of a Worzel Gummidge convention, you’re less likely to do so.

          • herbert herberson says:

            I actually feel the exact same way about Atwood, and most of her other stuff leaves me very cold (ironically, most of it seems to suffer from the exact sort of failure mode science fiction most often runs into: building a particular setting to make a particular point but neglecting all the other elements of a novel), but I am fond of Handmaid’s Tale and like the show so far.

            And it’s worth noting that although both of those books/works (because if you’re talking Children of Men, you have to include the amazingly good movie) involve fertility problems, I think it’s reasonable to think that there would be very different results between infertility becoming extremely common and it becoming universal.

        • p duggie says:

          I don’t think Quiverfull is really a “sect”, more a sub-interest among conservative Christians. They think women having kids is a divine blessing. Lots of them are women.

          The thing that I think is risible about the Handmaid’s Tale is that every conservative Christian patriarchalist I’ve read considers the biblical story that the Handmaids are based on to be an example of “what not to do.” Abraham should have WAITED for God to open Sarah’s womb, and his agreement with Sarah that having a baby with Hagar would “count” as Sarah’s is always considered sin (and also often considered evidence that God doesn’t want Christians being surrogate mothers).

        • BBA says:

          I thought it was based on the Iranian Revolution, translated to America and Christianity so the reader could identify with it better.

          (I haven’t read it, since as a white man I figure it’ll be a whole book telling me what an awful person I am and how I’m responsible for everything bad in the world, and I already know that! I read, and watch movies and TV, to escape from my own awfulness.)

          • Anonymous says:

            (I haven’t read it, since as a white man I figure it’ll be a whole book telling me what an awful person I am and how I’m responsible for everything bad in the world, and I already know that! I read, and watch movies and TV, to escape from my own awfulness.)

            I’m not sure if you’re being sarcastic or not.

          • BBA says:

            I’m not sure either.

        • John Schilling says:

          The message isn’t “this is what Christians secretly want,” it’s “if our society ever were presented with huge infertility problem, these are the tools it might reach for.”

          If that were the message, we wouldn’t have had feminists from 1985 on down to the present saying, “This could not be more relevant than it is today! If we don’t win [current political fight], this is the future we will live in!”.

          After all: the religion in Gilead barely even qualifies as Christianity. They almost never even say the name of Jesus, there’s pretty much no discussion of salvation, or, really, any afterlife at all, and the story of Sarah and Hagar is given a weird central place that no real Christians (including Quiverfulls) put it in.

          Christianity as seen and presented by its critics rarely includes much of Jesus or Salvation. How could it, when those are the most clearly positive aspects of Christianity? Christianity as seen and presented by its critics is mostly about evil and/or stupid priests using the myth of the Old Testament God to implement the non-metaphorical Patriarchy.

          The received message of Handmaid’s Tale, and I am fairly certain the intended one, is that yes, this is what Christians really want. The author doesn’t much understand Christians. And the bit about mass infertility, like the bit about a perfect decapitation attack on the US government, is a clumsy hack at the fundamental worldbuilding problem that you really can’t get there from here in one generation.

          • Vojtas says:

            Christianity as seen and presented by its critics rarely includes much of Jesus or Salvation. How could it, when those are the most clearly positive aspects of Christianity?

            I agree with most of your post, but how is Salvation a positive aspect of Christianity from an outsider’s perspective? It’s like saying the promise of liberation from Samsara is one of the most clearly positive aspects of Buddhism to people who don’t believe in rebirth.

          • herbert herberson says:

            https://www.nytimes.com/2017/03/10/books/review/margaret-atwood-handmaids-tale-age-of-trump.html?_r=0

            The second question that comes up frequently: Is “The Handmaid’s Tale” antireligion? Again, it depends what you may mean by that. True, a group of authoritarian men seize control and attempt to restore an extreme version of the patriarchy, in which women (like 19th-century American slaves) are forbidden to read. Further, they can’t control money or have jobs outside the home, unlike some women in the Bible. The regime uses biblical symbols, as any authoritarian regime taking over America doubtless would: They wouldn’t be Communists or Muslims.

            The modesty costumes worn by the women of Gilead are derived from Western religious iconography — the Wives wear the blue of purity, from the Virgin Mary; the Handmaids wear red, from the blood of parturition, but also from Mary Magdalene. Also, red is easier to see if you happen to be fleeing. The wives of men lower in the social scale are called Econowives, and wear stripes. I must confess that the face-hiding bonnets came not only from mid-Victorian costume and from nuns, but from the Old Dutch Cleanser package of the 1940s, which showed a woman with her face hidden, and which frightened me as a child. Many totalitarianisms have used clothing, both forbidden and enforced, to identify and control people — think of yellow stars and Roman purple — and many have ruled behind a religious front. It makes the creation of heretics that much easier.

            In the book, the dominant “religion” is moving to seize doctrinal control, and religious denominations familiar to us are being annihilated. Just as the Bolsheviks destroyed the Mensheviks in order to eliminate political competition and Red Guard factions fought to the death against one another, the Catholics and the Baptists are being targeted and eliminated. The Quakers have gone underground, and are running an escape route to Canada, as — I suspect — they would. Offred herself has a private version of the Lord’s Prayer and refuses to believe that this regime has been mandated by a just and merciful God. In the real world today, some religious groups are leading movements for the protection of vulnerable groups, including women.

            So the book is not “antireligion.” It is against the use of religion as a front for tyranny; which is a different thing altogether.

            https://www.theguardian.com/books/2012/jan/20/handmaids-tale-margaret-atwood

            Stories about the future always have a “what-if” premise, and The Handmaid’s Tale has several. For instance: if you wanted to seize power in the US, abolish liberal democracy and set up a dictatorship, how would you go about it? What would be your cover story? It would not resemble any form of communism or socialism: those would be too unpopular. It might use the name of democracy as an excuse for abolishing liberal democracy: that’s not out of the question, though I didn’t consider it possible in 1985.

            Nations never build apparently radical forms of government on foundations that aren’t there already. Thus China replaced a state bureaucracy with a similar state bureaucracy under a different name, the USSR replaced the dreaded imperial secret police with an even more dreaded secret police, and so forth. The deep foundation of the US – so went my thinking – was not the comparatively recent 18th-century Enlightenment structures of the republic, with their talk of equality and their separation of church and state, but the heavy-handed theocracy of 17th-century Puritan New England, with its marked bias against women, which would need only the opportunity of a period of social chaos to reassert itself.

            Like any theocracy, this one would select a few passages from the Bible to justify its actions, and it would lean heavily towards the Old Testament, not towards the New. Since ruling classes always make sure they get the best and rarest of desirable goods and services, and as it is one of the axioms of the novel that fertility in the industrialised west has come under threat, the rare and desirable would include fertile women – always on the human wish list, one way or another – and reproductive control. Who shall have babies, who shall claim and raise those babies, who shall be blamed if anything goes wrong with those babies? These are questions with which human beings have busied themselves for a long time.

            I think Atwood definitely considers it a mark against Christianity that one can find the tools for totalitarianism inside it, but I don’t think she thinks it is unique in that or is, therefore, uniquely damned. She thinks Christianity can be used for good or ill and wrote a story where the latter occurred. Contrast this with the patriarchal impulses and ideologies that she suggests are the true agents behind this exploitation of Christianity–Atwood takes pains in both the original book and in interviews to note the “good” versions of Christianity, but I’m pretty sure she doesn’t think there are any good versions of patriarchy.

          • herbert herberson says:

            I do agree that much of both the marketing and the fan commentary are around the lines you mention, and are dumb.

            Watching the Hulu show, it did seem eerily relevant, but only in the sense of how fragile our current order is and how easily and quickly it could dramatically change under the right circumstances. I haven’t read any of the commentary; I hope at least some of it makes that more limited point, because I’d rather believe no one who gets paid to write is stupid enough to think that Trump doesn’t validate Ross Douthat’s “[i]f you dislike the religious right, wait till you meet the post-religious right” line.

          • Deiseach says:

            The modesty costumes worn by the women of Gilead are derived from Western religious iconography — the Wives wear the blue of purity, from the Virgin Mary; the Handmaids wear red, from the blood of parturition, but also from Mary Magdalene.

            Well, that’s wrong for a start. The colour symbolism of iconography (and the clue is in the term there) derives from the Greek tradition, where blue refers to the heavenly, the divine, and red to the mortal (so that is why icons and images of Christ have red robes to indicate the Incarnation, I refer you to Roman Catholic imagery of the Sacred Heart):

            Blue signifies the heavens and the kingdom of God not on this earth.
            Byzantine icons of Mary show her with red outer garments and blue ones on the inside. This signifies her original human nature (the red) and her heavenly nature (the blue). In Eastern iconography Mary was depicted in red or brown to depict her as a physical (grounded) being but the earliest icons depict her in blue. It could have depended on the availability of pigment. Lapis Lazuli was ground to create the blue colour and was a very expensive stone.

            Clothing: Icons of Christ will show him with Blue outer clothing and red inner clothing. Christ’s inner garment is red and symbolizes his humanity. His outer garments are blue and symbolize his true divinity. In addition to blue, red and green are also reserved for Christ and Virgin Mary.

            As any fule kno, white is the colour associated with purity, innocence, etc. White and blue are the colours of Mary associated with her Immaculate Conception (not the same as the Virgin Birth, please note; even Hillary Clinton got that one wrong); white alone (and gold) with the imagery of Our Lady of Fatima.

            As for the Dan Brown-level stuff about St Mary of Magdala, I’m not going to touch it with a barge pole.

            Look, let’s just assume that Atwood is riffing on “The handmaids wear red because they are SCARLET WOMEN” and leave it at that. She didn’t need to have a theocracy as the villain of the piece in a post-fertility-crash apocalyptic dystopia; Frank Herbert didn’t need one in his 1982 novel The White Plague when dealing with similar themes. The villains you pick as your Big Bads say a lot about what you find threatening and what you find credible as villainous motivation and capacity to be a real threat, and for Atwood it’s pretty clearly very conservative social and religious types. The Handmaid’s Tale was published in 1985, during the heyday of the Moral Majority (which declined with the ending of the 80s). That was the powerful bloc influencing political thought of the party in power at the time; we needn’t look any further afield to find out why a liberal progressive would have cast a Christianist theocracy as the Big Bad Wolf in a cautionary fable.

            Offred herself has a private version of the Lord’s Prayer

            Atwood takes pains in both the original book and in interviews to note the “good” versions of Christianity

            Yes, and that’s Atwood’s version of “good” Christianity – one where you have your own private tweaks and alterations to make it one that suits you and is comfortable to your secular beliefs and assumptions. Offred doesn’t pray the prayer as commonly known, the original Gospel version untwisted by the Rabid Theocrats; she invents her own version. Offred’s ‘God’ is not Abba, Father, which puts God in a personal relationship with the person praying, but the impersonal and unknown You (Atwood can’t see anything good anywhere in notions of fatherhood or that fathers are loving and protective, not God as Father, not even as a foil to The Patriarchy and its distortions that are causing ruin). That’s Atwood’s “good Christianity” in a nutshell: my impersonal God located within me and under my direction.

            My God. Who Art in the Kingdom of Heaven, which is within.

            I wish you would tell me Your Name, the real one I mean. But You will do as well as anything.

            I wish I knew what You were up to. But whatever it is, help me to get through it, please. Though maybe it’s not our doing: I don’t believe for an instant that what’s going on out there is what You meant.

            I have enough daily bread, so I won’t waste time on that. It isn’t the main problem. The problem is getting it down without choking on it.

            Now we come to forgiveness. Don’t worry about forgiving me right now. There are more important things. For instance: keep the others safe, if they are safe. Don’t let them suffer too much. If they have to die, let it be fast. You might even provide a Heaven for them. We need You for that. Hell we can make for ourselves.

            I suppose I should say I forgive whoever did this, and whatever they’re doing now. I’ll try, but it isn’t easy.

            Temptation comes next. At the Center, temptation was anything much more than eating and sleeping. Knowing was a temptation. What you don’t know won’t tempt you, Aunt Lydia used to say.

            Maybe I don’t really want to know what’s going on. Maybe I’d rather not know. Maybe I couldn’t bear to know. The Fall was a fall from innocence to knowledge.

            …Deliver us from evil.

            Then there’s Kingdom, power, and glory. It takes a lot to believe in those right now. But I’ll try it anyway. In Hope, as they say on the gravestones.

          • dndnrsn says:

            Nothing I’ve read of Atwood’s impressed me much. I get a major whiff of CanCon – I don’t think anyone would give a hoot were she an American.

          • Douglas Knight says:

            Why would Americans care about CanCon? Can local promotion in Canada lead to popularity snowballing and taking off in America?

            How about: the book is about Cambridge, MA, and literary elites promote it out of nostalgia for their time at Harvard.

          • dndnrsn says:

            It’s easier to come to prominence on the Canadian literary scene than the American, relatively speaking, because there are fewer people, and while “popular in Canada” is neither a sufficient nor necessary condition to become popular in the US, a Canadian popular in Canada is more likely to become popular in the US than a Canadian unpopular in Canada. And it’s not as though we don’t have other cultural exports.

          • manwhoisthursday says:

            It’s easier to climb to the top of the Canadian literary heap than the U.S. heap, but the incestuous CanLit scene tends to reward mediocrity.

          • INH5 says:

            How about: the book is about Cambridge, MA, and literary elites promote it out of nostalgia for their time at Harvard.

            Which, personally, I find a totally bizarre choice of setting that by itself makes it very hard for me to take the premise at all seriously. If I were to write a list of “places in America most likely to fall under the sway of an oppressive Christian theocracy,” the Boston metro area would be pretty close to dead last, and I don’t think that would change at all if I was writing the list in 1985.

            Even if Margaret Atwood really did think that the US Christian Right was the boogeyman, would it have killed her to do some research on what parts of the country they were actually dominant in?

          • The Nybbler says:

            Even if Margaret Atwood really did think that the US Christian Right was the boogeyman, would it have killed her to do some research on what parts of the country they were actually dominant in?

            The book loses most of its impact with its intended audience if you set it in the Bible Belt. It’s gotta be “It can happen here”, not “It can happen over there in the other tribe’s territory”.

      • Bugmaster says:

        Hey, believe it or not, I actually liked The Handmaid’s Tale (the book, I haven’t seen the TV series). I think it was a well-written sociopolitical allegory, similar to 1984 (though, granted, not nearly as impactful). The only part I hated about the book was the tacked-on “50 years later” ending; the ambiguity of the actual ending was perfect, IMO, and the add-on totally ruined it just to drop some pointless anvils.

        • manwhoisthursday says:

          I liked the book too. The narrative is well handled, and the worldbuilding, while wholly implausible, has its own paranoid, dreamlike splendours.

      • J Mann says:

        As horror, works like The Handmaid’s Tale, Get Out, It Came From Outer Space (or 1984, I suppose) are effective because they play off our fear of Christians, white people, Communists and Fascists respectively, whether or not those fears are justified.

        As allegory, it’s important to keep an eye on whether Communism actually is some infectious idea that is infiltrating America, etc.

      • manwhoisthursday says:

        In actual fact, the way theocratic movements have taken over lately is that religious conservatives in the countryside demographically overwhelm the more liberal urban population and start imposing their ways on the city folk. So the Quiverfull movement would have been a more plausible mechanism for theocracy to come to power in America, but Atwood didn’t base her dystopia on it; she didn’t know about the movement at the time.

        Problem is, while lots of feminists would want to say that all this is just patriarchal indoctrination, the fact is that women actually vary quite a bit in how much they want to have children, and, as you would expect, in the modern world, the women that really value having children over having a career . . . tend to end up having more children. Then, everything being heritable, the daughters of these women in turn will also tend to value having children over having a career. And so on. You can peel off some of these with the delights of modern civilization, and propaganda of your own, but then that just leaves the women who really really really want to have lots of kids.

      • Marshayne Lonehand says:

        manwhoisthursday deprecates “crazy fever dreams like The Handmaid’s Tale” [1985]

        Lol  how many SSC readers have read the “crazy fever dream” that is Robert Heinlein’s novella If This Goes On — (1940), as collected in Heinlein’s Revolt in 2100 (1953)?

        The Handmaid’s Tale and If This Goes On — aren’t essentially different, are they? Except that Heinlein is writing chiefly for kids? 🙂

        • The Nybbler says:

          ROTFL, nice try John. Those aren’t juveniles. I found _The Handmaid’s Tale_ to be more similar to “Eclipse” and “The Stone Pillow” than to the later (in the Future History) “If This Goes On –”, but Heinlein did a much better job, being Heinlein.

          • Marshayne Lonehand says:

            Nybbler, your Heinlein erudition is astounding!

            PLEASE enlighten SSC readers with some quotations from Heinlein’s The Stone Pillow!

            But note carefully the publication date of Jo Walton’s hilarious review of Stone Pillow … which is April 1. 🙂

          • The Nybbler says:

            Alas, my cross-fiction library card has expired, and I no longer have a copy of those stories. I do agree that Lucille Ball was an odd choice in the movie adaptation (which was not at all faithful to the original), however.

      • DrBeat says:

        Not understanding the religious is not the primary problem with the Handmaid’s Tale.

        It’s feminist science fiction. Feminist science fiction requires people to act absolutely nothing like people in the real world, doing things they would absolutely never do in the real world, for goals that nobody has and nobody wants and nobody will ever want and betray a complete incomprehension of the motivations of everyone that isn’t a doctrinaire feminist, and then gestures at this mass of incorrectness and says “Isn’t this all hauntingly familiar?”

  20. Salem says:

    Not all STEM graduates go work in STEM jobs – apparently that means there is a glut of STEM graduates. By that measure, we have a Brobdingnagian glut of humanities graduates – yet DeBoer calls the number of humanities graduates “low.” Does he even believe his own argument?

    The idea is not that we need more STEM graduates because Microsoft can’t hire. Of course Microsoft can hire, and if they can’t that’s their business (maybe raise salaries?) rather than a public policy concern. The idea is that we need more people with mathematical and technical skills across the board, even (especially?) in non STEM professions.

    DeBoer’s measure is absurd and he must know it.

    • Freddie deBoer says:

      “a Brobdingnagian glut of humanities graduates”

      No, we don’t. That’s simply not true. The numbers are low:

      https://nces.ed.gov/programs/digest/d16/tables/dt16_322.10.asp?current=yes

      They are falling:

      https://www.insidehighered.com/news/2017/02/21/liberal-arts-students-fears-about-job-market-upon-graduation-are-increasingly

      Humanities majors don’t perform poorly in the job market:

      http://www.augusta.edu/provost/documents/38-how_liberal_arts_and_science_majors_fare_in_employment.pdf

      “The idea is that we need more people with mathematical and technical skills across the board, even (especially?) in non STEM professions.”

      Cite evidence for this claim. Or for any of your claims.

      • Salem says:

        You’re hilarious.

        ~50% of STEM graduates don’t go on to work in STEM fields – according to you, that’s a glut. Far fewer humanities graduates go on to work in humanities professions – if you were consistent, you’d call that a glut too. Why is your measure of whether we have a STEM glut so different to your measure of whether we have a humanities glut? Because you’re a [fill-in-the-blank].

        And when called on the inconsistency, it doesn’t bother you at all. You just repeat your claim that 12-15% of graduates is “low.” Low compared to what? (Tumbleweed). Seemingly, low compared to your arbitrary judgement of how many students “should” graduate in humanities. If only all public policy issues could be settled by the Freddie Feel-O-Meter!

        You helpfully point to a link showing that STEM graduates, and particularly engineers, earn more than humanities graduates, which seems to directly refute your point, but as you are cheerfully unconcerned by logical consistency, we can hardly expect you to be bothered by mere empirical evidence.

        “The idea is that we need more people with mathematical and technical skills across the board, even (especially?) in non STEM professions.”

        Cite evidence for this claim.

        Look at what everyone from Barack Obama to Rick Scott has said on the subject. Heck, your own twitter feed is full of people telling you that’s what they mean when they say we need more STEM graduates. What is your evidence that I am wrong about what your opponents believe?

        I didn’t come into this with any particular view on whether we need more STEM graduates or not – I was just calling out your dishonesty and strawmanning. But the more I hear from you, the more I think that your opponents have a point.

        • Acedia says:

          You helpfully point to a link showing that STEM graduates, and particularly engineers, earn more than humanities graduates, which seems to directly refute your point

          Which point of his does that refute?

          • Salem says:

            That humanities majors don’t perform poorly in the job market.

          • Acedia says:

            Your claim that a person who makes less money than STEM graduates or engineers is therefore performing poorly on the job market is deeply bizarre. It’s very unlikely that DeBoer (or indeed any reasonable person reading his words) was defining poor performance in that way.

          • Besserwisser says:

            I think there is a bit of a miscommunication going on. Since Mr. deBoer already posted here he might correct me on this, but the major claim of his article didn’t seem to be “STEM does no better than any other field” so much as “no field does exceptionally well, including STEM.” It seems less about the comparison (or he really should have used more comparisons) and more against the idea that a degree in a STEM field is a job guarantee. He might have overstated his case in regards to the relative career prospects of STEM fields, and also the case of his opponents, but whatever.

      • bean says:

        No, we don’t. That’s simply not true. The numbers are low:

        Low relative to what? The ideal world where everyone gets a humanities degree? The rate in 1975?
        Doing some crunching of those numbers, I don’t see how the humanities as a field have had losses since the 70s. Some fields are down, some are up. Such is life. Yes, as a proportion of degrees, they may be down, but the fields that gained a lot over the interval are things like Criminal Justice, Recreation, Health Care and Business which I think we can all agree require degrees a lot more than they used to. I think it’s reasonable to attempt to factor them out, so I used degrees issued/US population. (Not perfect, but better than doing complicated demographic work.) By this metric, the humanities seem to be doing OK. English is down by 50%, but liberal arts/humanities is up 300%. Visual arts is up 100%. Area studies is up 100%. Philosophy is down 10%.
        By this metric, it looks like a wash. Yes, there has been a small drop 2010-2014, which we can attribute to the recession. But I don’t think that there’s any reason to treat 2010 as the natural rate (keep in mind that English in 2010 was still only ~60% of 1970, although 1970 looks like it may have been an outlier) and the current situation as dangerous. In fact, eyeballing the table shows that there have been previous swings in rates in these subjects. Some of that may have been Vietnam, but some is just market forces at work.

  21. dumky2 says:

    So, I have this evil idea for a prank using Vantablack. Paint the inside of an elevator with it. When the doors open people will freak out 😉

    • Luke Perrin says:

      Being in a room with the floor, walls and ceiling painted with it would be very weird. If it was well lit you’d be able to see the other people and tables and chairs, but it would feel like you had been dropped into the Matrix before the rest of the environment had loaded. Just infinite blackness stretching on forever (until you bump into it).

      • random832 says:

        I suspect shadows would be noticeably darker in such a room too, since the walls don’t serve as a diffuse light source.

        • Bugmaster says:

          I don’t think you would even see any shadows, other than on your own body and clothing.

          • random832 says:

            Well, the comment I was replying to implied that there would be other non-Vantablack objects (tables and chairs) in the room. You’d see any shadows cast onto them.

  22. onyomi says:

    Re. “hereditarian left”‘s first three points:

    1. The idea that some people are inferior to other people is abhorrent.
    2. The mainstream scientific consensus is that genetic differences between people (within ancestrally homogeneous populations) do predict individual differences in traits and outcomes (e.g., abstract reasoning, conscientiousness, academic achievement, job performance) that are highly valued in our post-industrial, capitalist society. (my emphasis)
    3. Acknowledging the evidence for #2 is perfectly compatible with belief #1.

    I agree with all three of these, but the problem is, when someone says “no, no I’m not saying Hispanic people are inferior; I’m just saying they have, on average, fewer of the skills which make you a valuable member of 21st century society!” I can understand how a pundit or journalist makes the leap to “so, you think Hispanic people are inferior…”

    Yes, in theory, almost everyone in the Western world will tell you that everyone is created equal on some metaphysical, ethical level they have a hard time putting into words (or “before the law” might be a more tangible criterion), but in practice, everyone mourns the death of Princess Diana and, well, Prince, a lot more than they mourn the death of a homeless, unknown, old guy.

    The other problem, of course, is that most people have internalized some form of mind-body dualism. They see physical features like height and hair color as not part of the “core” of a person’s being, whatever that means. Your height is just a “feature,” but your mind is you, in some sense. Add to that the fact that most other features, like height, are not a simple good-bad binary (one can imagine someone saying “well, I’d like my son to be tall, but not >7 feet tall,” but it’s hard to imagine someone saying “I’d like my son to be smart, but not too smart.”), and it’s also easy to see why people do a mental calculation which goes “genetically smarter, on average”–>”better.”

    I’m not saying people should think that way. I also wish people knew what the hell it meant to say “on average,” but they don’t seem to get that either.

    • Anonymous says:

      I think I recall a Moldbug post, somewhere on Medium, on this topic. If you implicitly ascribe value to people based on their intelligence – such as when you see lefties bashing righties over the fact that righties tend to be on average less smart – then you’re up against a little bit of cognitive dissonance if you try to square your political beliefs and values with intelligence being highly heritable.

      One way to resolve this is to deny the genetic basis for intelligence. It’s not true, but it’s consistent with your prior political beliefs.

      Another way is to deny that intelligence has any impact on the valuation of a human being as a human being – this is what Moldbug does.

      • leoboiko says:

        Even if intelligence is not even a bit heritable, it would still be a betrayal of the Left’s values to bash right-wingers for being unintelligent (or virgins, or fat, or rednecks, etc.).

        Yes, a lot of people do that, but they only bring shame to our cause.

      • Dabbler says:

        If you’re going to deny intelligence has value, you have a serious problem. Namely: what makes humans more valuable than animals, if not intelligence? There is no quality you can use for which there are no differences between people. Even consciousness: some people would have stronger and “deeper” emotions and others would have shallower ones that could never be as deep.

        • Anonymous says:

          Belonging to our species?

          Being sophonts at all?

          Having souls?

          Imago Dei?

        • Kevin C. says:

          Namely- what makes humans more valuable than animals, if not intelligence?

          “Humanity”? Inherent possession of “inalienable human rights”? Immortal souls? It seems to me plenty of human societies throughout history have had no problems drawing sharp value lines between human and non-human on grounds other than “intelligence”. (Whether or not one might agree with the validity of any particular justification of the line.)

          Even consciousness: some people would have stronger and “deeper” emotions and others would have shallower ones that could never be as deep.

          How do you know? It’s not like there’s any good way to compare such subjective experiences “inside people’s heads”. How can we know if “consciousness” varies between people, or if it is binary (an entity is either “conscious” or not)?

          • Progressive Reformation says:

            I don’t have 100% proof of course, but in my own experience I’m quite sure that I experience different levels of consciousness, and that it’s not binary.

            For example, when I was a teenager I needed 3 teeth pulled (for some reason, my jaw decided I needed 3 copies of a particular tooth, but not the 4th, because screw symmetry) and they gave me nitrous oxide. I only remember snippets of this experience, but I certainly wasn’t unconscious, and I don’t think I was ‘conscious’ in the same sense as I am now.

            In any case, it seems fairly clear to me that if different chemical levels in the same brain can produce different states of ‘consciousness’, then different brain structures could easily do the same. Whether or not (normal) variance in human brain structures is sufficient to produce really different baseline-consciousness states is of course not answered by this.

          • Le Maistre Chat says:

            “Humanity”? Inherent possession of “inalienable human rights”? Immortal souls? It seems to me plenty of human societies throughout history have had no problems drawing sharp value lines between human and non-human on grounds other than “intelligence”.

            Yeah, no kidding. Though it’s questionable how many pre-Christian societies treated physically or mentally disabled humans as having value. Remember Spartan eugenics, or how the other Greek polities thought leaving less desirable babies to die was a parent’s right?

            I think y’all should read Augustine’s City of God XVI.8, where he talks about how people who have deformed hands or feet, are intersex, or are conjoined twins are equal parts of the omniscient God’s plan, and how anyone who ranks them by physical (or mental) abilities is ignorant of the whole.

    • leoboiko says:

      My doubts are more in the line of:

      a) How much averaged, between-group genetic differences matter, compared to intra-group variance?

      b) How much genetic tendencies matter, when compared to environmental and social factors? What are the relative effect sizes of each?

      c) How sure are we of any of that, given the reliability crisis and the strong personal incentives to believe in both genetic determinism and tabula rasa-ism?

      • Anonymous says:

        a) How much averaged, between-group genetic differences matter, compared to intra-group variance?

        I’d say, a lot.

        b) How much genetic tendencies matter, when compared to environmental and social factors? What are the relative effect sizes of each?

        Well, according to the stuff I’ve seen, like Clark’s research into heritability, the breakdown is something like:
        – 70% genetic
        – 0% environmental
        – 30% we don’t know

        c) How sure are we of any of that, given the reliability crisis and the strong personal incentives to believe in both genetic determinism and tabula rasa-ism?

        Research into IQ has been going on for a long, long time. Per Scott’s recent post on the self-correcting nature of science, I would definitely expect it to right itself by now if it were substantially wrong.

        • Anonymous Bosch says:

          Well, according to the stuff I’ve seen, like Clark’s research into heritability, the breakdown is something like:
          – 70% genetic
          – 0% environmental
          – 30% we don’t know

          Here is a good example of the kind of lazy hereditarianism I got on Scott’s case for in the deleted thread. If your idea of discussion about race and intelligence is “according to stuff, including [one dude], it’s 0% environmental” you are exactly the sort of person who has no business discussing it. This recent Frontiers survey listed the opinions of intelligence researchers as follows:

          Asked: What are the sources of U.S. black-white differences in IQ?

          0% of differences due to genes: (17% of our experts)
          0-40% of differences due to genes: 42% of our experts
          50% of differences due to genes: 18% of our experts
          60-100% of differences due to genes: 39% of our experts
          100% of differences due to genes: (5% of our experts)
          M=47% of differences due to genes (SD=31%)

          This kind of uncertainty among experts should inspire incredible humility in non-experts. Our understanding of the genetic basis for intelligence has advanced very little beyond “it’s substantially heritable” and the shared environment is similarly hard to quantify. Someone breezily waving off this kind of uncertainty and spouting “0% environmental” is, at best, projecting a glib desire to appear much more well-informed than they are.

          • Douglas Knight says:

            Those are different questions. There really is a scientific consensus that more than half of within-race variation in IQ is genetic.

            Added: According to your link, 73% claim that it is reasonable to quantify the heritability of intelligence.

          • Nornagest says:

            That’s an interestingly bimodal distribution.

          • Douglas Knight says:

            It looks pretty close to uniform to me. It could be (17,25,18,34,5) on (0,25,50,75,100) or (17,13,12,18,17,17,5) on (0,20,40,50,60,80,100). Trying not to be too precise leads to peaked distributions. In particular, everyone in 41-59 said 50 for that reason.
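
            Douglas Knight’s two candidate fine-grained readings can be sanity-checked in a few lines. This is my own sketch: the decompositions are the ones he names, and the coarse bucket totals (42% for 0–40, 18% at exactly 50, 39% for 60–100, reported mean 47) are my reading of the survey numbers quoted upthread.

```python
# Coarse bucket totals as reported in the survey (percent of experts).
buckets = {"0-40": 42, "50": 18, "60-100": 39}

# The two candidate fine-grained decompositions (point estimate -> percent of experts).
uniform_5 = {0: 17, 25: 25, 50: 18, 75: 34, 100: 5}
uniform_7 = {0: 17, 20: 13, 40: 12, 50: 18, 60: 17, 80: 17, 100: 5}

def bucketize(dist):
    """Collapse a point distribution back into the survey's coarse buckets."""
    return {
        "0-40": sum(v for k, v in dist.items() if k <= 40),
        "50": dist.get(50, 0),
        "60-100": sum(v for k, v in dist.items() if k >= 60),
    }

def mean(dist):
    """Weighted mean of the point distribution."""
    total = sum(dist.values())
    return sum(k * v for k, v in dist.items()) / total

for dist in (uniform_5, uniform_7):
    # Both readings reproduce the reported bucket totals exactly...
    assert bucketize(dist) == buckets
    # ...and both land within a couple of points of the reported mean of 47.
    print(round(mean(dist), 1))
```

            So the published coarse numbers really are consistent with a roughly uniform underlying distribution, which is the point being made above.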

    • Anon. says:

      Yes, in theory, almost everyone in the Western world will tell you that everyone is created equal on some metaphysical, ethical level they have a hard time putting into words

      This is just a side-effect of our folkbiological system, which is essentialist. See eg How Biological is Essentialism?.

    • Winter Shaker says:

      most people have internalized some form of mind-body dualism

      I recently had cause to re-read the second Harry Potter book (in English, as a prelude to reading it again in Dutch, but that’s another matter), and I am reminded of the scene where our two lanky heroes take a magic potion to transform themselves into the two stocky henchpersons of the school bully – and they naturally find themselves taking the shape of those people, and even speaking with their voices, but have to consciously make an effort to ape their mannerisms (and of course, don’t have access to their memory at all).

      Probably not many people were bothered by ‘wait – if Harry and Ron can transform into Crabbe and Goyle and yet not take on their minds and personalities … assuming their brains were not identically sized and shaped to begin with, that’s got to cause some serious brain trauma’.

      • Anonymous says:

        A Wizard Did It. Literally.

      • Kevin C. says:

        Or how about animagus McGonagall turning to and from a cat in Book 1? Or Voldemort’s disembodied state? These are even more extreme. Remember, though, that this is a series in which Dualism is solidly established; “souls” are clearly enough of a thing that they can, under appropriate conditions, be extracted, or even divided.

        • Winter Shaker says:

          True, I was just bringing up the example I’d read most recently. The point is, in agreement with Onyomi, that it’s weird how not-weird we find those sorts of ideas.

          As an aside, the Dutch language is delightful – I have already gleaned (from the scene where the Dursleys treat Harry ‘like a bomb that might go off at any moment’) that the word for explode is ‘ontploffen’. Apologies to Aapje, and any other Dutch speakers who will find this patronising, but that is such a cute onomatopoeia 🙂

          • Aapje says:

            Onomatopoeia are fun, nothing wrong with pointing them out.

            I like the fairly recent word “plofkip” (literally: exploding chicken) which refers to the fastest growing breeds of meat chickens. They grow so fast that it’s like an explosion, blink and they are twice the size 🙂 An animal rights organization was very effective using that word as a meme to shame the meat industry.

            BTW. I’m interested in your little project. Care to tell us a bit more?

          • Winter Shaker says:

            BTW. I’m interested in your little project. Care to tell us a bit more?

            Trolling, mostly 🙂
            I met a troupe of Dutch speakers through the amateur folk music festival circuit that I move in, and want to be vaguely able to speak their language properly the next time we meet (since they were surprised and delighted that I knew even the basic-est of basics, learned from a book in time for a holiday in Belgium).
            But it is a fun language, for an English speaker – so similar in many ways, yet so alien in others. I’m even just about getting to grips, albeit slowly, with the ‘gr’ sound combination in, e.g., ‘graag’ (which, for the sake of everyone else, in most of the dialects I’ve heard, involves making two different very non-English guttural back-of-the-tongue sounds in succession, and is quite tricky).

          • Aapje says:

            The word ‘ontploffen’ is actually quite interesting from a linguistic point of view, to tell you a little bit about Dutch.

            The actual onomatopoeia is ‘plof,’ which mimics the bass-heavy sound of an explosion. In Dutch, to turn a word into an unmodified verb, you add ‘en’ at the end, similar to ‘ing’ in English. So just like ‘bang’ becomes ‘banging,’ so does ‘plof’ become ‘ploffen’ (the extra f is to get the right pronunciation, or else you would say it like ploo-fen). So now we have the act of exploding, so why do we need the ‘ont’?

            The ‘ont’ indicates a state change from a stable state. This is most clear if you look at the Dutch word for ‘burning,’ which is ‘branden.’ You use ‘branden’ to tell people that your fireplace is lit. If you are trying to get it lit and you have trouble getting the (wet?) wood to burn, you can say that you have trouble getting it to burn or in Dutch that ‘ontbranden’ is hard. However, if you toss a bucket of water in your fireplace, you don’t say ‘ontbranden’ to indicate that the state changed from fire to non-fire.

            However, in other cases, ‘ont’ does indicate that the state changes away from the verb. For example, if you want to turn a man into a eunuch, that is ‘ontmannen’: changing the person’s state away from being a man into a non-man.

            So in Dutch you can put the emphasis on the state change or the state itself, depending on whether you add the ‘ont’ or not.

            PS. The ‘en’ suffix can be used not just to create an unmodified verb, but also to denote multiples of a noun. So one friend is a ‘vriend,’ but two are ‘vrienden.’ This can create confusion when a word is both a noun and a verb. Due to Facebook, friending and unfriending have become Dutch words, so ‘vrienden’ can now indicate either multiple friends or befriending someone on Facebook*. You need context to figure out which is meant.

            * ‘Bevrienden’ already existed, but is not used in the context of Facebook.

          • Aapje says:

            The Dutch like to be trolled like that, it’s not like many foreigners want to learn our language. So any effort is appreciated.

            Anyway, my favorite Dutch onomatopoeia is ‘plofkraak,’ which is actually two onomatopoeias and literally means ‘boom crack.’ You know ‘plof’ now, but ‘kraak’ is an onomatopoeia for burglary and is the word that actual burglars use. It probably refers to the sound of using a crowbar on a door or window sill: the sound of wood cracking.

            Fairly recently, thieves have been using explosives to blow up ATMs, which we call ‘plofkraak’. So the burglary part is the ‘kraak’ and the explosives part is the ‘plof.’

          • Winter Shaker says:

            Cool; dankuwel voor deze informatie 🙂
            The other one I like that springs to mind is ‘brommers’ for mopeds.
            I hope that that is directly analogous to calling them ‘vroomers’ in English.

          • Aapje says:

            The etymology is quite different. The original name was bicycle with assisting engine (fiets met hulpmotor). Then in 1950 three journalists decided that a better name was needed and they came up with bromfiets, where brom is indeed an onomatopoeia, but one that existed before and means making a bass-heavy vibratory noise (like when you clear your throat). At least one word combination with brom already existed then: bromvlieg (vlieg = fly), which is the Dutch word for the blow fly.

            Later, bromfiets was turned into brommer in popular language, although bromfiets is still the legal term.

            We also have snorfiets, which is very similar, but refers to scooters. Snorren is similar to purring (as in: what cats do).

            PS. ‘Dank u wel’ are three separate words. We don’t write everything together (although much more than English) 🙂

          • Winter Shaker says:

            We don’t write everything together

            Easy mistake to make 🙂
            I’ve got to the point where the Wemel / Weasley family are referring to Voldemort as ‘Jeweetwel’…

          • Aapje says:

            That’s a choice by the translator, really. Je-weet-wel would have been correct as well and would have matched the English text 100%.

            Having it written as one word is very atypical and mainly because Rowling made the choice to convert ‘you know who’ to a name, which is atypical both in English and in Dutch.

            BTW. There is a Dutch band/act called Boys of You Know What I Mean (Jongens van Je Weet Wel). They play requests, so it’s a rather amusing reference to vague requests that they probably get a lot: can you play that song about love? You know what I mean, right?

          • Ruud says:

            The word ‘ontploffen’ is actually quite interesting from a linguistic point of view, to tell you a little bit about Dutch.

            Curiously, a very similar structure exists in English, where things ex-plode, possibly after de-flagration.

          • Aapje says:

            @Ruud

            Good point, I looked it up and plode seems to come from the Latin plōdō, which means to clap. So it does seem quite similar.

        • JulieK says:

          Or how about animagus McGonagall turning to and from a cat in Book 1?

          And how can that transformation still work in HPMOR? Presumably that story’s author would not say “because of souls.”

          • Creutzer says:

            The fact that this should be impossible is actually brought up in chapter 2 of HPMOR. I don’t think it’s ever explained, though.

          • Paul Brinkley says:

            That’s my recollection, too. Harry never explores the problem again, nor does anyone else. Rather, the point of that scene is to illustrate the reaction of someone who reads books and is rational (Harry). Harry accepts his observation; his father the professor insists magic must be some mistake of perception or something. (His dad strangely never figures into the transfiguration incident, even though he’s standing right there. I never understood that. Maybe the levitation was enough to shut him up.)

    • veeloxtrox says:

      I am curious, what is your moral framework that supports 1?

    • manwhoisthursday says:

      Superior and inferior are related to a goal. For many goals, intelligence is superior, which is why more intelligent people tend to be regarded as superior. But it is not necessarily the ultimate value.

    • Conrad Honcho says:

      Yes, in theory, almost everyone in the Western world will tell you that everyone is created equal on some metaphysical, ethical level they have a hard time putting into words (or “before the law” might be a more tangible criterion)

      I prefer “People are equal in the eyes of God, should be treated as equally as reasonably possible under the law, and are not at all biologically equivalent.”

  23. Sniffnoy says:

    So (just going by your own link 😛 ) the Pope John story is more interesting than what you wrote. Due to a misreading of an old list of popes, people mistakenly got the idea that there had been an earlier error in pope numbering, that there had been two different popes who had been known as Pope John XIV — that one of these (“Pope John XIV II”) should have been Pope John XV, and Pope John XV should have been called “Pope John XVI”, and so on. Pope John XXI, the first Pope John after people came to believe there was such an error, deliberately skipped 20 in order to correct for the perceived error, so that at least all the popes from then on would be numbered correctly. But in fact there was no error; there was only ever one Pope John XIV after all. And so in fact rather than correcting an error he introduced one, which has continued to be carried forward. (The original text that was misread as listing two Popes John XIV was in fact listing first the duration of Pope John XIV’s reign, and then the length of his imprisonment at the hands of Antipope Boniface VII; this was interpreted as listing the durations of the reigns of two different Popes John XIV.)

  24. Jack V says:

    Coming from a UK background I’ve two views on forced attendance.
    One is, this seems patronising and stifling.
    The other is, when asked to schedule their own time, everyone screws it up at first. Better to get that learning experience out of the way at university if you haven’t already, than bomb your first job. (Better still even earlier)

  25. leoboiko says:

    I’m delighted that someone managed to sneak a glyph variant like the multiocular O into Unicode. You can see some N-ocular manuscript sources in the proposal (p. 46).

    For an angelic mark, it looks positively Lovecraftian. (ꙮ_ꙮ)

    It’s also in the BMP (the first Unicode plane), so by now most computers should have a font to display it.
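
    For anyone curious, the codepoint in question is U+A66E, in the Cyrillic Extended-B block of the Basic Multilingual Plane, which Python’s unicodedata module can confirm:

```python
import unicodedata

ch = "\ua66e"  # the multiocular O
print(ch, hex(ord(ch)), unicodedata.name(ch))

# Any codepoint below U+10000 lives in the Basic Multilingual Plane.
assert ord(ch) < 0x10000
```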

    • MawBTS says:

      It’s like an insectile node of eyes. Very disturbing. We should have left it in the Middle Ages.

      I like looking at foreign language letters and ascribing aesthetic content to them. I always thought Georgian script was particularly horrific. The letters look like torture equipment.

      The English alphabet is dull. Though maybe I just think that because I was raised with it.

      Question for people who were raised with an alphabet other than the English one: do you find English letters boring, or interesting?

      • leoboiko says:

        What English letters? There’s no such thing. There’s only the Roman alphabet, which was spread equally to Britannia and Germania and Hispania and Africa and all the provinces. It’s no more English than Frankish or Gothic or any other barbarian’s.

        What’s my opinion of it? The Roman alphabet fills me with awe, to be frank. The Empire is long gone, but its mark has reached so far. I look at an uppercase screed in a dirty bathroom in Brazil and think of the Trajan column, and wow it still looks pretty much the same. Five hundred years ago this was a jungle rich in biodiversity and freedom and now that the hand of Progress worked its magic, it’s brick and pollution and Rome. And it’s such a great, rational writing system. I mean sure, the alphabetic principle is due to the Greeks, but that’s the point; the Empire takes; the Empire is not a petty, weak little thing like the Nation, it doesn’t want homogeneity or purity, but on the contrary it wants the Other within its borders, paying tribute, submitting; once the Greeks were made to bend the knee, the entirety of Greek culture belonged to the Empire, even the Greek Gods belonged to the Empire (as did the Zoroastrian Gods, the Egyptian Gods et cetera). And, in spirit at least, Rome still conquers; most undocumented languages, when documented, are now given a notation in Roman writing. The more languages use it, the more incentive there is to adopt it for even more languages. imperium sine fine dedi. Urbi et orbi. SPQR.

        (Italic letters make me think of Chancery and the Renaissance and high humanism; I can all but feel it written by hand by some genius polymath solving all the mysteries of the world.)

        (Ok, ok, there is such a thing as English letters, and I do think it’s a shame no one ever uses them anymore. I find your quaint, barbaric writing to be quite picturesque; the shapes are rustic yet warm, kind of like an Enya limited edition release of Nordic runes. I of course understand why you had to submit yourselves to the glory of Roman writing, but part of me wishes you still kept using English letters, if nothing else for your tribal ceremonies and such. Would add a pleasant local color to Britannia and other related provinces.)

        • Winter Shaker says:

          part of me wishes you still kept using English letters, if nothing else for your tribal ceremonies and such.

          Apparently there is a movement, though how successful I don’t know, to revive the Old Hungarian Alphabet. Runic enough for you?

    • Le Maistre Chat says:

      For an angelic mark, it looks positively Lovecraftian.

      Eh, doesn’t seem contradictory to me anymore. One of the things I’ve noticed about Lovecraft’s tales is that the nebbish bookworm narrators never gain knowledge that cosmic nihilism is true. Like in At the Mountains of Madness, the narrator gains knowledge that plate tectonics is true (!), about the relationship between the Old Ones, shoggoths, and Earth animals, but nothing about materialism being true, the non-existence of superhuman intelligences that care about men, etc. In The Call of Cthulhu, the narrator gets an earful of Nietzscheanism from the cultist Old Castro, but very little knowledge aside from “an octopus-headed humanoid dragon exists, and can be temporarily damaged by a speeding yacht”. When he ghostwrote The Mound, the Spanish Catholic narrator interprets Cthulhu not as an Overman, but a pagan “spirit of universal harmony” (!).

      These narrators have, in-universe, a load of cognitive bias. That’s what I’m trying to say. =)
      So the idea of Seraphim having a Lovecraftian appearance that makes modern non-Christian bookworms run shrieking in existential horror doesn’t seem incongruous or heretical.

  26. Peter Gerdes says:

    As for the question of “Are we dating or just friends who have sex”, I don’t think it is even a meaningful distinction for most poly relationships. I mean there is a real sense in which my wife and I are just friends who live together and have sexual relations. Ultimately, I think the only distinction which this question tracks is whether or not your partner is interested in a ‘serious’ relationship, i.e., one which might lead to marriage or at least involves giving up other sexual partners and advertising your relationship in a way that cuts off other sexual opportunities. If you are in a poly relationship that you already know isn’t headed for marriage, I’m not sure there is any fact still at issue in this regard.

    • leoboiko says:

      Me and my torrid intellectual affair’s preferred expression these days is “torrid intellectual affair”. It seems to describe how we feel about each other (something stronger than “having casual sex with a friend”) while still avoiding the cultural baggage of words like “boyfriend”, “dating” or “marriage”.

    • blacktrance says:

      I think the distinction between dating and friends who have sex (and marriage vs long-term friendship with sex, living together, etc) is whether romantic attraction is present. Romantic interest, as a quale, is more than just the overlap between sexual interest and friendship.

    • Brad says:

      I assume everyone recognizes that the article (and it appears the whole site) is satirical and is just using the title as a jumping off point?

  27. AlphaGamma says:

    On Taser/Axon and body cameras:

    There have been complaints about where and how the footage from these body cameras is stored.

  28. James says:

    Are the people in Kernel mostly UK-based, or does it include a significant contingent of people from the US (the bay area!?)? I only ask because of how they’re talking about it as an alternative to existing “hubs”.

    And where did the group come from? Is it a rationalist tumblr thing? I see the facebook group is fairly sizeable.

    • Scott Alexander says:

      I think mostly UK-based, and I know rationalist Tumblr was involved but I’m not sure if it was the starting point.

    • magicalbendini says:

      The Kernel is mostly Europe-based, with a large contingent currently living in the UK.

      The group membership itself comes from facebook mostly but has quite a few people from tumblr, the rest are an assortment from discord/slack/meatspace.

      The starting point was a branch from an existing project a couple months ago, although the main project it came from no longer exists. This provided the initial group, but it is now a small percentage of the total.

      As the person running the group I am happy to take questions here or on Facebook.

      • James says:

        Thanks. I’m vaguely interested. I live in Bristol at the moment but have been considering moving to Manchester for a long time. (Also, my brother lives there.) I’m not really in a position to move right now, so I won’t be one of the early adopters, but I will pay attention and continue to consider it as an option. I might drop you a line by email so we’re in touch that way.

        • magicalbendini says:

          Email is more than welcome.

          The current count for people considering moving (including those who are certain) is at 26, not including today’s group, who I haven’t surveyed yet.

          The SSC meetups survey had 19 Cambridge respondents, for reference.

  29. rlms says:

    Nice to see someone else making a sonnet-writing program. I didn’t use deep learning, but I think my results were pretty similar. Since they did things properly, their poems are better in terms of scanning and sentence structure, but the differences are fairly marginal. For both projects, the main thing that stops the poems being convincing is that they simply don’t make much sense semantically. It’s cool that they’ve managed to invent a way of finding related words. I chose words based on their (frequency in poetry)/(frequency in general) ratio. This illustrates an important point about AI: deep learning stuff might be able to do things with natural language that look impressive, but we aren’t really any nearer to writing programs that can actually *understand*.

    One of my program’s poems:

    incessantly inhabited the kite
    the dumb mumbles remembered thereabouts
    furtively fell the flammable delight
    i lopped the unimaginable shouts
    the goddess is like the northeastern pan
    the wildest eucalyptus caved afloat
    they bounded the unfathomable clan
    the yellowish despair is like the goat
    the jagged window diminished anymore
    and infinitely overwhelmed and bowed
    fitfully scattered the obsessive drawer
    he camped the unimaginable crowd
    the squeaky filament is like the rain
    as discontented as the meaty reign
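
    The (frequency in poetry)/(frequency in general) heuristic rlms describes could be sketched roughly like this. This is a hypothetical toy version, not rlms’s actual code: the corpora, the smoothing constant, and the function name are all invented for illustration.

```python
from collections import Counter

def poeticness_scores(poetry_words, general_words, smoothing=1.0):
    """Rank words by how much more frequent they are in a poetry corpus
    than in general text (higher ratio = more 'poetic')."""
    poetry = Counter(poetry_words)
    general = Counter(general_words)
    n_poetry = sum(poetry.values())
    n_general = sum(general.values())
    scores = {}
    for word in poetry:
        p_poetry = poetry[word] / n_poetry
        # Additive smoothing so words absent from the general corpus
        # don't cause a division by zero.
        p_general = (general[word] + smoothing) / (n_general + smoothing * len(poetry))
        scores[word] = p_poetry / p_general
    return scores

# Toy corpora (invented): 'gossamer' and 'moon' are poetry-only,
# 'the' is common everywhere and so scores near 1.
poetry_corpus = "the gossamer moon the pale moon sighs".split()
general_corpus = "the report said the quarterly numbers rose the most".split()

scores = poeticness_scores(poetry_corpus, general_corpus)
best = max(scores, key=scores.get)
print(best)
```

    A generator would then bias its vocabulary toward the high-ratio words, which plausibly explains poems full of “unfathomable” and “eucalyptus” rather than “quarterly” and “report”.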

    • James says:

      Yes, I was impressed by how smoothly (some of the lines of) the examples read in the paper. Yours is good too. Where did you get your poetry corpus?

    • nimim.k.m. says:

      Yes. It’s remarkable that these systems produce either poetry, humor that’s fun because it’s nonsensical, or texts of very rigid or repetitive structure / format.

      This illustrates an important point about AI: deep learning stuff might be able to do things with natural language that looks impressive, but we aren’t really any nearer to writing programs that can actually *understand*.

      Or the way I like to put it after playing with similar language generators a little bit: a program that is able to produce texts that appear to demonstrate true understanding of semantics would be a mighty step closer to an AGI (initiative is another), and probably one necessary cognitive tool to create something we’d call a human-level intelligence.

      This brings me to one reason why the largeness parody paper did not convince me. It just assumes that human-level intelligence is a thing akin to the largeness of mountains, and then makes fun of arguments built on the assumption that it isn’t. When creating an “intelligence” from more or less scratch, it could very well turn out that the “semantics block” is a distinct ability from the “rational capacity” as measured by e.g. Raven’s matrices. Maybe even an orthogonal one, instead of merely distinct.

      Who says that the semantic “understanding” ability scales with computing time or memory as well as, say, the ability to process logical problems in a more restricted domain? Greater “semantic understanding” and “intelligence as measured by Raven’s matrices” (and a bunch of others) coincide in humans, but this is because our brains and minds are a product of an evolutionary process.

      • nimim.k.m. says:

        A separate tangent.

        Yes. It’s remarkable that these systems produce either poetry, humor that’s fun because it’s nonsensical, or texts of very rigid or repetitive structure / format.

        Just after clicking “post”, I thought of adding this: I’d guess chatbots could already manage ritualistic conversations (as per a half-remembered quote from Heinlein’s Citizen of the Galaxy, where someone remarks to the protagonist that in certain kinds of cultures, one can spend a day talking to other people according to societal niceties and customs and polite reactions and other empty phrases, without really saying anything). We might even be there already; I haven’t looked recently.

  30. Alejandro says:

    I had always thought of Rod Dreher as some sort of crotchety conservative blogger who was deeply concerned about The Gays. Apparently he is actually a tragic figure resembling an Old Testament prophet come to life.

    The joke here, of course, is that Old Testament prophets were just the 600-700s BCE version of crotchety conservative bloggers deeply concerned about The Gays (replacing “The Gays” with “The Foreign Gods Worshippers”).

  31. Freddie deBoer says:

    The sign thing rings so true to me. And I have to say that it’s part of this bigger overall trend on the left towards a therapeutic mode of engagement rather than a political one. “Hey, we can’t change anything, so let’s have a good time while we go down.” It’s very aggravating to me.

    • HeelBearCub says:

      Eh.

      Consider:

      “Don’t be angry because you will just be confused for hating us, as an outgroup, and we won’t pay attention. You should be welcoming and friendly so we might listen to you.”

      “You aren’t angry? You must not care.”

      • entobat says:

        +1

      • Conrad Honcho says:

        I think the left has just misunderstood the persuasive power of marches/demonstrations and marching has become a cargo cult. No one came around to Dr. King’s way of thinking because they saw Dr. King and friends marching, they came around to Dr. King’s way of thinking because respectable-looking people got set on by dogs and fire hoses for merely marching. It’s rather difficult to make a case that you’re being horribly oppressed when you’re well fed, well clothed, and freely speaking your mind in public unmolested.

        Marching and protests probably have good tribal bonding utility but very little if any persuasive power when you’re not getting beaten.

    • Progressive Reformation says:

      I don’t think the left is “go[ing] down”. As Moldbug says, “Cthulhu swims left”.

      We have a Republican House, a Republican Senate, and a Republican President. Our Supreme Court now leans (slightly) conservative. And yet there has been practically no effort whatsoever to reverse, say, marriage equality – something that happened only two years ago and used to be a key issue for conservatives. So even with certain electoral reversals, I don’t think the left is on its way out or “can’t change anything”.

      [It shocks me that marriage equality is basically ironclad now, when barely ten years ago the country was seriously considering the Federal Marriage Amendment, i.e. illegalizing gay marriage through the freaking Constitution (though I’m quite happy about this particular development)]

      • suntzuanime says:

        The reason they tried to make it illegal through the freaking Constitution is that they realized if they didn’t, this would happen. They failed, and so this happened. It’s similar to how people tried and indeed even succeeded at pushing alcohol prohibition into the Constitution, because they knew that the Supreme Court was unlikely to find a penumbral right to be prevented from drinking alcohol on its own.

        • wintermute92 says:

          Yeah, that. The push to use the Constitution wasn’t because the position was so bulletproof it was to be a fundamental trait of the country, it was because there was no other way to make it happen. If you want to massively restrict people’s personal freedoms by law, you pretty much have to use an Amendment or it’ll fall in court immediately. Which was pretty much the point of the system, so I tend to file the attempt-and-failure of the FMA under “system working as intended”.

      • Anonymous Bosch says:

        We have a Republican House, a Republican Senate, and a Republican President. Our Supreme Court now leans (slightly) conservative. And yet there has been practically no effort whatsoever to reverse, say, marriage equality – something that happened only two years ago and used to be a key issue for conservatives. So even with certain electoral reversals, I don’t think the left is on its way out or “can’t change anything”.

        There’s definitely been a rightward shift on immigration. Even if you assume Trump won’t get Congress to fund his wall you’re already seeing what is essentially Romney’s “self-deportation” strategy on steroids. Plus the second travel ban will likely survive SCOTUS.

        • cassander says:

          >There’s definitely been a rightward shift on immigration.

          Immigration policy hasn’t changed at all. Maybe it will in the future, but until then, at most, it’s rhetoric, not reality.

          • Conrad Honcho says:

            But you would agree immigration enforcement has shifted rightward, correct? So Anonymous Bosch’s observation that “there’s definitely been a rightward shift on immigration” is accurate.

          • cassander says:

            @Conrad Honcho

            I’m not sure what you mean by enforcement as distinct from policy. I mean, I’m sure there are more border patrol agents than there were a decade ago, but that’s really just bureaucratic growth. Other than that, how has enforcement changed?

          • Conrad Honcho says:

            @cassander

            Deportations are up, DACA recipients are also being deported, attempted border crossings are down 70%, Texas passed a law against sanctuary cities and is already acting, the DoJ is attempting to go after others…

          • cassander says:

            >Deportations are up,

            That’s a complicated question that I don’t know enough to argue cogently, but know enough to know that simply quoting headline figures doesn’t give you the full picture.

            >attempted border crossings are down 70%,

            That seems likely to be the result of a change in the attitude of immigrants, not US policy. Immigrant attitude might very well respond to changes in policy, but it can also change in response to other things.

            >Texas passed a law against sanctuary cities and is already acting, the DoJ is attempting to go after others…

            And cities in other states have declared themselves sanctuaries. I’m going to call that a wash.

          • Conrad Honcho says:

            That seems likely to be the result of a change in the attitude of immigrants, not US policy. Immigrant attitude might very well respond to changes in policy, but it can also change in response to other things.

            When the guy who characterizes a not-insignificant portion of illegal Mexican immigrants as rapists (and some good people too, I guess), who talks about the “bad hombres” who “have to go back,” when the media blares for two years that this guy is hyper-racist against Mexicans, when this man is elected President and then his attorney general (also billed by the media as a hateful white supremacist) announces “The most important thing for us is to send a message to the world that the border is not open. Please don’t come. You will be apprehended if you do come, and you will be deported promptly. And if you’re a criminal, you’ll be prosecuted. And if you assault our officers, we’re going to come at you with a ton of bricks” and then border crossings go way down…I don’t know, man. I think William of Ockham would like a word with you.

          • cassander says:

            @Conrad Honcho

            I’m not disputing that Trump’s rhetoric might have had a chilling effect on potential immigrants. I’m certain it did, but rhetoric is not a shift in policy. If Trump doesn’t match his rhetoric with actual policy, then the chilling effect will fade.

    • shakeddown says:

      I think there’s a tendency to assume that people who joke about something don’t genuinely care about it, and that assumption is wrong. For a trivial example, Jews (especially Israeli Jews) make way more Holocaust jokes than anyone else.

  32. Mark Paskowitz says:

    I’m a bit surprised that you don’t see any reason to be concerned with the partisanship of the March for Science. I see a lot of parallels between it and the growing partisanship of free speech, in that one side of the political divide IS worse but neither is pure, and actively exacerbating the conflict, even if you’re in the right, is counterproductive.

    Of course, I don’t have to tell you this. You made exactly this point in your Sacred Principles as Exhaustible Resources post.

    • Progressive Reformation says:

      “one side of the political divide IS worse but neither is pure, and actively exacerbating the conflict, even if you’re in the right, is counterproductive.”

      I don’t see why “exacerbating the conflict” is necessarily counterproductive. Surely it sometimes is and sometimes isn’t. Martin Luther King Jr. and Gandhi exacerbated their conflicts, and I don’t think it was counterproductive in either case. On the other hand, these guys are also exacerbating their chosen conflict, and I do think it’s counterproductive.

      I think the difference is simply whether you have an effective solution to the problem in mind. Exacerbating the conflict often leads to your preferred solution being implemented (assuming you have sympathizers in powerful places, as King did, you can motivate them to act). If the solution is effective, bang, done. If not, then the problem worsens and you exacerbate the conflict more, etc. until either you fizzle out or something drastic and probably tragic happens. Sadly, this rules out quick-and-easy ways to evaluate political movements, but hey, what can you do.

      • Mark Paskowitz says:

        Sure, it isn’t necessarily counterproductive. I didn’t mean to imply that this was a general rule, merely that I think it applies to the March for Science and the free speech campus movement (2017 version). It seems we agree on the broad principle that sometimes it can be productive, other times not.

        As for the specifics, we may or may not agree. I was just pointing out that Scott himself had raised this concern parenthetically in an earlier post, so I was surprised he was a bit dismissive of it here. To that, I can add introspection (n=1). I’m a pretty scientifically inclined person, but I find myself more and more skeptical that loud claims of “science” reflect actual underlying science.

  33. Tatu Ahponen says:

    It’s not like March for Science made a collective decision for everyone to bring witty signs and wear funny costumes. It’s individuals who have come to the protest who did that – and of course there’s a lot of pictures of witty signs, because they tend to get readers and Facebook likes.

    Arguably the sort of people who go to March for Science are also the sort of people who are ready to crack witticisms even about things that they believe in quite seriously, compared to other sorts of protests, because that’s just their preferred mode of communication.

    • Marshayne Lonehand says:

      This x 100. See (e.g.) PhD Comics and/or xkcd and/or The Far Side and/or Calvin and Hobbes.

    • herbert herberson says:

      Also: I wonder how much of it is just a proportion of locals. Anecdotally, I saw a lot of people who traveled to go to the Women’s March on my social media, while the March for Science was mostly people who lived and worked in the DC area – which in turn makes sense: there are a lot of science jobs in that area between the government, the military, and all the various contractors.

      If a protest is a day trip for you, it makes sense that you’d be a little less grim about the whole thing, and I can’t see anything wrong with that at all.

      • J Mann says:

        True story: I covered the march against Iraq War I for the school paper, and ended up marching near the border between a gay anti-war group and a group of anarchists. They spent several minutes chanting “We’re Here! We’re Queer! We’re Going to Smash the State!”

    • J Mann says:

      I would say the March for Science is:

      (1) less urgent than a march to prevent Rwandan genocide or abortion. America’s an intensely technocratic and scientific country, whether under a Trump administration or an Obama administration, although both have their scientific failings

      (2) but still important. The marchers are arguing for more scientific funding, a more scientific approach to problems, etc. That’s an important value to them that they want to share, but it’s not too serious for jokes.

      Those two factors create more room for wit than trying to save lives.

      • Winter Shaker says:

        less urgent than a march to prevent Rwandan genocide or abortion.

        Not that I want to start a flamewar, but you are at least aware how ‘arson, murder and jaywalking’-ish that sounds to a lot of people?

        • herbert herberson says:

          Does it? People disagree on whether or not abortion should be legal, but I’d say there’s a pretty wide consensus that it’s a very important question either way.

          • Winter Shaker says:

            Well, people also disagree on whether jaywalking should be illegal, but there isn’t (to my knowledge) a large contingent who think that it should be treated in the same ballpark of seriousness as arson or murder. J Mann wasn’t just saying that the issue of abortion was important to come to some sort of consensus on (with a lot of people arguing that it shouldn’t be illegal at all); they were saying something that kind of implies that it is uncontroversial that abortion should be prevented with the same sort of urgency that genocide should be prevented, which, to someone on the mainstream other side of that dispute, sounds very much like ‘conflating very serious crimes with (should-be) non-crimes’.

          • J Mann says:

            ETA: I meant “less urgent to the marchers”, but left that implied instead of stated clearly. On reflection, my post was capable of being interpreted both as I intended and as Winter Shaker read it. I apologize for the imprecision.

            I answered below – I think if you review what I said, I never said that the issues were uncontroversial among non-marchers, only that the marchers believed in their cause with a particular level of intensity.

            For myself, I think that any group that worships Bill Nye either does not actually know much about science, Bill Nye, or both, or doesn’t care about the conflict, but that isn’t relevant to the intensity of the marchers.

            Although now that I think about it, the presence of Nye does suggest that the whole thing is a lark.

          • Marshayne Lonehand says:

            As the Arctic sea-ice melt gets underway, the citizen-scientists at Arctic Sea Ice Forum are documenting 2017 as yet another year of record-setting declines in sea-ice volume.

            Whatever Bayesian probability one assigns to the proposition “anthropogenic climate change is real, serious, and accelerating” … the citizen-scientists at Arctic Sea Ice Forum are providing evidence that substantially increases that probability.

            Meanwhile the alt.SCC is focusing their indignant cognition chiefly upon … juvenile mockery of Bill Nye the Science Guy? What’s the rational point (if any) of that particular alt.obsession, the world wonders?

          • Nornagest says:

            Is there a Mad Libs template for these posts somewhere, or something? I could swear I’ve read this before.

          • CatCube says:

            As far as I can tell, he keeps trying to invoke an object destructor.

          • Marshayne Lonehand says:

            Nornagest says  “I could swear I’ve read this before [Arctic sea-ice news].”

            Close, yet rationally speaking, Arctic sea-ice loss is (obviously) not a repetitive annual gyre, but rather a cumulative climatological “death spiral”.

            This is the scientific reason why unhappy Arctic climate-news will almost certainly persist throughout coming decades, centuries, and millennia — and even increase — isn’t that so? 🙁

            What’s similarly repetitive — yet far less easily understood than climate-change — is alt.denialism’s obsessive antipathy toward bow-tie wearing science-advocates.

            Whence this persistent — yet rationally inexplicable (hmmm … or is it?) — alt.antipathy toward climate-science advocacy, the world wonders?

          • Nornagest says:

            Okay, now I’m sure you’re trolling.

          • Marshayne Lonehand says:

            The Onion nails it:

            Nation’s Climatologists Exhibiting Strange Behavior

            “Well, I hate to see them [climatologists] all agitated, but these old professors sure do look funny waving their skinny little weak arms and pushing their glasses back up! And what is that chattering noise they’re making?”

            What are the objective and substantive differences (if any) between The Onion’s skillful parodies of willfully ignorant, abusively personalizing, consistently anti-scientific, denialist cognition, and the alt.SSCs exemplars of it — for example the above reflexively alt.paradigmatic Bill Nye/the Science Guy alt.mockery?

          • J Mann says:

            Maybe we can start a discussion on one of the open threads about Bill Nye mockery, which I think is easily defensible. Short version:

            – My Facebook feed is full of friends who gleefully repost stuff about how much they Fucking Love Science, including all kinds of Bill Nye for president garbage. If your idea of a scientific hero is a stand up comedian who has the word “science” in his show name, you’re well on the way to being a cargo cultist.

            – I’m mostly mad at Nye because of his intervention into deflategate, which made everyone who takes him seriously a little bit dumber. If he thinks science needs a mascot, he should hire a couple decent scientists to tell him what to say on an issue before he opines.

          • Marshayne Lonehand says:

            Hmmmm … perhaps alt.SSC comments should include disclaimers? For example:

            “The following comment is not rationally relevant to any scientific evidence or concern, but rather expresses alt.SSC irritation in respect to Bill Nye’s personage.”

            This would be helpful! 🙂

          • J Mann says:

            @Marshayne Lonehand

            If I say Bill Nye annoys me, I’m not sure why I have to disclaim a position on climate science. That’s baggage you brought to the discussion. The fact that Nye is a carbon rationing advocate doesn’t mean that I’m not entitled to find him annoying.

            Besides, I specifically referenced Deflategate, which is (1) a scientific criticism, and (2) not climate, and therefore a clue of one of the things I find annoying about the guy.

            He’s not good at science, he just wears a lab coat. If Kerry Alexander or William Connolley were the spokesman for climate change, I’d be a lot happier.

        • J Mann says:

          ETA: I meant “less urgent to the marchers”, but left that implied instead of stated clearly. On reflection, my post was capable of being interpreted both as I intended and as Winter Shaker read it. I apologize for the imprecision.

          you are at least aware how ‘arson, murder and jaywalking’-ish that sounds to a lot of people?

          I was trying to convey that those were issues that the marchers viewed with a particular level of intensity, and that my perception was that the March for Science marchers who held funny signs probably didn’t see the issue as intensely as anti-Rwandan genocide marchers and anti-abortion marchers see their issues, at least as a matter of revealed preference.

          As to your specific question, I hadn’t specifically thought about whether people who don’t march for at least one of those causes might think they are different, since it wasn’t relevant to my point, but if asked, I know that they obviously do. Are you implying that you think it was offensive to include the two issues in the same set notwithstanding that they share relevant properties?

          Specifically, I think everyone except some genocide apologists (if any) is opposed to the Rwandan genocide, while only some people are strongly opposed to abortion, but as I said, that isn’t directly relevant to a discussion of marcher intensity.

          • Winter Shaker says:

            Don’t worry, I’m not trying to claim it was offensive, just that it was amusing to see what looked like an unintentional real-life equivalent of ‘arson, murder, jaywalking’ in the wild. Your wording made it sound like you personally rated abortion and genocide as obviously universally-recognised-as-comparable evils, that’s all.

  34. episcience says:

    I just wanted to say that The American Interest piece on cost disease was really well-done. I liked it more than the original blog piece; it was punchier and pulled out more points of contemporary relevance.

    How did you like working with an editor? Did the piece feel better to you after the editing process?

    • Scott Alexander says:

      I thought the editor made everything worse and I tried to roll back as many of their changes as possible (which was less than 100%).

      It’s nice to know other people liked it better, makes me think that maybe editors exist for a reason, and helps me recalibrate my thought processes here.

      • sketerpot says:

        Not so fast! My reaction was that almost all of the editor’s changes made it slightly worse, and the subreddit’s reaction could not be called enthusiastic, so we’re probably not the only two people who felt that way.

      • RLM says:

        I wanted to give a detailed comparison of why I think the American Interest article is much worse than the original blog post, so that you don’t recalibrate your thought processes too hard! These are my own personal impressions from reading the article:

        The intro

        Your blog post starts in right away and gets me interested in some mysterious, secret force that’s making everything cost more, and which people aren’t really talking about / know the true extent of. It feels exciting, like you’re about to delve with us into something very important and general, and the chart is perfectly timed to quickly reveal that something is very wrong with education — with the promise that this is just the tip of the iceberg.

        The AI article starts off, first, by informing me that this article is behind a paywall, although they will deign to give me ONE free article a month. Then it proceeds into a meandering discussion about some sort of “paradox” that’s “hard-to-measure”, immediately transitioning into a weird jargon-filled discussion about “the Baumol effect” which I don’t care about or have any context for, and which it dismisses anyway in the next sentence. Then it spends three paragraphs giving a sort of “abstract” for the article. Paragraph 1 focuses mostly on the fact that we won’t be talking about military weapon systems, paragraph 2 brings up the (already dismissed and still jargon) “Baumol effect” and then dismisses the effect again while pointing out that this article will not offer any better alternatives, and paragraph 3 says that we will also briefly talk about politics. Reading the intro does NOT get me excited at all. I imagine I’m about to read some boring treatise about the “Baumol effect” (whatever that is), and then learn about some hard-to-measure (and therefore probably small and unimportant) paradoxes in a few sectors like healthcare. The “Baumol effect” I’m about to learn about will turn out to be unrelated to the paradox I’m about to learn about, and then the article will finish by offering no real explanation followed by a cursory bit about politics. This is a VERY bad intro! It fails to get me excited, wastes time with jargon, and worst of all, it fails to accurately convey the tone and content of the article!

        Primary Education

        Your blog post provides more info in the chart along with lots of hyperlinks to investigate the claim at both a high-level (politifact) and a low-level (the Cato institute’s actual numbers). Buried in paragraph 3 you point out the bit about the 20% improvement in minorities, and it’s clear to me that even though the graph looks pretty darn flat, I could confirm this for myself by checking the numbers.

        The AI article (which has already not done a very good job of signposting), just shows the graph and says in the caption “Source: Cato Institute”. It decides to START with the contradictory bit about the 20% improvement in minority education, whereupon I get very confused because I can’t find anything that looks like a 20% improvement in the graph, and the graph doesn’t seem to even be split by any sort of minority/white distinction. I start to seriously consider one of the following:

        – I am making such an extreme error in comprehending the graph that this article is probably above my head and I should stop reading. (but the graph seems simple enough!)

        – I’m looking at the wrong graph and there’s another education related graph split by race with a 20% improvement somewhere (there isn’t, this also requires a forward scan to the next graph in the article to confirm and is jarring)

        – The author is trying to hide something from me / doesn’t understand their own graph / accidentally picked the wrong graph to show.

        – The graph is an aggregate and doesn’t show any dips because the minority scores get averaged in. (but then why are the minority scores so important that it’s the first thing that’s brought up in the discussion)

        All of these conclusions are confusing and annoying and none of them bode well for the rest of the article! There are no hyperlinks for me to easily follow to resolve my confusion, and the only thing I can do is just shove the whole thing aside and hope it will make sense later.

        The last sentence of the paragraph is “As far as cost disease is concerned, the key point is that most of the increase in school spending per capita took place after 1985, and demonstrably helped neither whites nor minorities.” This is confusing to me because I thought we were talking about some economic paradox and not things having to do with race. The blog post properly treats this as just a bit of minutiae, which you can follow-up on if you want to, while at this point in the AI article I’ve got three concepts bouncing around in my head: some economic paradox which is too small to measure well, the “Baumol effect” which I still don’t know anything about other than that it’s probably irrelevant to the discussion, and now something about how it’s very important that minorities had a 20% improvement while whites did not, although the graph seems to contradict the statement. At this point I would normally abandon reading the article unless there was a very compelling reason to continue reading.

        In the blog post, this graph was a glorious harbinger of dread, revealing the first symptoms of the mysterious “cost disease” and leading us to understand that we’re paying a lot more for education and getting nothing in return! I’m led to feeling personally cheated, wondering what’s going on, and dreading that this problem might not just be limited to education…

        The last paragraph of the education section in the AI article starts “In that light, imagine a choice set before a poor person—white, black, or any other demographic. Would you prefer to send your child to a 2016 school, or to send them to a 1975 school and get a check for $5,000 every year?”

        The blog entry reads: “So, imagine you’re a poor person. White, minority, whatever. Which would you prefer? Sending your child to a 2016 school? Or sending your child to a 1975 school, and getting a check for $5,000 every year?”

        The AI article uses distancing language, talking about “a choice set before a poor person”. And it still starts out focusing on race like that’s somehow important! In contrast, the blog entry really puts us in the shoes of someone having to make this decision. When I read it, it feels like I’m the one who just got cheated out of a $5000 check for no reason, and gets me to actually think about the injustice of such a profound missed opportunity.

      • RLM says:

        College education

        Again, the AI article strips out the excellent hyperlink references, leaving me powerless to confirm any of the numbers. The other changes in this part include changing from:

        “I don’t know if there’s an equivalent of “test scores” measuring how well colleges perform, so just use your best judgment.”

        which is a great rhetorical device to get me thinking about how I personally judge the monetary value of college, to:

        “There is no equivalent of “test scores” to measure how well colleges perform, despite some recent efforts to create reliable metrics.”

        which just asserts to me that I can’t really judge how colleges perform. The blog post gets me in the right frame of mind to start thinking about the actual value I should be getting from extra money spent on college, the AI article just makes me want to pedantically argue against the assertion that there’s no way to measure college successes, and brings me away from the flow of the argument.

        The blog post finishes this section with a delightful first-person perspective that you share, comparing your college experience with your parents’. I get the warm feeling of imagining your parents talking to you about their college hi-jinks “back in their day” and sharing their stories with you. Then, you reflect on your own experience, thinking “wait a minute, I think I just got taken for a ride for my $72,000!” This makes the argument personal and relatable, and makes me sympathize with you and think the same things myself.

        The AI article, in contrast, keeps the first-person perspective but loses the charm. “As far as you can see,” your parents had a similar experience in college as yourself. I don’t imagine you actually talking with your parents in this case, I imagine you just bringing them up as a rhetorical prop for the article. The addition of “standard-issue angst” to the list of college experiences and the shallowness of the reference to your parents subtly alter the interpretation of the final line in my mind: instead of sympathizing with you, my first reaction is to think, “this guy is just complaining and probably doesn’t even actually know how college really was back in the day, he’s just guessing.”

        Health Care

        Again the lack of hyperlinks and other in-line references in the AI article is annoying. At this point the simplicity of the charts is starting to make me wonder if they’re really trustworthy or not. They feel like they could have been crudely mocked up in MS Paint to illustrate some non-rigorous data from the sources. It’s not easy for me to get at the sources, so who knows? In contrast, the blog has very engaging graphs, including the life-expectancy graph, which adds a bit of morbid humor to the train of discussion, breaking up what could otherwise be a tedious continuation of the last two sections. The AI article has no such humor, and I find myself scrolling through the health care section, still wondering what the point is and whether we’ll ever get to the “Baumol effect” I heard so much about in the intro.

        In the blog post, I felt that this health care section had the most punch out of all the sections, because I could tell that you personally live through these effects since you’re a doctor. In the AI article, I don’t get the impression that you’re a medical doctor who has personally seen his friends / himself suffer through some of these cost increases, because many of the grounding references have been stripped away. For example, when I first read your blog article, the single most powerful visual I took away from it was when you described how “…even when I was young in the ’80s my father would still go to the houses of difficult patients who were too sick to come to his office”. With just this one sentence, you lay bare the core human tragedy that cost disease really is: even with all of our supposedly time-saving technology, the son cannot carry on his father’s tradition of making house calls. We laugh at the idea of house calls today, but then think, “wait, why can’t Scott do house calls just like his father?”. And then it hits us that this highly relatable personal tragedy, in aggregate, is an economic cancer that is eating away our entire society’s ability to take care of itself, and no one knows why. In the AI article, we get: “Doctors used to make house calls.”

        When you talk about ACE inhibitors in your final game of “choose between our modern system / the old system and a large check,” it’s with a sense of authority, and it allows me to really clearly visualize the idea of a 60’s-80’s flavored hospital with modern drugs and vastly reduced prices. The typesetting is great too: you introduce the choice with an indented list which offers a sharp binary choice, and the choice feels like an indictment of the world we’ve somehow built vs the one we could have if we could only solve this cost disease problem. The AI article introduces this same choice in-line with: “That said, we can ask the standard-form question we have used before… “. Weak!

        Conclusion

        I’m not going to treat the other sections in too much detail. The general problems with in-line hyperlinks / graphs continue, and discussion of the “Baumol effect” is buried in a single paragraph and then dismissed. The language continues to be stilted, talking about things like “prestige dampers” while the blog article used more relatable language.

        In the blog you clearly paint a picture of what we could have, by describing Keynes’ 15 hour work week, your own personal ideal of a hospital like the one your father worked at, and affordable college like your parents had. You bring up the option of “having things be as efficient as they were a few decades ago” and point out that it would be the single greatest poverty elimination program in American history, and it really hits that something is going very wrong here! The AI article is missing these references and generally does a poor job of getting us to visualize just how much better the world would be if cost disease could be cured, and how much we’re getting screwed now.

        Every article starts with a promise for what it’s going to teach you, and should be judged on how well it delivers. In the blog, you open by implicitly promising to show us something scary and concerning, and then you immediately deliver over and over again, sprinkling personal anecdotes and humor and ending with a fourth-wall-breaking, rhetorically brilliant appeal to your readership stating that you’re scared and really want to know what’s going on. I left thinking that I had received a very valuable picture of a powerful economic enemy, as well as a new way to conceptualize left/right economic disagreements. The bold, all-caps statement that “ALL THE MOST IMPORTANT THINGS COST TEN TIMES AS MUCH AS THEY USED TO FOR NO REASON, PLUS THEY SEEM TO BE GOING DOWN IN QUALITY, AND NOBODY KNOWS WHY”, really drives home the point that there’s a more compelling dimension than the tired old left/right divide. Basically, you promised something compelling and then delivered very effectively and then some.

        The AI article, in contrast, starts off talking about jargon, devalues its own later political discussion at the end, and describes the core problem we’re going to talk about as a “difficult-to-measure paradox”. It meanders for a while with neither humor nor personal connection, and then peters out at the end with endless qualifications like “The same is true to greater or lesser degrees”, “Not everybody understands all of this”, and refusing to refer to the left or right but instead only using weak terms like “Some get upset about teachers’ unions” and “Some promote free universal college education”. It ends saying that the “future of American politics may not get much better from here”, offering no real solutions, sense of urgency, or calls to action.

        I’m writing this as a counterbalance to the idea that editors always make things better; I consider the original blog post to be a masterful piece of rhetoric which I’m happy to share with my friends, but I would generally not share the AI article with anyone because of its many flaws.

      • Ralf says:

        I want to back up sketerpot’s and RLM’s (excellent) posts. The subreddit was also rather irritated about the tone of the followup:

        https://www.reddit.com/r/slatestarcodex/comments/67j0j3/notes_on_notes_on_cost_disease/dgqxzf7/

        I guess you two just didn’t mesh together.

  35. ss4johnny says:

    From the CDC paper:
    “Although the reasons for the gap in life expectancy at birth between the United States and comparable countries are complex, a substantial portion of this gap reflects just 3 causes of injury.”
    This seems consistent with the other paper…

  36. Brad says:

    Matthew Yglesias changes my mind and convinces me that Obama accepting a $400,000 Wall Street speaking fee is bad. Basic argument: as long as corporations can offer politicians lucrative deals after they retire, they can reward pro-corporate decisions with plausible deniability, which incentivizes politicians to be pro-corporate. If you’re anti-corporate, this is directly bad; if you’re pro-corporate, this makes it impossible to convince people that you’re really making well-considered decisions in their best interests and not just being corrupt.

    If we are going to elect people in their 40s to be President, what do we expect them to do with the rest of their lives?

    • Scott Alexander says:

      If people can’t think of anything to do after age 40 other than give speeches to Wall Street for money, I support euthanizing the elderly.

      Presidents get a nice pension. They make hundreds of millions off book deals. I’m okay with them taking it easy for the rest of their lives, building philanthropic foundations, engaging in activism, or whatever.

      Heck, George W Bush is having fun painting nice pictures of the people whose deaths he’s responsible for, maybe we should make it a rule that everyone has to do that.

      • Brad says:

        If we take the foundation suggestion specifically, is it okay if they solicit Wall Street banks and bankers for donations to it?

      • meh says:

        That is just as bad: presidents will get rewarded for pro-publishing decisions with plausible deniability.

        The President gets a $203,700-a-year pension (from Wikipedia), which is of course a lot of money, yet still probably the smallest income of anyone in his/her circle. This president is also staying in the District, which has the 3rd-highest cost of living among US cities.

        • Jiro says:

          Corruption in the publishing business is a lot more innocuous than corruption in all industries.

          • Jaskologist says:

            It’s easy enough to launder corruption in any given industry through a non-corrupt one. That’s the beauty of money.

          • meh says:

            I think I was joking. It’s just easy to say there is a problem, but hard to find a solution. And this doesn’t help:

            “If people can’t think of anything to do after age 40 other than give speeches to Wall Street for money, I support euthanizing the elderly.”

      • cassander says:

        Just pointing out, Bill Clinton got $15 million to do his biography, which at the time was the largest advance in history. I have no idea how much he made in total, but the book sold around a million copies, so at 30 bucks each the book only made $60 million. $15 million is a fortune, but after taxes and buying a house, it’s not all that much money when you spend all day hobnobbing with the elite of the elite.

        The point of this is not to nitpick, but to point out something important: the problem of wealth/status mismatch. I’m not defending the practice of taking outrageous speaking fees, but it’s an entirely predictable thing to do when your status vastly exceeds your wealth. We should pay our leaders more, not because they deserve it (they manifestly do not), but because if we pay them more they’ll be more expensive to bribe. We should shower them with money while in office, then heavily restrict how much they can take in after they leave office.

        • hls2003 says:

          It’s also important not to be fooled by the “$200K pension” claims. Sure, that’s chump change in the circles where ex-Presidents are expected to move. But let’s not pretend that the $200K is the only direct benefit to Bill Clinton et al. For example, what does it cost to have round-the-clock Secret Service protection? How rich do you have to be to be able to legally move, inconvenience, or detain people with your bodyguards if they look vaguely threatening? How much money would it take to buy a rich non-entity a free lifetime pass to any event they wanted, and a private experience of it if they prefer? In that sense, we pay him $200K in money but millions in services, prestige, and status.

          • cassander says:

            This is true, but because it’s not fungible it’s the worst of both worlds. We get stuck with a huge bill for protection, but it doesn’t leave the protected person feeling that much richer. It effectively is just more status, and the problem is status/wealth mismatch. I’d rather we gave them as much money as the protection costs and then let them buy as much security as they wanted.

          • suntzuanime says:

            I’d rather presidential decisionmaking not be affected by fear of assassination as much as possible. The nation has a selfish interest in providing security to ex-presidents, it’s not a pure perk-in-lieu-of-cash.

          • hls2003 says:

            @cassander: The security is just one example. I guess my point is that, to some extent, the millions in speaking fees and all the monetary foundation /library / whatever donations are almost (maybe not quite) preferable because they’re at least trackable. But it seems many of the real benefits are release from most anxieties that drive the average person. No college will ever turn down your daughters (at least for a Democratic ex-President). Your kids will never have trouble finding a job. You can do whatever interests you and someone will be happy to have you on board. I mean, if I were an ex-President, I might want to work with cheetah conservation efforts where I could work with the animals; my wife would probably want to find a bear sanctuary where she could feed them whipped cream. Or if you like science, and you want to get super into a field, you’ll get priority. Or if you want to travel, you’ll get free trips to conferences and events worldwide. Any golf course will beg you to play. Whatever it may be – work, kids, hobbies, interests – you have carte blanche. That sort of status and prestige is much tougher to value and to track than a simple count of how many dollars the latest Wall Street firm wired to your account.

          • cassander says:

            @suntzuanime

            I’m not sure there’s a measurable effect there, but if you give them money, they can buy all the security they want. And who assassinates an ex-president anyway?

            @hls2003

            I agree that being president gives you immense status. That’s unavoidable. What I want is to make sure it also gives you immense wealth, then make it illegal for anyone else to try to rent your status for money.

          • suntzuanime says:

            We’re talking post-presidential security here, not in-office security.

            Yes, but if you claim that one’s presidential decisionmaking may be swayed by the prospect of a post-presidential bribe, surely also you must concede that one’s presidential decisionmaking may be swayed by the prospect of a post-presidential assassination.

            EDIT: The above post was edited to change the argument made after I had already posted a response. I will note that Saddam Hussein attempted to assassinate Bush after he had left office, so it can in fact happen.

          • cassander says:

            @suntzuanime says:

            Yes, but if you claim that one’s presidential decisionmaking may be swayed by the prospect of a post-presidential bribe, surely also you must concede that one’s presidential decisionmaking may be swayed by the prospect of a post-presidential assassination.

            The post-presidential bribe is very certain to happen; assassination, very unlikely. People respond at least as much to the likelihood of reward/punishment as to the magnitude.

            EDIT: The above post was edited to change the argument made after I had already posted a response. I will note that Saddam Hussein attempted to assassinate Bush after he had left office, so it can in fact happen.

            Apologies for that, I misread your comment and changed my initial answer almost immediately.

          • Or if you want to travel, you’ll get free trips to conferences and events worldwide.

            You can get that with a lot less status than being an ex-president. I pretty routinely get invitations to go somewhere and give a talk at someone else’s expense. So do a lot of other people.

            The rest of your list, on the other hand …

        • Gobbobobble says:

          Isn’t that what Singapore does?

        • Ratte says:

          My Life actually sold around 2.25 million copies, but IIRC the publisher only gets ~30% of the sale price of a book, the rest going to the retailer and distribution chain, printing, royalties, etc. – and that’s setting aside discounts, promo copies, and suchlike. It’s very possible that Knopf lost money on the Clinton books, or at least made marginal returns.

          This wouldn’t really be unexpected, either. I know Reagan’s books didn’t even cover the advance from S&S.

    • dndnrsn says:

      Sent to Presidents’ Island, obviously.

  37. j1000000 says:

    Re: Sumner, not sure what the definition of “unskilled” is, but I assure you that no one in Dallas is just begging someone to grab a shovel and dig for $100k a year. Plus, Sumner now has a brief update hidden within his post that basically admits his numbers are wrong. Commenters on the article suggest the real wage might be more like $45k.

    It’s not exclusively an issue of hard work. A lot of people in this generation don’t even know the very basics of construction — tools, problem solving, etc. — and that makes it hard to even start your way up the ladder when no one wants to teach you on the job anymore. Part of the problem might be that products over the past 30 years aren’t built to be repaired, they’re built to be replaced, but maybe I’m making excuses.

    • Deiseach says:

      Or, more accurately, this generation doesn’t know the hundreds of small, peripheral things that pop up when you do construction: how to use tools other than a hammer, how to improvise solutions with tools at hand, etc.

      Which is why apprenticeships are necessary, and why “unskilled” labour isn’t really; it’s only “unskilled” if you mean “how hard can it be to handle a shovel?” Very hard, if you don’t know the first thing about it. Formerly, “unskilled” manual workers would have some basic knowledge picked up from learning from their fathers and doing odd jobs around home and for neighbours and could quickly pick up the rest on the job. Now that background knowledge is missing.

      There has been discussion on here before about credentialism and how silly it is to require someone to have a certificate before they can start hairbraiding, but there really is a need for proof of a basic qualification that you know how to do the tasks because the practical knowledge can no longer be taken for granted. (At work, we’ve had tiles come off the bathroom wall because the person tiling didn’t put the adhesive on correctly).

      • j1000000 says:

        (Sorry, I had already rephrased the part you quoted because I decided my comment was too long.)

    • sohois says:

      Do you think that such skills are somehow exclusive to certain generations though? Presumably all those little things that construction workers know how to do are built up by years of experience and training rather than just being imbued into young 1960s job seekers. There is presumably an argument that the construction seeking youth of earlier ages had already built up some skill through parental guidance or doing things in their home life, but how long would it really take to train that into a young worker today? Is it an impossible barrier?

      The issue does not appear to be with the quality of the applicants in any case. It appears that they simply cannot attract sufficient applicants in the first place, not that the applicants just aren’t good enough.

      I would posit 2 additional explanations that Sumner does not raise: 1) that the lack of ‘millennial’ applicants arises from a fairly rational long-term evaluation of the prospects of working construction. Yes, construction work pays good wages initially and there is the possibility of quite high wages with advancement, but increases in pay are not the only thing that motivates people to move up. In a more typical office-based career, the nature of the role will change a lot as well. As you move up into management, the commonly held view is that everything gets easier, as you dump your more boring or time-consuming tasks onto subordinates whilst you sit in a comfortable corner office. Contrast that with a senior builder, who will probably still be busting their arse for 40 hours a week even with heavy seniority. Not only that, but on the pay scale as well, office work will appear considerably more unbounded; you can always imagine yourself rising to a C-level exec with a high 6-figure salary.

      Secondly, and related to the point about the ease of the role, even low-level office positions of today appear to have far, far higher hourly rates than they would seem at first glance, and far more “leisure” time. Any construction worker will be guaranteed to work 40 hours or more every week. If you slack off on a construction site, messing around on facebook, people are going to see and call you out on it. On the other hand, in my current, fairly low-level role, I am actually working at best ten hours a week out of 40 hours at work. The rest of the time can be spent reading, or posting on SSC, or doing some other time-waster on the internet or phone. It’s not perfectly free time but it’s a far sight better than filling it with physical labour. I could probably be making a lot more money right now if I had joined an apprenticeship out of high school instead of university, and I was at the time well aware of the hefty salaries being reported for plumbers or electricians, but per hour of actual work I think I’m doing a hell of a lot better in my current position.

      • Deiseach says:

        a fairly rational long term evaluation of prospects working construction

        Oh yeah. When there’s a boom, it’s a great job and you can make serious money. When there’s a slump, you’re in trouble. Our Celtic Tiger years had a huge property bubble, and the boost that gave to the construction industry drove part of our economic prosperity. When the bubble burst, all those jobs went with it, and several property developers and construction firms went bust, and people either went back home to Poland (we had immigration during the good times! people were coming to Ireland to work on the building sites!) or the natives tried going to Australia and Canada for work there just like the old days before the Tiger.

    • AnthonyC says:

      A lot of people in this generation don’t even know the very basics of construction — tools, problem solving, etc. — and that makes it hard to even start your way up the ladder when no one wants to teach you on the job anymore.

      I’m in that group. No one would ever want to see me on a construction site for exactly that reason.

      I also wonder how important the point about regional labor shortages is, because I’ve never really seen it explored before. If you’re married/in a long term relationship, you’re likely dealing with two earners, and it’s really hard to move to a new state, or even a new town. You have to find two jobs, in the same area, at the same time, in fields both partners are willing and able to enter. Maybe Dallas is hot right now, and five years from now SF will change zoning laws and need lots of new housing, and 5 years after that there’s a boom in Chicago or something. If you’re married to a school teacher whose pay is tied to seniority at a single institution, you’re not chasing those booms.

  38. Deiseach says:

    From that A’Lee Frost piece:

    The SSC grew into the largest, most prominent leftist summit in the United States

    Coincidence? But nothing is ever a coincidence! 🙂

    So according to Overcoming Bias, people are only interested in religion because it gives them a chance to show off? Ah yes, of course, that is why I liked mythology as a child – so I could show off to the approximately zero (0) people in my family and amongst my schoolmates who were interested in my weird interests. About the only “showing off” about how much I know about religion I got to do was six years ago, explaining the Rosary to a blog of interested (I hope) Protestants.

  39. JulieK says:

    “It turned out that the author [of “Notes on Cost Disease”] is not, as many of his blogosphere followers assumed, an economist or a social scientist of another flavor, but a psychiatrist practicing “somewhere in the Midwest.””

    Who knew? Good thing the editor enlightened us!

    • eyeballfrog says:

      I’m pretty sure we represent a tiny fraction of Scott’s “blogosphere” followers.

  40. bean says:

    I looked pretty carefully into the STEM surplus thing a while ago, and am pretty certain that it’s at the very least vastly overstated. Yes, 50% of STEM degree-holders have non-STEM jobs, but that depends very heavily on the classification of STEM jobs. For instance, something like a third of math majors are in teaching, and I’m pretty sure that most of them planned that from the start, instead of going there when they couldn’t get a job with the NSA or as an actuary. But teaching is classified as ‘non-STEM’, so they’re part of that 50%. A similar proportion of science majors are in healthcare, also non-STEM, and I again will posit that most of them are there by choice, and not due to inability to get jobs in STEM-classified fields. The largest non-STEM category for engineering majors is management. I won’t claim that this is intended in quite the same way that teaching and healthcare are, but it also doesn’t seem quite like a category people are likely to be forced into due to inability to get a job in a STEM field.
    (All of this ignores the fact that STEM workers aren’t totally fungible. The unemployment rate among aerospace engineers was probably very high in the 70s, but the people in question couldn’t suddenly become biologists.)
    Numbers are here.

    • Besserwisser says:

      How rigid are the categorizations? Health informatics is an actual field, and I wouldn’t consider a CS graduate who mostly works in front of a computer while dealing with patient data as having an inadequate job for his qualifications.

    • bean says:

      I found an interesting article from BLS that provides data for there being a crisis in certain areas (most notably in areas that require US citizenship), while there is massive oversupply in others.

      • Jiro says:

        That sounds like a corollary of “hiring foreigners is cheaper, and the real problem with the ‘shortage’ is that employers don’t want to pay what people are worth.”

        • bean says:

          Not exactly. The US citizenship requirements have to do with defense-related projects, and a lot of our PhDs are foreign-born. (I should have stated that, but forgot that most people don’t marinate in the defense world). It’s not a matter of competition so much as supply, particularly as some fraction of US citizen STEM people have moral qualms about defense work. (I don’t know what this fraction is, but I’ve had a couple of friends tell me that they wouldn’t be willing to do it.)

  41. loki-zen says:

    Some past studies that I took somewhat seriously suggested that antidepressant use during the first trimester of pregnancy could slightly raise autism risk. The latest very large study fails to replicate this result and finds only a slightly increased risk of preterm birth.

    But doesn’t preterm birth raise autism risk? So could this just be the mechanism by which it raises autism risk?

  42. John Schilling says:

    …a flexible, living, bendable law will always tend to be bent in the direction of the powerful

    “I am altering the Constitution. Pray I don’t alter it any further.”

  43. ConnGator says:

    As a seasoned software developer I get emails every week saying there are 30,000 open development positions in the greater Raleigh area. Either there is a shortage of tech workers or the Internet is not telling me the truth.

    • Scott Alexander says:

      I get ads all the time saying there are lots of sexy singles in my area desperate for sex. Maybe we should send them all to Raleigh to work in software.

      • Deiseach says:

        I get ads all the time telling me how I can make $100,000 a year working from home, and I wouldn’t even have to move to Raleigh!

      • Janet says:

        I get ads all the time offering to enhance organs that I don’t even have. Maybe we could clarify that biology does, indeed, fall within the bounds of STEM education? And then send them all to Raleigh, once we’re sure they’ll know what to do with it when they get there? (I’m having a horrible mental image about what answer they’d give to the “FizzBuzz” test, right now.)

    • Zodiac says:

      Are you saying you get emails about 30,000 open positions, or is this from some job portal that claims to have that many listings?
      At least in Germany it has become common practice to leave job postings up constantly and just ignore the applications that come in.

      • ConnGator says:

        Actual LinkedIn jobs. I’m sure they have an incentive to overstate the number of jobs, but in talking to other tech folks it does seem that local employers are all hiring.

        But, relevant to other posts on this, most seem to have pretty specific skill requirements. The average recent college grad would not be qualified for 95% of them.

        • Jiro says:

          “Pretty specific skill requirements” is often some combination of:
          — The employer wants to hire a particular H1-B candidate at a lower salary, but is required to prove he tried to hire Americans first, so he deliberately adds requirements so as to prevent that, and
          — some employers and/or human resources people are clueless about tech jobs and write down more requirements than they actually need, or than may even be possible (3 years experience in each of 10 fields, 5 years experience with a product that hasn’t been on the market for 5 years, etc.)

          • John Schilling says:

            This may be true in Tech; it is definitely not true in Aerospace and Defense. H1-B candidates aren’t a big deal because too many programs are for US citizens only and even if you’re hiring for e.g. commercial airliner development today, your first choice is going to be someone you can have designing a bomber tomorrow if need be. And there isn’t a short list of buzzwords and shiny new tools that an HR person can imagine they understand well enough to write specifications for. The technical managers with the job that needs filling write that part. Also, almost everything we do in this business is done with technologies that are at least a decade old, because those are the only ones we can trust to work every time, so no “five years experience with a product that hasn’t been on the market for five years” crap.

            And since, in A&D, we genuinely do have trouble finding US citizens with high-level expertise in the well-established technologies that we really are interested in, I suspect that this may be true in Tech as well.

          • The Nybbler says:

            Defense, at least the software end, still has the problem of “Job requirements written to exactly the technology we use”, though. “Oh, you have 5 years experience programming on Solaris? That’s too bad, we’re looking for AIX people”.

        • ConnGator says:

          Ok, I looked more carefully at the next LinkedIn email I got and it was 31k jobs total in the Raleigh area. The number of actual software developer jobs is under 300, but total STEM jobs looks to be over 3k. So I think I was somewhat correct about lots of tech jobs being available, but quite wrong about the actual number.

    • Besserwisser says:

      I regularly get emails on my university account about job offers because I took a CS course. This isn’t the case for any other course I took, though admittedly most of those are also STEM-related. My professor in geoinformatics also told me they had trouble finding people to do a Master’s degree, because most students left to find jobs with a BSc and were very happy with that. No other professors were that adamant about our job prospects being awesome.

    • The Nybbler says:

      I am sure that somewhere, there is a database where if you ran a query which appeared to find open software postings in Raleigh, it would give a number over 30,000. I would not put a number on how many of those postings were erroneous, phony, duplicates (hint: most of them), or otherwise invalid. Or how many of the rest were offering a ridiculously low salary for the job, or specified an impossible set of requirements (of the “10 years of experience in Windows 10” variety). Or, maybe there isn’t.

  44. ADifferentAnonymous says:

    Am I the only one who found The American Interest editor’s commentary to be economically incoherent? I have no idea what cost-pull progress is supposed to be–is the idea that the hype around the innovation drives irrational overconsumption? His Baumol idea–that the number of workers, rather than wages, had increased–invalidates the mechanism by which the original effect operates, and he doesn’t seem to realize it. And saying that companies are too big is all very well, but without even speculating as to why, it’s not that interesting.

    • pdbarnlsey says:

      Yeah, speaking as an economist, it wasn’t great. “Perhaps it’s just that more teachers are being used to teach the same number of students to the same level” is really just rephrasing the question.

      As for “there might be a new thing, and everyone wants it! But they can’t all get it, because… reasons, and then price goes up!”, that felt like something which wouldn’t have survived someone sitting the author down with a supply and demand diagram.

      There might be something to internal transaction costs, though, again, that’s really just re-specifying the thing you’re trying to explain. If big organisations are less productive, why are more things being produced by big organisations? And then you’re back to a lot of Scott’s points.

  45. daniel says:

    Does accepting a speaking fee send any signal? If a corporation offers lucrative deals to presidents in a way that future candidates hear about, it should suffice to create the same problem regardless of whether anyone takes them up on the offer.
    It would take many presidents refusing to make it seem pointless, and even then the corporation in question loses nothing by making the offer.

    • Scott Alexander says:

      I agree it would take many presidents, but it does seem like one could make a cultural norm against this sort of thing which is so strong that nobody needs to consider it.

    • IrishDude says:

      It would take many presidents refusing to make it seem pointless

      A president precedent.

  46. John Schilling says:

    [Freddie deBoer says] there is no shortage of qualified STEM workers … Curious what all of the tech workers here think.

    I think we need to be careful equating “STEM” with “tech”. “Tech”, in contemporary usage, seems to be specifically consumer electronics and software, which is a unique corner of the STEM landscape with a distinct culture that you don’t see in e.g. engineering. And yes, this causes problems when e.g. Elon Musk decides to apply a tech-style approach to building rockets.

    On the engineering side, there is an absolute shortage of veteran engineers with certain sorts of specific expertise. And a surplus of veteran engineers with other sorts of specific expertise that unfortunately isn’t in demand. But the last time I posted a single opening for a job that only required a college degree (MS, or a BS and a few years’ relevant experience), I got over two hundred resumes that had the required credentials and at least twenty that were worth talking to.

    There have been articles bemoaning the (usually impending) shortage of aerospace engineers; reading between the lines I see these as a mix of people who really need specific expertise and don’t have time to develop it in-house because of the way government contracting works, and people who just don’t want to pay the market rate for engineering talent and/or credentials.

    • Scott Alexander says:

      “And yes, this causes problems when e.g. Elon Musk decides to apply a tech-style approach to building rockets.”

      Can you explain this? Elon Musk’s rocket-building approach hasn’t seemed problematic to me.

      • bean says:

        SpaceX consistently takes twice as long to do anything as Musk says it will, and they’re notorious for burning out engineers.

      • John Schilling says:

        1. SpaceX has blown up two out of thirty-four Falcon 9 rockets. One of them in a pad test that didn’t need to have the customer’s very expensive payload on top but did because it would have saved SpaceX two whole days (IIRC) on rolling out the deliverable. There are very few industries in which a 5% catastrophic failure rate is acceptable in a commercial product; “tech” can sometimes get away with it because their failures are never truly catastrophic (well, almost never).

        SpaceX’s main domestic competitor, ULA, has launched one hundred six of its Atlas V and Delta IV launch vehicles without losing a payload. Most of my company’s business is in providing technical oversight to people launching satellites for the government to minimize the probability of catastrophic failure; when we show up at SpaceX, the response is usually along the lines of “we appreciate your technical expertise, but you’re cramping our style with all these rules and procedures”.

        2. SpaceX is burning through talent almost as fast as it burns through rockets. Expecting people to work sixty to eighty hours a week every week for modest pay plus stock options is not sustainable. It can work in an industry where anyone over forty is either a millionaire entrepreneur who hires people to do the technical work, or a geriatric has-been who needs to go away and stop embarrassing everyone with his presence. In the rest of the STEM world, it’s the forty-year-olds who have the experience you need to keep your rockets from exploding 5% of the time.

        The smart ones escape from SpaceX while they are still in their twenties and early thirties, and come to work for someone like, well, me. The ones who stick with SpaceX until true burnout aren’t going to be much good to anyone.

        3. It may be that there is a path from where SpaceX is to a reliable low-cost space transportation service. That path is almost certainly going to require largely abandoning the tech ideal of caffeinated youthful enthusiasm Just Doing It, in favor of a much larger dose of engineering discipline.

        • Incurian says:

          SpaceX has blown up two out of thirty-four Falcon 9 rockets.

          SpaceX’s main domestic competitor, ULA, has launched one hundred six of its Atlas V and Delta IV launch vehicles without losing a payload.

          Is this a fair comparison? New rockets engineered in-house versus decades old proven technology?

          • John Schilling says:

            ULA’s first thirty-four Atlas V rockets had zero explosions and their first thirty-four Delta IV rockets had zero explosions. And, notwithstanding the roman numerals in their names, these were substantially new designs by a new manufacturing consortium. But they were new designs implemented with engineering best practices that SpaceX knows about and chooses not to adopt.

            Pre-ULA Boeing did use the Delta III to beta-test what would become the Delta IV upper stage; that did have two explosions, but IIRC was explicitly advertised as a developmental vehicle with payload space offered at a discount due to the added risk.
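            A quick binomial sanity check can put numbers on Incurian’s fairness question. This is illustrative only: it assumes each launch fails independently with a fixed probability, and the function name below is mine, not anything from the thread. Taking the ~2% industry-wide failure rate John Schilling cites further down:

```python
from math import comb

def prob_at_least(k, n, p):
    """P(at least k failures in n independent launches, each failing with prob p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# At a true 2% per-launch failure rate, SpaceX's 2-failures-in-34 record
# would not be wildly surprising on its own...
print(round(prob_at_least(2, 34, 0.02), 3))   # ~0.148

# ...but ULA's clean 106-launch streak would be: at a 2% rate, at least one
# failure in 106 launches happens almost 9 times out of 10.
print(round(prob_at_least(1, 106, 0.02), 3))  # ~0.883
```

            So the comparison is suggestive rather than conclusive: 34 launches is a small sample, but a 106-for-106 streak is genuinely hard to achieve by luck at typical industry reliability.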

          • gbdub says:

            Two explosions also leaves out a 3rd lost payload due to an engine failure during boost (it was a secondary payload stranded in a low orbit that re-entered soon after launch), plus the failure of the first 3 Falcon 1s (and I think another that they crumpled in a defueling test?).

            It’s not crazy to lose a couple rockets early on, but I think the SpaceX style contributed to a lot of them – the Falcon 1 failures all seemed like things that could have been prevented with more industry knowledge / cooperation / best practices (instead of telling John Schilling types to buzz off, old timer). The first Falcon 9 explosion was due to insufficient testing / quality control of parts that probably would have gotten more scrutiny in a traditional manufacturer. The second lost an expensive payload that, as John notes, was only there to save a couple days, and additionally the whole superchilled propellant concept (particularly gambling it all on submerged COPVs) seems like it could have used more testing before going straight to implementation.

            Another thing is that they have been constantly tweaking the vehicle, which leads to faster innovation but adds risk and makes it harder to certify reliability (since they haven’t actually flown “the same rocket” all that many times). Air Force / NASA customers are not as excited to fly on an experiment.

          • John Schilling says:

            To be fair, secondary payloads are generally launched on a “we usually have enough fuel left over after delivering the primary…” basis. And I believe Elon has admitted the Falcon I was basically a developmental vehicle, aimed at building his team’s expertise and credibility at minimum cost per explosion, which is a reasonable approach.

            This, however:

            Another thing is that they have been constantly tweaking the vehicle, which leads to faster innovation but adds risk and makes it harder to certify reliability (since they haven’t actually flown “the same rocket” all that many times).

            is dead on, and bears repetition. In any sort of engineering where failures are not to be tolerated (i.e. just about any sort of engineering except commercial software), you have to freeze the design before going into production. After that, all the clever ideas about how to do it better are put on hold until the next version. This slows the development cycle, but greatly reduces the number of explosions due to incompatible cleverness.

            If you want a parallel program where you do incremental upgrades to an experimental prototype that you can afford to have explode, great, but that’s not what SpaceX is doing. It’s also not what ULA is doing, and maybe they should be, but that’s another criticism.

          • bean says:

            In any sort of engineering where failures are not to be tolerated (i.e. just about any sort of engineering except commercial software), you have to freeze the design before going into production. After that, all the clever ideas about how to do it better are put on hold until the next version. This slows the development cycle, but greatly reduces the number of explosions due to incompatible cleverness.

            Or, for stuff that doesn’t really have versions (rockets and airplanes) you have to throw absurd amounts of effort at making sure that the cleverness is compatible. And you can’t really afford versions in that environment because the build lag is so long.

          • Incurian says:

            And, notwithstanding the roman numerals in their names, these were substantially new designs by a new manufacturing consortium.

            Thank you, I stand corrected!

          • John Schilling says:

            Or, for stuff that doesn’t really have versions (rockets and airplanes) you have to throw absurd amounts of effort at making sure that the cleverness is compatible.

            The less black-and-white way to frame this is that there is an axis defined by how hard it is to recover from failure, and this defines the culture for doing a class of thing. Within STEM, “Tech” is way out on one end of that axis where, in development, the last stable build is only a few keystrokes away and, in the market, the customer will accept ridiculously buggy software so long as known bugs get patched before too long. So the overworked caffeinated youngsters trying everything until they get a combination that works may be the fastest path to success.

            Civil engineering may mark the other extreme. Bridges, dams, etc, those really really need to work the first time, and there may be no going back if they don’t. Aerospace is pretty close; there’s some tolerance for crashing experimental airplanes and blowing up rockets, but those are very expensive failures and it may take years to recover.

            But then there’s the entire world outside of STEM, which doesn’t even have unambiguous definitions of failure. If an artist makes a painting that everybody thinks is ugly, but offers a sufficiently eloquent defense of its Deeper Meaning and/or has sufficient status within the artistic community, praise and large cash payments will be forthcoming. There’s no degree of eloquence or status that can explain away a collapsed bridge, or code that won’t compile.

        • IrishDude says:

          @John Schilling

          SpaceX has blown up two out of thirty-four Falcon 9 rockets.

          SpaceX’s main domestic competitor, ULA, has launched one hundred six of its Atlas V and Delta IV launch vehicles without losing a payload.

          Do you happen to know the pricing difference between ULA and SpaceX? I’d guess SpaceX is much cheaper and the customers of the lower cost option might be willing to tolerate more risk. Otherwise, it seems everyone that wants something in space should use ULA given its reliability.

          • bean says:

            The cost differential is exactly why SpaceX has any business at all. That said, I’m pretty sure it’s possible to do a better job than SpaceX has in learning the lessons of the past without bogging down entirely in the traditional aerospace procedure of ‘think everything through until everyone is so tired of it they’re willing to sign off’.
            (If I was in charge of a large order of satellite launches, SpaceX’s loss rate would probably be acceptable. Build an extra bird out of the savings from the lower launch costs. But that’s not really acceptable if I only have one payload.)

          • John Schilling says:

            SpaceX quotes $62 million for a baseline Falcon 9 launch for commercial customers; their price for government customers is about 50% higher in large part because those contracts involve e.g. letting my colleagues wander around their shop telling their people they are doing things wrong and generally making a nuisance of ourselves in the name of making sure the rockets don’t blow up (we hope).

            ULA markets almost exclusively to the government, in bulk contracts that average $225 million per launch but that includes some number of Delta IV Heavies (really three Falcon-class rockets strapped together) and Delta IV or Atlas V models with multiple solid rocket boosters attached. And a bulk-buy discount. I’ve seen estimates of $100-160 million for the effective cost of a single bare Delta IV or Atlas V of roughly equivalent capability to a Falcon 9. The Aerospace Corporation mission assurance team is already solidly integrated into ULA’s operations, so that doesn’t cost extra.

            It is very rare for ULA to sell to commercial clients; when it does happen the terms are not publicly released, and it is probably the case that the commercial client’s business plan really, really cannot withstand “sorry we blew up your satellite, here’s your insurance check”.

            Most western commercial satellites are I believe launched on the European Ariane V, which represents an intermediate case – $160 million per launch but can carry two satellites at a time, and blows up roughly 2% of the time. That’s been typical of the industry’s performance the past few decades, though they did let the cost creep up a bit before SpaceX came along.
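            Bean’s “build an extra bird out of the savings” point above can be made concrete with a minimal expected-cost sketch. The function name and the $200M payload value are my assumptions for illustration; the prices and the 2-in-34 failure rate are from the thread:

```python
def expected_cost_per_success(launch_price, failure_rate, payload_value):
    """Average total cost, in $M, to get one payload successfully delivered.

    Each attempt costs launch_price; each failure also destroys a payload
    worth payload_value. Expected launches per success = 1 / (1 - p);
    expected lost payloads per success = p / (1 - p).
    """
    attempts = 1 / (1 - failure_rate)
    lost_payloads = failure_rate / (1 - failure_rate)
    return attempts * launch_price + lost_payloads * payload_value

# Hypothetical $200M payload; launch prices in $M from the thread.
spacex = expected_cost_per_success(62, 2 / 34, 200)   # ~78.4
ula = expected_cost_per_success(130, 0.0, 200)        # 130.0
print(spacex, ula)
```

            On these assumed numbers SpaceX stays cheaper on average until the payload is worth roughly a billion dollars, which is exactly bean’s caveat: with one irreplaceable payload, the tail risk matters more than the average.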

    • Anonymous Bosch says:

      I think we need to be careful equating “STEM” with “tech”. “Tech”, in contemporary usage, seems to be specifically consumer electronics and software, which is a very unique corner of the STEM landscape with a distinct culture that you don’t see in e.g. engineering. And yes, this causes problems when e.g. Elon Musk decides to apply a tech-style approach to building rockets.

      Absolutely. I’m technically a “STEM” guy in the sense that a technical degree is required to be a patent attorney, but even my peripheral connection to the market makes it very clear that the fields aren’t all created equal. In particular there’s a huge glut in the life sciences; someone with a bachelor’s in biology or biochemistry is only marginally more employable than a liberal arts graduate, and even when you add a JD to the mix, in most patent attorney job postings the EE/CE/CS positions will require only a BSc while the plant and pharm positions will ask for a PhD.

    • Marshayne Lonehand says:

      Yes, Elon Musk presently is hiring across a broad spectrum of STEAM-professions, however he is seeking to hire solely that creative minority of STEM-workers who — in Bill GASARCH’s vernacular phrase — “understand rather than memorize”. No one presently knows how to pedagogically inculcate — with any very notable efficiency and universality, at any rate — this prized creative cognitive capacity.

      And yes, “memorizing”-class STEAM-workers placed in “understanding”-class work-environments will become demoralized and/or burn out pretty quickly … this being a harsh reality of all of the creative STEAM-professions.

    • wintermute92 says:

      I think we need to be careful equating “STEM” with “tech”. “Tech”, in contemporary usage, seems to be specifically consumer electronics and software, which is a very unique corner of the STEM landscape with a distinct culture…

      This is a key issue that’s frequently ignored. There are lots of things to be said about the “STEM shortage” and “STEM surplus”, but the simplest point is that STEM is a nebulous term that means totally different things in different studies. Some standards count a BS in Biology as a “STEM grad”, some limit to very hard sciences, CS, and engineering. If that Biology BS goes on to work as a nurse or doctor, some standards will call that “working in STEM”, many won’t. Some people call a math major with a finance job a STEM worker, some don’t.

      If one person is including all of medicine in their STEM, and another is only including physics/math/CS/engineering, it’s not a surprise that their numbers differ. And making it worse, a lot of people say STEM when they basically just mean computing, or software dev. So some people are including psychologists and other people are excluding IT, and then they’re surprised to disagree with each other.

    • The Nybbler says:

      On the engineering side, there is an absolute shortage of veteran engineers with certain sorts of specific expertise.

      I have heard that much of that is because the fields in question got smaller. So the companies stopped hiring many inexperienced engineers, instead keeping their veterans and not replacing them as they left due to attrition. Now those veterans are retiring… and the next generation doesn’t exist.

  47. Deiseach says:

    Okay, having read that “why can’t construction companies in Dallas find enough workers?” article, I then out of curiosity Googled for “construction jobs in Dallas” and out of about four sites, here’s a representative link or two.

    There are very few “turn up with your shovel” jobs on there, a lot of them are foreman (at least) and higher level, and most of them are skilled labour. If the idea is “construction work is unskilled work, so therefore why can’t these jobs be filled by those without college education?” then it is a mistaken idea. Companies are not looking for “turn up to work on site, shovel provided by us, all you need to do is stick it in the ground, no previous experience necessary”, they’re looking for guys who can read blueprints, operate CNC lathes, and ideally have a couple of years’ experience to boot.

    • wintermute92 says:

      I had a suspicion this would be the case, just by analogy to the STEM debate. The easiest way to produce results like “high unemployment while lots of lucrative jobs are unfilled” is if the job requirements and the worker training don’t actually match. In STEM, that often means “Google wants AI PhDs, but someone with an IT associate’s degree is unemployed. What a conflict!” It looks like a pretty similar result here: “People willing to work with their hands are unemployed, but there’s a shortage of licensed welders and experienced CAD workers! Bizarre!”

      And of course, this tends to be intentionally made worse by efforts to train people in what’s cheap rather than what’s employable. I’m not sure about construction, but in STEM I see a lot of people gesturing to Google salaries to justify totally unrelated training.

  48. ADifferentAnonymous says:

    Otium’s summary at the end says that developmental stages mostly don’t hold up, but I was surprised at how well they do. Kohlberg’s stages pass all the tests with flying colors, and Kegan’s basically pass every test they’ve been given, though they need to be tested more.

    The post points out that Kohlberg’s stages characterize verbal arguments, but not necessarily actual motivations. But they occur reliably in sequence, which is very striking. If they were just regurgitated arguments, you’d expect people to skip stages. I conclude that they do represent increasingly sophisticated, though perhaps sophistic, reasoning.

    I expected these would all get shot down by the first hint of empiricism. I’m updating in favor of David Chapman right now.

  49. Deiseach says:

    Although of course if a handful of Rust Belters had voted differently, we’d be praising every one of these people as geniuses right now

    If “a handful of Rust Belters had voted differently”, those would be the people in the current administration. Oh yeah: Anthony Weiner (as part of the job-lot with Huma, who is very close to Hillary and would have got some plum position as reward) with access to the corridors of power and all those pretty young things working as White House interns. Like Monica Lewinsky did during That Time We Don’t Talk About. What could possibly go wrong there? And all that back-stabbing and jockeying for power and doing down one another and sabotaging each other – well, that is exactly what you want in all the unelected special advisers attached to Secretaries of Departments and other posts dispensed as favours for the loyal campaign leaders, isn’t it?

    If what is recounted in Shattered is any way true at all, maybe the result was really for the best!

    Clinton campaign manager Robby Mook comes across very badly indeed, and appears to have been the wrong man for the job.

    Nominative determinism strikes again!

    • BBA says:

      I daresay they wouldn’t be considered geniuses, because how moronic do you have to be to lose to an absolute shitshow like Donald Fucking Trump?

      Well, now we know how moronic.

      (Granted, Bernie would’ve done even worse – the Yoopers he’d win over are more than offset by the Detroiters he’d alienate. Clinton was both a terrible candidate and the best candidate the Democrats had.)

      • cassander says:

        Success is its own justification, always has been.

        Put it to you this way: of all the prominent successful campaigns of the last few decades, can you remember a single one where the after-action consensus was “yeah, it was a shit show, good thing for them X happened”? Or a losing campaign where it was “they fought the better fight top to bottom, but couldn’t overcome Y”?

        I can’t think of one. Now it could be that better run campaigns always win, but that strikes me as less likely than that our sense of “better run” is excessively conflated with “victorious”.

        • suntzuanime says:

          Bill Clinton’s first campaign might fit this bill?

          • cassander says:

            Before my time. Was that the opinion afterwards? All I have are vague cultural memories of someone called Sister Souljah and saxophone playing.

        • Progressive Reformation says:

          “they fought the better fight top to bottom, but couldn’t overcome Y” arguably describes the popular perception of the Sanders campaign.

          • cassander says:

            my perception is that this thinking is more along the lines of “we fought the morally better fight/we had better ideas”, not “our campaign was better organized and had a better strategy.”

          • Nornagest says:

            Well, I’m not a Sandersista, but it seems to me that Sanders did have better organization and strategy. He out-fundraised her, after all, which is not easy to do from the position he started in. And I’m still seeing Bernie bumper stickers, whereas I only saw Clinton bumper stickers after the election.

            But Clinton had way better connections and name recognition, and those are nearly unbeatable in politics. Her gender probably helped a bit, too, but I think it would have played out similarly if it had been Bill rather than Hillary (leaving aside all the charisma that Bill has and Hillary doesn’t).

          • Deiseach says:

            I think it would have played out similarly if it had been Bill rather than Hillary (leaving aside all the charisma that Bill has and Hillary doesn’t)

            If the term limit on the American presidency was done away with, I’d be willing to fight a campaign with Bill as the candidate (yes, even after all the scandals and the tarnishing of the image) any day rather than Hillary. Indeed, given some of the names floated very vaguely for 2020, with Bill rather than whatever plastic candidate is spat out by the selection algorithms. Besides the charisma which is really invaluable, he has a talent for politics and being on the hustings that Hillary just does not.

          • engleberg says:

            @’they fought the better fight top to bottom, but couldn’t overcome Y’ arguably describes the popular perception of the Sanders campaign’

            Yes, for values of Y = “Clinton stole the primary”. Clinton is an establishment D party True Believer; she doesn’t talk to the darkness and wind outside the D party. When she demanded that Trump say he’d accept the election results, she wasn’t talking to Trump. She was demanding Berniebros accept the stolen primary and vote for her. Not enough did.

    • poipoipoi says:

      Is that book 3 stars because politics or because it’s actually garbage?

    • Progressive Reformation says:

      Nominative determinism strikes again!

      Another top Clinton advisor (the one who really didn’t want to concede the election) is Minyon Moore. Nothing is ever a coincidence, I guess.

    • pdbarnlsey says:

      If preventing sexual harassment is your primary concern in selecting executive government, things may still not have worked out optimally.

      But of course we’d need to look at the estranged spouses of Trump administration appointees to really get a comparative sense of how the oval office would have changed, genital-grabbing-wise. That’s where the real action is, apparently.

    • nimim.k.m. says:

      Anthony Weiner (as part of the job-lot with Huma, who is very close to Hillary and would have got some plum position as reward) with access to the corridors of power and all those pretty young things working as White House interns.

      I have a suspicion that the number of workplace affairs between middle-aged or older persons in positions of power and pretty young interns is a stable constant, and the only thing special about Anthony Weiner is his ability to make his escapades a public scandal.

  50. Urstoff says:

    “Before thermometers, people mocked the idea of temperature ever being measurable, with all its nuance, complexity, and subjectivity.”

    While the author seems to be taking the standard “dumb anti-science rubes/philosophers” tack, it seems to me that the skeptics were and still are correct. The concept of temperature before the widespread use of thermometers was probably something quite complex and multifaceted. The concept of temperature post-thermometer adoption is much simpler, with all those nuances, complexities, and subjectivities discussed as additional factors, some of which were also eventually operationalized (e.g., humidity). Before thermometers, I imagine temperature was spoken of in various terms like “warm”, “balmy”, “frigid”, “crisp”, etc., which are terms whose range of applicability is not wholly determined by the number on the thermometer. The adoption of the thermometer as the core tool for measuring temperature, and the subsequent circumscription of the concept of temperature because of that adoption, was the adoption of a new concept that is more precise than the older concept but also contained less information. I would guess (but only guess) that this is largely true for the explicit quantitative measurement of any previously qualitative concept, and it doesn’t seem helpful to frame this (in temperature, intelligence, or any other area) as a “rubes vs. enlightened scientists” struggle.

    • Alex Zavoluk says:

      Temperature has a pretty specific definition (proportional to the average kinetic energy of the molecules) which is intentionally supposed to ignore factors like wind, humidity, etc. that would affect how a person feels in air of a given temperature, and which generalizes to pretty much any form of matter.

      Temperature turned out to be completely measurable, and the other factors that affect how you feel are measurable as well, though not everyone will feel a particular combination of factors the same way, so a model that incorporates all of that will have some subjective term.

      • Urstoff says:

        It has that definition after the invention and widespread use of thermometers and subsequent development of theory. My point is that pre-theoretic concepts are often fairly complex, and while the operationalize and conquer method of science is obviously productive, don’t try to equate the post-theoretic concept with the pre-theoretic concept. They will both be enriched and impoverished in various ways compared to each other.

        • Alex Zavoluk says:

          I mean, to a certain extent they sort of refer to different concepts, since the physics definition does not take into account facts of biology, but I think it pretty much captures what you would naively expect it to capture. But then, maybe my intuition for thermodynamical concepts is already too engrained with modern physics knowledge.

          • smocc says:

            I suspect that it is.

            The first intuition of a concept like temperature probably comes from “is the weather hot or cold” or “does this thing feel hot or cold when I touch it”. While the modern concept of temperature is a big factor in those questions, there are other significant confounding concepts. Humidity and wind chill can make two different locales feel very different despite technically having the same temperature. Two materials with different conductance coefficients feel very different to the touch even at the same technical temperature.

            But before you have thermometers and barometers and calorimeters you don’t even know that there are multiple variables to be confounded.

            The next hints towards temperature you’d probably consider would come from cooking processes. You have to put water over a flame for a certain amount of time before it will boil, and it will feel warmer and warmer as it gets closer to boiling, linking the two intuitive concepts. But again, there are confounding variables. The boiling point of water depends on pressure (and relative humidity?). This was the initial problem with defining temperature scales — the boiling temperature of water appeared to vary day to day until people figured out how to control for pressure, which of course required inventing another measurement device.

            And conversely, the rigorous definition of temperature is not sufficient for describing the phenomena above. If you want to know when water will boil you need temperature plus pressure. If you want to know how it feels outside you need temperature plus relative humidity.

    • suntzuanime says:

      It seems to me less like a morality tale of rubes vs. scientists and more like an instructive parable for theorists, about the need to look past surface distractions to try to narrow in on more fundamental essences that may be simpler and more tractable. It’s not like there was a sharp distinction between philosophers and scientists back then. It’s not “scientists rule philosophers drool”, it’s more “once upon a time a scientist claimed a thing was impossible to measure because people’s perceptions of it differed, and then he was eaten by a bear. don’t be like this scientist”.

  51. TK-421 says:

    The Left Forum article was particularly interesting (and darkly hilarious). This section in particular caught my interest:

    No, the worst part of Left Forum is the crackpots, the paranoiacs, the hysterics, and all the other truly dysfunctional personalities attracted by the conference’s most infamous policy: no panel submission will be rejected.

    That’s right: If you pay your registration fee and fill out the proper forms, you get a room and a table and a spot on the schedule. So in addition to all those experienced and intelligent rabble-rousers, Left Forum is a home for 9/11 Truthers, those who would save us from the terrors of “mandatory fluoridation,” and the generally batshit and/or pathologically anti-social.

    On the one hand, I really like the idea of there being a convention where you can just show up, plunk down a nominal fee, and get to speak your piece. Sure, most of them would be terrible, because Sturgeon’s Law, but every once in a while you’d get something really interesting. On the other hand, if I was at a convention for something I actually cared about and saw this kind of thing going on, I would probably be pretty annoyed, so maybe it’s just the emotional distance from it that makes the idea sound so appealing.

    But on the gripping hand… what’s the alternative? A convention dedicated solely to panels that couldn’t get hosted at other conventions would end up as the same sort of crackpot brigade that Frost describes, only a thousand times worse—and probably at each other’s throats before the end of the first day if they came from drastically different political backgrounds. At least when it’s hosted somewhere which purports to have some coherent content as well, there might be some attendees who aren’t already on the fringe that could separate the wheat from the chaff.

    • Aapje says:

      Isn’t that why they invented parks?

      Show up, put down your box, get on it and proselytize away.

      • TK-421 says:

        Rather a different audience, though. At a convention, people are there specifically to go to panels and hear people give presentations, and someone giving a talk is a Convention Speaker, however weird the topic may be. In the park most people are there for other reasons, and someone giving a talk there is just some dude on a box yelling about chemtrails.

  52. Alex Zavoluk says:

    Basic argument: as long as corporations can offer politicians lucrative deals after they retire, they can reward pro-corporate decisions with plausible deniability, which incentivizes politicians to be pro-corporate. If you’re anti-corporate, this is directly bad; if you’re pro-corporate, this makes it impossible to convince people that you’re really making well-considered decisions in their best interests and not just being corrupt.

    That’s basically the argument I made in real life, and I’m glad I’m not the only one. Reading the article, I’m also glad I’m not the only one who noticed that the Clintons have made over 100 million dollars over a few decades in “public service.”

    Current Affairs on the back-stabbing, infighting, and comical errors of Hillary Clinton’s campaign. Although of course if a handful of Rust Belters had voted differently, we’d be praising every one of these people as geniuses right now.

    Much of this article reminds me of the phrase “win-more” which I learned from the Magic: The Gathering community and which refers to cards that are only good when you already are winning. Such cards are generally considered bad, since they do not help in close games or when you are behind. Some other commenter here speculated that Clinton was gambling on a “crush Trump” strategy, thinking they had to completely dominate the election in order to thoroughly repudiate the “danger” that Trump represented, but this article makes it seem like they just didn’t understand this basic concept.

    Between these articles and another I read about Chelsea Clinton being boring and not doing anything meaningful, I think I have to conclude that the Clintons are the reverse D’Anconias: the result of a long optimization process that was optimizing for vaguely left-leaning power-hungry American politicians.

    • Iain says:

      Win-more makes sense as a concept in games because winning or losing is a binary outcome: either you win the game, or you don’t. In the context of an election, expanding your map can (in theory) help carry along the House or the Senate on your coattails. There is a big difference between holding the presidency and the Senate vs just holding the presidency. Winning more is valuable.

      (Of course, before you start pouring resources into winning more, you should probably make sure that you are actually winning.)

      • Alex Zavoluk says:

        Good point. But, in this case, the “win-more” wasn’t even picking up more states (which would have been mostly equivalent to just picking up the swing states she should have been focusing on), it was mostly appealing to people in states like New York and California which are already blue. See the +3 million vote difference in CA for Clinton.

        (Of course, before you start pouring resources into winning more, you should probably make sure that you are actually winning.)

        And that, of course, is the other major mistake.

        • Iain says:

          No, there were definitely aspects of Clinton’s campaign where she tried to run up the score in traditionally Republican states. This article, for example, mentions Georgia, Utah, and Arizona.

          • Alex Zavoluk says:

            My point was not that Clinton literally didn’t campaign at all outside of NY and CA, but that her focusing on Georgia and Utah (like Texas) seems to me like a long-shot attempt to crush Trump on the assumption that she was already winning, rather than an actual strategy to increase her probability of winning a close race.

            edit–and in fact, the wording of the article seems to back up my interpretation.

          • Iain says:

            seems to me like a long-shot attempt to crush Trump on the assumption that she was already winning, rather than an actual strategy to increase her probability of winning a close race.

            In other words: she mistakenly thought she was already winning, and attempted to win more?

          • Alex Zavoluk says:

            In other words: she mistakenly thought she was already winning, and attempted to win more?

            As far as I can tell, yes.

          • Adam Berman says:

            I read that article and, my god, the sick feeling in my stomach is overwhelming.

            “The idea of a fair election — of a peaceful transition of power — is not a Democratic value. It’s not a Republican value. It, literally, is an American value. I volunteered with 7th and 8th graders a couple of months ago and we talked about the peaceful transition of power, and those kids understood it,” said Rebecca DeHart, executive director of the Democratic Party of Georgia. “It’s crazy to me that a candidate for President of the United States doesn’t. So the only way we can stamp this out is to have incredible turnout, and let it die in a corner.”

            “The larger the margin, the less relevant Trump and the Trump philosophy will be post-election,” added former South Carolina governor Jim Hodges, a Clinton ally.

          • Deiseach says:

            That article is interesting in what it says about early voting:

            Internally, the Clinton team is closely watching — and cheering — early voting and registration figures in strategically imperative battlegrounds.

            In Nevada, they’re pointing to huge Democratic turnout, including in Las Vegas’ Clark County, over the first two days of early voting, compared to anemic performance among Republicans. In Arizona, a 20,000 vote deficit for Democrats at this point in 2012 has turned into a 1,000 vote lead now. In Colorado, the number of registered Democrats recently overtook the number of registered Republicans for the first time ever. And in Florida, Republicans entered the week only ahead of Democrats by 1.7 percent, compared to a 5.3 percent lead at this point four years ago. That’s largely on the back of a 99 percent increase in Latino voting compared to this time in 2012.

            I wonder if this helped contribute to the eventual loss? First that the campaign took the good lead in early voting too much as a sign that they’d retain the same kind of lead in the rest of the voting, and secondly that if undecided/unenthused potential voters were seeing results like this that “Clinton is killing Trump in early voting”, they’d think she pretty much had it won and so no point in voting themselves?

            It also sounds a bit like she made a big push at the end of the race, in order to get the crushing victory she (and nearly everyone else) expected, and that if she’d done this earlier or even steadily through the campaign, things might have turned out differently. But “what-ifs” and “might have beens” are easy to speculate about.

            Okay, I’m an awful person, but I read a linked article written at the same time as that one, about Hillary’s campaign being so positive they had it won that she then went on to try and pull other Democrats over the winning line with her, and then I compared the results of the elections, and I can’t help laughing:

            “As we’re traveling in these last 17 days we’re going to be emphasizing the importance of electing Democrats down the ballot,” Clinton told reporters aboard her campaign plane.

            It was the surest declaration of confidence yet from a candidate and a campaign that enters the home stretch in so commanding a position that they are redirecting cash and manpower to traditionally red states, including Arizona, Missouri, Indiana and Georgia.

            Clinton delivered a preview of her coming rhetorical focus at a rally in Pittsburgh, as she excoriated Republican Sen. Pat Toomey for standing with Trump and sought to saddle Toomey with some of Trump’s most incendiary remarks.

            …It amounted to one of her sharpest and longest attacks on a sitting Republican senator of the campaign. And aides forecast more such barbs in the days to come, as she heads to North Carolina on Sunday, where Democrats are targeting Sen. Richard Burr, who faces a surprisingly stiff late challenge, and to Florida on Tuesday, where Sen. Marco Rubio is on the ballot.

            Election results? Toomey – won by just under 2% against the hoped-for first female Democratic senator for Pennsylvania. Burr – won by 6% over female Democrat opponent. Rubio – won by 8% over his Democrat opponent.

            What’s the opposite of the Midas Touch, where everything you touch turns to lead? Classical Greek playwrights made entire careers of writing about hubris of this sort.

        • Deiseach says:

          Oh, man: reading the Politico articles of the time and how absolutely assured they were that Hillary had it done and dusted, it’s amazing to compare their sure’n’certain forecasts with the actual results:

          In June, POLITICO identified 11 key battleground states — totaling 146 electoral votes — that would effectively decide the presidential election in November. A new examination of polling data and strategic campaign ad buys indicates that six of those 11 are now comfortably in Hillary Clinton’s column.

          Clinton leads Donald Trump by 5 points or greater in POLITICO’s Battleground States polling average in Colorado, Michigan, New Hampshire, Pennsylvania, Virginia and Wisconsin. If the Democratic nominee won those six states, plus all the other reliably Democratic states President Barack Obama captured in both 2008 and 2012, she would eclipse the 270-electoral-vote threshold and win the presidency.

          Even if Trump ran the table in the remaining battleground states — Florida, Iowa, Nevada, North Carolina and Ohio — he would fall short of the White House if he cannot flip another state where Clinton currently leads in the polls.

          According to Politico – Hillary has Colorado, Michigan, New Hampshire, Pennsylvania, Virginia and Wisconsin.
          According to results – won Colorado, New Hampshire and Virginia, lost Michigan, Pennsylvania and Wisconsin

          According to Politico – even if Trump wins Florida, Iowa, Nevada, North Carolina and Ohio, he is still going to need to flip one of Hillary’s six states (and that ain’t gonna happen)
          According to results – Trump won Florida, Iowa, North Carolina and Ohio, lost Nevada, but succeeded in flipping three of Hillary’s six: Michigan, Pennsylvania and Wisconsin.

          If the Democrats take anything away as a lesson from this campaign loss, it should be “don’t count your chickens before they’re hatched” and definitely “don’t believe the media rah-rah about how you have this bagged, stuffed and mounted above your mantelpiece”.

          • Jaskologist says:

            In retrospect, the fact that Trump was pretty clearly winning Ohio should have set off a lot more warning bells all around.

          • HeelBearCub says:

            @Jaskologist:
            I actually commented before the election that the fact that Ohio was no longer a swing state probably presaged the end of what I will now call “union blue” states.

  53. vaniver says:

    Curious what all of the tech workers here think.

    For software/statistics/analysis, there appears to be a significant difference in tech worker quality; there’s no shortage of bad workers and a massive shortage of good workers. (If I had had twenty-five clones, my previous company would have hired all of us at once. Seriously.) Like other commenters mention, in other fields there’s a bunch of specialized expertise that makes mismatch very easy; if the semiconductor fab hires a mechanical engineer, it’ll be a year or two until they know what they’re doing, and then if they try to get a job in automotive, they’ll be starting from the ground floor again, almost (they’ll know some about engineering practice, but also won’t be as much of a fresh-faced youth ready to put in the hours to learn).

  54. Brad says:

    I unfortunately don’t see civil forfeiture going anywhere. The precedents in favor of its constitutionality are too long standing. You’d have to cobble together some kind of strange cross ideological coalition to make it happen. I can see how they’d get to three: Thomas (see denial of cert in Leonard v Texas), Kennedy (dissent in Bennis v. Michigan), and Sotomayor (Krimstock v. Kelly 2nd circuit). Who knows about Gorsuch, but I highly doubt they’d get Roberts, Alito, Ginsburg, or Breyer (the latter two voted with the majority in Bennis v. Michigan).

    Even if you managed to peel off Gorsuch and Breyer and got to five votes, it would end up being one of those 2-2-1 plurality situations that are barely precedential.

    • gbdub says:

      Which sucks, because civil forfeiture seems like one of those low-hanging fruits that a pretty big bipartisan swath of people would like to see fixed, once they learn anything about it.

      • Brad says:

        You’d think the bipartisan coalition would make for a political solution rather than a constitutional one. But apparently this is one of those “deep state” things where the institutional interests of government cannot be overcome even by strong majorities of voters.

        • gbdub says:

          Or just the standard “special interest group cares enough to lobby, majority stays silent” issue? A lot of civil forfeiture funds local police departments, and it’s hard to win local elections if you piss off the sheriff / police union.

    • Anonymous Bosch says:

      Plus, Thomas was the lone dissenter on the very Colorado case linked, so clearly his views aren’t necessarily as clear-cut as those who hopefully read his Leonard dissent would like them to be.

      • Brad says:

        And it wasn’t just a dissent, it was a doozy of a dissent. It’s hard to see how it can be squared with the view that the due process clause forbids civil forfeiture, especially in light of footnote 1. But I have confidence that Thomas would somehow make that happen. Probably by claiming that it wasn’t raised by the parties or something like that.

    • Sandy says:

      Gorsuch once lavished praise on Thomas’s dissent in Kelo, so I suspect he may be aligned with Thomas on the civil forfeiture issue as well.

    • Winter Shaker says:

      Yeah, that one is pretty weird to me. You’d think that in a country in which so many people are ready to defend their 2nd Amendment rights to not have the government arbitrarily take their firearms away (rights which don’t really exist in many other democracies, at least to the same extent), there’d also be a massive number of people ready to defend their 4th Amendment rights to not have the government arbitrarily take their stuff-that-isn’t-firearms away (rights which generally do exist in most comparable societies, I think – though I could be mistaken). And yet the response seems to be that a few people are outraged, a lot just shrug, and a non-trivial fraction are basically going ‘that’s fine, as long as they are inconveniencing drug dealers, the police should be able to take what they want from whom they want’.

      But I am not an American – am I missing something obvious?

      • gbdub says:

        Heck, a lot of times firearms are seized in civil forfeiture.

      • Incurian says:

        But I am not an American – am I missing something obvious?

        I am hoping it’s just an issue of education – they only hear the part about stopping drug dealers and not the part about how it doesn’t require a conviction and the incentives that lead it to be so easily abused.

      • suntzuanime says:

        I think it’s dangerous to be too quick to blame things on classism/racism, but it seems likely that’s a major component here. Police have a lot of leeway as to what they use forfeiture on, meaning they can restrict themselves to unsympathetic targets. Compare this to, say, a gun ban, which hits sympathetic law-abiding citizens the hardest, as criminals can get guns on the black market. If cops are robbing drug dealers, hey, I’m not a drug dealer, and fuck anybody who is. If cops are taking guns from gun owners, hey, I’m a gun owner, or maybe I’m not but my buddy I buy fresh venison from is, and he’s a good person who doesn’t deserve to have his rights violated.

        To some extent the growing concern over forfeiture is because the cops have gotten carried away and started robbing people real people care about, people who can give a good interview to the media. People that people care about can read these interviews and say “hey, that could be me, or somebody I care about”, and so the impetus for change can start rippling through society.

        • shenanigans24 says:

          There’s also that a lot more people have guns than have assets seized. In a real way one issue affects a lot of people, and the other does not.

      • Alex Zavoluk says:

        Unfortunately, a lot of the gun owners are some of the most blindly supportive of police, except in the most egregious police abuse cases, when their victims are obviously innocent of any crime. I suspect this is partially a “law and order” mentality and partially a result of decades of declaring over and over in the strongest possible terms that they don’t support criminals getting their hands on guns in order to appease gun control advocates.

        • Incurian says:

          a lot of the gun owners are some of the most blindly supportive of police

          I’m working on it!

      • JayT says:

        I think the average American probably hasn’t heard of civil forfeiture. As bad a thing as it is, it still affects a relatively small part of the population. Most people don’t have any personal experience with it, and even if they’ve heard about it, it’s just as likely that they would have heard about it in a positive light as a negative one.

        On the other hand, pretty much everybody has an opinion on guns.

      • Trofim_Lysenko says:

        No one who isn’t a dedicated reader of political opinion pieces, or who hasn’t actually been a target of it, has heard of it.

        Remember when the ATF issued decorative gear with the slogan “Always Think Forfeiture”?

        I just did a few google and google news searches to check for coverage and the only national level results I got were Reason and one Forbes article.

        Expand that to blogs, and you get Hot Air, JFPO (speaking of gun groups), BoingBoing, FreeRepublic…and actually quite a FEW firearms blogs, forums, and websites discussing it in negative terms, contra Alex.

        • Incurian says:

          and actually quite a FEW firearms blogs, forums, and websites discussing it in negative terms, contra Alex.

          It may be that “gun nuts” (I mean that in a positive way) lean libertarian, while the general population of pro-gun republicans are, well, republicans.

        • Trofim_Lysenko says:

          Another possibility worth noting since one of the other sites that showed up high on search results was frickin’ -Stormfront- is that for a lot of Americans, arguments about how unaccountable Government Stormtroopers and Jack-Booted Thugs are out to Violate Your Rights and Take Everything You Have pattern match to “Extremist Whacko”.

          So nice people don’t hate BATFE, DEA, FBI, and the police. After all, what are you, some sort of right wing anti-government militia nut? Throw in whatever -ists you feel add spice, but you take my point.

          Maybe libertarians and anti-government right wing types should be friendlier to BLM activists for having mainstreamed at least -one- possible criticism of American law enforcement, making it clear that you can hate bad conduct on the part of law enforcement without being a skinhead in a compound out west.

          • Incurian says:

            Maybe libertarians and anti-government right wing types should be friendlier to BLM activists for having mainstreamed at least -one- possible criticism of American law enforcement?

            Or unhappy that they flubbed it.

          • suntzuanime says:

            Yeah, when people tried to extend it to non-black victims of police brutality and got called racist for thinking that white lives could possibly matter, that sort of reinforced the “nice people don’t hate BATFE, DEA, FBI, and the police” from the other direction.

          • Steve Sailer says:

            I spent several weeks in 2010 investigating a law enforcement killing in my neighborhood of an 18-year-old viola player. I happened to run into the bereaved mother at the scene also looking for clues and I told her the cops’ story sounded fishy to me and she should consider a lawsuit.

            This case got very little media coverage until three years later, when the L.A. Times headlined on its front page that the family had been awarded $3 million by a judge.

            I’ve always thought this case would have made a good illustration for reformers that the police kill too many people and they need reforms such as better training and more accountability.

            But there was so little media interest in this killing because the dead kid was white.

          • CatCube says:

            This perception that only extremist wackos call the FBI/BATF jackbooted thugs is not new. In the 90s, several BATF and FBI operations (Waco, Ruby Ridge) resulted in innocent deaths and a(n unwarranted) reaction in the Oklahoma City bombing. When the head of the NRA used the exact phrase “jackbooted thugs” to refer to federal agents, George Bush Sr. publicly resigned his NRA membership.

          • Trofim_Lysenko says:

            @Catcube

            Yeah, the original version of my reply included references to Ruby Ridge and G. Gordon Liddy, but I deleted them. Half because I start to believe a lot of people don’t even remember the whole 90s “Right Wing Militia” panic and half because I didn’t want to get dismissively pattern-matched the same way.

            When I build -my- compound in the woods, all races and creeds will be welcome 😉

          • Deiseach says:

            In this era, who would argue that it would be best if northern judges applied the full force of the law against runaway slaves?

            And of course the opposite applied if a southern judge was making the ruling? Stick to the letter of the law even if you think a free person of colour hasn’t the same legal standing as a white man? Bad law is bad regardless of whether our case is being heard north or south.

            And I note the end result of the solution to slavery was not “we’ll keep the laws but our morally improved and more socially discriminating judges will re-interpret them in the light of an ongoing, evolving, personal moral judgement” but “these laws are no longer law”.

      • shenanigans24 says:

        From my observations few people really know what it is, and people on the right have a vague support of law enforcement.

  55. Ialdabaoth says:

    “A flexible, living, bendable law will always tend to be bent in the direction of the powerful.”

    Yes, and a rigid, constructionist law will always end up favoring those who can most afford to exploit its loopholes and idiosyncrasies (i.e., the powerful).

    • hlynkacg says:

      I don’t think that follows. Sure, there are cases where it might be true, but it seems more like it would favor the clever/knowledgeable over the “powerful”. In any case, the vectors for abuse and corruption are obviously more limited and egalitarian than they would be in the “bendable” approach.

      • shakeddown says:

        I think there are enough exploits either way for those both corrupt and powerful to get away with roughly the same amount of stuff. The difference is that rigid constructionism also has the disadvantage of being pointless.