THE JOYFUL REDUCTION OF UNCERTAINTY

OT110: Opendragon Thread

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server – and also check out the SSC Podcast. Also:

1. Comment of the week is Stefferi on the circumstances leading to the rise of Hitler. See also idontknow: “The strongest defense against extreme right wingers is a moderate right wing party that is vigorous.”

2. Please vote for your favorite adversarial collaboration from the last week. The entries were:

a. Does The Education System Adequately Serve Advanced Students?
b. Are Islam And Liberal Democracy Compatible?
c. Should Childhood Vaccination Be Mandatory?
d. Should Transgender Children Transition?

After some discussion with the contestants, the winner of the popular vote will get a $500 prize, and the winner of my vote will get a second $500 prize; these may or may not be the same entry. After you’ve read all the entries, you can vote here.

3. I would like contestants to email me their experience participating in the contest. There is no particular structure or prompt I want, but here are some questions (adapted from a list sent by John Buridan) to get you started:

– What were your initial positions?
– How much did your positions shift and in what ways?
– How much debate and argument was there during the course of the work?
– How did you resolve it?
– Is the conclusion closer to one or the other person’s original position?
– What advice would you give future adversarial collaborators?


1,130 Responses to OT110: Opendragon Thread

  1. johan_larson says:

    Your mission, should you choose to accept it, is to serve for a year in a combat assignment on the Allied side of WWII. Expect to fire on the enemy and take enemy fire in turn.

    You will be trained to do the job and to fit into the time and place you are going to. Don’t expect to change the course of the war; you will not be placed anywhere you could decisively change world history. Fight well, and you might shorten the war by a few days. Die in the past, and you’re dead for real.

    Given the facts of life in the forties, actual combat assignments for women in WWII are very rare. Accordingly, for women only, non-combatant support roles in forward areas are acceptable choices in this exercise. You might, for example, be a telephone operator in London during the Blitz or a nurse on a naval base in the Pacific within range of Japanese bombers.

    What do you want to do?

    • Murphy says:

      This feels like it’s missing something, what part are people getting a choice about?

      I’d probably go for not spending a year there at all if I’m not allowed to post crib notes to Bletchley Park on efficient ways to crack Enigma, etc.

      • johan_larson says:

        You can choose, with some limitations, what job you want, where you will serve, and when.

        • Murphy says:

          I don’t know enough to know what areas were hellscapes so I’ll include a minor note.

          My grandfather was an absolutely excellent marksman, but you wouldn’t know it from his army assessments when he was drafted.

          Snipers didn’t tend to survive: they’d be left behind to slow an enemy advance when an army withdrew, and tended to get a bullet to the back of the head if captured, because they were never popular with the other side.

          So, the only thing I’d say for certain is that I’d want to avoid being a sniper.

    • fion says:

      should you choose to accept it

      I do not. Sure I’d get some good stories out of it, but it would be scary and unpleasant and I might die.

      • Civilis says:

        I agree with you that this is something I’d never* do, but it’s easy to add enough details to the intellectual exercise to make it worth thinking about.

        Suppose, using his time machine, Johan_Larson has managed to take over the entire world. As a rebel against his tyrannical hypothetical-question-based rule, you’ve been sentenced to the option of painful death or being sent back to spend a year in World War II. In that case, it’s easy to imagine saying ‘how much detail can I give when I pick my assignment?’

        Then it’s ‘what level of punishment would I be willing to avoid by taking the offer?’ Ten years in prison? Five? Spending 10 years back 30,000 years ago? If you assume there’s some level of punishment you’d be willing to avoid by spending a year in World War II in some capacity, what would you choose then?

        I think it would be more interesting with one of the following qualifications:
        A) If nobody from your unit dies in combat, obviously you missed the whole ‘taking fire in return’ part and you get sent back for another year assigned to ‘Soviet Infantry, Winter War’**.
        B) If too many people pick the same unit, all the extras (chosen at random) get assigned to ‘Soviet Infantry, Winter War’.

        * There’s obviously some level of carrot for which this becomes worth doing, but it’s much easier to just imagine a stick. [Added – for a carrot, and an addition to the discussion, when you get sent back, your body is rejuvenated to that of a healthy and fit 18 year old, and you get to keep it in whatever condition it survives the war.]

        ** The worst assignment that meets the original criteria I could think of on short notice.

        • fion says:

          You’re right; there are plenty of ways I could have engaged with the spirit of the hypothetical. I could also have just scrolled on and left everybody else to it. But don’t get me wrong; I’m not saying for a second that engaging in silly hypotheticals isn’t a good use of time. I’m just saying that this one seems to have no upside, which is unusual in johan_larson’s ‘missions’.

          • johan_larson says:

            Some people like to do exciting things even if they are dangerous and uncomfortable. And WWII certainly has moments that offer plenty of all three. Isn’t excitement enough of an upside?

          • Civilis says:

            I didn’t mean to sound dismissive; ‘I wouldn’t do this under any circumstances’ is a valid and important data point.

            This is also an interesting discussion in that 16 million Americans served during the war; of those, 10 million were drafted, so that’s close to 6 million volunteers (I assume there are other categories, such as ‘serving at the start of the war’). If I recall right, the airborne units were all volunteer (but not the air-landing units).

          • fion says:

            @johan_larson

            Fair enough. I guess what I’m saying, then, is that no: for me excitement isn’t enough of an upside to make me want to enter a war zone.

            @Civilis

            That’s an interesting point. For some reason “volunteering to serve in a current war” seems like a more reasonable decision than “going back in time to serve in a past war”, but I’m not really sure what the difference is.

          • albatross11 says:

            It’s also at least somewhat emotionally relevant that I already know who wins. There’s not any sense that I can change the course of the war. That’s generally true–unless you’re at the Alan Turing/Robert Oppenheimer/Dwight Eisenhower level of ability, you’re probably not able to directly have a large impact on the war, though a good gunnery officer on the Iowa or a good combat medic or whatever could certainly make things better locally.

            But still, knowing for sure that the outcome is the outcome, and that my involvement isn’t going to have any important impact on it, is a big demotivator.

          • Matt M says:

            That’s an interesting point. For some reason “volunteering to serve in a current war” seems like a more reasonable decision than “going back in time to serve in a past war”, but I’m not really sure what the difference is.

            Medical technology? And general changes in the structure of warfare such that “line up and march towards those machine guns” is now a somewhat rare strategy, when in the past, that’s what war was?

          • John Schilling says:

            For some reason “volunteering to serve in a current war” seems like a more reasonable decision than “going back in time to serve in a past war”, but I’m not really sure what the difference is.

            Moral obligation to defend your own community but not others, and possibility of making a real difference (which is ground-ruled out in this case)

    • bean says:

      Gunnery officer on the USS Iowa, of course. I’ll take the whole time she was in commission if I can. If not, we’ll go with the first year in the Pacific.

      • bean says:

        Alternate plan: Get on Halsey’s staff at Leyte Gulf, and convince him to detach the battleships to guard the San Bernardino Strait. Instead of Samar, we get the Japanese battleships on the bottom, courtesy of American battleships, and I don’t have to hear whinging about the battleships being useless.

        • Nornagest says:

          The Battle off Samar is probably my favorite naval engagement of all time and arguably the US Navy’s finest moment, though. It would be a shame to lose it, even if it was only possible because of a blunder on Halsey’s part.

          • bean says:

            To be honest, there’s a lot of me which completely agrees with you, to the point where I’d probably pick Iowa’s gunnery department over this plan. But it seemed like an interesting way to change the course of the war in a direction I want it to go.

    • Civilis says:

      Assuming I’m forced to choose one, the options I’d consider first are:

      US Field Artillery crew, US 7th Army, France / Germany
      US Battleship crew, Pacific, post-Pearl Harbor (thanks, Bean)
      US Carrier crew, non-aviator, Pacific, post-Midway
      US/British warship crew, Atlantic, post-Pearl Harbor

      If forced to go front line combatant, I’d take ‘US Infantry, US 7th Army, France / Germany’, and if I had to prioritize those three factors, I’d put 7th Army first. If I can’t pick my assignment with that much precision, I’d generally go North Africa > Sicily > Italy > France / Germany > Pacific. As much as I’m interested in armored warfare, without checking the actual statistics my gut tells me that I’d do better as line infantry… unless I know I’m going to the Pacific, in which case I’ll take Tank Crew in an instant.

      If forced to go British or other Commonwealth, the results are similar, though it becomes a toss-up between infantry and armor. With the British, the order is North Africa > Sicily > France / Low Countries / Germany (D-Day and after) > Italy > France / Low Countries (the Blitzkrieg) > Indo-China. About the only Commonwealth country I’d consider a better option than Britain itself is Canada.

      I’d put ‘Soviet Field Artillery Crew, Post-Kursk’ ahead of US/British Infantry or Tank Crew. While I’d generally rank Soviet frontline combatants below their US/British equivalents, with the Soviets I’d definitely prefer tank (or SP Artillery/Tank Destroyer) over infantry. I might very well take ‘Soviet Tank Crew, Post-Kursk’ over ‘US Infantry / US Marine, Pacific’. (I don’t know if Kursk is the best break point between the early war meatgrinder Red Army and the far more capable forces that crushed what was left of the German Army, but it works as a safe break point; I would almost certainly risk choosing Anzio or Bastogne over the early war Red Army.)

      Way down on the bottom of the list are anything with submarines or aircraft crew (except perhaps ASW aircraft). Doubly excluded is anything which involves jumping out of an aircraft (or riding an aircraft into the ground).

      • bean says:

        US/British warship crew, Atlantic, post-Pearl Harbor

        You need to be more specific. I’d do the year April 1944-April 1945, in a subchaser assigned to the Caribbean Sea Frontier. Leaving it this wide open probably gets you on a Flower-class in the North Atlantic during 1942, which is not a pleasant place to be at all. Living conditions were horrific, and there were ships going up all around you.

        Actually, your best option is probably as a patrol aircraft crewman somewhere in the South Pacific.

        • johan_larson says:

          The question is whether these observation planes are in enough danger. You need to meet the “Expect to fire on the enemy and take enemy fire in turn” requirement.

          Personally, I’m thinking of serving aboard a destroyer in the Battle of the Atlantic, chasing subs around. Much more comfortable than the corvettes, and reasonably safe.

          • bean says:

            The question is whether these observation planes are in enough danger. You need to meet the “Expect to fire on the enemy and take enemy fire in turn” requirement.

            I’m sure I can find a time when they took fire and returned it. Maybe one of the bomber groups assigned to keeping bypassed Japanese pounded down would work. They definitely fired on the enemy, and technically took fire in return, but didn’t see that many casualties.

            Personally, I’m thinking of serving aboard a destroyer in the Battle of the Atlantic, chasing subs around. Much more comfortable than the corvettes, and reasonably safe.

            I’m reading The Cruel Sea right now, and I’d advise against it. That was a peculiarly brutal battle. If you do so, make sure it’s during the last year of the war. Or we could go late 1943-late 1944 in the Caribbean Sea Frontier. There’d be some shooting, but not much.

        • Civilis says:

          I defer to the expert on this. I tried to answer this question based on what I remembered of history rather than look things up online. I know that the early days of the Battle of the Atlantic (when the Germans could throw battlecruisers out as commerce raiders and ships were going down within sight of the US coast almost daily) were not a safe time period, and I guessed that the change-over happened around the time the Bismarck went down.

      • Lillian says:

        If forced to go front line combatant, I’d take ‘US Infantry, US 7th Army, France / Germany’, and if I had to prioritize those three factors, I’d put 7th Army first. If I can’t pick my assignment with that much precision, I’d generally go North Africa > Sicily > Italy > France / Germany > Pacific. As much as I’m interested in armored warfare, without checking the actual statistics my gut tells me that I’d do better as line infantry… unless I know I’m going to the Pacific, in which case I’ll take Tank Crew in an instant.

        If forced to go British or other Commonwealth, the results are similar, though it becomes a toss-up between infantry and armor. With the British, the order is North Africa > Sicily > France / Low Countries / Germany (D-Day and after) > Italy > France / Low Countries (the Blitzkrieg) > Indo-China. About the only Commonwealth country I’d consider a better option than Britain itself is Canada.

        Your gut is extremely wrong. Among US troops deployed overseas 18.5% of infantrymen were killed in action or died of their wounds. That number is 3% for tankers. God help you if you get posted to the 4th Infantry Division, which suffered 700% casualties in its line infantry between its initial deployment at Normandy and the end of the war. You really, really do not want to be infantry in any army in the war.

        For that matter, if you are in a tank you want to be in an American tank above anything else, as their crews had the lowest casualty rates thanks to their excellent ergonomics allowing for high situational awareness and easy crew evacuation. If you are in an American tank, you preferentially want to be an American tanker rather than a British tanker, since the latter don’t get helmets, which doubles their chances of being killed. Though you probably still want to be a British tanker over a Soviet one, because the Eastern Front was a brutal charnel house. Unless you’re a Soviet tanker posted to the Far East, in which case enjoy lording it over the comically bad Japanese armoured forces.

        You might find this video instructive; the above figures are from there.

    • DragonMilk says:

      With the benefit of hindsight, I’d just go to North Africa. Rommel treated POWs pretty well.

    • Chalid says:

      Not sure if manning an AA gun in, say, London or Pearl Harbor is dangerous enough to fit the requirement, but if it is, then that’s the safest and most comfortable way to spend the war that I can think of.

      • Michael Handy says:

        AA crew, Darwin would be pretty nice, if hot weather and beaches are your thing.

        Or Port Defence, Sydney, now I think about it.

    • John Schilling says:

      Robert A. Heinlein’s “The Man Who Was Too Lazy to Fail” convincingly argues that, for an American of suitable aptitude, flying a PBY (twin-engine flying boats used for reconnaissance and antisubmarine warfare) during the Battle of the Atlantic would be the safest, easiest, and most rewarding way to get Combat Veteran status as necessary for advancement in postwar America.

      Large multiengine aircraft were by that point reasonably safe in routine flight, and while the U-boats did have AA guns they usually didn’t put up a fight on the surface and it is in any event up to the pilot how close he wants to get. Mostly, you’re flying a cool plane and looking at the ocean view, maybe once or twice dropping depth charges on something. Good chance you’re based on the US East Coast, which means better living conditions than any other combat posting plus you get to start cashing in on your War Hero cred on the home front, during the war but without question of shirking, and without competition from the dumb enthusiastic heroes on the front. If not the home front, then probably some tropical island with no fighting. By the time the war ends, you’ll probably have colonel’s wings and a couple thousand hours of flight time in a large multiengine aircraft, quite useful in 1945 and I wouldn’t mind having them in 2018.

      • Squirrel of Doom says:

        Tell me more about “Combat Veteran status as necessary for advancement in postwar America”.

        Assuming you don’t just mean advancement in the military, how much did veteran status mean for your social status in postwar America?

        • sfoil says:

          All of the Presidents after Truman until Clinton served in WW2 except for Carter, who had a very respectable run postwar in the fledgling nuclear Navy. Johnson felt the need to trump up his record, and Nixon’s service in a rear area seems like something that was held against him when he ran against Kennedy of the PT boats.

          Less concretely, it was pretty clear from talking to that generation of men that for many of them service in WW2 really was one of the defining features of their lives. Nobody lost their livelihood because they didn’t serve or overtly ducked into some safe spot. But it was something they had to live with, the big show that they’d missed. Heinlein’s character wasn’t even concerned about his own self-image, but he intended to live a long time and he didn’t want to make it less pleasant by having half the people he ran into busting his balls over it the next sixty years.

      • LesHapablap says:

        Among the non-combat flying in WW2, much of it was still very dangerous (e.g. The Hump), but some, like the Catalina or Albatross, would have been a blast. Rescuing people from the water, hunting submarines, what’s not to like?

        Being able to fly one of the WW2 fighters the way it was intended might be too tempting to pass up even with the massive risk. Maybe mitigate by choosing one of the dominant ones like the P-38 or de Havilland Mosquito.

        • Tarpitz says:

          I kinda assume that once you get late enough in the war, the risks to being a fighter pilot for the Western Allies can’t be that bad, at least in some sectors, and the direct fun and attendant perks do both seem significant.

          • Civilis says:

            Thinking more about the ‘adventure tourism’ comment by the OP downthread, I’d say that if you’re the sort of person that would do this, you’d be better off being a fighter pilot in the Battle of Britain than a fighter or fighter-bomber pilot on the Western front late war.

            With the risk of death, I think it’s at least psychologically more likely for people to try to minimize the risks of death that they can’t avoid via their own skill. It’s one thing for a thrill seeker to die because the other guy was better, it’s another to die to some unseen, unavoidable mass killer. A thrill seeker is going to minimize the risks that can’t be overcome by skill and maximize the amount of combat that is against comparable-threat enemies.

            With late-war pilots, you’ve got the possibility of being stuck doing the exhausting bomber escort role, or you’re based in some dirty airstrip near the front. Either way, the biggest probable killer I would wager is massed anti-aircraft fire, and if you survive being shot down, you’re likely looking at sitting out the war as a POW.

            So if I was a thrill seeker and thought I could pilot (I’m neither), I’d study up on the Spitfire until I thought I could really make the best of it, then go for the Battle of Britain. You’re based in the UK, so you’ve got pretty good conditions by WW2 standards. (Almost) all your risk comes from enemy aircraft, so if you’re as good as you think you are, you’ll live. And you have the psychological bonus that while you might not shorten the war, at least that bomber isn’t going to hit London.

          • Deiseach says:

            You’re based in the UK, so you’ve got pretty good conditions by WW2 standards.

            Just be sure that you fit in by dropping your modern day speech patterns 🙂

          • Incurian says:

            Deiseach, thanks, that was awesome. Hadn’t seen that before.

          • Fitzroy says:

            Civilis, I’d suggest the Hurricane over the Spitfire:

            It was a much better gun platform (with the guns in two tight groups of four in the wings, rather than spread out like the Spitfire’s, it had a tighter cone of fire and bucked less when they were fired).

            It was much hardier (being primarily wood and doped-linen construction rather than the Spitfire’s duralumin, the Bf 109’s 20mm cannon shells tended to punch straight through without detonating).

            It had a sturdier undercarriage (landing accidents, particularly on beat-up grass fields, caused a surprisingly high number of casualties).

            It could still out-turn the Bf 109, just try not to get into the vertical (they will outclimb you, and your float-chamber carburettor will cut the fuel supply if you nose over into negative Gs entering a dive).

            Your biggest threat, though, is being bounced (there’s that unseen, unavoidable killer!) due to the RAF’s insistence, at the time, on tight formation flying. Try to get yourself posted to Douglas Bader’s 242 Squadron – he was one of the first to experiment with the German finger-four formation, and being part of 12 Group you’re slightly less in the thick of it.

          • Tarpitz says:

            If you think you’re probably decent – maybe a little better than average but not even confident of that – and like thrills a lot but are also very averse to actually dying, the casualty rate for Fighter Command in the Battle of Britain is almost certainly too risky to be the right balance. As far as I can work out, something like 25-30% of them died.

            Also, if you haven’t tried it, be aware that flying a small plane just is thrilling, all by itself. I don’t think having to do it off an airstrip in France or Italy would be such a terrible hardship, especially if – like me – you speak excellent French already and have a knack for languages such that you’re confident you could pick up Italian in a hurry.

      • bean says:

        Large multiengine aircraft were by that point reasonably safe in routine flight, and while the U-boats did have AA guns they usually didn’t put up a fight on the surface

        Actually, a surprising number of patrol aircraft were shot down by U-boats. I’m reading Morison’s The Atlantic Battle Won, and for a while, the U-boats were battling it out on the surface, and sometimes winning that battle. Not a good time to be flying a PBY. Although a PBY itself is low-performance enough you can probably get away with hovering nearby and homing in PB4Ys and the like.

    • JohnNV says:

      I don’t want to kill a lot of civilians, even on the Axis side. So while my initial thought was that I’d like to be on a 1944 US submarine based out of Fremantle, the idea of sinking Japanese merchant ships or troop transports, killing hundreds with a single torpedo, would wear me down. Put me in a Mustang doing bomber escorts. I’m firing on the enemy, but only enemy fighters who are trying to take down my bomber. I think that would be easier on my conscience.

    • Incurian says:

      I’d be General McAuliffe’s aide to make sure all these damn time travelers don’t mess up the best laconic middle finger of the century.

    • Plumber says:

      My grandfather and his brother were both in the U.S. Army Air Corps and survived the war (though most of their duties involved testing, training, and transportation), so I’d want to stick with them!
      If that’s not an option, I’d choose the last year of the war on the ground in the west, collecting surrenders.

      • Civilis says:

        Keep in mind that Hitler died April 30, 1945, so a year in the US forces in Western Europe could put you on Omaha Beach on D-Day (June 6th, 1944) and would also possibly include such nasty fights as the Hurtgen Forest and the Ardennes (Battle of the Bulge). The 1st Infantry Division managed to be involved in all three.

        • Plumber says:

          @Civilis,

          Yeah, I was afraid of that.

          My first thought would be to be in the Maginot line during the “Phony war”, but that would still leave four months of experiencing the invasion of France (and likely becoming a P.O.W.).

          Really, “some combat” likely means “lots”, as far as I know, but my military history knowledge is sparse.

    • beej67 says:

      Fight well, and you might shorten the war by a few days. Die in the past, and you’re dead for real.

      This is the only real detail that’s necessary to answer the question rationally, and it could be broken down to math.

      Here’s the analysis:

      27,000 people per day died in WW2, including non-combatants. Obviously truncating the rate of death to a constant like that isn’t a great assumption, but the question didn’t necessarily ask which day would be shortened, so I don’t think it’s an unreasonable scenario.

      What are the chances you shorten the war by a day? I think basically zero, but since it’s a hypothetical, I think we need Mr. Larson to state explicitly what that chance is. If it’s 0.01%, then you going back in time to fight in the war has an expected value of 2.7 lives saved.

      There’s also the relatively large chance you don’t die. The question posits deployment. I can’t find numbers of US soldiers specifically deployed vs non-deployed during the war, but the total number of people in the armed forces during the war was 12 million, with 407k (3.4%) casualties and 670k (5.6%) wounded.

      I think if I knew I could run into a burning building and save 2.7 people, and only have a 9% chance of dying or being wounded during the fire, I’d do it.

      • Randy M says:

        I think almost no matter what you have a vanishingly small chance of changing the length of the war, but if you are out to save lives you stand a good chance of being able to do that, especially if you go back as a medic in an over-worked field hospital or something similar.

      • CatCube says:

        …but the total number of people in the armed forces during the war was 12 million, with 407k (3.4%) casualties and 670k (5.6%) wounded.

        Minor point of order: “casualties” includes all losses to a unit (dead, wounded, missing, and captured) and can include sick in hospital. It’s not a synonym for “dead”. The “casualties” here would be the sum of the two numbers, about 1.1 million, with 407k dead. Confusing this terminology is a minor pet peeve, because it sometimes causes people to talk past each other when discussing statistics from military reports.

      • johan_larson says:

        For what it’s worth, since this is completely optional, I was assuming people who decided to go would be going for a bit of historical adventure tourism, and I added the bits about actual combat so people wouldn’t just go back to watch. I wasn’t expecting people to go back in time to harvest utiles.

        • Matt M says:

          I wasn’t expecting people to go back in time to harvest utiles.

          Pfft, what are you, new here? Why else would anyone go back in time?

          • Nornagest says:

            Well, traditionally you go back in time to kill Hitler, right?

            So: Red Army, Battle of Berlin, 1945. Chances are poor, but it’s the only chance you’ll get given these constraints.

          • Matt M says:

            Time travelers don’t kill Hitler in 1945. They kill baby Hitler. You know, to maximize utils!

          • albatross11 says:

            In that case, I think I should start out by learning how to synthesize as many antibiotics as possible, and “invent” them at the start of the war. Schedule myself to be doing this work in a hospital that’s scheduled to get bombed by Germany the last day of my year (bring a rifle so I can shoot back at the bombers), and make sure my work is well-backed-up. Speeding up availability of antibiotics by a decade or so seems likely to harvest more utils than shortening the war an extra couple days by my amazing leadership on the battlefield.

            Alternatively (and more relevant to my own abilities), go hang out in Bletchley Park and help them get working computers a few years early.

          • Matt M says:

            But what if we sold the patent for the time machine and used the proceeds to purchase malaria nets?

          • John Schilling says:

            Time travelers don’t kill Hitler in 1945. They kill baby Hitler. You know, to maximize utils!

            Well, black hat time travelers don’t.

            And non-utilitarian time travelers often pick 1931 or 1938 so as to kill Guilty Evil Hitler rather than Innocent Baby Adolf. But given the overriding meta-rule of Time Machine + Gun == You Know, and the specific constraints of this scenario to serve in an Allied combat unit during WWII but not influence the course of the war save by maybe shortening it a few days, Red Army Berlin 1945 probably is the way to go.

          • Civilis says:

            In that case, I think I should start out by learning how to synthesize as many antibiotics as possible, and “invent” them at the start of the war. Schedule myself to be doing this work in a hospital that’s scheduled to get bombed by Germany the last day of my year (bring a rifle so I can shoot back at the bombers), and make sure my work is well-backed-up. Speeding up availability of antibiotics by a decade or so seems likely to harvest more utils than shortening the war an extra couple days by my amazing leadership on the battlefield.

            Looking at the history of the Second World War, there is historical precedent for someone with a hobbyist level of knowledge making minor suggestions that end up as major tactical innovations, so it shouldn’t take a high level of modern expertise to make a difference. The Western allies were quite good at allowing innovations from common troops in the field to propagate, such as the hedgerow cutter.

            Even if it doesn’t significantly speed up the war overall, how many lives could be saved in any of the urban fights by someone with even a modern gamer’s knowledge of breach and clear urban tactics and reasonable access to period American or British supplies, assuming he has enough luck to survive a couple of repetitions of innovation and improvement?

          • Lillian says:

            And non-utilitarian time travelers often pick 1931 or 1938 so as to kill Guilty Evil Hitler rather than Innocent Baby Adolf.

            At which point they are immediately shot by all the armed guards he has hanging around him.

          • Joseftstadter says:

            People who go back in time to kill baby Hitler are just looking for excuses to kill babies. If you want to stop Hitler you go to Sarajevo in 1914 and stop Gavrilo Princip. You don’t even have to shoot him, just distract his attention before Ferdinand drives by. Ferdinand lives, World War I doesn’t happen (at least not in 1914), and Hitler remains an unpleasant failed artist and crank living in Munich.

          • Thegnskald says:

            I think accounts of Princip’s attempt suggest multiple time travelers already tried that solution and it didn’t work.

          • James C says:

            Hell, the assassination of Archduke Franz Ferdinand is probably one of the most convincing arguments for fate there is.

          • Paul Brinkley says:

            The running gag I tell my friends is that Archduke Ferdinand went on to be the most brutal totalitarian in the 20th century, to the point that there was a conspiracy to head that off.

            Princip was that time traveller.

        • Kestrellius says:

          “Utility Harvest” is going in my list of book titles I’m going to use someday.

      • Skivverus says:

        So, for a talking point/intuitive-first-guess at where/when to go, then: Pearl Harbor shortly before it actually gets bombed? “Shortly” being “days to weeks”, ideally some amount of time during which you can convince your fellow sailors you’re not just being paranoid/psychotic about an impending attack, and so get them to help mitigate the damage.
        If there’s time for a little research before going, check on which ships (a) took the least damage and (b) had the most competent-in-retrospect commanders in the actual history, and turn up there.
        Aside from the immediate lives saved, this also I think puts you in a position to get your “getting shot at” out of the way early in the year, and improves your odds of getting promoted to non-front-line work where you can help decide where to armor the planes and things like that.

        • Schmendrick says:

          The implications of this are fascinating. If the U.S. battleship fleet isn’t as heavily damaged, would U.S. naval strategy have favored carrier operations so heavily? If the surprise attack isn’t as successful and the death toll smaller, is the U.S. public still as rabidly enthusiastic to go kick butt all around the world? If the Japanese don’t have as smashing a propaganda victory over the U.S. Pacific Fleet, does Hitler still decide to declare war on the U.S.? So many butterflies!

          • Protagoras says:

            Perhaps the U.S. sends the largely intact fleet toward the Philippines to intercept the Japanese advance, and the Japanese, seeking out their decisive battle, intercept. Which could go lots of different ways, but opens up the possible worst case scenario of a big Japanese victory which sinks a bunch of battleships where they can’t be refloated and repaired, plus a few carriers as well.

      • actinide meta says:

        I think if I knew I could run into a burning building and save 2.7 people, and only have a 9% chance of dying or being wounded during the fire, I’d do it.

        If Givewell is to be believed, you can save 2.7 people (mostly kids!) by giving $10,000 to AMF, with no chance of death and no time machine required. If you are willing to make significant sacrifices to save lives that seems like a better avenue.
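
        To make that comparison concrete, here is the trivial arithmetic behind it (the $10,000-per-2.7-lives figure is the commenter's; GiveWell's actual estimates vary from year to year):

        ```python
        # Implied cost per life saved, using the figures quoted above.
        donation = 10_000      # dollars given to AMF
        lives_saved = 2.7      # GiveWell-style estimate quoted in the comment
        cost_per_life = donation / lives_saved
        print(f"${cost_per_life:,.0f} per life")  # ~$3,704
        ```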

    • engleberg says:

      First choice- be Roy Dunlap, from Ordnance Goes to War. Play with guns, sometimes use them, but not in the mud and blood all day. Second choice- be L. Ron Hubbard, dashing anti-submarine patrol commander. Find an oil pool, tell sonar they hear a sub underneath, drop depth charges, go home and report you sank a U-boat.

    • Jiro says:

      This is a trick question, though it may not have been intended this way.

      One of the constraints is that you must expect to be fired upon. But one of your obvious goals is to avoid being fired upon. This means that you want to pick the position that has the minimum amount of being fired upon that still counts.

      Bearing in mind that almost any military position has some chance of being fired upon even if just in unlikely situations, the dividing line between “doesn’t count” and “does count” is vague and can’t be determined from the stipulations, yet the answer depends on it.

      The stipulation that you can save lives but you can’t change history is similar. You want to get as close to the dividing line between those two things as the problem allows, and exactly what that is is unclear.

      • sfoil says:

        I don’t know what the dividing line was in WW2, but right now whatever your brigade (more likely regiment back then) commander says counts, counts. Assuming things are the same, a “forward” posting in a secure area that’s occasionally on the same map sheet as errant incoming with an indulgent full bird is what you want if you’re maximizing for survival while meeting the criteria, I think. Finding such a target for your time machine isn’t outside the possibilities of history but it wouldn’t be easy.

        • Jiro says:

          I don’t know what the dividing line was in WW2, but right now whatever your brigade (more likely regiment back then) commander says counts, counts.

          The question about “what counts” is “what counts as satisfying the stipulation that you take a combat position where you’re fired upon”. The being who sends you through time determines this using his own criteria; your brigade commander doesn’t.

          a “forward” posting in a secure area that’s occasionally on the same map sheet as errant incoming with an indulgent full bird is what you want if you’re maximizing for survival while meeting the criteria

          What counts as “occasionally”, though? You want “occasionally” to be the minimum value that still counts as “occasionally”. And we’re not told what that is.

          • Gobbobobble says:

            But one of your obvious goals is to avoid being fired upon. This means that you want to pick the position that has the minimum amount of being fired upon that still counts.

            You want “occasionally” to be the minimum value that still counts as “occasionally”. And we’re not told what that is.

            Technically this is just an instrumental goal. Your true objective is to avoid being hit. The sort of people who jump out of perfectly good airplanes for fun might net gain from scenarios that involve some shooting going on.

    • dndnrsn says:

      Looking at the numbers, you want to avoid being a Soviet, and want to avoid being in the infantry. However, that doesn’t break it down further than the branch of service.

      I would guess that the non-trick-answer safest combat posting would be in the artillery, if on the ground, or in the navy (not sure what role would be safest). Air combat personnel actually had a relatively high chance of getting killed, as I understand it; I think bomber crews more so than fighter crews.

      It would be interesting to see numbers that break it down more; it could be that just about anything on the Western Allied side was safer than just about anything as a Soviet.

    • Bla Bla says:

      I choose serving in the tank forces for the first year of the war, in 1. Panzerdivision.

      • Protagoras says:

        Really? Casualties weren’t horrible, but also weren’t negligible. Just want to be involved in dramatic victories? And wouldn’t it bother you at all to be fighting for the bad guys?

        • Bla Bla says:

          I thought that being part of the blitzkrieg would be safer and more interesting than other options.
          As far as morality goes, I dunno, how does it even work when the result is predetermined?

        • LesHapablap says:

          You could always be a saboteur on the German side; that would likely do a lot more than fighting for the Allies in any real capacity.

          • Tarpitz says:

            Also presumably extremely dangerous. And moral issues aside, Nazi Germany does not strike me as a place I would find pleasant to live.

    • The Nybbler says:

      Clearly I refuse to accept this one; it’s all downside and no upside. If I had to accept it, it looks like the USS Enterprise was a reasonably safe place for a year starting November 10, 1943. So I’ll be an ordinary seaman there. Hopefully I haven’t missed anything.

      Of course the benefit here is serving on the USS Enterprise. Even if it isn’t the right one.

      • albatross11 says:

        “Say, Chief, how come everyone else is wearing a normal Navy uniform, but I got assigned this red shirt?”

      • bean says:

        Of course the benefit here is serving on the USS Enterprise. Even if it isn’t the right one.

        What do you mean, “Even if it isn’t the right one”? Of course it’s the right one. The Big E. The ship most often “sunk” in the Pacific. How can you say that she isn’t the right one?

    • cassander says:

      I can tell you what I definitely wouldn’t do. One of the largest battles of the war was fought at Kursk between the Germans and Russians. Both sides spent a long time preparing for the fight beforehand, and a lot of the prep involved setting up large minefields. The Russians, being Russian, were long on men and short on almost everything else, including mines. To resolve this problem they assigned a penal battalion the job of crawling through the Russian minefields (which weren’t labeled) into the German minefields, digging up armed German mines and putting them in a sack, carrying the sack of armed mines back into the Russian minefields, digging holes in the Russian minefield (which, I must remind you, weren’t labeled) and burying the armed German mines in the Russian fields.

      Any one of those tasks was pretty dangerous. All of them together, I submit, were the worst job in history.

      • fion says:

        That’s horrifying, but also a really interesting story!

        • cassander says:

          That’s horrifying, but also a really interesting story!

          That might be the best summary of the soviet union I’ve ever read.

      • Tarpitz says:

        There are widespread (possibly apocryphal) stories of Soviet airborne troops jumping/being pushed without parachutes from slow, low-flying aircraft over deep snow. The most credible source I’ve found suggests the unit involved was the 4th Airborne Corps in the Vyazma pocket in early 1942. Presumably this was also an unpleasant experience for those involved.

    • idontknow131647093 says:

      I routinely think back about how close I was to applying for the Naval Academy (and had the recommendations lined up had I chosen to) and I think aircraft carrier duty would have been the most interesting deployment in WWII. I probably wouldn’t have been much of a flyer though, so something on deck.

    • ChrisA says:

      I wouldn’t accept this assignment, whatever the cost to me. The chances of shortening the war is maybe small so there is only a very small upside, but the downside, if somehow I caused the allies to lose, is huge. Remember the good guys won and we have had a pretty decent peace for the last 70 years, with no wars of such a magnitude again. Yes the cold war was scary, but the good guys won that too. As a society we were extraordinarily lucky in the 20C, anyone wanting to mess with that needs their head examined.

      • James C says:

        It’s a fair point. If you’re going into this with the idea of making the twentieth century a better place you’re probably in the wrong war.

    • Michael Handy says:

      Are we restricted to our branch of the Allies? (e.g. can a New Zealand woman become a Soviet sniper/Night Witch?)

      Also, are we allowed to choose our combat role (garrison duty as a soviet commissar on the Manchurian Frontier sounds peaceful, but technically meets your requirements.)

      Actually, that’s my choice, Commissar, attached to Soviet Field Artillery, Russian/Manchurian Border, April 1944-45

      • johan_larson says:

        Yes, you can serve in the military of another country.

        Yes, you can choose your combat role, but keep in mind that you need to face at least a little bit of active combat.

        • Michael Handy says:

          I’m thrilled to be facing active combat against random bandits using artillery, as well as curb-stomping third rate Japanese troops in 1945.

      • Matt M says:

        I knew Call of Duty was onto something when they made it so that you could create black female Nazi characters…

    • carvenvisage says:

      You will be trained to do the job and to fit into the time and place you are going to. Don’t expect to change the course of the war; you will not be placed anywhere you could decisively change world history. Fight well, and you might shorten the war by a few days. Die in the past, and you’re dead for real.

      What prevents you from passing critical information up the chain of command from anywhere? Or angling from your initial position for reassignment to a more critical area? (I vote selective amnesia for something exploitable or magic time police for something not.)

      What’s the threshold for “decisively change world history”? Could one, for example, take an assignment that brings one into contact with a post-war leader late in the war, and feed them information about the post-war world up to that threshold?

      Some approaches that try not to be too exploity:

      1. Get assigned to a war hero’s unit for inspiration/learning to carry back to the future.

      2. Flying against the Germans, as they had aces with kill scores in the hundreds whose individual loss might not be critical, but might be one of the biggest impacts an individual soldier could have. (Also high-impact, cool, and if shot down one might land behind enemy lines, which seems like an uncriticisable way to get to a critical place.)

      3. Go somewhere where allied atrocities were committed in hopes of preventing it.

      4. Whatever needs the most training without being overstaffed. (seeing as you carry it back from the future).

    • Registered says:

      Since the outcome of the war is predetermined and my actions don’t matter to the grand scheme, I will opt for maximizing personal gain.

      I choose to be an admiral in the USN in the Atlantic in the last 12 months of the war, commanding some task force or other. I will invest 600 dollars a month for 12 months in the stock of Johnson and Johnson in some sort of trustee account. That initial investment of $7200 will be somewhere around 50 million dollars today.

      When I am done with my 12 months, I will hop back in my time machine to the present, twirl my moustache, and go collect my funds.
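
      As a sanity check on that $50 million figure, here is a rough sketch, treating the $7,200 as a single lump sum and assuming roughly 73 years from 1945 to the present:

      ```python
      # What annualized total return turns $7,200 into $50M over 73 years?
      principal = 600 * 12          # $600/month for 12 months
      target = 50_000_000
      years = 73
      implied_return = (target / principal) ** (1 / years) - 1
      print(f"{implied_return:.1%}")  # ~12.9% per year
      ```

      Call it roughly 13% annualized with dividends reinvested, which is in the ballpark of long-run returns claimed for top-performing blue chips, so the figure is at least not absurd.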

    • AISec says:

      I had a different take on collecting Utils… what position could do the most good in the present, given inability to help much in the past?

      Maybe secretary or adjutant to Hitler, Stalin, or slightly lesser big bads, in order to shed light on lost historical details (and possibly sabotage them in some small way)? But I guess that’s non-combatant, so I don’t qualify. Maybe high-level courier during the run-up to the fall of Berlin… I imagine even the non-combatant positions saw some enemy action near the end, especially if you count being carted off as a POW.

      • Evan Þ says:

        If you shoot Hitler (or Stalin), you would pretty quickly be in combat. That’s what I thought you were building up to when you proposed taking those positions.

    • noddingin says:

      “a telephone operator in London during the Blitz”

      I would pre-emptively defect from the mission rules and begin preparing to harvest selected utils.

      Depending on how much time I have to prepare….

      During my tea breaks as Ernestine in London, I’ll use the Time Travelers’ Handbook to get rich quick.

      Then become useful to Bletchley and attempt to save one life: Alan Turing.

    • I’ve just been reading Quartered Safe Out Here by George MacDonald Fraser (of Flashman fame), his account of his experiences as a nineteen-year-old soldier in the Burmese campaign. A fascinating account, but not where I want to be.

      Interesting in part for the culture war between his 1940s attitudes and the 2000+ attitudes in Britain when the book was written. He strongly disapproves of war correspondents interviewing soldiers as to whether they are scared. Of course soldiers are scared, but making a fuss about it makes things worse, not better.

      • Matt M says:

        Not directly related but your last sentence made me think about it.

        Something that increasingly annoys me is when interviewers focus on negative emotional experiences. You see this more and more in sports (of all places) these days. I swear, during the last winter Olympics, 50% of interviews with medal winners started with a question of the form “How do you think your father, who tragically passed away from cancer last year, feels about your win today?”

        Like, really? Can we not let these people have a minute to celebrate before we force them to re-live an emotional tragedy?

        • Paul Brinkley says:

          This is almost exactly the same reason I concluded a while back that the best solution to school shootings is to stop reporting them (except locally).

    • Alsadius says:

      Three obvious choices come to mind.

      1) Find the safest job I can. Probably that means being on a battleship in 1944-45, though the air force of that era was pretty dominant too (especially in the Pacific). That maximizes my odds of getting home.

      2) My grandfather was an infantryman who mostly served in the Italian theatre, and he died when I was five, so I never really got to know him. There’s a part of me that might want to fight alongside him.

      3) Do something that gives me an interesting and useful skill set. Being a pilot is the natural choice here. (Might work well with #1, too)

  2. ManyCookies says:

    Awww, no ranked/approval voting for the contest entries?

    • Vanessa Kowalski says:

      Yes, it was really hard to decide between some of the candidates, and ranked/approval would be a lot better to express what I think. Not to mention that one would expect something more sophisticated from a rationalist blog, especially given all the talk about how first-past-the-post is a terrible system 🙂

      (That said, good job Scott and all of the contestants! It was a very interesting initiative worth repeating.)

    • fion says:

      I kind of wish there’d been an adversarial collaboration entry on “first past the post is a terrible voting system”.

      • johan_larson says:

        Isn’t there broad agreement that it is a terrible system?

        • fion says:

          That’s not the point. The point is that it would make Scott more confused about deciding how to set up the vote for the winner of the contest.

      • Gazeboist says:

        “Approval voting is a better FPtP replacement than IRV” could be an interesting one, although my opinion on it is actually “it depends on what you’re doing with the vote”.

      • Jameson Quinn says:

        I was seriously considering doing one on “ranked choice voting is in practice the best proposal for fixing FPTP in English-speaking countries (that is, especially the US, but also UK and Canada).” I would have been the voice against, and I know plenty of people qualified to be pro.

        Seriously, Scott: choose-one voting is horrible, and you could have easily used star.vote, OR used google forms and publicly-available spreadsheet code for running STAR voting, to have something better.

        • ManyCookies says:

          I’m assuming you’re not arguing for status quo FPTP, so what’s your position and what do you think it does better than ranked choice?

          Man I would have loved to see this collaboration. Hell if we do this again, I’ll sponsor 50 bucks towards the topic.

        • Gazeboist says:

          I’d be tentatively interested in taking the pro-IRV side in an IRV vs STAR collaboration. Post it next time there’s a contest starting up?

        • drocta says:

          My impression is that ranked pairs seems better than IRV, and if I had to pick a voting method, I’d probably go with ranked pairs.
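
          A toy illustration of why the choice of method matters (made-up ballots; a minimal sketch, not a full implementation of any of the systems named above): the same ranked ballots can elect different candidates under plurality and under IRV.

          ```python
          from collections import Counter

          # Toy ranked ballots: each ballot lists candidates best-first.
          ballots = (
              [["A", "B", "C"]] * 8 +
              [["B", "C", "A"]] * 7 +
              [["C", "B", "A"]] * 6
          )

          def plurality_winner(ballots):
              """Each ballot counts only for its first choice."""
              tally = Counter(b[0] for b in ballots)
              return tally.most_common(1)[0][0]

          def irv_winner(ballots):
              """Repeatedly eliminate the candidate with the fewest
              first-choice votes until someone holds a majority."""
              remaining = {c for b in ballots for c in b}
              while True:
                  tally = Counter(next(c for c in b if c in remaining)
                                  for b in ballots)
                  top, votes = tally.most_common(1)[0]
                  if votes * 2 > len(ballots):
                      return top
                  remaining.remove(min(tally, key=tally.get))

          print(plurality_winner(ballots))  # A
          print(irv_winner(ballots))        # B
          ```

          Here A leads on first choices, but once C is eliminated, C's voters push B past a majority; B also happens to beat both rivals head-to-head.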

      • Eric Rall says:

        I’ll take “‘first past the post’ is a terrible name for the voting system it purports to describe”. There’s no “post”, since a candidate can get elected with any number or percentage of votes so long as they got more than any other candidate, and there’s a “first” only in the loosest possible sense of one candidate getting more votes than the others. The system actually described by the name is one where “votes” are cast and tallied over an extended period of time (e.g. by circulating petitions) and the first candidate to hit the required vote tally gets the office.

        Instead of “first past the post”, I strongly prefer “plurality” or (if more precision is necessary) “single-member plurality” as a name for the system. It has the virtue of actually describing the key characteristic of the system: the single candidate who receives a plurality of the votes cast is elected.

        • Harry Maurice Johnston says:

          I think it’s evocative. The most common way of winning is to reach “the post” of 50% + one vote, and after that happens the rest of the votes are irrelevant. The analogy to a foot race is self-explanatory. Whereas the problem with “plurality” is that almost nobody knows what it means.

          I suppose you could be more accurate by calling it “first past the post or whoever got closest to the post” but that’s a bit of a mouthful.

          • Gazeboist says:

            Highest hammer hit or first ring? Still a bit of a mouthful I guess.

          • smocc says:

            Counterpoint: I know what plurality means because I have taken US government classes in school, but I’ve always struggled to remember what “the post” is supposed to refer to in “first past the post.”

            Also, no candidate passed the 50% “post” in the ’92, ’96, ’00, or ’16 US presidential elections, so at least for that significant example it’s not really the most common way of winning.

          • Eric Rall says:

            If 50%+1 is the post, then we’re talking about something like the election procedure prescribed in Robert’s Rules of Order, where if there’s no majority, you hold successive ballots until a majority somehow emerges.

            And I don’t think you get to name the election procedure based on rounding up to a “normal” case where the winner has a clear majority of votes (or first-place votes). Plurality, Instant Runoff, Bucklin, Condorcet, and RRO all look pretty much the same when there’s a clear majority.

            Whereas the problem with “plurality” is that almost nobody knows what it means.

            Google trends seems to think that “plurality voting” and “first past the post” are roughly equal in prevalence.

          • Harry Maurice Johnston says:

            @smocc: “plurality” might be in more common use in the US, I suppose.

            I’m not sure what you mean about the US presidential elections; for example, according to Wikipedia, Trump got 56.51% of the electoral vote. If you mean he didn’t reach 50% of the popular vote, well, it isn’t a FPTP system as regards the popular vote, precisely because of the electoral college.

            Edit: wasn’t thinking clearly about that. In the context of either a presidential or parliamentary election, the post isn’t 50%+1 of the popular vote, but 50%+1 of the electoral college votes (US presidential) or of the seats (parliamentary).

            Second Edit: no, I think perhaps I’m just hopelessly confused in general. That kind of underlines Eric’s original point, I guess.

          • Harry Maurice Johnston says:

            I note that in the case of US Presidential elections, a fuller description would be “first past the post, or if nobody reached the post, the House of Representatives decides”.

            Perhaps we could add boxing to the mix and call it “first past the post or win by decision”. 🙂

          • AG says:

            Elections reps will increase until statistical significance is established 😛

            That should fix the “tiny margin” issue. 😛

          • Placid Platypus says:

            But if anyone gets 50% + 1 of first choices it doesn’t matter what voting system you’re using, since basically all of them will give the same result.

          • Harry Maurice Johnston says:

            But if anyone gets 50% + 1 of first choices it doesn’t matter what voting system you’re using, since basically all of them will give the same result.

            Well … I don’t want to defend my original argument, because although I still find “first past the post” evocative, I do seem to be confused about exactly what it evokes. 🙂

            That said, I don’t think it’s quite that simple. In New Zealand’s MMP system (for example) 50%+1 of the overall vote is enough to give a party control of Parliament, but only barely. Under the old first past the post system we used to use, 50%+1 of the overall vote typically meant a large majority in parliament. Not at all the same.

            Also, I gather the electoral college rules in Nebraska and Maine work differently to the rest of the US, and in principle those rules could swing a Presidential election against the candidate that got 50%+1 of the popular vote in either or both.

            Question: when I fact-checked that, I came across the phrase “winner takes all” describing the electoral college rule used by all US states except Nebraska and Maine. Is this actually the same rule being described upthread as “first past the post” or is there a distinction I’m missing?

          • albatross11 says:

            Here’s how I understand this:

            This only applies to electoral college votes. That is, in the US system, voters don’t vote directly for president. Instead, we vote for electors, who are pledged to vote for a certain candidate. If you vote for Donald Trump in 2020 in any state in the US, you’re voting for an elector who’s pledged to Trump. That elector then casts his vote in the electoral college to choose the president. In theory, the electors can change allegiance after they are elected, but I don’t think that has caused any serious issues in modern times.

            Each state has a certain number of electoral college votes–how many electors they send to the final vote that decides who the president will be. Missouri has 10 electoral college votes, Florida has 29, and California has 55. The number of electoral votes tracks with the number of people the state sends to Congress–2 senators plus some number of representatives determined by the state’s population.

            All states but Nebraska and Maine allocate all their electoral college votes (where they elect the people who cast the actual votes for president) in a bloc. That is, if you win the majority of presidential votes in Missouri, then all Missouri’s electoral votes go to electors who have promised to vote for you. This means that a state whose presidential votes go 51-49 for the Democrats gives 100% of its electoral votes to the Democrat. This is the main way that the electoral college can get very different results than the popular vote–every vote for Hillary in California past 50%+1 is wasted, because all those electoral votes are already going to Hillary no matter what. 51-49 Hillary gives the same result as 99-1 Hillary.

            Maine and Nebraska allocate their electoral votes by districts–someone wins a plurality of votes in each district, and that person’s electors get sent to the electoral college. So you still get that 51-49 Hillary = 99-1 Hillary effect, but only within a district, not within the whole state. I don’t think either state has ever split its electoral votes, but they could.

            You could imagine other ways to do it, like proportional granting of electoral votes. But a political party that dominates a given state has a big incentive to keep the winner-take-all system for electoral votes.

            For example, California is heavily Democratic. Right now, they are a reliable 55 electors for the Democratic candidate. Suppose they switched to a district system like Nebraska and Maine: maybe now, on an average year, they give 40 electors to the Democrat and 15 to the Republicans. The Democrats running California would screw their own party over in future presidential elections by changing to a district system. (As would the Republicans running Texas, for the same reason.)
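
            The allocation rules described above can be sketched in a few lines (illustrative percentages only; a real proportional scheme would need a careful rounding rule so the elector counts always sum correctly):

            ```python
            # Winner-take-all vs. a hypothetical proportional allocation
            # of a state's electoral votes.
            def winner_take_all(votes, electors):
                """All electors go to the statewide plurality winner."""
                winner = max(votes, key=votes.get)
                return {winner: electors}

            def proportional(votes, electors):
                """Electors split roughly by vote share (naive rounding)."""
                total = sum(votes.values())
                return {c: round(electors * v / total)
                        for c, v in votes.items()}

            california = {"D": 51, "R": 49}          # vote percentages, made up
            print(winner_take_all(california, 55))   # {'D': 55}
            print(proportional(california, 55))      # {'D': 28, 'R': 27}
            ```

            The sketch shows the point in the comment: under winner-take-all, 51-49 and 99-1 produce identical elector slates, while a proportional split would track the margin.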

          • Plumber says:

            “….That is, in the US system, voters don’t vote directly for president. Instead, we vote for electors, who are pledged to vote for a certain candidate…”

            @albatross11,

            That’s why I get increasingly irritated by exhortations like “Nothing you do this year will be more meaningful than casting a ballot in November“, because it’s simply not true. 

            I’ve voted since 1986 in the same district near where I was born in Oakland, California and the elections have never been close, and as a Californian my vote is diluted compared to small States anyway.

            I’m tired of being lied to by being told “Your vote matters” when it really doesn’t.

          • @Plumber:

            “Your vote matters” is particularly false for those of us who live in California, but it’s not really true, ex ante, in a presidential election in any state. The probability that one vote will decide the election is very close to zero–well under one in a million if we were using a national majority vote. Calculating it with the electoral system would be hard, but it’s tiny, since it requires that the totals in your state differ by only one vote and that the electoral results differ by no more than the electoral votes of your state.

            Saying that it matters is a statement of religious faith unconnected to reality. When I vote they give me a sticker saying “I voted,” and I don’t wear it.

  3. ManyCookies says:

    Suppose half the population of the world suddenly fades to ash in a few seconds. Now obviously their disappearances will cause secondary deaths; uncontrolled cars will hit people, unlucky planes crash if their pilot and co-pilot fade, surgeons die mid-operation, etc. So how many casualties does this event ultimately produce?

    • johan_larson says:

      I’d guess the secondary casualties amount to a few percent of the world’s population. At any one time, one third of the world is asleep. Another quarter of the time, people are at work or in school, generally doing things where their disappearance does not immediately kill anyone. Most of the casualties will probably come from passengers in cars whose driver vanished or in cars hit by driverless cars. That doesn’t sound like a big group, particularly since cars without a driver don’t usually keep going. With no foot on the gas pedal, a modern car glides to a stop.

      • LTK says:

        Cars with a manual transmission will keep going as long as they’re in gear, even with no foot on any of the pedals. That hasn’t changed recently, as far as I’m aware.

        • Incurian says:

          Maybe I’m crazy, but I think a manual car will come to a stop much faster than most automatic cars, because the automatics seem to shift into neutral (or something) when you take your foot off the pedal, and even as they approach a stop the torque converter keeps them creeping along at 1 mph or so, while the manual car will first slow due to engine compression and then stall.

        • Jesse says:

          If there is no foot on the gas the manual car will come to a stop much quicker than the automatic. And if the driver of an automatic that is stopped vanishes, it will start to move forward. The manual will stall or sit there idle depending on what the gear position was. (or roll downhill…)

        • Paul Brinkley says:

          A quick search reveals cars in the US are overwhelmingly automatic (about 96%), while most cars worldwide are manual (80%). So this would be an interesting case where the US has a different immediate dead-driver experience from the rest of the world.

          However, a manual car with no acceleration will quickly slow until it stalls out if it’s in higher than first gear. A car in first gear will trundle along at 5-10 mph until it either runs out of fuel or bonks into a tree or something.

          One huge factor is the time of day at which the ash event occurs. Whose rush hour does it occur on? This will determine where the most driverless cars turn up, and in the case of manuals, what gear they’re likely in.

          Expect several thousand cars drifting out of their lane at that point. This isn’t terribly dangerous to the drivers left, although it’ll be very noticeable and scary. Most people will either shift one lane over to give room, or hit their brakes, and possibly be rear-ended by a wandering car behind them. Cars on the highway will stall out, probably just smashing together on one side of the highway or the other, while drivers watch in stunned amazement. Cars on busy non-divided streets will be in the worst situation, as some will drift across the center line before drivers can realize what’s happening and pull over. However, the stall should happen sooner in those cases – 25-45 mph zones vs. 55+.

          I don’t think this will amount to even 1% extra casualties, however. It’ll mostly be a lot of random property damage.

          • Incurian says:

            A car in first gear will trundle along at 5-10 mph until it either runs out of fuel or bonks into a tree or something.

            Is this normal? My car will certainly stall in first gear, probably more readily than in higher gears even.

          • Paul Brinkley says:

            This is based on experience. I used to drive a truck in first gear on the farm when I was three years old, while my dad pitched hay out the back. I couldn’t reach the pedal, but all I had to do was steer.

          • baconbits9 says:

            Expect several thousand cars drifting out of their lane at that point. This isn’t terribly dangerous to the drivers left, although it’ll be very noticeable and scary.

            I think you are underestimating how much active management of cars goes on, and how quickly everything will go to shit. For example, when you merge onto a busy highway the car behind you often slows down a little to increase the space, but they only slow down a little and expect you to keep accelerating. If the merger is one of the people disappearing, then the car behind them is going to suddenly hit the brakes, and any car behind them without a driver is going to hit them well before they have slowed below a critical speed.

            This is happening all the time, people are changing lanes, drifting and correcting slightly and getting on and off the highway. Once or twice a week (and I don’t have a commute) I find myself making an opening for another driver who seems to have misjudged, isn’t paying attention or is accustomed to people giving them space. Drivers suddenly disappearing will make all of these things way more dangerous.

      • Davide S. says:

        Most of the casualties will probably come from passengers in cars whose driver vanished or in cars hit by driverless cars.

        Shouldn’t you also factor in casualties caused by drivers who were NOT turned to ash, but had passengers who did, and who freaked out? It’s the most obvious reaction to something like this.

        Likewise for surgeons and anyone else performing life-or-death activities that require intense focus.

      • Nancy Lebovitz says:

        Cars going downhill will presumably keep going until they hit something.

    • jaimeastorga2000 says:

      Nice try, Thanos.

      • ManyCookies says:

        Hey I don’t want to overkill here, it’s not exactly a 50:50 split if another 20% of the population dies in the next few years.

        • albatross11 says:

          There’s an immediate collateral damage effect, and a longer-term effect.

          Immediate:

          About 1/4 of passenger planes will lose both pilot and copilot, all at once. I suspect this will lead to a fair number of plane crashes, as the (50% diminished) air traffic control folks will be overwhelmed with requests for help.

          Every busy highway will become a massive pile-up of death–half the cars just went driverless and crashed, and the rest were the targets/victims. The emergency services are missing half their people, and anyway can’t get through the wreckage to get to people in the middle.

          I don’t know how many people that will be–probably not a huge number, but plenty.

          Longer-term, I’d expect:

          a. Suicide from (say) a mom who had both her kids disintegrate in front of her eyes. If you assume most people have a network of (say) four people who are the most important to them, 1/16 of those people will have lost everyone important to them. A lot of them will help Thanos’ project along a bit.

          b. Dependents who die because everyone who was caring for them disappeared.

          This is probably a big problem for the couple weeks afterward, but not for a year later. But there will be, for example, babies that have both parents evaporate and then they starve to death in their crib.

          Still longer term, there’s social disruption. We have half the population and half the workers, so we *can* get back into balance, but you’d expect things to be pretty rough for a few years till we managed it. There are historical examples where lots of people died in a short time, but I don’t know how many are relevant here. You could imagine this leading to wars or chaos, though I’d expect it wouldn’t be quite that bad.

          In that social disruption, you’d also expect to see fewer available resources for dealing with people with edge-case needs. Cancer patients die because the whole economic network making their super-expensive targeted monoclonal antibody cancer treatment breaks down, and their cancer isn’t responsive to the cheaper still-available broad chemotherapy agents. People on the ISS or research stations in Antarctica stop getting resupply and don’t have a ride home.

          I suspect this wouldn’t be a large fraction of survivors, unless you got full blown collapse / civil war. If you stop being able to make targeted monoclonal antibody therapies, you lose a small number of people; if you stop being able to make antibiotics, you lose lots and lots of people.

          • Rana Dexsin says:

            If you assume most people have a network of (say) four people who are the most important to them, 1/16 of those people will have lost everyone important to them. A lot of them will help Thanos’ project along a bit.

            Doesn’t that depend on the distribution of which people are important to how many and which others at once?
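Under the independence assumption being questioned here (each network member ashed independently, which real overlapping networks would violate), the 1/16 figure does check out; a quick Monte Carlo sketch, with illustrative names:

```python
import random

def fraction_losing_whole_network(n_people=100_000, network_size=4, seed=1):
    """Fraction of survivors whose entire support network was ashed,
    assuming each of the 4 network members is ashed independently with p = 0.5."""
    rng = random.Random(seed)
    lost_all = sum(
        all(rng.random() < 0.5 for _ in range(network_size))
        for _ in range(n_people)
    )
    return lost_all / n_people

print(fraction_losing_whole_network())  # close to 1/16 = 0.0625
```

If networks overlap heavily (e.g. whole families counting each other), losses cluster: fewer people lose everyone, but those who do tend to be concentrated in the same households.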

      • Incurian says:

        Why didn’t Thanos just directly put an end to the conditions besides population that caused suffering?

        • James Miller says:

          Why didn’t any of the heroes who had the chance to talk with Thanos suggest this?

        • albatross11 says:

          Judging from the human experience, he could have simply magicked effective birth control into existence along with piles of wealth usable by each species individually–that would have also cut population, and in a longer-term way than wiping out half the population of the galaxy.

        • ManyCookies says:

          Because Thanos is insane (in a more interesting way than world destroyers usually are). He’s become so enamoured with the image of “This has to be done and I’m the only one who has the will to do it” that he can’t consider these less drastic options, and if someone suggests it he’ll brush them off as a weak willed fool trying to weasel out of what has to be done.

          • Incurian says:

            That makes sense, but the movie, from my point of view (it’s possible I was drinking), seemed to portray him as both saneish and sympathetic, and even smart, except that his plan was stupid. I wish they could have made him less sane while preserving sympathy for him. I may rewrite my head canon to fit.

          • Matt M says:

            Hollywood is gonna have a tough time making a villain with Malthusian impulses seem truly evil. For, uh, reasons…

    • Luke Perrin says:

      I suspect the number of casualties is large; almost 100%. For example it’s not like we have only 50% of the farms that existed before. We have 100% of the farms, each of them with only half the number of required employees. Likewise for power stations, transportation, communication. All of these services would fail until they managed to adapt. The world would have to reorganize itself into a world that was half the size, and it would have to do so before everybody starved to death. I think this is pretty unlikely to happen, especially if we lose power and electronic communication.

      • fion says:

        I’m inclined to disagree. For farms, we still have half the farm workers. Maybe they can only farm about half the area, but the rest of it will just go wild – no biggie. Transportation is probably fine. We still have half the bus drivers and train drivers, and we still have some people in organisational roles who should be able to figure out some kind of timetable based on half staff.

        Power stations and communication I’m not sure about. But I suspect they’d be able to limp on with people working large amounts of overtime until the whole thing can be scaled down effectively.

        • albatross11 says:

          Yeah, we only need half as much food, so if we can figure out how to keep half the farms working, we should be good.

        • add_lhr says:

          Agreed w/ albatross. The food system in much of the world would be one of the few bright spots in the whole exercise. You have half the labor but only half the demand – and you get to use 100% of the land, 100% of the equipment, and 100% of the inputs! For the upcoming season, you can immediately stop using all of your marginal & overfarmed land, focus on the best land and plant it with the best seeds & the recommended amounts of other inputs almost regardless of your economic situation prior to the event, and if your community didn’t have enough tractors & combines before, it sure does now.

          And remember, even if you don’t have enough labor to harvest all the crop in the ground, that doesn’t matter – you only need half of it (and you can – obviously – get more than 50% of the crop from the best 50%+ of the land that you can harvest with 50% of the labor and 100% of the equipment).

          Yes, advanced economies will suffer severely from transport disruptions post-harvest, but staple grains will be in abundance and the several billion people still living in smallholder farming communities will experience a time of agrarian prosperity the likes of which they’ve never seen before.

          In fact, it strikes me that a place like Ethiopia will come out of this relatively better than almost anywhere else:
          – Almost no motor vehicles on the ground or in the air (I would say probably 1,000 Ethiopian citizens are in the air at a given time, and that’s only if it’s daytime in Ethiopia!)
          – The government has made vast investments in decentralized provision of basic health & ag extension services, such that every village has between 3-5 of each and they are at that part of the ‘S-curve’ where most gains in living standards are coming from knowledge and basic health & ag inputs rather than advanced inputs & high tech equipment – so with 1-3 ag & health workers still left in each village, plus 100% of the health facilities, medicine stocks, grain storage, basic ag machinery, you are looking at windfall gains when you stop relying on the worst 50% of your land
          – plus that worst 50% of the land is still the source of basically all disputes today in the country, so that immediately goes away

          Ethiopia’s major imports are petroleum (cars will be an issue but they’ve got plenty of hydro for electricity with only 50% of the demand), palm oil (but they have sunflower & soy, so this is fine), sugar (but they have beets, so again fine), and heavy machinery to support their construction boom (…plenty of vacancy now), so they’re basically fine on that front as well.

          • baconbits9 says:

            You can make the opposite argument, the closer you are to subsistence the smaller the hit you can take. Americans can have their real purchasing power cut in half and still afford enough calories to live, that is a lot less true in Ethiopia.

            Yes, advanced economies will suffer severely from transport disruptions post-harvest, but staple grains will be in abundance

            Eventually, but currently in the US staples are shipped from long distances. Farmers’ markets tend to carry specialty items, with few things that could be turned into staples, and the Midwest is heavily dependent on fertilizers produced elsewhere and on modern irrigation, which requires quite a bit of functioning infrastructure.

            the several billion people still living in smallholder farming communities will experience a time of agrarian prosperity the likes of which they’ve never seen before.

            Maybe, but a lot of these communities would suffer immensely and just disappear with half their population gone.

      • baconbits9 says:

        In the long run, if you survive the transition, farming should be fine. The transition, though, is really rough. Every major highway is going to be blocked; only places with virtually no traffic and long straight shots, or with bumper-to-bumper traffic, will be without massive fatalities. Every car that is going into a bend at the time of the disintegration is going to crash well before it slows down significantly, and every driver who sees such a crash is going to hit their brakes. Massive pileups are normally rare because when you hit your brakes, the guy behind you probably hits his; but if half of the cars behind you lose their drivers, they won’t hit their brakes, and some of those are going to be semis going 60+ miles per hour. It’s going to be a horrific minute.

        After that minute there are going to be few ways in or out of these major cities, and there is going to be mass panic and no organizational structures.

        The cities that get it the worst timing and luck wise are probably just going to burn, cities that get it mildly bad are going to be looking at the worst disaster ever and aren’t going to be shipping food, water and equipment out to help these cities.

        • ManyCookies says:

          How would you clear busy highways of a bunch of stranded, abandoned cars anyway? Would you have to slowly tow all the cars off the road or something?

          • baconbits9 says:

            If every car were a stick shift you could drive a lot of them out of the way, even if just over to the shoulder, because they would stall out and their engines would stop. Any automatic that wasn’t in a serious enough accident to stop its engine would idle until it ran out of gas, meaning you are towing or bringing gas for each of them.

            Moving cars is doable, but a jackknifed semi takes a lot of time to clear even with emergency personnel and experienced tow truck drivers. I can’t even imagine how long it would take if you also have dozens of cars piled up on both sides of the truck, and probably rotting corpses to boot.

          • The Nybbler says:

            How would you clear busy highways of a bunch of stranded, abandoned cars anyway?

            If you’re trying to pretend the world hasn’t gone mad, you have to tow them all and keep track of where they went and all that, and this will take a very long time. But if you’re treating it as an emergency, a large plow will clear all but the urban highways expeditiously (you just shove the cars off to the side or into the median, and worry about complete cleanup later). For the urban highways you might need a mobile crane (because there’s no place to push the cars), but most likely you can get a path cleared with just your plow.

          • John Schilling says:

            For the urban highways you might need a mobile crane (because there’s no place to push the cars)

            If you’re going to do this, have fun with it. The ultimate mobile crane and a set of designated drop zones near the highways you want to clear.

            I’m hoping both Adam and Jamie will survive, because it won’t be nearly as much fun with only one.

            Alternately, have one of the better Junkyard Wars teams build a mobile auto-trebuchet.

          • James C says:

            Snowplows are remarkably good for beating traffic, as seen on Mythbusters.

    • JPNunez says:

      Google says that either 660K or 1.2M people are in the air at any moment. Let’s go with the higher number and say 300K people die because both pilots on their plane die. I guess a small number will be able to land safely anyway thanks to luckily having another pilot among the passengers, but also a bunch of planes where only the pilot dies may crash if the copilot cannot do anything to fix it (mainly for planes in a crucial part of the flight, like landing). I guess that another problem is having half the ATCs, plus panic in the remaining ones, so people in the air may have trouble landing.

      But it is not a lot of people.

      The number for cars is harder to estimate.

      • Edward Scizorhands says:

        Don’t forget that half the passengers on the planes went to ash, so there are only 150K people dying in plane crashes.
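As a sanity check on the arithmetic in this subthread (taking the higher Google figure and independent 50/50 pilot fates as assumptions):

```python
# Rough model: each of the two pilots is ashed independently with p = 0.5,
# so a plane loses both with probability 0.25. Of the people aboard those
# doomed planes, half were already ashed in the snap itself.
airborne = 1_200_000
doomed = airborne * 0.5 * 0.5        # people on planes that lost both pilots
secondary_deaths = doomed * 0.5      # the half of them who survived the snap
print(int(secondary_deaths))  # 150000
```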

      • RavenclawPrefect says:

        I think planes might be safer than people imagine; IIRC they run almost entirely on autopilot for the middle parts of the flight, so really there’s a window of at least a few hours during which there’s a chance for information to be transmitted to any of the remaining passengers as to how best to land their plane (and I imagine communications networks don’t have so many points of failure that they’d stop working in the immediate aftermath).

    • DragonMilk says:

      Is there a particular part of the world that’s affected or is a vertical slice of the population (half of all occupations)?

      I agree with others and go with the 50% going to 53%

      • JPNunez says:

        It’s a reference to the last Avengers movie. And it’s a random half of the population. Every person has a coin toss whether they survive, everybody goes up in ashes at mostly the same time.

        • Kuiperdolin says:

          Then you won’t really get a vertical slice. Some subgroups are going to be completely wiped out (hope they’re not doing anything very important). Some are going to be entirely spared. Lucky. At first. And then someone starts pointedly asking why *somehow* the one Jewish family in town, or the police department, or the next village down the road had not a single loss, although everyone else had. My guess is it will get ugly fast. (It need not even be true! Although it might well be.)

          And of course it works at every scale, all the way up to the states : probably unrealistic to expect any country to be wholly spared but it’s likely at least one, especially one of the small ones, gets, say, only 25% killed in the Event. And what do you know, the toll is way worse than average for their ancestral enemies. Did Elbonia unleash a terrible secret weapon on the world, with no regard for innocent casualties? And how can we trust them not to do it again, if they were willing to sacrifice a quarter of their own? Oh, no proof, OK, I guess a wizard did it then?

          • fion says:

            I think you’re underestimating the power of large numbers.

            The probability of tossing a coin 10 times and getting at least 7 heads is 17.2%. The probability of tossing a coin 100 times and getting at least 70 heads is much smaller: 0.0039%. The probability of tossing a coin 1 million times and getting at least 0.7 million heads is so small Wolfram Alpha can’t calculate it.

            A small country of a million people will definitely not have only 25% of its population killed. Even the least populous country in the world (The Vatican, population 800) has only about a 10^-47 chance of having 25% or less of its population killed.

            A town of 10,000 will lose between 4,900 and 5,100 people (95% certainty).

            A town of 100,000 will lose between 49,000 and 51,000 people (99.5% certainty).

            So yes, you will see “that one family were all spared” but you will not see “that one town had only 25% casualty rate”. Nowhere in the world. This doesn’t scale like you might expect.
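These tail probabilities are easy to verify exactly with a binomial sum (a minimal Python sketch; the million-toss case underflows double precision, which is why even Wolfram Alpha balks):

```python
from math import comb

def prob_at_least_heads(n, k):
    """Exact P(at least k heads in n fair coin tosses)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(prob_at_least_heads(10, 7))    # 0.171875, i.e. 17.2%
print(prob_at_least_heads(100, 70))  # ~3.9e-5, i.e. 0.0039%
```

The same concentration is what drives the town figures: a binomial(n, 1/2) count has standard deviation √(n)/2, so relative fluctuations shrink like 1/√n as the population grows.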

          • Thegnskald says:

            Advanced large numbers homework problem:

            How many planets with populations of 5 billion would you have to have before you achieved a 50% chance of a planet existing with zero casualties?

            ETA:

            A hint: The odds against a planet surviving without casualties:
            Bar va (gra gb gur cbjre bs bar cbvag svir ovyyvba, nccebkvzngryl.)

          • Kuiperdolin says:

            That’s fair, my gut feeling was wrong, it does not scale up. So maybe many local dustups (heh) at most.

          • Paul Brinkley says:

            If there were enough planets for there to be a 50% chance of one with zero casualties out of at least 5 billion, then there would be no need for Thanos to carry out his plan.

            This seems to hold regardless of the probability of any one person being ashed.

    • rubberduck says:

      This is basically the plot of Y: The Last Man, in which all the men on earth (except the protagonist) die all at once and society has to restructure itself. I don’t remember what death toll was given in the story but it’s a pretty good series so if anyone’s into comics I recommend it.

      • Jon Gunnarsson says:

        The death toll for all men dying would be far worse than if a randomly selected half of the population dies. There are so many crucially important jobs that are done almost entirely by men. For starters, you can expect the power grid to fail, with all of the dire consequences associated with that.

    • arlie says:

      This would depend in part on which 50% suddenly fade to ash.

      I’d expect 3 main causes of death:
      – immediate secondary deaths – passengers, people hit by cars, planes, etc; helpless people whose carer vanishes who aren’t found in time, etc.
      – suicide and other violence, as survivors freak out
      – failures of complex systems which are no longer viable. At 50% it’s not especially likely to lose all of any particular type of skill, but plenty of settlements and businesses will become non-viable, and depending on the results of the freak out stage, this could get worse than it looks at first glance.

      I don’t feel capable of estimating the numbers here. We’ve had other cases of death rates in the 50% range in various times and places, e.g. due to epidemics. It might be informative to examine those cases, but many (most) are very badly documented.

      • John Schilling says:

        – failures of complex systems which are no longer viable. At 50% it’s not especially likely to lose all of any particular type of skill, but plenty of settlements and businesses will become non-viable, and depending on the results of the freak out stage, this could get worse than it looks at first glance

        Yes, there are going to be a lot of cases where you think you’ve got recovery well under way and then, e.g., there are no more spare framistat modulators for medical CT scanners anywhere, because everyone went to the same sub-sub-tier contractor for one key bit and their survivors weren’t able to keep the firm running. So now you get to reinvent the wheel, from incomplete documentation, with people dying while you wait. Repeat x10,000.

        • baconbits9 says:

          I don’t think this is much of a concern, if most of the infrastructure is fine there will be tons of capital to raid and there shouldn’t be a sudden shortage of many things.

          I think the big concern is maintenance of major infrastructure points. How long can the Hoover Dam be abandoned before something catastrophic happens? The levees along the Mississippi? Can you even repair some of these things if they start to fail, with half the population and X% of GDP left?

        • LesHapablap says:

          In the short term, there are at least four networks that we depend on, and if one fails the others fail, or become difficult to repair:

          mobile/cellular network
          road network
          power
          fuel

          It’s important to note that even for extremely important jobs, like running a fuel depot and delivering fuel, you aren’t going to get more than 20% of your workforce to show up in the first few days. Most of the survivors won’t be able to get to work, as they likely have to commute 15+ miles over blocked roads. Others won’t come because they need to protect family from looting, are hundreds or thousands of miles away, or are dead or injured.

          Trying to unblock the roads without a good fuel supply will be impossible. Delivering fuel to critical infrastructure will be impossible. If anything critical breaks, getting parts ordered and delivered with either no fuel, blocked roads, or no communication won’t be possible. Delivering anything in general will be tough because most of the trucks will be stuck or crashed on the roads, and nothing by air.

          If it happened during the summer, California would be entirely on fire due to the hundreds of plane crashes and hundreds of thousands of car crashes. Refugees from California would be flooding toward the north I suppose, if they could get there between the fires, the blocked roads, the traffic jams.

      • Davide S. says:

        – immediate secondary deaths – passengers, people hit by cars, planes, etc; helpless people whose carer vanishes who aren’t found in time, etc.
        – suicide and other violence, as survivors freak out

        I think secondary deaths would be extremely high, because I wouldn’t expect most pilots, drivers, or surgeons who didn’t get turned to ash, but saw someone else turn, to be able to stay focused.

    • Tatterdemalion says:

      I’d guess most of the world’s population over the next few years – I think that society, and hence food production, would probably collapse messily in the aftermath.

    • marshwiggle says:

      I’m noticing that none of the replies so far really mention crime and looting. Perhaps keeping people from doing wrong is just an example of a critical subsystem that can become disrupted, but I think this one is different in a key way. There’s a psychological common knowledge sort of barrier that keeps most people in most parts of the world from breaking stuff and stealing stuff and so on. It’s not pure force that keeps people from open wrongdoing, and it isn’t the pure goodness of human nature. It’s some combination of legitimacy of the law, habit, the belief you’d get caught, and so on. If that gets disrupted badly enough things get ugly fast, especially once you factor in desperation.

      Another factor would be protests that someone fix things, as opposed to finding constructive ways to help.

      Those things might make the cooperation necessary to get things running again harder or even impossible in enough areas to really drive up the casualties beyond the first hour.

      • baconbits9 says:

        One conceptual issue is that when one thing goes wrong there is a fairly obvious point that you can fix, or at least manage. If half of your staff calls out sick one morning you can figure out what to ignore and just suffer through a couple of rough days. If half your staff doesn’t show up, plus none of your deliveries, plus there are downed power lines everywhere from cars running into telephone poles, plus general panic, fear and hysteria what exactly do you do? Where do you start?

      • arlie says:

        I’m noticing that none of the replies so far really mention crime and looting.

        Yeah. I don’t really know what would happen there. With enough unclaimed property, there’s little need for looting, but at least at the freak out stage, no one’s going to be acting rationally, even those who think they are 🙁

        I suspect it will depend a lot on local (sub)culture, grievances, etc. Personality too, in individual cases. Not everyone has much desire to do wrong, or for that matter to behave illegally (not the same thing). Or their desires are very specific, and not affected by this kind of catastrophe.

        Desperation matters too – but remember, there will be twice as many resources per person available immediately after this event as there were before, minus those destroyed during the event itself (cars, planes, etc. and the things they hit). And many people/cultures regard it as perfectly OK to use other peoples’ resources without asking in a true emergency. (E.g. breaking into a vacant house to shelter from a storm, in the absence of a better option.)

        Mostly I think real crime/bad behaviour will be people acting out from hopelessness/despair/general freak out. Plus those who first crawl into a bottle because of freak out, and prove to be violent drunks.

        • baconbits9 says:

          Food is going to be the big looting item if highways shut down even for a few days.

          • albatross11 says:

            That will be a problem, but it will be mitigated by the >50% of the population who are no longer demanding food.

            Are there historical examples where we can see anything like a 50% die off? Wikipedia says the Black Death is estimated to have killed between 30%-60% of the European population, so maybe that’s a starting model, though the situation is different enough I’m not sure how much we can learn from it.

            Wikipedia also claims that the USSR lost something like 13% of its population in WW2, I guess from some mix of war and famine. And nearly a fifth of the Polish population died in the war, about three million of whom were Jews murdered in the Holocaust.

            I’m not sure how to use any of those as models, since they happened across many years. But the fact that Poland and the Soviet Union continued to exist as functioning societies after that death toll (and surrounding destruction from the war) suggests that the Earth would probably manage a sudden 50% death toll plus surrounding collateral damage.

            Where else might we look for examples?

          • Matt M says:

            That will be a problem, but it will be mitigated by the >50% of the population who are no longer demanding food.

            This makes some sense at a very high theoretical level. But let’s get practical for a second.

            Consider a non-wealthy family that has ~5 days worth of food stored in their pantry. Given the general state of chaos, it’s quite likely that their source of income is in chaos and they aren’t back to work at a place able to pay them. Even if they have savings, it’s not likely to be in the form of “cash in the mattress” and they may not be able to access it. And that’s all assuming the grocery stores are up and running and the distribution networks are back in gear to replenish them.

            What are they going to do on Day 6? The fact that there’s a lot more available farmland per person, or the fact that there’s a lot more food stored in some warehouse somewhere per person does them little to no good. They still need food and have virtually no means to obtain it, aside from looting.

            More likely than this, though, is that the criminally inclined see this coming and start their looting on Day 1. Any locations known to have large stockpiles of high-value items that aren’t well secured (and by secured I mean armed guards willing to fight, not a latching gate) get taken over by aspiring warlords quick. Perhaps the warlords are willing to trade some food to starving families, but the terms are likely not favorable, as the families might not have much of value that the warlords actually want.

          • fion says:

            @Matt M

            Consider a non-wealthy family that has ~5 days worth of food stored in their pantry.

            Good news is it’s now ~10 days cos half the family is dead!

          • Hoopyfreud says:

            Fun fact about disaster response: this almost never happens.

            The idea that social morality is nothing but a veneer over selfish impulses is absolutely not borne out in historical disaster response. Instead it’s overwhelmingly true that people express a desire to help others and put forth tremendous effort for little or no personal gain, particularly in their local communities. While disaster profiteering absolutely does happen, among communities directly impacted by disasters, sharing and self-sacrifice are extremely common, while taking more than one needs from communal unprotected resources is quite rare.

          • Matt M says:

            I’d be interested to know how this works when the disaster is several orders of magnitude larger than any other disaster ever experienced in the history of mankind, one where quick and speedy recovery is assumed to be impossible, and where a return to the previous status quo is, shall we say, unlikely.

            There are good reasons not to loot during a hurricane. Mainly that, as crazy and chaotic as things seem in the middle of it, they’ll be back to normal enough that the government will regain control and will heavily punish such behavior within a matter of days. Furthermore, the community will re-establish itself and those who behaved badly will be punished socially, even if not literally jailed.

          • Hoopyfreud says:

            I mean, I’d be interested too, but it seems more reasonable that people would behave in a manner consistent with previous disasters – especially when the major sources of disruption can be addressed pretty effectively by people simply being willing to do necessary work. Evidence shows that in such situations, people are generally willing to pitch in, and unwilling to mercilessly exploit common resources.

            Forgive me for saying so, but I find it *incredibly* strange that you of all people would assume that:

            A – people will immediately start cleaning out grocery stores with guns
            B – nobody will be willing to stop people from cleaning out grocery stores with guns
            C – nobody will organize to punish the people who clean out grocery stores with guns

            Finally, why on Earth would the community fail to re-establish itself? People are *dead*, but the survivors aren’t scattered across the Earth. People tend to rely on their existing communities in situations like that.

          • albatross11 says:

            Matt M:

            One thing about this disaster is that there would still be an existing government, police force, legal system, etc. They wouldn’t be able to call on the state cops or national guard to help them if they got in over their heads, but they would have the legitimacy to, say, deputize some respectable surviving members of the community, impose a dusk-to-dawn curfew, and enforce laws.

            In the US, there are governments at basically every level–city, county, state, federal. So if the mayor is still alive, he’s got the legitimacy and moral authority and probably the willingness of city employees (especially cops) to take his orders. That means if he’s not a complete idiot, he can probably respond to the crisis in ways that will make things better fairly quickly. He can tell the road dept. to start clearing the highways, maybe set up some kind of food rationing, start organizing to care for kids left with no parents, etc.

            I think panic and civil disorder would make everything a hell of a lot worse. I imagine that this kind of disaster would end up with some places getting the kind of leadership they needed and basically muddling through, and other places kind of falling apart – places where the local political leaders who survive are utterly inept, or are driven crazy by the situation, or are locked in some godawful power struggle, and so make a mess of it. On the other hand, places where there’s order and some kind of continuity of government will still be capable of functioning. They may have problems with supplies from far away, but that’s a lot easier to deal with when you’ve actually got someone more-or-less in charge who can make decisions, contact other people still maintaining order, etc.

          • Matt M says:

            I suppose a few places might keep order, and I agree with whoever above said that pre-existing organized militaries would probably be best positioned to “handle” this sort of thing.

          • John Schilling says:

            One thing about this disaster is that there would still be an existing government, police force, legal system, etc.

            That isn’t at all clear. Military units that suffer ~50% casualties in short order usually become combat-ineffective due to demoralization, often with rapid desertion as the survivors take an “every man for himself” attitude and exhibit a complete disregard for the moral authority of their commanders to order anything different. And the forces which do retain some measure of integrity at 50+% casualties, usually benefit from a clear understanding of how those casualties happened and how their staying in the fight can prevent further catastrophe.

            The idea that if 50% of a police department spontaneously combusts, the remaining cops will be walking a beat and keeping order the next day, is unproven and dubious. Likewise the bit where everybody else will accept the moral authority of a mayor or chief of police who couldn’t keep half the force from being ashified and the other half from deserting.

          • baconbits9 says:

            To be fair half of those mayors and police chiefs aren’t around anymore.

          • Matt says:

            John Schilling:

            That isn’t at all clear. Military units that suffer ~50% casualties in short order usually become combat-ineffective due to demoralization, often with rapid desertion as the survivors take an “every man for himself” attitude and exhibit a complete disregard for the moral authority of their commanders to order anything different.

            Do you really think you can extrapolate from soldiers at war who take 50% casualties and then decide to escape the war, to soldiers in garrison after an event where the entire world has taken 50% fatalities and there is no apparent escape?

            Where will they run to?

      • fahertym says:

        The crime element would be beyond anything we can imagine.

        Picture the inhabitants of a massive Beverly Hills mansion vanishing. Some survivors could take over the mansion and defend it with guns. Who’s going to stop them? The police will be far too busy. The courts will be overworked for decades trying to process all the inheritances. Stuff like this could happen everywhere for years until some sort of equilibrium is established and order restored.

        • ManyCookies says:

          And that’s when the Avengers finally save the day and release everyone from the soul stone!

        • James C says:

          Does it matter though? The aggressive up-sizing is definitely illegal but something like an eighth of all homes were just vacated and the property market crashed overnight. The police have a lot more pressing issues than a couple squatters with guns taking over a near worthless McMansion. I’m sure the courts will take an interest in a few years when they get around to the case, but it’s not a pressing survival or even an economic issue.

        • The Nybbler says:

          Courts? If this happens I expect organizations which are relatively resilient to losing half their people to take over. That means IBM, Colgate-Palmolive, 3M, Google, GE, Micr.. no, no, really it means militaries.

          • gbdub says:

            Government agencies seem to have the largest proportion of “make-work drones” to “actually essential personnel”, so in that sense they are probably more resilient than e.g. Google. Then again they are less adaptable. Tough call.

    • sandoratthezoo says:

      Zero because it gets retconned away in the next movie. Or, like, maybe 6 or so, depending on who is done with their contracts.

      • albatross11 says:

        Yeah, the probability of Marvel leaving Spiderman and Black Panther as piles of ashes on the ground is very small.

  4. vaaal888 says:

    Does the “post anything you want” policy apply to external links? Because in that case I would be interested in sharing this little piece of mine about political theory and government seen through the lens of a dynamical-systems framework: https://medium.com/@valeriobiscione/governments-in-a-multidimensional-space-59e1c2751277
    But, especially, I would like to have opinions from anyone who takes those 6 minutes to read the piece.

    • phil says:

      Interesting piece, not sure the idea of attractors made a lot of sense to me/not sure I agree with it.

      (Assuming I’m grokking your piece) In ‘close to hell’ scenarios I’m not sure it makes sense to think of the system as being attracted to total hell; to my mind it makes more sense to think of it as a system that’s out of equilibrium, which will likely settle on the first workable equilibrium, even if it’s not a particularly good one.

      • vaaal888 says:

        Hi phil, thank you for reading my piece.
        Yes, this is my current view as well. I don’t think that the worst possible government is actually a stable one, whereas a heaven government *may* be stable. But this is purely speculative and based only on personal intuition.
        But even if you don’t agree about the hell government being an attractor, you may still agree that there are probably attractors, and that you can view the government as a dynamical system whose movement is partially determined by its internal mechanism. That’s the concept I wanted to express 🙂
        I am gathering interest in this idea, as the topic would ideally be really well suited to agent-based simulation modelling.

        • Murphy says:

          This kinda reminds me of some old work I saw on simulating cells.

          Living cells tend to be fairly stable most of the time. They’re fantastically complex, and they’re governed, in turn, by the unchangeable characteristics of reality and physics, by the generally slow-to-change genome of the organism, and finally by the environment they find themselves in.

          They contain a lot of regulatory systems, with repressors and promoters which get switched on and off as the levels of various enzymes and chemicals change.

          So, there was an interesting paper a few years ago where a team tried to simulate human cells. A titanic task given that there’s still plenty we don’t fully understand. But they made a strong attempt using a large fraction of known regulatory systems.

          This gave them what was basically a state machine.

          One of the interesting outcomes was that they could then try setting things up with different initial levels and perturbing the system in various ways.

          Most roads lead to apoptosis, self destruction, suicide.

          But they did find various attractors, stable states that, if the state got close enough they’d end up in one of these stable states and a really nice result was that most of these stable states were equivalent to known major tissue types.

          Of course in biology there’s no good/bad, heaven/hell.

          If you imagine human nature, the ways humans respond to incentives and various fairly-non-mutable aspects of the human mind as the hard-to-change equivalent of genetics then you might have a similar situation for government types. You might find a large selection of stable organization types, outside which organizations either don’t last long before self destructing or tend towards one of those stable states.

          Though governments live in a much more mutable universe. Tech advances can mean that what would have been an unstable government a few decades ago can be stable now because surveillance tech allows things which would have been impossible before.
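
          The flavour of that result can be reproduced with a toy Boolean network (the three regulatory rules below are made up for illustration, not taken from the paper): each state maps deterministically to a successor, and every trajectory eventually falls into a fixed point or a cycle – one of the network’s attractors.

```python
from itertools import product

def step(state):
    """One synchronous update of a made-up 3-gene network (values 0/1)."""
    a, b, c = state
    return (b & c, a, a | b)  # invented regulatory rules

def attractor(state):
    """Follow the trajectory until it revisits a state; return the cycle."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    return frozenset(seen[seen.index(state):])

# Map every one of the 2**3 states to the attractor it falls into:
basins = {s: attractor(s) for s in product((0, 1), repeat=3)}
# Three attractors: two fixed points and one 2-cycle.
```

          Swap “gene” for “institution” and the analogy to government-space is direct: which attractor a trajectory ends up in depends only on which basin it starts in.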

          • vaaal888 says:

            That’s very interesting, and is exactly the kind of thing I would like to pursue in terms of political theory. I feel that we as humans strongly need a system for predicting a government’s trajectory.
            On the other hand, it is quite obvious that we are going to have trouble funding such a system: the government has no incentive to improve itself.

          • Plumber says:

            ^ “…the government has no incentive to improve itself…”

            @vaaal888,

            As an employee of a government I wish that were true, but much to my annoyance ambitious or idealistic higher-up desk jockeys are continually issuing directives and memos to keep us from tried-and-true ways that work, in favor of buggy reforms that keep us from doing the actual core purposes of our jobs (fixing things) and instead pile on additional paperwork and useless meetings, delaying physical repairs.

             Change is the enemy of progress!

          • Tenacious D says:

            Fascinating. Do you still have a link to that cell simulation paper?

        • phil says:

          I think it would help readers to grok your thesis if you made it more explicitly tangible.

          You talk a little bit about the UK government and the Italian government, but you do so with a considerable degree of abstraction.

          I think it would be a stronger piece if you picked a specific chain of events that actually happened, and described how you think those specific events demonstrated your more abstract principle.

          After that, pick a specific state of the universe, and use your abstract principle, to make a falsifiable prediction about how you think that state will change.

          (I think this sort of toggling back and forth between the tangible and the abstract is one thing SSC does really well)

          Just a suggestion.

          • vaaal888 says:

            That’s a good idea. I am really uninterested in real-world politics, I am more interested in meta-politics, but I think I must put in some effort if I want to convince anyone of my framework.

          • phil says:

            It’s a tricky thing to implement too,

            lots of readers will just grasp onto whatever example you pick, and have opinions about those examples just at the object level, and never make it to your abstract level.

            But it’s hard to get people to see the point of your abstract idea if they don’t see the tangible effects.

          • phil says:

            Also, once you’ve made a prediction, readers will judge it based on whether or not it came true, despite it being a sample size of one regardless of the outcome.

            I still think you should do this, abstractions are mainly valuable for the predictions they help us make.

    • Nancy Lebovitz says:

      The policy seems to be that occasional links to one’s own work are alright, but it shouldn’t happen in every post.

      This policy is relaxed, I think, if your work is very popular.

    • ADifferentAnonymous says:

      Honestly, I wasn’t that impressed, but then I was already well familiar with the phase-space trajectory way of thinking, which was indeed a colossal insight the first time it was explained to me.

      Coincidentally, I just read this highly relevant Caplan piece, which puts forth a hypothesis about government-space trajectories – a good concrete example of the kind of thinking you describe, whether or not one agrees with it on the object level.

      • vaaal888 says:

        Interesting, I arrived at these ideas completely independently. Do you remember who else supports them, apart from Bryan Caplan?

        • ADifferentAnonymous says:

          I’m not sure I’ve seen it explicitly spelled out that one should look at politics this way, though IIRC some of Scott’s Moloch stuff gestures in that direction.

          But mostly I encountered it through physics and internalized it enough that it’s intuitive to look at politics that way. I don’t remember exactly where–I read a bunch of popular-science books in high school (I think one of the Science of Discworld books may have covered phase space?), then studied it formally in college, and emerged with the phase-space view firmly in my conceptual toolkit. It’s pretty much the only way to make a lick of sense of quantum physics IMO. Theoretical computer science also might have helped, what with the examination of state machines.

          The wiki on ‘phase space’ should give you some idea of how the concept works in STEM if you aren’t familiar.

    • Thegnskald says:

      The attractor state stuff is one of the reasons I drifted away from libertarianism; it isn’t a stable state of affairs, and there are many different attractors surrounding a libertarian state. The state we have appears to be one of the least bad attractors, so resetting to libertarianism runs the risk of falling into a worse state.

      I disagree that the starting state defines a trajectory, however. I would say it is more that the starting state makes certain trajectories more likely. I also disagree that we can predict the final state; if you reset the US government to its state as of 1890, to pick a year mostly at random, you wouldn’t arrive back at the 1990s-era US government after a century. The social context has changed.

      A key element in this is that technology advances – this includes both the technology of statecraft and anti-statecraft (criminal, basically) technology. You can’t roll back the statecraft technology and expect to return to a previous state; the anti-statecraft technology has rendered the previous state of statecraft technology obsolete in a way that it wasn’t in its own time.

      The tax code is a good place to look at this: If a tax law worked for a century, then somebody discovered a loophole in it, the tax law no longer works. You can’t get rid of the law that closed the loophole and expect tax to work the same way anymore.

      Anti-statecraft technologies are a good example of an unpredictable element influencing trajectory. There are many more.

      • vaaal888 says:

        You are right: a more correct model would be that of a probabilistic dynamical system, where there is a probability distribution over trajectories. I am not sure about the mathematical formalism involved in these types of models, though…
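
        One standard formalism for a “probability distribution over trajectories” is a Markov chain: a set of states plus a transition matrix that pushes a distribution forward one step at a time. A minimal sketch – the state names and every probability below are invented purely for illustration:

```python
STATES = ["liberal", "hybrid", "authoritarian"]

# P[i][j] = probability of moving from state i to state j in one step.
P = [
    [0.90, 0.08, 0.02],   # liberal mostly stays liberal
    [0.10, 0.60, 0.30],   # hybrid regimes drift both ways
    [0.01, 0.04, 0.95],   # authoritarian is "sticky" -- a near-attractor
]

def evolve(dist, steps):
    """Push a distribution over states forward `steps` transitions."""
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return dist

# Start certain of being in the hybrid state and watch probability
# mass accumulate in the sticky state over time:
long_run = evolve([0.0, 1.0, 0.0], 200)
```

        The attractor language carries over: the stationary distribution this converges to is the probabilistic analogue of a fixed point, and the sticky state hoards probability mass the way an attractor hoards trajectories.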

      • John Schilling says:

        The state we have appears to be one of the least bad attractors, so resetting to libertarianism runs the risk of falling into a worse state.

        The state we have now is not an attractor. It is neither static nor orbiting an attractor, but exhibiting secular movement along several axes. Some of which are pleasant, others not so much, and some (e.g. debt-to-GDP increasing without bound) seem positively apocalyptic.

        It may be that there is no attractor, that the old theories of cyclic rise and collapse are fundamentally true. If so, the best you can do is target the beginning of the good-government period of the cycle, get to that stage ASAP and then try to at least slow Cthulhu/Moloch’s inevitable “progress” thereafter.

    • Nootropic cormorant says:

      This is how I understood Orwell’s 1984: as being about the scary idea that all governments converge into an attractor state of absolute repression, rather than merely being an anti-Leninist pamphlet like Asimov thought.

      • Tarpitz says:

        Count me among those who think the appendix implies that while such a state is an attractor, it’s neither inevitable nor inescapable.

      • Mark Atwood says:

        Asimov was a fan of some sort of cozy, soft urban totalitarianism, where there was one state that had also rolled all economic activity up into itself, and if there were any corporations at all, they were about as tightly welded to the One State as Ma Bell was to the US government.

        Asimov is an author who does not improve on rereading as I get older.

        • Sometime back I reread a pretty good Asimov novel, I think the first of the robot detective ones, and a Heinlein juvenile, Podkayne of Mars. The Heinlein came across as much better.

    • Plumber says:

      @vaaal888

      I’m probably too poor of a reader to get what you meant.

      Government as attractor?

      Multidimensional space of government?

      Sorry, it went past me.

      • Nornagest says:

        Just math jargon. An attractor is a region in a chaotic system that that system tends to settle into over time; if you get near the attractor you tend to fall into it, and if you start inside the attractor you tend to stay in it. Depending on the system it can be a point or a volume or something more complicated. Saying that government is an attractor is saying that once something’s a government it tends to stay a government unless it gets really out of whack for some reason, and also that things that aren’t governments but are similar to them will tend to evolve into governments.

        Similarly, the “multidimensional space of government” is a mathematical way of visualizing all the different possible governments. Math and physics have this concept of configuration spaces, where you take all the ways a particular system can vary and map them to dimensions in a (usually high-dimensional) space; this has lots of useful properties. For example, you could imagine a configuration space for In-N-Out hamburgers in two dimensions, with slices of cheese on one dimension and burger patties on the other; (0,1) in that space then would indicate a plain hamburger, and (2,2) a Double-Double.
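
        The “falling into an attractor” behaviour is easy to see numerically with a standard textbook example (the logistic map – nothing government-specific): iterating x → r·x·(1−x) with r = 2.5, almost every starting point in (0, 1) ends up at the same fixed point x* = 1 − 1/r = 0.6. That point is the attractor, and (0, 1) is its basin.

```python
def iterate(x, r=2.5, steps=100):
    """Apply the logistic map x -> r*x*(1-x) repeatedly."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Very different starting points all settle onto the same attractor:
endpoints = [round(iterate(x0), 6) for x0 in (0.1, 0.35, 0.6, 0.9)]
# endpoints == [0.6, 0.6, 0.6, 0.6]
```

        (At r = 4 the same one-line map becomes chaotic, which is why attractor talk usually comes packaged with chaos theory.)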

        • Plumber says:

          ^ “Just math jargon. An attractor is a region in a chaotic system that…”

          Thanks @Nornagest,

          Sadly, it’s stuff past my education level.

          • vaaal888 says:

            I understand, sorry if it wasn’t clear. I certainly don’t use math formalism to confuse people, just to make it clearer, but it doesn’t always work

          • Plumber says:

            “I understand, sorry if it wasn’t clear. I certainly don’t use math formalism to confuse people, just to make it clearer, but it doesn’t always work”

            No need to apologize @vaaal888, I think I just have far less formal education than the majority of SSC commenters, so stuff will go over my head that others will catch.

          • Pattern says:

            The claim is that:
            Some governments change. Other governments are black holes – the closer you are, the faster they suck you in, and the harder it is to escape. Once you get too close it’s impossible to escape, and you’re stuck there. If you want to change things at that point, it’s going to take a revolution. You’ll have to start over. (If you don’t want things to end up where they started, you should make the new government far away from that black hole.) Additionally, maybe some ‘black holes’ are nice, while some definitely aren’t.

  5. jaimeastorga2000 says:

    Why is Rotten Tomatoes’ best-movies feature so ridiculously biased towards recent films? Consider the animation category. The first animated feature was Snow White in 1937, which to their credit is included. Since then, the Disney Animated Canon has been pretty good at releasing one animated feature every year and a half or so, and there have been a bunch of competitors as well (my favorite Western animator is actually Don Bluth). That’s on top of an entire parallel animation industry in the form of anime. But you wouldn’t know it to look at the Rotten Tomatoes list of the 100 best animated movies ever made, which includes just 27 movies released before the year 2000! Talk about how Culture is not about Aesthetics!

    • Dutch Nightingale says:

      This is not just a characteristic of Rotten Tomatoes. The pattern of more recent films having more “rating coverage” comes up over and over again if you spend a lot of time exploring film ratings data.

      • Civilis says:

        It looks like the rating coverage issue also affects foreign films. Most of the films I recognize as foreign have a lot fewer reviews, and have a higher rating than the films around them. It could be that the analytics work better with more reviews.

        As an example:
        52. 91% How to Train Your Dragon 2 (2014) 173 reviews
        53. 94% My Neighbor Totoro (1988) 49 reviews
        54. 96% Long Way North (Tout en haut du monde) (2016) 52 reviews
        55. 89% Big Hero 6 (2014) 207 reviews

        The two foreign films have a quarter of the reviews of the two domestic films, despite one being from 2016, and both the foreign films have a higher positive review percentage than the domestic films with similar ratings.

        An interesting note: Ponyo (2008) and Howl’s Moving Castle (2005) have a number of reviews more reminiscent of domestic animated films, and don’t show the higher positive review percentage than the films around them in the list.

        • DragonMilk says:

          Random rant – after much hype, I bought my little sister My Neighbor Totoro and it was the worst movie purchase I’ve ever made. It ended so randomly and I felt like you get such terrible time value out of it.

          BOOOOOO

          • rubberduck says:

            Does anyone actually watch My Neighbor Totoro for the plot? I thought it was all about the adorable character designs and whimsical atmosphere. Even at anime conventions, where you’d think people would be more into discussing these things, I’ve never heard any references to the actual story, despite everyone present having seen the movie.

          • Hoopyfreud says:

            I don’t enjoy the movie *for the sake* of the plot, but the underlying story of young children trying to deal with unexpected and unintelligible fear and change… has a lot of resonance for me. I wouldn’t enjoy the movie as much without it.

          • theredsheep says:

            The plotlessness was half the reason I hated it. The other half was the part where most of the dialogue seemed to consist of small children hearing a word, then repeating it in a shriek. But then I watched the thing something like ten years ago.

            Fun story: it was originally played (in Japan) as a double feature with Grave of the Fireflies. Grave of the Fireflies is possibly the best anime work I’ve ever seen; it’s also the most depressing. You get to watch a pair of kids die over the course of two hours. That’s not a spoiler; it’s clear at the beginning that they will both die, the movie is about how they get that way. I could only get through it in twenty-minute increments, with breaks for deep breaths. Will not watch again. I really want to know what the logic was behind pairing it with frigging Totoro, of all films.

          • Nornagest says:

            I watched Amélie as a double feature with Requiem for a Dream once. Grave of the Fireflies and My Neighbor Totoro sounds like it’s coming from the same place.

        • Nick says:

          An interesting note: Ponyo (2008) and Howl’s Moving Castle (2005) have a number of reviews more reminiscent of domestic animated films, and don’t show the higher positive review percentage than the films around them in the list.

          Didn’t they get an American release via Disney? I remember ads for Howl’s Moving Castle as a kid.

        • AG says:

          There is definitely a sort of isolation of anime specifically, some of it self-isolation. Most media-review places like AVClub nonetheless decline to review anime (other than old-school classics), so anime websites have started doing it on their own. The few times that anime films get wide releases in the West, no one from film websites goes to see them, both from never knowing about them and from not wanting to see them.

          And then other international animation is even more obscure, with other Asian animation only sometimes getting crossover popularity from the anime crowd.

          Basically, there is no true “general media” website, because of self-sorting. There is an unspoken expectation that weeb stuff doesn’t count. (See also, for example, how essays on the art-ness of video games end up ignoring anything from Japan.)

          • Gazeboist says:

            Or that the art-ness present in Japanese videogames is not fundamentally rooted in their game-ness.

          • Nornagest says:

            See also, for example, how essays on the art-ness of video games end up ignoring anything from Japan.

            I dunno, Shadow of the Colossus shows up in those essays a lot.

    • Brad says:

      Suppose for the sake of argument that Michael Jordan in the prime of his career would be a mediocre player in today’s NBA. Would that disqualify him from being the greatest basketball player ever? (I’d argue yes.)

      • jaimeastorga2000 says:

        So your argument is that modern movies are better than old movies in an absolute sense, and that most old movies that were considered great classics when compared against their era’s competitors would be considered mediocre if released today and compared against modern competition?

      I don’t know, there is definitely some of this going on; few people enjoy watching black and white movies, and nobody but the most hardcore of movie buffs enjoys watching silent movies. But Technicolor talkies started coming out as early as Snow White and the Seven Dwarfs (1937), The Wizard of Oz (1939), and Gone with the Wind (1939). There have been other improvements since then, but none as fundamental as the advent of sound and color (and, arguably, some of these advances have been for the worse; compare the puppets and models from the original Star Wars trilogy to the CGI of the prequel trilogy). Considering how much more material there is from 1937–2000 than from 2000–2018, I don’t think that can fully account for the difference.

        • Harry Maurice Johnston says:

          I think there may have been at least some significant methodological improvements, not just technological. Pacing, for example; old movies do sometimes drag things out a bit much.

          My kids prefer the Star Wars prequels to the originals. I have the impression they are not unusual in this respect.

          • Gazeboist says:

            I think that’s more the shine wearing off the originals, plus them being kids. New Hope is fundamentally linear, Empire is solid but a middle movie with a dark ending, and Return is proto-TFA.

        • Brad says:

          I don’t think The Wizard of Oz would be a success if it were released today. It’s not exactly a kids’ movie but it’s not exactly an adult movie either. It’s a live-action musical, which it turns out is not a great form for movies. There’s not much plot or character development, and the action is ho-hum. The female lead would not qualify as a sex symbol. Nor would any of the male leads.

      • achenx says:

        Hm, I think I disagree with that. I don’t have enough basketball knowledge to cite specifics, but let’s talk baseball. “Greatest player ever” can certainly be argued, but “Babe Ruth” is a common suggestion. Now, if you took your Delorean to 1929, picked up prime Babe Ruth, brought him back to the future, and stuck him on the 2018 Yankees, he would almost certainly be ridiculously overmatched. Advances in training, nutrition and conditioning, and even just the environment of the game (strategy and tactics, etc) would be too much for him to overcome.

        Does that disqualify him from consideration as best player ever? You apparently would say yes, but I don’t think so. In my mind it’s more reasonable to compare him to his peers in the 1920s and 30s. And a lot of the more advanced baseball statistics try to do this — not solely using the literal numbers (batting average or on-base percentage, number of home runs, ERA or WHIP for pitchers) but adjusting them as “where did they rank compared to other players in that season”. So for example, Pedro Martinez having a 2.07 season ERA is good no matter what the year, but specifically today it would be very good; having it in 1968 would be pretty good; when he actually had it in 1999 it was ridiculously good. “2.07” is what actually happened, but it doesn’t tell you the context. Sticking with the Babe, 60 home runs is always an elite number, but when he did it, that was more than most other entire teams. “Elite” is an understatement.

        Assume that “talent at hitting and/or throwing a baseball” is innate. What if you instead set your Delorean for 1915, and then brought 1-year-old Babe Ruth to 1996? Then in 2018 you’d now have a 23-year-old Babe raised with current training and conditioning and so on. Would he be able to dominate MLB now the same way he did in the 1920s and 30s?

        • Matt M says:

          I honestly see the merit in having both discussions. On the one hand, being the best relative to your competition is very noteworthy.

          On the other, the fact that any current MLB team’s 3rd-best pitcher would strike out Babe Ruth 9 times out of 10 also seems like something we can’t just pass over.

          • meh says:

            Who should be in the hall of fame, Babe Ruth or any current MLB team’s 3rd-best pitcher?

            This ‘hall of fameiness’ is what a lot of people are thinking of when they say ‘greatest’.

          • Protagoras says:

            It seems unlikely that he would strike out 9 times out of 10 against an average modern pitcher. The impression I get is that probably the biggest difference between Ruth’s era and the present is that a great number of modern pitchers throw as hard as the elite power pitchers of Ruth’s era (there are other differences, but the other differences seem to be smaller), and Ruth could hit the elite power pitchers of his era. Facing effectively only elite pitchers would surely lower his productivity, but not by as much as you suggest.

          • AG says:

            @Protagoras

            Huh, that gets into the question of how much training and/or knowledge of meta game helps in the batter’s box. How long would it take for Babe to adapt to the existence of the various trick pitches? Do modern day batters get a leg up for knowing common pitching patterns? It’s as much about the mind game as it is about power, though higher power enables more mind-gaming.

          • Tarpitz says:

            Interesting. I think it’s very likely that 1930 Donald Bradman, with 1930 training and nutrition, would be overwhelmingly the best test match batsman of this or any era he was dropped into. This is partly because Bradman was a genius unequalled in sport, not just cricket, but mostly because I really don’t think test cricket has changed all that much since the First World War – far less than other sports. I had assumed much the same would be true of baseball, but I guess that assumption was unwarranted.

        • Brad says:

          I see what you’re saying but I don’t think it’s “greatest”. That I think is like fastest mile—it’s an absolute measure not a relative one.

        • tocny says:

          A modern example I believe would be Albert Pujols. He’s reached 3000 hits, which basically guarantees him to be entered into the Hall of Fame. But do his seasons since being traded to the Angels change his success with the Cardinals? For perspective, he has amassed 6.7 WAR while playing for the Angels over 7 years, compared to an average of 7.3 WAR per year with the Cardinals.

      • secret_tunnel says:

        This is something I think about a lot with the original Star Wars trilogy. They’re still great movies, but by defining the genre they’ve become more stale themselves over time.

        It’s ironic.

      • meh says:

        My high school physics teacher knows more physics than Newton. Who is the greater physicist?

    • ana53294 says:

      The way ratings work, in my understanding, is that people watch a movie, and then rate it shortly afterwards. Most people will not go and rate a movie they watched 30 years ago, even if they really loved it. My guess is that most of the older movies’ ratings come either from fans who keep re-watching the movie (Star Wars and LOTR have such a cult following), or people who watched those old movies recently.

      So ratings will be biased by when movies were watched, and how accessible the internet was then. So, if a movie has a lot of viewers during a time when a lot of people have good access to the internet (the last 10+ years), it will have disproportionately more ratings than movies that were just as popular but were watched during a time when there was no internet coverage.

      Even the most popular movies, in my understanding, are watched mostly during the first year after release. People rarely go back to watch older movies.

      Also, the subpopulation of people who watch the movie (age, gender, race, location), and the rate of internet utilization of that group will influence ratings (a movie that is really popular among people 60+ will have fewer ratings than a movie that is popular among the 20+ population).

    • JPNunez says:

      Eeeh dunno about this.

      I watched a bunch of classic animated Disney movies in my childhood, and wasn’t really that impressed by them. Snow White, Sleeping Beauty, Pinocchio, etc. Some of them in cinemas.

      The reality is that, plainly, they aren’t as interesting as a modern animated movie. Plots are more complex in most modern titles, and animation (particularly the CGI) is getting more and more impressive by the year. Add that a lot of old movies decided they had to have some musical numbers in them. Some are memorable (“Heigh-Ho” in Snow White is great, as is “Bibbidi-Bobbidi-Boo” from Cinderella) and others you just don’t care about.

      Also a bunch of them are about fairy tales, which…gets old.

      That said, Pixar plots may be more complex nowadays, but they are also getting hella predictable. It’s honestly a shame that Inside Out is first place, since that movie is the most transparent of all manipulation movies by Pixar.

    • Matt M says:

      My theory is that there is more interconnectivity leading to a greater monoculture today than there was before.

      Most of these really high scores depend entirely on receiving very few negative votes from viewers, or, more importantly, critics.

      I can imagine that in the 1950s, some newspaper might hire some crotchety independent-minded critic who would shit on whatever seemingly popular movie just came out. I’m sure we can go back and find a few people panning Gone With The Wind or whatever. But today, it quickly becomes obvious what movies are supposed to be good and are supposed to be bad and there is great value in going along with the herd. A critic who routinely pans movies that every other critic loves will probably be fired. Doubly so if these movies are considered to be wholesome family pictures advancing all the right political agendas.

      Animated Disney movies also self-select their own audiences probably more than any other genre does.

      • Deiseach says:

        A critic who routinely pans movies that every other critic loves will probably be fired. Doubly so if these movies are considered to be wholesome family pictures advancing all the right political agendas.

        Oooh, this is ringing a faint bell. Wasn’t there some minor fuss a few years back about allegations of major studios putting pressure on media outlets for favourable reviews of their films?

        Thank you Google, looks like I was conflating two things: first, an online accusation about Disney bribing critics for good reviews and secondly, Disney did blacklist LA Times movie critics in what was perceived as tit-for-tat over an investigative story into the relationship between Disneyland and the city of Anaheim.

        • Matt M says:

          Eh, I’m not even going that far into conspiracy level. I’m just thinking that most film critics work for newspapers in a “pop culture” sort of role where they’re generally expected to entertain and make people feel good, rather than “speak truth to power” or whatever it is the regular journalists think they do.

          If we already know that everyone loves Moana, it’s a big risk to trash it in your review, because you aren’t paid to attract controversy. In the 1950s, it wouldn’t be as controversial, because if you work for the Des Moines Register, there’s no way for anyone in Seattle to know (or reason to care) that you trashed a movie they loved. But today, when everyone uses these sorts of websites, the whole world will instantly know that you were the dangerous outlier. Individuals may try to smear you publicly, and your employer might reasonably say “We don’t really need this sort of controversy coming from our film and entertainment section.”

          I suspect that the major studios have always tried to exert some form of pressure over critics, but I think today, they don’t really have to as much. They can rely on the public to do it for them.

          • albatross11 says:

            I think the useful mental model (to cross threads) for thinking about this situation is a Keynesian beauty contest. You see this with pundits as well: Almost nobody judges a pundit by checking the accuracy of their explicit predictions over time; instead, people judge pundits by whether what they’re saying sounds reasonable *now*. Which is largely a function of what the other pundits are saying now, which is ….

            This has probably always been true, but you can imagine reasons it might be more important now. (Though I don’t know if that’s true.)

          • baconbits9 says:

            I don’t know if this holds because there are far more niches now. 1950s Des Moines might not have been able to support more than one or two film critics. In this sense you are more pressured to follow local popular opinion, but now you can plausibly trash a popular movie and the 1% of people who hated that movie is enough to support you because you can draw from 100 million potential customers, instead of a few hundred thousand.

            Perhaps there are some second order effects where once you have that audience you end up catering to them, but before that it seems more likely that freedom of opinion has increased.

          • baconbits9 says:

            I think the Keynesian beauty contest is really oversold as a useful model, and is a really bad model of markets. It almost entirely depends on information that you don’t have access to, and so any actor who applies it is actually acting irrationally, not rationally.

          • Incurian says:

            Haven’t people been known to do that at times?

    • Nabil ad Dajjal says:

      Rotten Tomatoes, as I understand it, aggregates reviews from both professional critics and from its userbase.

      A film from before widespread access to the internet is presumably going to get a greater proportion of its reviews from professional critics, whereas a more recent film will get an overwhelming number of reviews from the users. Professional critics have an incentive to rate every film, because it’s their job; users have an incentive to only rate the films they feel very strongly about, because it’s a hobby.

      Maybe my assumptions are wrong and the data points the other way, but I would expect post-internet films to dominate the lists of best and worst movies because they have the strongest responses from the userbase.

    • Incurian says:

      Isn’t the base rate for animated movies substantially higher in the last couple decades due to CGI? Even if older animated movies were on average better than newer ones, you would still expect we’re going to disproportionately see “great” movies from the recent animation bucket since we’re pumping out like 20 a year.

    • Gazeboist says:

      “Which 100 movies you should go see” is not the same question as “which 100 movies are best”. For the first, you generally want a recency bias, because you’re more likely to have seen an older movie (to a point, but the odds are long that recommendation culture will actually adapt to the fact that people rotate in and out of society any time soon). Most “top X” lists for almost any domain are actually used as “next X” selectors, and the creators of the list know that, so they have an incentive to be biased towards recent movies. Add some strong filtering effects created by the fact that the number of candidates for “best” is constantly increasing, and you’ve got a pretty clear explanation for the recency bias.

      All this doesn’t even touch the fact that “which X are historically important” is yet another distinct question. Innovation leads to widespread adoption in a technological niche, and storytelling is a technological niche (with many, many sub-niches). Oedipus Rex is frankly a terrible story by modern standards, but it was highly influential, so if you want to know how literature (especially drama) developed it’s a critically important read, but if you want an entertaining story about the downfall of a well-intentioned but arrogant hero (or an entertaining mystery story) you should try something more recent, because you can probably find something that draws on Oedipus for a net gain over the original.

    • Thomas Jørgensen says:

      Lots more reviewers in recent times than the historic average, thanks to the internet? Certainly more reviewers who are available to an aggregator – it is not like Rotten Tomatoes has staff going to the historic archives of lower bumfuck, West Virginia to see what the local newspaper’s reviewer thought of the film premieres of 1938.

    • Ash says:

      I don’t think it’s as strange as all that – a “top 100 movies” list is not a list of the top movies of all time, but a list of the top movies as of today, for today’s audiences. Our sensibilities and expectations for movie quality, story tropes, themes of interest, etc., have all evolved and changed. While one of the things we do value is “historical import”, I don’t think it’s the dominant value.

      Thus modern movies would likely make up the majority of the list, as they will conform to that. To take your example, I think Snow White is a trite, dull affair – its plot barely even deserves the name, its musical numbers are simply independent songs that don’t further the characters or the narrative, and while the animation does at least hold up, it doesn’t stand out. For the audience at the time, though, “animated feature that looks good” was all they really wanted; that was enough to defy expectations given the newness of the medium, and in fact a complicated plot would have been too much to take in. I don’t think there is anything wrong with tastes changing.

    • A1987dM says:

      Conversely, according to http://www.acclaimedmusic.net seven out of the ten best songs of all times came out in the 1960s.

      But that’s not a bias, music was actually that much better back then. <gd&r>

  6. kipling_sapling says:

    What’s the voting deadline?

  7. gbear605 says:

    So is a whale not kosher because it doesn’t have fins and scales or because it doesn’t chew its cud or have cloven hooves?

    • Machine Interface says:

      The former — the rule doesn’t operate on a non-mammal/mammal distinction, but on a “lives in the water”/”lives on land” one.

      • Lambert says:

        Similarly, Catholics can eat aquatic mammals such as beavers and capybaras during Lent.

        • Dack says:

          The current rule (as to what is meat) is “the flesh of warm-blooded animals”.

          Note that some localities enforce this differently: some jurisdictions only call for abstaining from meat on Fridays during Lent, and others still call for it (on Fridays) year-round. Others regularly grant dispensations if, say, St Patrick’s Day falls on a Friday during Lent.

          Similarly, there are also historical idiosyncrasies with regards to what counts as meat. I have heard that beaver tails used to be allowed (counted as fish) in one locality, because they tasted much like fish. I have also heard that eggs used to be not allowed (counted as meat) in some place because they were too tasty.

  8. jaimeastorga2000 says:

    In “Frequently Asked Questions about the Meaning of Life”, a young Eliezer Yudkowsky claimed to know of three “software” methods of increasing intelligence: learning to code, studying CogSci/EvoPsych/AI, and reading hard science fiction. I know the document is deprecated (and for good reason), but does anybody agree with his list? I don’t think doing any of these can literally raise your IQ, but maybe they can make you more open-minded or something?

    • albatross11 says:

      I have no idea if there’s any way to increase g (the thing IQ tests are trying to measure). You can raise your IQ score to some extent by practicing a lot on the kind of questions your IQ test will be asking.

      One big question about the Flynn Effect[1] is how much of the rise in raw IQ scores is based on improved test performance vs improved actual intellectual ability. For example, having kids accustomed to pencil-and-paper tests and used to sitting in a quiet room and doing stuff with books and paper both probably boost the score they get on IQ tests, even if they’re no smarter now than before. But better nutrition and fewer childhood diseases and less lead exposure to small children all probably can actually improve how well their brains work, and so can make them smarter in ways that IQ tests will measure.

      [1] In most of the world, the average raw scores on IQ tests go up a little every year, requiring the tests to be renormed every few years, since IQ scores are normally reported in terms of a normal distribution (mean=100, standard deviation =15 is common but there are some IQ tests that do it differently). IMO it would be better if all IQ tests just reported their results in terms of percentile scores–that’s a lot more intuitively meaningful, even though down in the tails you need to add decimal places. (A good programmer at Google is likely in the 99th %ile, but a theoretical physicist doing research at Caltech is probably in the 99.99th %ile.)
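      The percentile framing above can be sketched in a few lines; this is a minimal illustration under the usual normal(100, 15) model, and the example cutoffs in the comment are illustrative approximations, not figures from any particular test:

```python
from math import erf, sqrt

def iq_percentile(iq, mean=100.0, sd=15.0):
    """Percentile rank of an IQ score under a normal(mean, sd) model."""
    z = (iq - mean) / sd
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Under this model the 99th percentile sits near IQ 135,
# and the 99.99th percentile near IQ 156.
```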

      • phil says:

        So much of this sort of debate revolves around G,

        when there’s a lot that’s plainly obvious you can do to improve your ability to navigate the world, even if you’re unable to meaningfully stretch your G.

        I like the general outline given by https://old.ycombinator.com/munger.html

        which is try to learn the most valuable 80-90 mental models to make sense of the world.

        (which is also an argument for diminishing returns once you already grok a particular model, ‘go learn another model, and add another tool to your tool belt’)

      • jg29a says:

        (A good programmer at Google is likely in the 99th %ile, but a theoretical physicist doing research at Caltech is probably in the 99.99th %ile.)

        I’d guess you’re off by an order of magnitude:

        – An undergraduate in the harder third of majors at a UC school is probably 99%.
        – One of the better programmers at Google is probably at 99.9%.
        – A theoretical physicist at an elite school is even money to be at 99.999%.

        • zzzzort says:

          – A theoretical physicist at an elite school is even money to be at 99.999%.

          Haha, no.

            I don’t know if I would expect selection efficiencies that strong even among groups specifically selected by IQ test. If you have a bunch of people that an IQ test has said are really smart, measurement error means some of them are ‘just’ quite smart and lucky.

            In any sort of natural group of people, things like personality, conscientiousness, and dumb luck are going to regress you to the mean pretty hard.

        • sandoratthezoo says:

          There are apparently about a quarter-million students in the UC system.

          At a rough guess, about 5-10% of California’s population is “college aged,” loosely. Let’s say 10%. California’s population is about 40 million, so let’s say 4 million.

          Even if 100% of the smartest college-aged people in CA went to the UCs, the UCs would have the top 6% or so smartest people in CA. The smartest third of them would sit at about the 98th percentile. If by “hardest third of majors” you mean not the majors accounting for the hardest third of the student body, but the total number of majors divided by three regardless of how many students each had, then 99% wouldn’t be unreasonable – except that it’s pretty ludicrous to imagine that the top 6% of the smartest college-aged people in California all go to the UC system.
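          The back-of-envelope above, written out; every figure here is the commenter’s rough guess, not census data:

```python
uc_students = 250_000                 # rough UC system enrollment
ca_population = 40_000_000            # rough California population
college_aged = ca_population * 0.10   # generous 10% guess at college-aged share

# Even if the UCs enrolled the smartest college-aged Californians
# exclusively, they would only reach down to the top ~6%:
uc_share = uc_students / college_aged  # ≈ 0.0625
```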

        • Freddie deBoer says:

          Intelligence tests (and indeed language tests and similar) are less reliable the further you go from the mean. An IQ test has essentially no ability to make the distinction between 99.9% and 99.999% meaningful.

          • albatross11 says:

            I am skeptical of the ability of a paper-and-pencil IQ test to distinguish between 1/1000 and 1/100000 intelligence, but I rather suspect that MIT’s PhD program in physics can manage to make such a distinction passably well.

          • John Schilling says:

            I’m pretty sure they can’t, and I’m not even sure why they would try.

            First, what they actually value is the combination of IQ, conscientiousness, aptitude for and interest in physics specifically, and culture fit with the rest of the MIT physics department. Pulling out IQ specifically would be difficult, and of no particular value to them.

            And second, of the 1/100000 intelligence crowd, probably 99% have no great desire to ever be an MIT physicist in the first place. Which means that the MIT physics department will get maybe one such applicant per year, which isn’t a big enough statistical sample to calibrate a superduperultragenius IQ test or whatever. And, again, of no particular value when their real problem is sorting out exactly which of the 1/1000 intellects they will have to accept to round out their ranks.

            Which they will do in large part on the basis of conscientiousness, specific aptitude, and culture fit, and they may find that the star performer of that year’s class is one of the 1/1000 crowd who outworked the lazy ultragenius.

          • quanta413 says:

            In support of what Freddie said, the GRE-to-IQ tables I’ve seen show that, at least for the new GRE, there is a ceiling at 145 IQ, which is ~99.9%.

            I don’t think enough physicists are scoring perfect on the verbal section (or close enough) that it’s plausible grad schools could pull many people at 99.999%. There are only around 3,000 people at that level in the whole U.S., even if we assume we can get valid measurements to that level.
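            A quick sanity check of that headcount; the population figure is an assumed round number for the late-2010s U.S., not an exact count:

```python
us_population = 325_000_000  # rough late-2010s U.S. population (assumption)

def people_above(percentile, population=us_population):
    """Rough count of people above a given percentile cutoff."""
    return round(population * (1.0 - percentile))

# The 99.999th percentile is 1 in 100,000, so roughly three thousand
# people nationwide clear it, consistent with the comment above.
```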

    • Thegnskald says:

      Oh, look, somebody claiming their specific interests make you smarter in general. Somebody tell the chess people.

      No, the claims are nonsense. At best, these interests have a correlation with intelligence.

      Learning to code might help you be more concrete and specific in your thinking – or it might not. I code for a living and most of my thinking remains pretty sloppy. I guess it might help me be aware of the sloppiness of my thinking?

      Cognitive science… no. Maybe if paired with intensive meditation practice? But I suspect meditation will be doing the heavy lifting there, and cognitive science just saying where to dig. But the overwhelming response to learning the way human thought processes tend to be broken is to immediately start diagnosing all of your monkey rivals with these broken thought processes.

      And reading hard sci-fi? No. Maybe writing it, similar to coding, might help you develop insight into sloppy versus rigorous thinking. Reading hard sci-fi can only make a difference if you are the sort of person to validate the author’s results, and then it works like coding, acting as practice for rigorous thought patterns. In which case, you might as well just take physics coursework.

      • Deiseach says:

        I think the “read hard sci-fi” recommendation is along the lines of “eat your vegetables”; unless you are paying close attention to the realism (more or less) and the depiction of what is physically possible, and lapping up the maths and orbital physics and all the rest of it, then you are only a reader for pleasure (and that is bad).

        That seems to be the attitude of some partisans of hard sci-fi, at least: that you’re only reading it for the science, and that any SF that is not diamond-hard is only sensationalist trash for popcorn reading and rots your brain. It’s the Moral Improvement model of literature, and while it can be fun in its own way (because it genuinely is fascinating to read how things work), I’m still going to read sloppy soft Harrison and Bradbury and the Sad Puppies Thrilling Tales of Space Adventure and all the rest of it, mmmkay?

      • rubberduck says:

        I’ve never coded in my life so I was wondering, is the type of rigorous thinking required for coding in any way similar to that needed to solve math puzzles such as sudokus? Because if so then just from personal experience, getting good at rigorous thought patterns learned from solving sudokus is helpful when trying to solve, say, a KenKen, but not really helpful when trying to solve any real-world problems with more ambiguous conditions.

        • dick says:

          “What specifically is it about programming that makes it more difficult for some people than others” is not a settled question, but I think the most common answer involves some variation of holding multiple levels of abstraction in your head at one time, which is not very similar to solving numerical puzzles.

        • Thegnskald says:

          Imagine trying to tell someone how to solve a Sudoku puzzle, step by step, which you can’t look at or ask questions about. That is the level of rigorous thinking in programming.

        • Incurian says:

          In the sense that there’s a lot of trial and error 🙂

        • jg29a says:

          I have the college course sort of coding ability, i.e. solving discrete, really tricky problems from nothing. My memory of such does indeed feel very similar to what I do playing strategy games. But I understand it to be radically different from the employment sort of coding ability, which involves having the ideal bank of stuff to cut and paste, using trial-and-error efficiently, and using structures that are common and transparent enough to easily integrate with the work of others. That has always sounded utterly tedious, an extrovert’s nightmare.

    • albatross11 says:

      However, it’s definitely possible to learn new mental tools that make you much more capable of solving problems. There are a lot of tricky mathematical puzzles you need to be really smart to solve, until you learn basic algebra–it then turns out there’s a simple way to solve them all in a plug-and-crank manner.

      Logic, algebra, calculus, probability theory, game theory, the study of computer algorithms–all give you powerful mental tools to solve problems that would otherwise take a lot of cleverness to solve individually.

    • Björn says:

      The grain of truth in there is probably that learning things keeps your mind active, and even if it doesn’t, at least you learnt something.

      That said, I find his suggestions highly comedic. For all he cares about intelligence and super-intelligence, the only suggestion he can give to other people is to become like him. Considering that his interesting rationality writings are ten years old and since then he has spent his time chasing after his beliefs about AI, maybe he represents a strong data point that even if hard sci-fi does not lower your IQ, it still turns you into Don Quijote.

    • JPNunez says:

      Eliezer himself seems to have doubts about the learning-to-program part? I remember some ironic tweets about it and baseball…too lazy to search.

      Dunno what his current take on it is. As a professional programmer I think it is useful in forcing you to doubt yourself when debugging programs, but dunno how much it transfers to other areas.

      @Nabil ad Dajjal

      maybe too much honesty. ouch.

    • Nabil ad Dajjal says:

      It sounds like Baby Einstein for teens and twenty-somethings.

      Intelligent people are more likely to listen to classical music, therefore playing classical music to your baby will make it more intelligent! It’s the same “wet streets cause rain” anti-logic that unfortunately underlies most of our society’s thinking about education.

      On a bit of a tangent, following Big Yud’s advice on self-betterment seems like a terrible idea because by nearly any standard he’s a loser. He’s internet famous to the extent of a mid-tier YouTuber, a college dropout, fat, childless and not particularly well-off. He doesn’t even seem to be very happy with his situation. I certainly wouldn’t want my children to turn out like he has.

      This guy is more of a cautionary tale about a smart kid who never grows up. In real life you don’t get to be Peter Pan by failing to develop your potential, you just end up as a sad middle-aged man writing Harry Potter fanfiction.

      • NostalgiaForInfinity says:

        This seems like needlessly personal abuse.

        • Nabil ad Dajjal says:

          If I started giving advice on how to hit a grand slam in baseball, I would hope that someone would point out that I bat a .500 at tee-ball.

          That is to say, I’m questioning his qualification to give life advice. How his own life is going is directly relevant to that.

          • NostalgiaForInfinity says:

            There’s a difference between saying “you bat a .500 at tee-ball” and someone saying “you’re so cack-handed you bat a .500 at tee-ball, you’re too fat to run to first base, and I wouldn’t even have you on my team as a water boy”.

          • baconbits9 says:

            That isn’t what he said, though. If you are talking about someone’s position/status/success, physical appearance and romantic success are going to be benchmarks for a lot of people. NaD might have been on the insensitive side, but he wasn’t actively attacking with the post, simply stating what his perception was.

            Consider the inverse: if EY were very fit, or had a girlfriend or wife that was considered amazing by a lot of people, or had a large family, these would be cited as evidence against the claim that he isn’t successful.

      • Incurian says:

        you just end up as a sad middle-aged man writing Harry Potter fanfiction.

        It’s really good though.

      • jaimeastorga2000 says:

        Eliezer Yudkowsky is not a college dropout. According to his autobiography, he never attended high school or college. Eliezer clarifies this point in the talk page of his Wikipedia article; he left the formal school system after graduating middle school at the end of eighth grade and became an autodidact.

        • Nabil ad Dajjal says:

          Thanks for the correction.

          That said, from a lifetime achievements standpoint that’s arguably even worse. Graduating high school and college are both very low bars to clear, so it’s a bad sign when someone can’t even manage that much.

          It would be one thing if we were talking about Peter Thiel, who has a number of tangible successes he can point to, but this just seems like more failure to realize potential.

          • The Pachyderminator says:

            There seem to be a lot of unexamined assumptions here. Can you conceive of any reason that someone might choose not to go to high school or college (as opposed to just not “managing” it)? Do you know of a case where Eliezer’s lack of formal education has prevented him from doing something he really wanted to do?

          • albatross11 says:

            If you bailed out of your CS program at Stanford to go work on a startup, and ended up a multi-millionaire by the time you’re 30, then I think we all agree you did okay without the degree. But if you didn’t finish college because of the irresistible draw of beer/pot/girls/video games/etc., then that suggests bad things about your future.

          • Matt M says:

            While I found Nabil’s original post to be pretty funny, I think the relevant comparison here is how EY has done relative to other non-HS/college grads.

            The level of fame and notoriety he has achieved, even if not easily monetized, is surely worth something. If I think about all the people I know personally who didn’t graduate from HS, he seems to be doing better than they are…

          • Viliam says:

            I think the relevant comparison here is how EY has done relative to other non-HS/college grads.

            I’d say he has done pretty well even compared with most people who have a university diploma. (And compared with them, less time wasted and less debt.)

          • baconbits9 says:

            While I found Nabil’s original post to be pretty funny, I think the relevant comparison here is how EY has done relative to other non-HS/college grads.

            I wouldn’t agree with this. EY clearly had no intellectual impediment to completing an advanced degree, nor does he seem to have major work-ethic flaws that would have prevented it. You wouldn’t judge a Harvard dropout against all college dropouts if you could judge them against other Harvard dropouts, or even just Ivy League dropouts.

      • sty_silver says:

        Neither true (at least two objectively false claims) nor kind (duh) nor necessary (the site is marked by EY as obsolete, so attacking his character is clearly not needed to disagree with the content).

      • meh says:

        He’s internet famous to the extent of a mid-tier YouTuber

        Well… given the number of people trying to achieve this, it’s not nothing!

        • Tarpitz says:

          With the added bonus that he’s unlikely to find himself in the middle of Yew Tree 2.0 in 30 years’ time.

      • Viliam says:

        Yeah, Yudkowsky is just some random guy who happens to work at a non-profit organization of his dreams having other people send him money to support him following his dream, is famous around the world so that people in many countries have regular meetups inspired by a blog he wrote, and has a harem of girls. What a loser, right? I am sure most of us do much better with our lives.

        He should have developed his potential by spending another decade of his life at school and then becoming a corporate drone at Google or a similar company, just like most smart guys do. No one would have heard about him, so he would avoid being a target of similar derisive comments.

        (Tomorrow’s topic: Why J.K.Rowling is a loser for writing Harry Potter books instead of taking a job at Walmart.)

        • Gobbobobble says:

          By those metrics Jim Jones was more successful than the corporate drone but I know which path I’d want my kids to take.

          • Viliam says:

            This argument feels a bit like — “Yudkowsky wrote a book that inspired many people.” “Yeah, just like Hitler.” — Technically correct, but…

            Also, seems like the goalpost moved from the original “by nearly any standard he’s a loser” towards comparing with a famous villain. Both have negative emotional valence, but for completely different reasons.

          • Gobbobobble says:

            You’re the one who used “harem” as though it was something to be praised

      • Atlas says:

        He’s internet famous to the extent of a mid-tier YouTuber, a college dropout, fat, childless and not particularly well-off. He doesn’t even seem to be very happy with his situation. I certainly wouldn’t want my children to turn out like he has.

        Well, if you believe Yudkowsky, he plays/played a pioneering role in doing/guiding AI safety research, which he, and others, claim is of civilizational importance. While the arguments about the risks posed by a future superintelligence seem plausible to me and are apparently accepted by many experts (Scott wrote a post about this), I remain ignorant of what falsifiable predictions have been or can be made to test these claims. (Aside from AI progress in fields like Go competition and the internet box challenge thing.) If he turns out to be right about that, it’s a pretty impressive feather in his cap; if not, it will likely seal his reputation as a Freud-like crank.

      • Scott Alexander says:

        Warning: this veers too close to personal abuse to somebody who is not enough of a public figure that I am willing to tolerate it under a public figure exemption.

        (and don’t give me the “it’s relevant to his advice on intelligence” thing – the number of children he has isn’t relevant to whether he’s right about programming increasing IQ. Also, I’m especially annoyed about the “didn’t go to college” mockery, since that’s just contributing to this dynamic)

    • rahien.din says:

      It might be that learning new modes of thinking or approaches to problem solving will increase your mental flexibility, thus making you a more adept thinker.

      But I don’t think that means the same thing as augmenting your general intelligence. If anything, causality flows the other direction.

      • Matt M says:

        Yeah, I sort of buy the coding one, because I think that coding requires you to think in a very certain and particular (logical) way that many people will not otherwise encounter.

        Not that you have to learn an entire language or something. I myself never got that much farther than “Hello, world” in QBasic. But the general idea behind it was almost entirely new to me at the time and I think it helps that I at least tried it once.

        • sty_silver says:

          Ideally, I think code should be the least complicated way to express a series of instructions with zero ambiguity. Going with that, provided that you know a language, you arguably understand a series of instructions completely iff you are capable of writing them as code. It seems intuitive to me that a person who is regularly wrong about her level of understanding would develop better mental habits through writing code. I also think that whether this increases your intelligence could easily turn on details of how intelligence is defined, but either way, it seems useful. But whatever causal effect exists is certainly going to be weaker than the correlative effect.

          I also think it’s interesting to consider how widely different code looks in functional versus sequential programming. I would argue that functional programming comes much closer to the ideal of being the least complicated way to express something with no ambiguity and that sequential code puts additional difficulty on a problem because it is not actually how you would describe an algorithm with no ambiguity (it does have no ambiguity, but it makes you solve additional tasks that aren’t needed). Definitely in simple cases like sorting algorithms, the functional implementation will look much closer to the natural language explanation people would give if their task was to explain the algorithm fully.

          • Thegnskald says:

            I think you may be typical-minding here a bit. I can code in functional languages, but it feels completely unnatural to me.

            Also, “unnecessary tasks”, to me, means “the things that make it easy to figure out what is going on”. People who think the shortest code is the best code haven’t spent a decade maintaining other people’s code. I despise lambdas. Really, I despise just about every trick people invent to make code shorter, because those tricks always make it harder to fix problems. And functional programming, to me, looks like one giant “trick” to shorten code. No. Just write the boilerplate and move on. Your maintenance guy will despise you a lot less for 3,000 lines of legible code than for 30 lines of dense logic.

          • sty_silver says:

            typical-minding

            I’ve never heard the phrase before; does that mean assuming my own preference is more typical than it actually is? If so, yeah, you could be right.

            Let me explain why I say it, though; it’s not just because I prefer functional programming (though I do). Suppose I ask you to explain the instructions that take a string like “slate star codex” and return “ssllaattee ssttaarr ccooddeexx”. You would probably say something like, “go through the string letter by letter, and insert a duplicate of each letter directly behind its current position”. In Haskell, that is implemented by

            duplicate (letter : rest) = letter : letter : duplicate rest
            duplicate [] = []

            In java – if we are taking the instruction literally (i.e. aren’t allowed to dodge the problem by creating a new string instead), it is

            for (int i = 0; i < word.length(); i += 2) {
                word = word.substring(0, i) + word.charAt(i) + word.charAt(i) + word.substring(i + 1);
            }

            Look at how much thought this requires about indices. This was what I had in mind when I said ‘unnecessary tasks’. Nowhere in the natural explanation of how to duplicate letters is there anything corresponding to the fact that you have to increase your index by 2 instead of 1, because your string is growing. Even after writing this, I had to pull up a sheet of paper to make sure it does exactly what I want. If you look at the functional code, on the other hand, it maps almost exactly to the natural description. That’s why I would honestly suspect that whatever positive effect coding has (if any) will be larger with functional code.

            quicksort would be a better example, but it’s the go-to thing so I made something up instead to avoid cherry-picking.

            You could write the Haskell command in one line instead of two – I forgot how because I haven’t written in Haskell in a while – but there is something to automatically go through lists. Then I think there’s a solid argument for making it shorter but less readable. The two liner is the naive, “natural” approach, though.
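            For the record, the one-liner relies on `concatMap` from the standard Prelude; a minimal sketch:

            duplicate :: [a] -> [a]
            duplicate = concatMap (\x -> [x, x])
            -- duplicate "abc" == "aabbcc"

            `concatMap` maps each element to a list and concatenates the results, which is the “something to automatically go through lists” mentioned above.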

          • Tatterdemalion says:

            sty_silver, your haskell may be shorter than your java, but the flip side is that as someone who doesn’t know either language (although admittedly I do know C, which shares a lot of syntax with java), I can immediately see what your java is doing, whereas your haskell is pure voodoo to me.

            Also, I think that forcing yourself to do the duplication in place is a slightly artificial constraint, and that if you removed it the java would be much cleaner.

          • Thegnskald says:

            Sty –

            Uh. That is… one way of doing it. Of course, if you are doing it that way, you might as well just go full bore and do:

            String newWord = "";
            String oldWord = "Slate Star Codex";
            for (char c : oldWord.toCharArray()) {
                newWord += c;
                newWord += c;
            }
            return newWord;

            Yeah, I am creating a new string. Many, in fact. So are you, however – every time you change the length of a string the system has to copy the existing character array into a new character array.

            The constraints that are important in a large application aren’t “Don’t create things as side effects of doing something”, they are “Make sure the guy who inherits this code from you in two years can maintain it”. The fact that you can do something in one line in Haskell is NOT a point in its favor – it means somebody with more cleverness than sense is going to make my job harder by condensing complex logic into a pile of Boolean spaghetti I am going to have to painstakingly convert into long form just to read, and if I am going to bother doing that, I am just going to rewrite it in long form and leave it.

          • Gobbobobble says:

            In java – if we are taking the instruction literally (i.e. aren’t allowed to dodge the problem by creating a new string instead), it is[…]

            “If we write in an OO language but aren’t allowed to use OO features, it is…”

            Just make a new String and return it. Why is this a problem?

            ETA: Also, Java can do recursion too, since that’s arguably the key concept:

            String duplicate(String word) {
                if (word.isEmpty()) {
                    return "";
                }
                // leading "" forces string concatenation (char + char would be int addition)
                return "" + word.charAt(0) + word.charAt(0) + duplicate(word.substring(1));
            }

          • dick says:

            I believe strings are immutable in both languages, with heavy optimization, so fulfilling the “in place” requirement is kind of more about compiler behavior than what the code says.

            More generally, if I were trying to teach a complete noob to understand programming well enough to write a simple function, in theory I’d probably want to use a functional language just to avoid introducing the whole concept of objects, with the baggage they bring. But in practice I feel like if I’m teaching a non-programmer some simple scripting, the odds are very high they ought to be using Javascript, just due to it being the default beginner language right now.

          • fluorocarbon says:

            @sty_silver

            I think you’re being a little unfair in your comparison of imperative and functional programming by using Java and Haskell. Those two languages are different in more ways than just how functional they are. You’re also claiming that it would be cheating to have the Java code create a new string, but that’s exactly what your Haskell code is doing (?).

            This is how I would program the function functionally and imperatively:

            // imperative
            var newStr = '';
            for (var c of oldStr) { newStr = newStr + c + c; }

            // functional
            var newStr = oldStr.split('').map((c) => c + c).join('');

            To me, the functional code above is easier to read, but not that much so. In general, I find the claim that functional programming is closer to how people think suspicious. There are algorithms that make a lot more sense when programmed in an imperative style. There are some that can’t even be written in a purely functional language with the same complexity. I also know professional programmers who have spent years trying to “get” functional programming and learn Haskell but who still don’t understand recursion at all. On the other hand, there are some things that are easier to think about in a functional way and I personally prefer using functional idioms when possible.

            I would say that the functional/imperative debate is one of the few where the “both sides have good points so the truth is in the middle” talking point is actually true.

            As an aside:

            quicksort would be a better example, but it’s the go-to thing so I made something up instead to avoid cherry-picking.

            Quicksort is a bad example to use for functional programming. It’s extremely difficult to program quicksort correctly in pure functional languages like Haskell. The “one line” versions are often not true quicksorts and have super slow runtimes. See here.

          • sty_silver says:

            Also, I think that forcing yourself to do the duplication in place is a slightly artificial constraint, and that if you removed it the java would be much cleaner.

            I agree that the example itself is silly, no-one would ever do it that way, and internally more strings are created etc. I really don’t think that has any bearing on the point I was trying to make, which is that you have to think about indices when you implement simple procedures on data structures, and that this is something with no analogy to how humans think about problems. I didn’t think that having an artificial constraint would be that distracting, but if it is then please just forget the example, and replace it with any non-trivial work with indices.

            sty_silver, your haskell may be shorter than your java, but the flip side is that as someone who doesn’t know either language (although admittedly I do know C, which shares a lot of syntax with java), I can immediately see what your java is doing, whereas your haskell is pure voodoo to me.

            sure, but that’s just because the sequential way of programming is more common.

          • Viliam says:

            Java 8

            public static String duplicate(String word) {
                return word.chars()
                    .mapToObj(c -> "" + (char) c + (char) c)
                    .collect(Collectors.joining());
            }

          • sty_silver says:

            ^ You really don’t understand my point if you think this is an argument against it. I’m not arguing against java, I’m arguing that imperative programming introduces elements that aren’t inherent to the problem itself. The fact that java 8 has introduced functional elements is if anything a weak point in its favor.

          • Thegnskald says:

            Sty –

            Indices – or rather, relational information, which indices are just a useful abstraction for – are definitely part of the problem definition for your example case.

          • Viliam says:

            Java 8, procedural approach

            public static String duplicate(String word) {
                StringBuilder sb = new StringBuilder();
                word.chars().forEach(c ->
                    sb.append((char) c).append((char) c));
                return sb.toString();
            }

            Note: no indices needed.

            When I compare the two solutions, the extra complexity of the procedural solution is creating the “StringBuilder” object for holding the intermediary state. On the other hand, the functional solution does a similar thing inside the “Collectors.joining()”.

            (Also, the functional solution could be parallelized, but in this example the overhead of parallelization would probably exceed all gains.)
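            A minimal sketch of that parallel variant, assuming Java 8 streams as in the snippets above (the `DuplicateDemo` wrapper class is just a hypothetical name for illustration). `Collectors.joining()` is an ordered collector, so even a parallel stream assembles the output in encounter order:

            ```java
            import java.util.stream.Collectors;

            class DuplicateDemo {
                // Parallel variant of the functional solution: chunks of the stream may be
                // mapped on different threads, but Collectors.joining() respects encounter
                // order, so the output is identical to the sequential version.
                static String duplicate(String word) {
                    return word.chars()
                            .parallel()
                            .mapToObj(c -> "" + (char) c + (char) c)
                            .collect(Collectors.joining());
                }

                public static void main(String[] args) {
                    System.out.println(duplicate("ab")); // prints "aabb"
                }
            }
            ```

            For a string this short, the fork/join overhead dwarfs the work, matching the caveat above.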

    • Gazeboist says:

      Specialized intelligence / skill almost always swamps general intelligence because general intelligence is slow.* Studying computer science, at least, can teach you about how problem solving works and why maximal generality is not usually something you want to rely on to accomplish a task, and also give you a bunch of useful procedures for solving problems that tend to show up on IQ tests (or at least inspiration for when you’re generating solutions, which is better than going in blind if the problems are actually similar).

      * General reasoning as it is usually meant here (SAT-equivalents or Bayesian modifications thereof) is NP-hard. Unless you’re certain you need an exact solution to the problem in question and that the problem is also in NP, a specialized algorithm or even a specialized heuristic is the best way to minimize your opportunity loss due to computation time. Alternatively you could claim P=NP, but that’s a pretty extraordinary claim.

    • Thomas Jørgensen says:

      Coding certainly teaches extremely useful habits of thought. I would say it genuinely does effectively make you smarter.

      … I think evo-psych might actually make you dumber, however. Far too credulous of “just so” stories justifying 1950s Americana as the natural state of affairs.

    • Murphy says:

      I’d argue that learning to code teaches you how to explain your ideas with utter clarity. It moves you away from “philosopher’s bullshit” explanations of things, where you just say something that sounds kinda wise-ish but which you can only explain with circular references.

      If you can’t explain how to do something to a computer… then you don’t really understand it.

      Because computers are the ultimate idiots.

      They can do precisely as told but waving your hands and muttering a vague description doesn’t cut it.

      It may make you more systematic and foster avoidance of bullshit ways of thinking but it absolutely does not make you smarter.

  9. bean says:

    Naval Gazing has been busy lately. First, an account of the most incredible long-range bombing mission in history, as part of the ongoing series on the Falklands War.
    Second, I neglected to post the link to our continuing discussion of strategy for a modern navy.
    Third, meetup report. We had five people show besides me, and everyone seems to have had a great time, although it was a very long day between the engine tour and the main tour.

  10. Deiseach says:

    It’s Sunday, so some silly fun links for the end of the weekend!

    (1) Vengeance for Yuggoth! Take that, deGrasse Tyson! We may soon see Pluto restored to its rightful place as the ninth planet of our solar system (yes, I’ll take “making sufficiently large satellites into planets under this definition” as well as a compromise).

    (2) Please excuse the trashy sensationalist take on this – Peter Thiel is a vampire! Well, no, not exactly, but it looks like the story about Pope Innocent VIII and his Jewish physician will be getting vindication of a sort – transfusing the blood of youth into older veins to combat aging. I’m especially amused by this since I’ve read at least one 19th century horror story where this is the plot: an aging countess has a succession of young ladies as companions who start out healthy, then all sicken and die while she goes on in the pink of health; will you be very surprised that the spoiler is that her doctor (I think a Sinister Italian, but I can’t swear to it since it’s ages since I read it) is transfusing the blood of the young women into her veins to keep her alive?

    That the old trope of the elderly stealing the vitality of the young should have something to it, thanks to modern scientific research, tickles my fancy immensely.

    (3) Our presidential election will be happening on 26th October (along with a referendum on removing the offence of blasphemy from the constitution; a second referendum, on the place of women in the home, has been deferred). Our recent governments seem exceedingly eager to overhaul the constitution; I’m not sure I am happy about this, since they seem to be treating it as ‘how can we get easy social capital and approval that will turn into votes out of this’ instead of ‘should we be treating the foundational document of the state like a box of tissues to blow our noses on?’ I’m also always wary of politicians who want to muck around with constitutions, as I feel it is more like ‘under the old law we’d all be strung up by our thumbs, so let’s just snip that part out of it, okay?’

    Anyway, the main point is that unlike the US, our best hope is having our septuagenarian incumbent (and First Leprechaun President) return to power, as the candidates thrusting themselves forward at present range from the hopeless to “go away and stop bothering us, you lumpkin” (yes, I mean you, Seán Gallagher). If you care, here is the list of some of the contenders pushing themselves on the public notice (they haven’t included all of them, presumably on the grounds that those left out are really no-hopers). To date it seems the political parties have pretty much agreed to either back Michael D. or not put forward a candidate of their own, save for Sinn Féin, who will let us know in September; for once they seem to be correctly reading the mood of the public, who are happy enough with himself above in the Áras.

    • Machine Interface says:

      1 > Forget Pluto! When will we finally get a clear ruling on whether sub-brown dwarfs are planetary-mass stars or free-floating planets?

  11. jaimeastorga2000 says:

    Teaching to the test gets a bad reputation, but in principle there is nothing wrong with it as long as the test is well-designed and you are willing to let the kids fail. AP classes, for example, are explicitly designed to teach students how to pass the AP exams, and they are usually pretty great.

    The problem is that when people talk about teaching to the test “the test” is usually some lowest common denominator crap that you have to spend all your time and effort getting the stupidest kids in the class to be barely able to pass. The example I am most familiar with is the Florida Comprehensive Assessment Test, for obvious reasons. As my middle-school civics teacher once said, “everything we do in this school is in order to please the great god FCAT”.

    • gbear605 says:

      Most of the AP teachers I know think that the AP standards are way too low to be equivalent to college classes and teaching to the test for APs means giving a poor education, so instead they teach a much larger superset. (There appear to be a few APs where this is not the case. For example, apparently Music Theory.)

      • Peffern says:

        Just anecdotally, I got a 5 on my Music Theory AP and found that it
        a) was the hardest test I have ever taken, and
        b) did not prepare me for 100-level college music theory courses.
        This was a single-digit number of years ago.

      • idontknow131647093 says:

        If that is the case, most AP teachers are stupid. I took 5 AP exams and got a 5 on all of them; they were much harder tests than the equivalent midterms/finals in similar 100-level classes I took in college. Indeed, by a quirk at my university I had to take physics anyway (as background, my AP physics teacher was probably the worst in school history, and was fired after the year because our class got majority 1s and 2s on the exam). My university professor was surely even worse than him, but he taught probably half the material we learned in AP Physics; I learned nothing in his class and just watched videos on my laptop while setting the class curve on all three exams.

        My Psych, Micro, etc. 101 classes were pathetic in curricula and midterms/finals compared to AP US History and Chem. The only classes that seemed comparable were Calc BC, which was similar in test difficulty to Calc III, and AP Comp Sci, which was similar to my Engineering 101 course.

        • aristides says:

          I agree with idontknow. If we compare a 3 to a C, a 4 to a B, and a 5 to an A, then anecdotally AP tests were harder than upper-level undergraduate classes for me. I spent much more time studying for AP tests than for undergraduate classes, and my GPA, converted with the scale above, was 0.25 points higher in undergrad than in AP.

    • DragonMilk says:

      My friend was a schoolteacher, and the view that testing is bad seems to be indoctrinated into the American education system. As a Chinese-American, this view is unfathomable to me: in China, your life is essentially determined by one big test at the end of high school, and standardized testing is to me the fairest way to compare kids across different schools.

      You make a good point that it depends on the test and the objective. What if the median rather than the lower-quartile test score were emphasized? Leave some children behind! (But seriously, it seems analogous to a goal of eliminating poverty.)

      • gbear605 says:

        One problem in America is that a significant portion of Americans can get perfect scores on the main tests (other than SATs and ACTs, of which relatively little is teachable) without doing any studying at all. So if you teach to the test, you’re doing a disservice to your students. I imagine that your single big test is significantly better at differentiating students.

        • albatross11 says:

          Not all that large a fraction, but some. And we could certainly design these tests to make perfect scores happen once per Terence-Tao-level genius. It’s a choice not to do so. I’m not entirely sure why they do it that way.

          • Hoopyfreud says:

            Probably because the fraction of people the test wouldn’t tell you anything useful about would rise if it were calibrated for a genius.

          • DragonMilk says:

            RE: Hoopyfreud

            I don’t think that’s the case – you can make tests purposely too long to finish, so to speak, with easy questions first per section, and harder questions later. Let all test takers know that they are not expected to finish the test, and they can feel free to skip questions, but that generally the test is getting harder.

            You can get plenty of info out that way.

      • Matt M says:

        Standardized testing has become a highly politicized topic in America, which means that it’s impossible to have a rational discussion on the matter.

      • DavidS says:

        Mean score would be better than either. Median means your incentive is to entirely give up on the bottom two-fifths or so, and also means largely ignoring the top fifth.

        Here in the UK the key stat was how many get a grade from A to C, and it encouraged people to ignore both high achievers and those who would never get a C.

    • Urstoff says:

      Isn’t teaching to the test a prime example of Goodhart’s law?

      • Incurian says:

        Right, so the resolution is not to try to defy the law, but to have your metrics better align with your goals.

        • Edward Scizorhands says:

          https://thinkprogress.org/half-of-adults-in-detroit-are-functionally-illiterate-5c0ca20df0a9/

          Under those circumstances [47% of adults in Detroit are functionally illiterate] I find it difficult to be seized with worry that schools are going to be ruined by teachers “teaching to the test” too much.

          • baconbits9 says:

            This is a bad reply; it just flatly assumes that people who can pass the basic tests will grow up to be mostly literate. Teaching to the test is considered a negative because being able to pass a test with someone coaching you on it specifically doesn’t translate to being able to functionally use the material; it’s a way of passing people without them actually mastering the skills.

            How does he think that half of Detroit is functionally illiterate when public school has been mandatory for a century?

          • Edward Scizorhands says:

            Matt doesn’t say that tests make people literate. But if half the kids are getting through without being able to read, it means that the schools need to be checking their own results at least slightly more than they do now.

            There are places where I would worry about “teaching to the test” but not basic literacy.

          • Gazeboist says:

            Your data’s a bit stale.

            (TLDR: the 47% statistic comes from a 1998 model based on a 1992 survey that did not test anyone in Detroit; the proficiency levels are based on an average result on three distinct sets of tasks, and so don’t tell you much about what the people in question have trouble with; “functionally illiterate” is an extremely ambiguous phrase which may technically describe a person with partial proficiency at level 1 tasks but implies that they would fail almost all of them.)

            (Even shorter: C’mon, people. Sniff test. 47% of a major city is functionally illiterate? In 2018? In a country where all of the cash is the same size and color? What, are they paying for heroin with quarters?)

            ~~~~

            Based on what I can find, the 47% number is based on a 1998 statistical model (which I don’t feel like digging up) which was itself based on a 1992 survey of literacy rates in California, Florida, Illinois, Indiana, Iowa, Louisiana, New Jersey, New York, Ohio, Pennsylvania, Texas, Washington DC, and Kentucky. From the 2002 report based on the 1992 data (p 17), 22% +/- 1% of adults surveyed demonstrated only level 1 literacy; there is no level 0 in the survey. Further, 25% of the people in this group were immigrants who may have been just learning English, 19% had a vision problem that made it difficult to read print, about a third were at least 65 years old (ie would have graduated high school in 1945), and about 2/3 hadn’t actually completed high school. The 2002 report only examines differences at the region level (Northeast, South, Midwest, and West; p 71), but I shall trust the blockquote from a memory-hole’d tumblr post that the 1998 report has the actual 47% number for Detroit.

            These estimates were created in response to demand for city- and county-level data which, of course, only existed in the twelve states that volunteered for the study. The current estimates only show county-level percentages, but for Wayne County, Michigan (no, you didn’t miss Michigan in the list of states up above), they show level 1 literacy levels of 15-30% in 1992 and 6-21% in 2003. I’m not sure how they extrapolated from the 1992 data to get estimates for 1998 or 2003; I think they just used demographic changes from census data.

            All of the following information on the meaning of the NAAL scores is pulled from section III of the 2002 report, which starts at page 94 of the linked pdf.

            The NAAL scores are based on tasks broken into three categories:

            – Prose literacy: The ability to pull information from an expository text like an article or essay, or occasionally a poem or novel.
            – Document literacy: The ability to pull information from more varied types of texts, such as data tables, graphs, forms, and maps.
            – Quantitative literacy: The ability to perform arithmetic operations using numbers pulled from one of the above classes of text. Quantitative tasks varied in how explicit they were about the operations required, and occasionally asked for an explanation or required a calculator to perform.

            Prose literacy at level 1 indicates the ability to find a piece of information in a relatively short text, with any misleading information placed relatively far from the actual answer. The questions are very direct, with no real ambiguities about what is desired. Level 2 generally requires a minimal degree of interpretation in reading the question or finding the answer. At level 3, longer and less organized texts are introduced. Higher levels proceed from there.

            Document literacy at level 1 requires the person to find or enter a single piece of information from a very simple document. One example task asks them to sign a Social Security card; another asks them to fill out a fairly simple job application form. Level 2 begins dealing with more complicated documents, such as pulling information from a particular cell in a table. Level 2 also starts doing graph interpretation.

            Quantitative literacy at level 1 asks the person to take two given numbers and perform a simple operation on them (though they may not specifically say “add” or “subtract”, and the simplest task in the set requires you to know how to interpret a deposit slip; I’m actually a little confused by it, since I haven’t used a deposit slip in ten or fifteen years). Level 2 is still a single operation, but without any “simple” qualifier, and occasionally requires you to know which operation needs to be performed.

            A level 1 reader is estimated to be able to answer level 1 questions (of any type) a bit less than half the time; for a level 2 reader the expectation is closer to 80%. Whether or not a level 1 reader is “functionally illiterate” is a matter of interpretation; personally I would expect a “functionally illiterate” person to succeed on these tasks something like 20-30% of the time, depending on the category; at 50% I might be willing to call them “barely literate” or some similar thing, but I would not want to imply total illiteracy.

            ~~~~

            I don’t think it’s Yglesias’s fault that he used a statistic based on no direct data at a level of precision the then 19 (now 26) year old original study did not report; the PowerPoint he got it from was only about a month old.

          • baconbits9 says:

            What is special about literacy? There is a good amount of evidence that absent practice skills learned in school fade out very quickly (Bryan Caplan cites a study that 3 months out of school drops one month worth of schooling, extrapolating that someone who graduated high school could be literate at a 6th grade level by the time they turned 30).

            Is there evidence that literacy is an exception to this general concept (or that the concept is wrong)?

          • Gazeboist says:

            @baconbits

            Literacy isn’t special as an educational goal, just important and nigh universal. Certainly the DRWF thinks so.

          • arlie says:

            @baconbits9

            There is a good amount of evidence that absent practice skills learned in school fade out very quickly (Bryan Caplan cites a study that 3 months out of school drops one month worth of schooling, extrapolating that someone who graduated high school could be literate at a 6th grade level by the time they turned 30).

            Well, that certainly accounts for part of my wretched educational experience. My school system apparently knew this, 40 years ago, so they spent the first part of each year reviewing last year’s material. The problem is that I don’t recall them ever covering anything I’d actually forgotten. So either I “practiced” all summer, or the amount of forgetting is highly variable. (Or I was so far ahead of the class that no matter what I lost, it wasn’t something the school had ever actually covered, so I didn’t notice :-()

            Looking at literacy in particular though, it’s somewhat of a sad indictment of our society – and our schooling – if it’s something lots of people don’t practice outside of school.

          • Gazeboist says:

            @arlie

            It’s not; the Yglesias post is bullshit based on bullshit which is itself based on wildly out-of-date and inadequate data, and the whole idea relies on sleight of hand with the definition of “literacy”.

            A reference to a hypothetical crisis is not sufficient support for a call to action unless accompanied by genuine evidence that the crisis actually exists.

    • roystgnr says:

      Teaching to the test gets a bad reputation, but in principle there is nothing wrong with it as long as the test is well-designed and you are willing to let the kids fail.

      In principle I agree, and “I taught these kids so well! …but only in a way which can never be objectively measured” is only slightly more respectable a claim than “There’s a dragon in my garage …which is invisible, incorporeal, and only breathes heatless fire.”

      In theory, shouldn’t test length vs. sample size sometimes be an issue? If you want to make sure that kids know most of a few thousand quickly-recollectable facts then you can ask multiple choice questions about a couple hundred of them and get a very accurate estimate of what percentage they know… but if you want to make sure that kids are able to handle most of a few hundred types of long complicated problems, such that they might take half an hour to figure out each, then the pass rate is going to depend on luck as much as on skill. You might be able to give every kid a different test and thereby get a good sample of how their class/school/district is doing as a whole, but you won’t be able to use it as an exit exam unless you use a very low threshold.
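      A rough back-of-the-envelope for that sampling point (numbers invented for illustration): if a student truly knows a proportion p of the material and the test samples n questions independently, the standard error of their observed pass rate is sqrt(p(1-p)/n), so a 200-question fact test pins the score down while a six-problem exam leaves it largely to luck.

      ```python
      import math

      def score_standard_error(p, n):
          """Standard error of an observed fraction-correct when a student
          truly knows a proportion p of the material and the test samples
          n questions independently (binomial approximation)."""
          return math.sqrt(p * (1 - p) / n)

      p = 0.7  # a student who genuinely knows 70% of the material
      print(score_standard_error(p, 200))  # ~0.032: 200 quick facts give a tight estimate
      print(score_standard_error(p, 6))    # ~0.187: six long problems are mostly luck
      ```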

      In practice, I thought the SAT and ACT and AP tests I took way back when were all pretty solid, but if they had been the *only* component of college admissions (much less graduation!) then I would have expected Goodhart’s Law to wreak a nightmarish vengeance on the 90% of my curriculum they didn’t cover.

      • baconbits9 says:

        I don’t agree in principle because the purported purpose of education is mostly about positive life outcomes afterward, and not how well you do on a test. Testing only matters if it correlates beyond things like IQ and socio-economic status going in. Without good reason to believe that increasing test scores results in better long term outcomes it becomes a “something must be done, this is something” situation.

        • idontknow131647093 says:

          Yes, but the purported purpose of education is mostly a lie, and almost all the correlations with education disappear if you adjust for IQ and SES. So what education is doing is mostly just measuring those things and presenting them back to you as grades and test scores.

        • Matt M says:

          positive life outcomes afterward, and not how well you do on a test

          Wanna bet these things are very highly correlated?

      • Viliam says:

        Some problems with tests could be fixed by making better tests. For example, if the test covers only 10% of what you learned, the obvious solution is to add questions for the remaining 90%. If that’s too many questions in the test, then always choose a random subset. This can be further tweaked by choosing one random question from chapter 1, one random question from chapter 2, etc., so that despite the randomness all chapters are covered.

        It seems to me that people often don’t even try this, before they give up.
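        A minimal sketch of the per-chapter sampling scheme described above (the chapter names and question labels are all made up):

        ```python
        import random

        def build_test(question_bank, per_chapter=1):
            """Assemble a test by drawing a random sample of questions from
            every chapter, so all chapters are covered despite the randomness."""
            return [q for questions in question_bank.values()
                      for q in random.sample(questions, per_chapter)]

        bank = {
            "chapter 1": ["q1a", "q1b", "q1c"],
            "chapter 2": ["q2a", "q2b"],
            "chapter 3": ["q3a", "q3b", "q3c", "q3d"],
        }
        test = build_test(bank)
        print(test)  # one randomly chosen question per chapter
        ```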

      • jaimeastorga2000 says:

        In practice, I thought the SAT and ACT and AP tests I took way back when were all pretty solid, but if they had been the *only* component of college admissions (much less graduation!) then I would have expected Goodhart’s Law to wreak a nightmarish vengeance on the 90% of my curriculum they didn’t cover.

        The AP program provides exams for almost every subject. Coverage only gets broader if you include the very similar CLEP exams as well. If you had a high school education which consisted of nothing but training you to pass the AP and CLEP exams associated with the typical required high school courses (algebra, biology, world history, English literature, etc…), on top of training for the SAT and the ACT, I think you would probably have a better high school education than 90% of people.

        • Gazeboist says:

          What’s on the AP architecture exam?

          (ed: not to say that I think architecture in particular is an essential subject, but the AP tests have a pretty narrow focus on getting people into college, which in turn gives them an extremely useful (if they’re lucky; sorry “Communications” majors) but nevertheless fairly narrow education.)

          • jaimeastorga2000 says:

            How many high school students take architecture? How many high schools even offer architecture?

            My point is not that electives wouldn’t suffer. My point is that electives constitute the minority of a high school education, and that since everybody takes different electives any single one of them can’t be essential anyways. If it’s okay for some kids to take JROTC and other kids to take Speech & Debate, why isn’t it also okay to take neither?

          • Matt M says:

            I took architecture in high school. Basically we spent the whole semester “designing a house” from the initial brainstorming stage, up through basic-level blueprints in AutoCAD, and then had to build a little scale model of it as our final project.

            It was mostly an easy joke type of class (taught by one of the shop teachers). We occasionally went on Friday field trips to nearby houses under construction and roamed around, which was kind of cool though.

          • Gazeboist says:

            Obviously no particular elective is essential, otherwise they wouldn’t be electives. The presence of electives is necessary in order to avoid pouring the vast majority of our economic effort into producing workers for a relatively narrow class of jobs that cannot stand in for all the things that need to be done. A modern economy needs its workers to specialize, and it needs them to specialize in different things. That process is severely hampered by a monomaniacal focus on classical western literature, miscellaneous historical events, expository writing, and the prerequisites to univariate calculus during the first twelve years of a typical citizen’s formal education.

          • Matt M says:

            Agreed. Electives expose students to potential career options that are available outside the narrow English-Math-History-Science specialization that the public school system enforces.

        And given how the vast majority of students eventually obtain careers as things other than authors, mathematicians, historians, and pure scientists, my belief is that there should be a lot more of them, not fewer.

  12. DragonMilk says:

    Food Preparation Question!

    Background: I value eating over work. Unless I’m actually at a meeting at the time or on a conference call I can’t hang up on, I will get lunch at 11:30am every day. But I’ve hung up on people and ended meetings a bit early to eat lunch, saying I have an 11:30.

    Someone else, however, says I don’t appreciate the culture of residency and she says that during this past weekend, she was unable to make any time to eat breakfast, lunch, or dinner. Since she did this before and I got mad about not prioritizing eating, she says she ordered pizza for everyone at 7pm. I suppose a side-question would be to all doctors, asking if it’s really a problem if a first-year resident makes a bit of time (say 5-10 minutes) to eat breakfast, lunch, and dinner.

    Question: What are some fairly easy things to prepare that you can eat with your hands and grab out of a tupperware container in a refrigerator within minutes? Preferably nothing that gets too soggy, like a mini sandwich, but I won’t eliminate that possibility entirely (perhaps if you toast it first it’s less soggy?)

    Thanks!

    • Matt M says:

      Not a doctor, but I would quit any job that didn’t allow me ten minutes to eat food three times a day.

      • Thisbe says:

        I am a vet, not a physician. I’m in a relatively low-key outpatient general practice. It is common (at least once a week) for me to have a busy enough day that I not only don’t get ten minutes to eat, I don’t remember to drink water or use the bathroom between when I get to work and midafternoon. When I worked daytime emergency, I would bring food for breakfast and if I was lucky, get it eaten by 5 pm. When I was in clinical training, it was routine (more days than not) to be working minimum 14 hour days with no allowed/designated rest breaks, and supervisors would definitely notice and complain if people took rest breaks anyway – so whether or not one took a rest break depended mostly on tolerance for adversarial interactions with supervisors. Anyways I’m not saying your reaction isn’t healthy for yourself, it sounds like a great standard to set. I wonder if there’s any person in the western world who can say, “I am a doctor, and I would have quit a job that didn’t allow me ten minutes to eat food three times a day.” I think on the whole probably not; those who would quit such a job are, de facto, not doctors.

        • CatCube says:

          When I was in clinical training, it was routine (more days than not) to be working minimum 14 hour days with no allowed/designated rest breaks, and supervisors would definitely notice and complain if people took rest breaks anyway – so whether or not one took a rest break depended mostly on tolerance for adversarial interactions with supervisors.

          That’s demented. How can a single person be that irreplaceable that you can’t spare a few minutes throughout the day for them to take care of necessary functions? What happens if that person gets hit by a bus on the way to work? Will everything fall apart? That’s actually a failure on the part of the supervisors, if they can’t cope without a particular person for a few minutes.

      • CatCube says:

        I’d go so far as to say that the culture is sick if it’s not allowing its personnel ten minutes to eat. I mean, shit, even in the military you take turns so everybody can get food into themselves, so long as you’re not actively engaging the enemy.

        It’s also dangerous, since the ability to think and some emotional stability is dependent upon eating (think of the people who do get pissy when they haven’t eaten–I’m one).

        • fion says:

          And also, based on Thisbe’s comment, there may be many otherwise-capable potential doctors who don’t go into the profession (or who drop out) because they can’t cope with the food/sleep deprivation.

          I’d be interested if anybody can defend the status quo on this. It sounds like a bad system, but normally when I think that I’m missing something. What am I missing?

          • Matt M says:

            What am I missing?

            Government restrictions limiting the number of potential doctors come to mind. It sounds to me like it’s a situation of there being too few doctors; therefore, hospitals have to get as much labor out of them as humanly possible.

            Licensing and other requirements almost certainly are standing in the way of the market clearing itself properly.

            ETA: John Schilling’s suggestion of “pointless hazing” also seems plausible to me. The medical field is both respected and insulated enough that the public will “trust” that the people in charge know what they’re doing and that of course all of this is highly necessary, despite the fact that it seems virtually impossible to justify as far as I’m concerned.

          • Perhaps this is too cynical, but it seems like a perfectly natural result of rational self-interest. The higher the barriers to entry, the higher the equilibrium wage. So if the rules setting the barriers to entry are controlled by those who have already gotten through those barriers, it’s in their interest to make the barriers high.

            It’s the same logic that makes it in the interest of existing barbers to support absurd licensing requirements–hundreds of hours of classroom study before it’s legal to charge people to cut their hair.

        • John Schilling says:

          Yeah, pretty much every field that structurally has to deal with sustained high-tempo operations, e.g. the military, knows to organize overlapping shifts so that everybody gets something approximating three quick meals and at least four hours of sleep a day with very few exceptions. So I’m pretty sure this thing where the medical community insists that the newbies work forty hours non-stop for “training”, is really approximately 100% hazing.

          Telling your patients to keep bleeding until you’ve finished breakfast will almost certainly lead to better average long-term outcomes.

          • Thomas Jørgensen says:

            That is trivially provable, since doctors are specifically not exempt from the European working time directive, and European medicine and doctor training has noticeably failed to collapse.

        • ana53294 says:

          Also, why don’t labor laws apply to interns? Are they not considered workers? Can’t interns unionize?

          I am pretty sure that any other industry that tried to pull this shit on their workers would be facing indefinite industrial action and countless lawsuits.

          While I can’t say that an internship in Spain is a pleasant thing, I have never heard of it being impossible to eat during work (I am not a doctor, so I may be misinformed). I have heard about the American situation, though.

          • Gazeboist says:

            Interns are (for some purposes) employees. Residents are students.

          • ana53294 says:

            Thanks for the explanation. I had a vague idea that it was the other way around; this system seems really counter-intuitive.

            Using students/interns for structural* work is illegal in Spain, although this gets ignored regularly. The worst offenders seem to be high end restaurants, which treat assistant chefs abominably. They do kind of admit this is hazing, and always say that they went through it, too.

            *”Structural work” in Spain is a legal term that refers to work that has to be done in order to keep the company running. It can be a bit murky, so companies frequently abuse this.

          • Gazeboist says:

            Honestly I’m not totally sure I’ve got it right. It’s slightly different here, where it’s illegal to not compensate your employees, but education and training counts as compensation. Interns have at least some labor rights, but it’s very difficult to exercise them, and in any case an employer can claim that what would otherwise be an illegal violation of an employee’s rights is an essential part of the job, and I would expect courts to be deferential to a hospital’s assessment of what is necessary for the hospital to function.

        • Matt M says:

          Yeah – I’ve been in the military (although in a non-combat role) and also worked in management consulting, which is often thought of as about as demanding, high-workload, and work-life-imbalanced a white collar job as you can get (possibly excepting I-Banking or private equity).

          In both cases, not giving you ten minutes to eat would be considered egregiously cruel and completely unacceptable.

          • CatCube says:

            In the units I was in (engineers), it was considered a positive obligation to ensure that your subordinates ate. I believe that this is true everywhere in the Army, but don’t know how much that was honored in the breach rather than the observance elsewhere.

            However, in my units, if I was regularly denying my personnel food I’d expect to be relieved for cause, and I would relieve subordinates for cause if I found they were doing this. The general expectation is that the company commander and First Sergeant were the last people through the chow line–this is a bit of theater, admittedly, but one time when we discovered that we missed getting a subordinate through before us, we were actually embarrassed by this.

            I suspect that for whatever reason “doctor culture” didn’t develop this habit, and anybody now who claims that supervisors should ensure that their people can eat or piss at some point during the day is called a whiner.

          • Matt M says:

            Right. I was genuinely surprised when I transitioned from military to corporate that “make sure your subordinates are taking care of their basic needs” was not something most supervisors saw as an important part of their job.

          • sfoil says:

            Eating last is a great way to make sure your unit is actually getting enough food.

          • CatCube says:

            @sfoil

            Oh, I understand why the practice evolved. Since the commander and 1SG are responsible for quantities ordered and ensuring portioning (nominally the commander, but a good captain will trust his first shirt on this, and they’re a team), it’s a great way to ensure that there’s an immediate signal to superiors if something has broken down.

            However, in the US and at current contractor-operated forward-deployed dining facilities, it’s mostly theatrical because there’s no shortage of food, and if it came up short you could probably get more in short order. But maintaining the cultural requirement is important so that when there is a logistics-constrained environment, you’ve already got the expectation of the senior leaders eating last built-in. (In a battalion-level field kitchen, the battalion commander and sergeant major eat after the last company commander and 1SG)

            It’s just weird to me that you have an environment where the workers will be there for days on end, and the supervisors don’t see it as imperative to ensure that they’re properly fed and taken care of. Like @Matt M, it’s weird when you come out of a military background.

          • Incurian says:

            I once got in medium-trouble for publicly pointing out that the leadership was violating the “feed your soldiers” norm. My civilian friends couldn’t understand why it bothered me so much that I felt the need to make an issue of it (which I admit was a dumb thing to do, especially the way I did it, but it was frustrating that they didn’t even understand the motivation).

      • Thegnskald says:

        If I were to steelman the “hazing”, I would guess they are trying to make sure the doctors can cope with a disaster scenario where those conditions may arise.

        Which, if you are limited to X doctors on staff, makes sense. It stops making sense if you cannot even fill those X positions, however, since additional doctors who can’t cope with the disaster would still leave people better off in the disaster.

        • Matt M says:

          As John Schilling mentions above, the proper way to prepare/train for such scenarios is to realistically acknowledge that people need to eat and to develop contingency plans that actually solve the problem of “how do you enable people to eat when they’re really busy for long stretches of time.”

          “Force them to keep working and they can eat whenever the emergency is over” is pretty explicitly not a viable long-term plan that can be deployed in an emergency.

          But even putting that aside, there’s a difference between “make someone prove they can work a long time in a row in case an emergency requires them to” and “make them do that every day, as a matter of course, for a year”

          ETA: The highly acclaimed documentary series on military medicine, M*A*S*H, featured multiple scenes of orderlies and nurses holding sandwiches and sippy cups of juice up to the faces of surgeons who were required to work 10+ hours straight. That’s what planning and preparedness looks like. Not starving them during non-emergency times just to make sure that in the case of an emergency, they can still perform surgery with no food and no sleep.

        • dick says:

          And don’t forget self-imposed abuse as a way of demonstrating one’s value. Aren’t internships at least partially a kind of competition for scarce rewards like plum assignments or post-internship jobs?

          • John Schilling says:

            In most places, they are “competitions” to demonstrate that you can do the actual job better than the other interns, not that you are willing to endure greater suffering to get the job. If I’m looking to hire someone, how much they want the job is near the bottom of the list of things I care about. Why they want the job, that’s interesting and potentially informative, but you don’t really get that datum out of pointless torment.

          • Matt M says:

            If I’m looking to hire someone, how much they want the job is near the bottom of the list of things I care about.

            It’s funny you say this. The more experience I get in the corporate world, the more I feel like “how much they want the job” is near the top of the criteria used for promotion/hiring decisions.

          • dick says:

            The more experience I get in the corporate world, the more I feel like “how much they want the job” is near the top of the criteria used for promotion/hiring decisions.

            I think I know what you’re referring to but I would phrase it more like “A candidate’s willingness to feign interest in our business is a good indicator of future willingness to embody the pretense that this bullshit corporate job is Very Important, e.g. behaving as if a brief service outage in a business application is an emergency on par with a plane crash.”

        • Matt says:

          Here’s a counterpoint:

          Las Vegas Mass Shooting

          It was around four o’clock when I started trying to look at a CAT scan report. I tried to read it, but I think I burned every neurotransmitter that night. I remember looking at it and not understanding a single word that was on there. At that point, I knew I was more dangerous to the patients than helpful. These were stable yellow tags that needed a set of fresh eyes. By then, we had a lot of doctors who had arrived, so I turned that aspect of care over to them.

          The surgery team performed an unprecedented feat that night. The numbers speak for themselves. In six hours, they did 28 damage control surgeries and 67 surgeries in the first 24 hours. We had dispositioned almost all 215 patients by about 5 o’clock in the morning, just a little more than seven hours after the ordeal began. That’s about 30 GSWs per hour. I couldn’t believe that we saved that many people in that short amount of time. It’s a testament to how amazingly well the hospital team worked together that night. We did everything we could.

          This doctor, after a heroic effort to save lives from 10pm to 4am, ended his shift early (normally 8pm – 6am) and let others take over for him at 4am because he determined that “At that point, I knew I was more dangerous to the patients than helpful.”

          Just a data point.

          But here’s a couple of questions: Do you think Dr. Menes found time to have a sandwich during this disaster? If not, is it possible that a 10 minute meal (for him or anyone on his team) could have enabled him to continue working until 5-6am instead of 4am? Enabled them to perform with fewer mistakes, thereby saving more lives?

      • rahien.din says:

        I’m a physician, not too far removed from internship. And I’m kind of touched by all of your incredulity.

        DragonMilk’s description is accurate. Medical training is generally an exercise in sacrificing your own health in order to ensure the health of others. The level of onerousness depends on the specialty and the venue – surgical training seems to be the absolute worst – but pretty much everyone gets it bad. The general adage is “eat when you can, pee when you can, sleep when you can.” Of these, the inability to sneak off and eat is probably the least important. I have gone an entire day by stealing graham crackers and peanut butter out of the inpatient nutrition rooms.

        What really gets bad is the lack of sleep. The subject of sleep in medical training is a matter of intense debate. Everyone knows you need to sleep to function, but no one really seems to know how much sleep is worth sacrificing for the sake of medical experience.

        In the initial stages of my training, I was on Q4 in-house call, meaning every fourth night on the inpatient service, I would remain in the hospital overnight and stay awake caring for patients, then make rounds the following morning as usual, and leave around noon. I would eat a big meal, then sleep from about 1500 until 0600 the following day, and then I would get up and do my usual 12 hour shift. This gets pretty miserable when you do it 4 weeks in a row. A lot of people became concerned that there were too many medical errors due to horribly disordered sleep patterns. Residents driving home after a 40 hour call are basically driving drunk.

        So, the ACGME made Q4 call illegal for most programs. This forced residencies to use night float systems, wherein there is one team who only does the overnight work, a week or two at a time. This led to more transitions of care (another source of medical errors) and decreased continuity of care (another detriment to learning). A lot of people rightly questioned whether this change in the call system was actually achieving its purported goals. As far as I can tell, no one can demonstrate that it has, and not for lack of trying.

        As for why? As the old-timers are fond of saying, the only bad thing about Q2 call is you miss half the cases. That’s not exactly wrong. You can’t become a physician without practicing medicine. You have to build a highly-informed intuition by seeing a lot of patients in a variety of situations.

        When I got into my neurology training, I took house call every 3-5 nights for three years. This means that I would go home at the end of the day, but if a patient had a neurologic problem, I had to wake up, drive to the hospital, see them, talk to my attending, and carry out their treatment plan. This probably occurred 1-5 times a night, in addition to the phone calls I would get that I could merely wake up and handle over the phone. That kind of workload basically bends you into shape. I got really good at handling things. I also was incredibly miserable in some very serious ways. And I wasn’t even a surgical resident. Those folks go through absolute hell.

        It does get better once you are out of training.

        • Matt M says:

          Isn’t “practice makes you better” universal of all occupations?

          I’m not trying to deny that medicine is very important and that it’s highly desirable we have well practiced doctors who can thrive under stress.

          But surely it isn’t the only occupation in society for which this is true? And yet, this sort of deprivation-style hazing seems almost unique to it.

          • rahien.din says:

            deprivation-style hazing

            The term “hazing” does not apply to all situations of difficulty. Hazing is the deliberate imposition of artificial difficulty upon those who wish to join a society, organization, or profession. This isn’t hazing.

            For one, the difficulty is not artificial. Medicine is just plain difficult, particularly for trainees. The learning curve is extremely steep: the amount of direct experience necessary to develop well-informed intuitions is enormous. Decreasing workload has not been shown to be beneficial for trainees, and not for want of investigation. The difficulty of learning to become a physician is very, very natural.

            Moreover, physicians are not clock-punchers by nature. As a profession, they are well-known to be compulsive, controlling, anxious, and excessively conscientious. That means that the pressure to see ever more patients is not entirely imposed from without, but also comes from within. Making sure your colleagues eat, sleep, and rest is not easy – I have had to wrest patient care responsibilities from resentful colleagues.

            What’s more, society wants it this way, for good reason. From The role of compulsiveness in the normal physician, JAMA 1985:

            On the other hand, whatever preexisting personality type may be attracted to the field of medicine, most would agree that the process of medical education itself enhances and positively reinforces whatever preexisting compulsiveness is present. There is some evidence that the premedical curriculum, the medical school experience, and the stresses of residency foster the development of certain defense mechanisms that are typical of compulsive personalities.

            When it is not extreme to the point of being pathological, compulsiveness is a highly adaptive trait, which makes for diagnostic rigor. For example, doubt, the first element of the compulsive triad, leads the physician to “run the extra mile” and rule out the rare disease entity that a less conscientious person might fail to consider. It leads him to check and doublecheck laboratory data and physical findings for discrepancies and for minute changes that might be of significance.

            None of us would question the importance of thoroughness in the physician’s diagnostic and therapeutic practice, and we would all probably choose a compulsive physician if we were seriously ill. Herein lies the grand paradox: compulsiveness and excessive conscientiousness are character traits that are socially valuable, but personally expensive. Society’s meat is the physician’s poison.

            Lastly, medical students and residents are not looking to join the medical profession – they have already joined. The day you enter medical school you are treated as a nascent physician, and med schools and residencies work extremely hard to ensure that you don’t have to leave medicine, sometimes bending over backwards to keep trainees that really don’t have what it takes. (There are exceptions – some extremely competitive fellowships will admit more trainees than they intend to fully matriculate, but this seems terribly rare.)

            So it’s not hazing. Medicine is just plain hard. Medicine attracts, creates, and requires a certain kind of excessively-conscientious personality. And the difficulty is not a price of admission to the profession, it is kind of the prize that is bestowed by the profession.

          • Matt M says:

            The term “hazing” does not apply to all situations of difficulty. Hazing is the deliberate imposition of artificial difficulty upon those who wish to join a society, organization, or profession. This isn’t hazing.

            The fact that your previous post started with “I can’t believe outsiders are questioning our grand traditions!” and ended with “And I did it myself and got through it just fine!” is a pretty strong tip-off that it totally is hazing.

            As for the rest, look, I’ll happily concede that medicine is very difficult, very demanding, and attracts candidates who are willing (and maybe even eager) to put up with that difficulty and those demands. It’s almost certainly in the top 5% of professions by those standards, no questions asked.

            My point here is that it’s not completely unique. Certain other fields (like consulting, investment banking, and possibly even military or police work) can plausibly make those claims as well. And yet, not only do they lack this ritual of requiring new joiners to go a whole day without food as a matter of routine, many of them have long-standing traditions and specific policies in place to ensure that doesn’t happen.

            In a job where you’re not really busy, and when getting reps isn’t important, and when the candidates are lazy and don’t care that much, you don’t need a “make sure your subordinates eat” policy. It’s just a given that they will. The fact that the military has hard policies on this, and that consulting firms have cultural expectations of it, doesn’t prove that they are categorically different from medicine. It proves that they are categorically similar. With the sole exception of not having these hazing-like “traditions” and a million lame excuses to justify them.

          • rahien.din says:

            Matt M,

            I am not denigrating your time in the military or your work in management consulting, or any other profession. Medicine does not have the market cornered on difficulty, smarts, conscientiousness, or nobility.

            Medicine has had success learning from other professions (checklists! checklists! checklists!). So, I wish that the policies that have been effective in the military and management consulting were effective in medical training. Unfortunately, those policies don’t seem to transfer.

            I don’t know why that is. I wish this all came down to simple malice, or market inefficiency, as those problems would present clearer solutions. Available evidence suggests that the difficulty lies somewhere else.

            Uhhh, it’s definitely hazing

            I stated what I think is a clear definition of hazing, and the reasons why I think the difficulty of medical training does not meet criteria for hazing.

            You might disagree with the definition, or you might think the difficulty of medical training actually does meet criteria. Maybe discuss that. It’s not really that satisfying for you to ignore what I wrote and resort to your initial reaction.

            (If you think I’m deluded, or brainwashed by hazing, just say so.)

          • ana53294 says:

            The argument that medicine is somehow unique because American hospitals have tried to improve it fails when you look at non-American hospitals. Medicine is high pressure everywhere, especially surgery. But nowhere else do they fail to give doctors ten minutes to eat.

            If you are saying the American healthcare system has tried several times to improve conditions and failed, I would like to examine those cases.

            A lot of people go through hazing and don’t realize what it is.

          • rahien.din says:

            But nowhere else do they fail to give doctors ten minutes to eat.

            I’d be interested in reading about that. You may have seen something I haven’t. Can you point me to some reading material?

            You were hazed. You just can’t recognize it. But I can.

            How does the difficulty of medical training meet criteria for hazing? Just saying “hazing” over and over again doesn’t make it so.

          • CatCube says:

            @rahien.din

            So, I wish that the policies that have been effective in the military and management consulting were effective in medical training. Unfortunately, those policies don’t seem to transfer.

            The “policy” in this case is the supervisor pulling somebody aside after an emergency about midday and asking “Have you eaten?” If they answer in the negative, it’s “Go grab your sandwich out of the break room and take 10 minutes to get food into you.” Then they go through their subordinates until all of them have had food, even if it’s sending them off one at a time.

            As I said above to @Thisbe, if the organization really can’t cope with them stepping out for just long enough to get food into themselves, then the organization is broken. I’m going to ask the same question I asked there: what if this person got hit by a bus on the way into work? Then the ER will have to do without them for a whole day, or at least a few hours until you can find somebody to cover. If it won’t break down in those circumstances, then there’s certainly the ability to give people time to take care of basic needs, and if the supervisors aren’t, it’s because they don’t care to do so.

          • ana53294 says:

            I may have been too strict by saying this doesn’t happen anywhere else. I correct it to most countries. I have been told about the personal experiences of doctors who have gone through residencies in Spain and Sweden, and this does not happen.

            How does the difficulty of medical training meet criteria for hazing?

            I fail to see how going without ten minutes to eat every shift for weeks is a necessary part of training doctors. Or going sleepless.

            Sure, there will be surgeries that last 16 h+. But surgeries this complicated would have a team of doctors. Having just one surgeon do a 16 h surgery seems very risky and unnecessary. And interns won’t be performing crazy complicated long surgeries every day for weeks. So an intern will be changing tasks every so often. While it may make sense for an intern to run overtime to finish a task they started, it is unnecessary to have an overworked intern work the next task. There should be another, rested person doing that.

            This seems like hospitals not hiring enough personnel, or organizing it inefficiently.

          • Matt M says:

            Rahien,

            I’d like to withdraw from most of this, because I worry that we’re coming too close to insulting each other and there’s no more conversation to be had. But I would like to go back to this:

            Decreasing workload has not been shown to be beneficial for trainees, and not for want of investigation.

            I’m interested in exactly how these investigations were done. To be very clear, my complaint is not that residents work 14 hour days, when 12 would be just as good. That part of this equation is not unique. Some consultants, bankers, and military members also work 14 hour days. My complaint, similar to CatCube’s comment above, is the assertion that resident hours are so incredibly valuable and the nature of the job is so unaccommodating to any interruption that a general policy of “allow everyone at least 20 minutes, at some point, within every 6 hours or so, to eat, go to the bathroom, and take a few seconds to clear their thoughts” is considered some sort of luxury unnecessary for all but spoiled sissies. That is the part of this that other high-demand occupations reject, and for good reason.

            My suspicion is that studies on “workload for residents” were of the “compare someone who has an 8 hour shift vs someone who has a 12 hour shift” variety, and not of the “compare someone who works 14 hours but their boss makes sure they get 10 minutes every 6 hours to eat vs someone who works 14 hours with zero interruptions whatsoever” variety.

            But perhaps I’m wrong.

          • Aapje says:

            Doctors who make mistakes can cause a lot of extra work (and damage), so malnourished and under-rested doctors may have so much work precisely because of the mistakes they make.

          • rahien.din says:

            Matt M,

            Thanks, you’re right. I regret that. But, I should stop here, too.

            I agree with how you seem to be formulating the root question: we need to know how precious these duties actually are with respect to medical training. That is an ongoing question in medical education.

            I don’t think the most recent studies are set up to answer our exact question here. The more blunt questions (should we make interns stay up 40 hours at a time?) remain unanswered, and may be hard to answer at all.

            The other thing that has been entirely lost here : the patient can not necessarily wait. Telling your patient, as suggested above, to “keep bleeding until you’ve finished breakfast” is at best unprofessional, at worst malpractice.

            Thanks for your discussion, as always.

            All,

            It feels weird to say, but, thanks for your sympathy? If medical training appears this bizarre from the outside, that’s validating to me. If you want to read a rather intense caricature of medical training, Samuel Shem’s The House of God is essential reading.

          • Matt M says:

            Telling your patient, as suggested above, to “keep bleeding until you’ve finished breakfast” is at best unprofessional, at worst malpractice.

            I don’t think this is what is being recommended.

            The military doesn’t simply leave guardposts empty while everyone takes a break for meals. And consultants don’t simply ignore client requests when noon comes up and it’s lunchtime.

            The proper solution here is to either schedule overlapping shifts, such that one person is available to cover for the person who needs to eat, or to schedule such that there is enough slack in the system that at some point within a 6-hour window you can expect at least 10 minutes during which nobody is actively bleeding on you.

            A scenario wherein doctors are actively treating bleeding patients for 14 hours straight with no 10-minute opportunity for breaks doesn’t sound like “something you need to do to prepare for emergencies.” That sounds like it is an emergency. And if doctors are operating all the time in emergency-like conditions, that would suggest, to me, that there is absolutely zero slack in the system and that in the event of a real emergency, we’d all be pretty well fucked.

        • A Definite Beta Guy says:

          Medicine might be special, but it just doesn’t match the experience of most other people and doesn’t seem to make sense. How can impaired young interns, functioning barely better than drunks, be expected to retain anything useful? Or perform at a reasonable level? You’d get fired for showing up to work drunk, so I’m not sure why you would expect anything better from someone showing up to work extremely sleep deprived.

          I only really have immediate and second-hand experience in business fields. There are a lot of people who are of the “lots of hours, no time to eat!” variety, and there are a lot of people of the “40 hours, no matter what” variety. Typically the “lots of hours” comes from an internal mindset, not from actual conditions: those people just feel like working a lot of hours so they can appear to be working hard. Companies have different cultures that may or may not reward that kind of thinking, but it’s usually not a result of underlying conditions.

          Also, it really sucks to work for managers like that, because it’s obviously bullshit, and said managers usually create unnecessary work.

          It can be a result of conditions, but such conditions will ALWAYS result in missing deliverables. You should always staff with slack, knowing that you will occasionally be buried in work, but still have staffing for it. If your staff is always working insane hours all the time, they will eventually miss something. It’s not avoidable. And it makes management a lot tougher, because you always have to discuss priorities and let your stakeholders know that you’ll fail to get a deliverable, because you are under-staffed.

          Typically you do need to work harder when learning, but IME most people who SAY they are working harder to learn are really doing nothing of the sort. Obviously, medicine may be different, because all industries are different. YMMV.

          But I absolutely wouldn’t advise anyone to go into medicine specifically because of this. Come on over to Financial Analysis! You graduate in 4 years, work some crazy hours occasionally but normally get nice corporate gigs, and can make six figures after a decade of work experience. Plus you meet lots of eligible bachelors and bachelorettes.

          • rahien.din says:

            You’re not wrong!

            What’s more, a common joke is : if doctors were actually as smart as they think they are, they would have become investment bankers.

          • A Definite Beta Guy says:

            Just to add a postscript: YMMV. I am not a doctor. I can’t speak to other industries. I just have the intuitions from my own industries and industries of friends. So I don’t want to assert that you are wrong, because that’d be talking out of my ass.

    • fion says:

      I agree with Matt M. Also, the rest of work/life balance. I’d quit any job that didn’t let me spend 20 hours a week on exercise/hobbies.

      On your question about food, sandwiches are probably the way to go. They’re reasonably quick to prepare (depending on what you put in them) and you can eat them while walking if you have to. I don’t really understand your point about sogginess. I’ve never had a problem with sandwiches going soggy.

      I’d also submit: a tupperware of cooked potatoes.

    • Incurian says:

      Pasta stays yummy even if it’s cold.

    • Conrad Honcho says:

      I am a big fan of canned seafood. Tuna, sardines, smoked oysters. There’s also now several companies putting out pouches of spiced tuna, salmon, etc that come with a little spoon. There’s a jalapeno salmon one I love. High protein, fast, no refrigeration required, and you can buy them in bulk and keep them at the office. I have a drawer full of these things at my desk for snacks.

      • DragonMilk says:

        Any recommended brands? Are they available at pretty much every supermarket?

        • Conrad Honcho says:

          Should be available at any store. I haven’t found a bad brand, but I really like the Geisha smoked oysters. I’m currently out of the packets with the built-in spoons with the spices and I can’t remember what brand they are, but I also have tuna packets (no spoon) from StarKist Selects and those are good, too. I prefer canned fish in oil rather than water, but whatever floats your boat.

          It’s just neat they’re adding all the spices and things these days. Canned tuna was sort of “comfort food” for me growing up because it was something I associated with band practice after school or road trips with friends in college. Just something quick and cheap you could take anywhere. And now they’re all gourmet-ish, so it’s nice. Makes me happy.

          ETA: Herp, forgot there’s this thing on the internet called a “search.” The jalapeno tuna stuff with the spoons is from Bumblebee. Very tasty. $1.50 a packet.

          • DragonMilk says:

            Nice – and are any of these nonmetal? I’m afraid she’ll be careless in her busyness and cut herself, haha.

          • Conrad Honcho says:

            Yeah, the pouches are…pouches. They’re made out of…pouch material. I guess it’s plastic? I don’t know. I’m going to pretend there’s farms where majestic pouchbeasts are raised and humanely slaughtered to harvest pouchhide from which pouches are made.

          • gbdub says:

            Pretty sure it’s plastic-coated aluminum foil. I guess it can give you a nasty “paper” cut, but you can open them without tools and they are much less dangerous than cans.

    • dodrian says:

      An egg dish such as a frittata/quiche/Spanish omelette works well. About half an hour to make, and can be done in a large pan for several portions at once.

      Savory pies are also a good option (pasty/empanada or whatever it’s called in your country), especially if ‘eat with hands’ is important. You could even use the frittata as a filling.

    • Nicholas Weininger says:

      If eating with hands is really required: sandwiches, cut up veggies, hard boiled eggs, cheese and crackers, falafel. If this constraint can be relaxed: frittata, rice and beans.

    • Rebecca Friedman says:

      Off the top of my head, from least fancy to most…

      Fruit – if there’s some kind of fruit you like, much of it doesn’t even need to be refrigerated and practically all of it is finger-food. Not very filling (with some exceptions), but the only prep you need to do is pick it up at the grocery store. I’d go for bananas if you/she like(s) them, they’re fairly solid as fruit goes.

      Hardboiled eggs – fast to make, not too slow to peel, you can do a bunch at once, and reasonably solid food. (Deviled eggs if you have time to throw away, but I don’t think that’s the situation you’re describing, and ordinary hardboiled eggs are still pretty good. Better if you add a small container of salt.)

      Cheese and sausage – like, a chunk of cheese and a chunk of sausage (or small sausage) of a type you like. Optionally add crackers and cut up the cheese/sausage if you like cracker mini-sandwiches. Alternately, sandwich with nothing wet in it – sandwiches with tomato disintegrate like anything in my experience, but plain bread and cheese doesn’t, and I don’t think adding butter makes a difference (don’t know about other condiments)*. Alternately, they used to sell cheese-and-salami wraps that were really good (as in, salami wrapped by cheese); they probably still do, and you could probably make one yourself with fairly little effort. This one is a high-fat high-protein option but if you’re dealing with constant hard work all day that might not be bad.

      Bag of cut up veggies – whatever you like but I’d go for things like carrots and broccoli that keep well cut up and avoid bell peppers – with small sealed container of dip of your choice. (Ideally something solider like hummus.)

      There are various savory pasties/savory pies if you look outside modern American cuisine. I don’t know how hard savory pasties are to make, but I know they’re still done in modern Britain, and you can occasionally find them in the USA. You can definitely find savory croissants although they may be either expensive or not very good (I’d guess they’re very hard to make). My own experience is with the cheese-and-egg veggie pies, or the meat pies, that we have from historical sources (http://www.daviddfriedman.com/Medieval/Misc10/Misc10.pdf for recipes), many of which are both fairly simple to make and very good, and which are another “make one pie on the weekend, put it in the fridge, bring a sizeable slice to work every day” thing – a real solid meal, but probably even so a good deal more work than you’re looking for.

      … That was fun! Food prep is a broad subject and I probably missed a lot of options, poke me if you want more. And good luck to you and your coworker.

      *Technically places that sell pre-made sandwiches deal with tomato just fine – I’m not sure if they keep it from touching the bread or drain the seeds or what, but if you care about that you may be able to make them not disintegrate. It hasn’t come up enough for me to look into it myself.

      PS: Oh, I missed an obvious one. If you live near a chinatown, chinese bakeries do their version of savory pasties – bao – often for -very- reasonable prices, with a very wide variety of fillings. BBQ pork is the standard but there are many, egg custard is also great. My university used to have a tradition of student groups that wanted to fundraise buying a lot of boxes from chinatown and then selling the buns for the significant markup of $1.50-$2.00 apiece to students coming into/going out of classes, because they were about right for eating in the five (or three) minutes before you had to be in your next class. Wonderful tradition. Good luck!

      • DragonMilk says:

        Thanks! Actually I have a pretty chill job food-wise, as long as work gets done, no one gripes about trivial things like taking time to eat. It’s my fiancee I’m talking about (I guess I was vague to the point of confusion).

        I generally buy ready-to-eat things myself and am not an experienced cook, so I’m trying to think of simple things I can make that would be easy to eat.

    • Matt says:

      I’m a fan of peanut butter and raisins in a tortilla for ‘on the go food’. Approximates a PB&J, takes longer to go stale than the sandwich, and does not get soggy.

      Actually, any sandwich is better as a tortilla wrap if you’re worried about the thing going soggy.

    • rahien.din says:

      Almonds and other nuts
      Cheese cubes or cheese sticks
      Protein bars
      Peanut butter sandwiches on wheat bread
      Cut fruit and veggies
      Hummus
      Lots and lots and lots and lots of coffee

      ETA: don’t get mad at her or scold her for not making time to eat. You really can’t understand what she is going through, or why. Medicine is a priesthood. Do your best to support her.

  13. Utility Monster says:

    New top level post to talk about more recent meetups. (Or those of us that didn’t get around to posting in the last OT.)

    • Utility Monster says:

      Vancouver (BC) meetup report:
      – We had 9 people show up and 3 more sending regrets
      – One of our high-value contributors was a semi-retired prof who had only recently found out about SSC
      – A format that worked well for the initial meetup was asking job-interview type questions. “Where do you want to be in 5 years?” launched many good conversations.
      – For future meetups, we might prefer to focus around a particular article or post. However we don’t want to make it required reading, due to the risk of scaring people away.
      – Our stim toys helped come up with alternative names for the blog: I brought Scrabble tiles for SLATESTARCODEXN. See other post.

    • SamChevre says:

      Northampton (MA) meeting report:
      – We had 12 (I think) people.
      – I was late; hopefully arlie will fill in more details
      – Wide mixture of ages–high school student to fifty-ish.
      – Engaging and interesting conversations on a variety of topics.

  14. Utility Monster says:

    Alternate names for this blog, as determined by a bunch of distracted people anagramming Scrabble tiles:
    (these are rearrangements of SLATESTARCODEXN/SCOTTSALEXANDER)

    Full anagrams (plus a teaser for the resulting blog, where applicable):

    Colder Assent Tax
    Lost Sand Excreta – oddly poetic
    Colander Sex Stat – no kinkshaming, please
    Real Sextant Docs – Renaissance Era Astronomy: Much More Than You Wanted to Know
    Clan Stars Dot EXE – Plausible scifi title
    A Narco Settled Rx – A druggie talks about drugs. Could be a tag on the blog as is.
    Dr Oxen Cattle Ass – Ungulates: Much More Than You Wanted to Know

    Partial anagrams/good words:

    closeted
    adolescent
    desolate
    atlases – no shrugging, at least.
    coldness
    exactness – pretty on brand, and doesn’t use too many vowels
    contextless – not a word in some spell checks, nonetheless excellent
    anecdotes – if only we could afford “contextless anecdotes”

    And our favourite:
    Excel at Arson – Go big or go burn down home. Bonus: the remaining letters are STD. No comment on that.
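For anyone who wants to double-check the list above: two phrases are full anagrams exactly when their letters, ignoring case, spaces, and punctuation, sort to the same sequence. A minimal Python sketch of that check, using a couple of the entries above as examples:

```python
def letters(s):
    """Sorted letters of s, ignoring case and anything that isn't a letter."""
    return sorted(ch for ch in s.lower() if ch.isalpha())

def is_anagram(a, b):
    return letters(a) == letters(b)

# The two source phrases are themselves anagrams of each other...
print(is_anagram("SLATESTARCODEXN", "SCOTTSALEXANDER"))  # True

# ...and the full-anagram entries should match them too.
print(is_anagram("Colder Assent Tax", "SLATESTARCODEXN"))   # True
print(is_anagram("Excel at Arson STD", "SLATESTARCODEXN"))  # True
```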

  15. Paul Brinkley says:

    What happens if I record another response on the ACC poll? Is it possible to spam the poll? (Because I would obviously rather this weren’t possible.)

    • Le Maistre Chat says:

      That’s what I wondered as soon as I saw that option. Do you want a Chicago stuffed-crust ballot box?

  16. Jacob says:

    I am choosing not to vote for two equally important reasons:
    * The voting method appears to be first-past-the-post, rather than approval or RCV. I predict the “winner” will not have a majority.
    * I didn’t have enough time to read all the entries.
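Jacob’s prediction is easy to illustrate with made-up numbers. A minimal Python sketch (the ballot counts below are hypothetical, not actual ACC poll results): under first-past-the-post with four entries, the plurality winner can fall well short of a majority.

```python
from collections import Counter

# Hypothetical ballots for the four ACC entries -- illustrative only.
ballots = (["education"] * 30 + ["islam"] * 28 +
           ["vaccines"] * 22 + ["transgender"] * 20)

counts = Counter(ballots)
winner, top = counts.most_common(1)[0]  # plurality winner and its vote count

print(winner)                  # education
print(top / len(ballots))      # 0.3 -- a plurality "win" with only 30% support
print(top > len(ballots) / 2)  # False: no majority
```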

  17. vaaal888 says:

    I read only 3 of the 4 entries, but I voted anyway. I think this is justified: one of the entries was totally uninteresting to me, and I think the level of interest an entry generates can be a factor in judging it.

    • Well... says:

      I’m sympathetic to this (I was only interested enough in 2 of the topics to even consider reading them — I haven’t read any yet) but it creates a perverse incentive for future collaborators to choose topics that are sensational/very much in the public eye, rather than topics that might be even more important but less a matter of public controversy.

  18. Steve? says:

    I have to say, I was definitely surprised in a good way about the quality of the ACC entries. I ended up voting for the vaccine one because I thought it was the best structured, but I could see any of them winning. Bravo to all involved.

  19. nestorr says:

    I’m not proud of this but I haven’t been able to get through any of the adversarial collab posts. I can read any walls of text Scott throws at me, but it seems I’m unable to extend this to his guest bloggers.

    • rubberduck says:

      Same. I was able to get through the opening parts, then said to myself “eh, I’ll finish it later”, and had to force myself to read through the rest, despite the topics being ones I’d usually find interesting. Having such long essays posted so close together was probably what did it. But since the goal is for readers to vote on the best essay, it makes sense to post them in rapid succession: if they were spread out more, it would be hard to remember the first essay by the time one got to the last, and it doesn’t seem fair to choose a favorite without reading every entry to the end, or to ask the authors to significantly cut down their writing.

    • fion says:

      In my opinion Scott is almost uniquely good at producing enjoyable and readable walls of text.

    • liskantope says:

      Same here. I’m glad to know I’m not the only one.

    • Matt says:

      Me, too.

    • Aapje says:

      I think that it should have been spaced out over time. Reading four long essays at the same time is a large ask.

    • Mark V Anderson says:

      It was a lot to read, but I read them all. I would put those four essays on a par with Scott’s best. They all had fascinating information. Please read them all if you can make time.

      I plan to vote for the Islam one, because it actually had enough information to make a reasoned judgment about the issue. (The six countries were so varied that it indicated that simply having a Muslim majority was not restrictive enough to determine the type of government. Therefore, it seemed reasonable to me that a Muslim majority would not preclude having a liberal democracy. This isn’t a slam dunk, but I thought it made a good case for this.)

      Even though the Islam one was the best, the other three were all good, and I can imagine voting for any of them if their competition wasn’t so good.

      Now I’d better vote, because I don’t know the deadline.

    • fustruly says:

      If you haven’t yet, consider trying the podcast; they’re all available, around an hour each I think. (Don’t know where Mr. Jeremiah got the time.) It isn’t the highest production value podcast of all time, but for me at least it solves the wall of text problem.

    • meh says:

      I’ve read them and found them very good. They all did seem to err on the side of completeness/exhaustiveness/precision/etc. This made them very airtight, but somewhat long-winded in spots, especially since most readers of this blog have some basic familiarity with the topics. Not sure what the right balance is, since people so often willfully misread controversial topics like these, but saying things like “most, not all” when “most” will do does reduce readability.

  20. helloo says:

    The Canadian province of Alberta has often been called the “Texas of Canada”.

    There are a number of similarities, including:
    Significant agricultural presence, esp. cattle ranches
    Center of the petroleum industry
    Generally higher-than-average economy
    Conservative-leaning politics and culture
    Openness in both locations and people
    Minor separatist movements

    That said, the title of “the Texas of ___” exists for many places.
    For example, Bavaria is sometimes described as the Texas of Germany due to its conservatism, strong economy and minor separatist streak.

    What other regions or even subcultures can be described as “Texases”, and why?

    • Thegnskald says:

      Georgetown in Austin, TX has a decent claim to being the Texas of Texas of Texas of Texas. If you found a good enclave within Georgetown you could toss another “of Texas” in there.

      Texas enough for you?

      • johan_larson says:

        Isn’t Austin a bit too blue-state to be the Texas of Texas?

        • Incurian says:

          Georgetown isn’t Austin. It’s a different county even.

          EDIT: I see that Thegnskald said it was, I should have replied directly to his thread.

          • Thegnskald says:

            That isn’t how Texans think about cities. “Dallas” is larger than Connecticut, and encompasses many cities and counties, not just those labeled “Dallas” (much to the consternation of the politicians of some of those cities, who want it to be called an increasingly convoluted acronym composed of the first letters of the largest cities in the conglomerate).

          • Incurian says:

            I understand what you’re saying, and in one context you’re absolutely correct – Georgetown is part of the Austin Metro Area, no question. But when it comes to “Isn’t Austin a bit too blue-state to be the Texas of Texas?” then for those purposes Georgetown is very much not Austin. I lived in Georgetown until my recent divorce and currently live in Austin (so no one can argue with me, muahahahaahaha).

          • Thegnskald says:

            Incurian –

            So Georgetown is “part” of Austin, but is contrarian to Austin’s views, and only sort of considers itself part of it?

          • Incurian says:

            It’s the same question as whether a tomato is a fruit or a vegetable, it depends on the context. If I’m trying to quickly explain where, geographically, I used to live to someone not familiar with the area, I’d probably just say “Austin,” knowing that in fact it’s a different city, county, population density, and political persuasion. If there were a need to explain where I lived in more detail than just fat fingering it on a map, I wouldn’t say Georgetown is or is in Austin.

        • Thegnskald says:

          I define Texas to be more about a certain contrariness, than to be about a specific direction of political alignment.

          • helloo says:

            Not sure if that works by itself. No one is calling Quebec the Texas of Canada despite it being more contrarian than Alberta in a number of ways.

          • Thegnskald says:

            That is because Quebec is clearly the California of Canada.

            ETA: And no, Austin isn’t the California of Texas. Californian contrariness is more like an extreme version of the dominant ideology; Texas contrariness is more like an extreme version of the opposition ideology. This doesn’t quite capture the connotations involved, but close enough.

          • Paul Brinkley says:

            As someone who grew up in rural Texas (just outside Austin, in fact), I see Texas contrariness as a belief that you’re doing fine, combined with a feeling of not needing to prove it. It’s like that one-ton bull in your herd. Yes, it knows you want it to go to the next pasture. That’s why it walks in that direction – at its own pace. You’re invited to try to make it walk faster and see what happens.

            And then there’s this.

            ETA: and this.

          • Well... says:

            Vancouver is clearly the LA of Canada, so BC, not Quebec, must be the California of Canada.

          • marshwiggle says:

            Right, BC is totally the California of Canada. Quebec would be like the Rhode Island of Canada or something, I don’t know.

          • johan_larson says:

            Quebec is different from the rest of Canada because of linguistic differences, religious differences (which matter less now than they once did), and a history as a conquered people. I don’t think there’s a close fit to that in the US, but Utah sort of fits in the religious dimension and the history of antagonism between Mormons and gentiles.

          • David Speyer says:

            “linguistic differences, religious differences … and a history as a conquered people” Sounds like Puerto Rico is the Quebec of the US.

          • Obelix says:

            David Speyer:

            Sounds like Puerto Rico is the Quebec of the US.

            Except that Quebec is the first province of Canada, and originated a large part of Canadian culture including the word “Canada” itself. While Puerto Rico is very much peripheral to the US.

            There’s never any way to make these “X is the Y of Z” claims stand up to scrutiny.

        • AG says:

          Austin is the Canada of Texas, clearly.

    • fion says:

      I’ve not heard anybody call it this, but Scotland has a reasonable claim to be the Texas of Britain.

      It’s got the cows, petrol, good economy, empty space and separatist movement. The big difference is it’s less conservative, not more.

      • Tenacious D says:

        Scotland and Québec make decent parallels:

        Historical grievances about being under the thumb of the English
        Separatist movements
        Nationalistic to a degree that would be politically incorrect elsewhere in the country
        Cows
        Energy resources

        • Obelix says:

          Actually, Canadians (other than Quebecers) are very nationalistic, even though they pretend not to be. Canadians will tell people (including foreigners) that their country is the greatest country in the world, which is not something you’ll ever hear a Quebecer say.

    • Paul Brinkley says:

      Argentina has long struck me as the Texas of South America, for obvious reasons.

    • Odovacer says:

      Can Texas be called the _____(insert other region/country/etc) of the US? I mean, it’s a lot more recent of an invention than most of those other places.

      • John Schilling says:

        I’ve heard Texas called the America of America, on the grounds that within the United States, Texas fills the cultural role that the United States does for the rest of the (free) world. Large, very distinct, proud, arrogant, filled with gun-toting cowboys, etc.

      • Tarpitz says:

        Texas is the Yorkshire of America, I think.

    • Plumber says:

      Modesto is the Texas of California.

      Why?

      “Cowboy actor” Louis Burton Lindley Jr., better known by his stage name Slim Pickens was born there.

      I rest my case.

    • WashedOut says:

      Western Australia could be considered Australia’s Texas for several reasons, but falls short in others.

      Similarities:
      Biggest state by far
      Has tried to secede at least once
      Massive Mining and Resources Industry, including huge offshore oil and gas deposits
      Complains that the rest of the country doesn’t appreciate its contribution to the economy (is this true for Texas as well? I’ve heard mixed reports.)
      Very isolated from centres of national governance

      Dissimilarities:
      Not a particularly distinctive accent, although there are many local quirks of speech and ways of saying things that are different from the rest of the country.
      Nowhere near as culturally different from the rest of the country as TX is from USA.

    • ana53294 says:

      The expression I hear more often is “Venice of …”. Basically, every city with canals gets called Venice. St. Petersburg, for example, is sometimes called the Venice of the North, even though the two are not similar at all.

      My guess is that “Texas of” is going to be as accurate.

    • zzzzort says:

      I’ve heard Ukraine described as the Texas of Russia (as in this Life magazine piece). Their separatist movement was more successful, though I guess the final outcome remains to be seen.

  21. proyas says:

    Have there been any verifiable instances where someone broke another person’s neck using the “sleeper hold plus forced neck jerk” move that is common in action movies?

    • dick says:

      Well, a 2010 study found 26 cases where a patient died due to the “cracking the neck” part of a chiropractic manipulation, mostly due to tearing of an artery leading to a stroke. So it certainly seems plausible that someone doing the same thing but much harder could cause incapacitation, though not necessarily a literally broken neck.

    • sharper13 says:

      I don’t have much info on that portion, but having done research on the sleeper hold (AKA rear naked choke hold) for a scene I wrote, I can tell you that while you really can knock someone unconscious after about 3-10 seconds of shutting off blood flow, you then either kill them by holding it way too long, or else they wake back up 3-20 seconds later, somewhat more upset at you.

      So all those movie scenes of knocking someone out, laying them down and just walking away to infiltrate some place are unrealistic. You’d need to have something to really quickly restrain and gag them before they woke back up if you wanted the same actual effect.

      • Thomas Jørgensen says:

        Hollywood has failed me. Watching the hero or villain trying to gag and zip-tie someone in less than five seconds sounds hilarious.

      • Le Maistre Chat says:

        1) How long do you have to hold to kill them by stroke or cardiac arrest?
        2) Given the importance of grappling in historical armed martial arts, it’s noteworthy that the gorget (neck armor) goes back to the earliest-known solid plate armor, the Dendra panoply.

        • dndnrsn says:

          I’m pretty sure that styles of grappling in which people are wearing armour are less about choking the other guy, or even joint locks. They’re more about pinning the other guy so you can stab him in the eye or something like that.

          • Nornagest says:

            That’s certainly a common goal, and it’s behind a lot of Western wrestling (you can see recognizable wrestling moves in 15th-century longsword fencing manuals). But joint locks still work, and there’s some stuff you can only do in armor: the style of jujitsu I study has some throws that historically involved grabbing an opponent by the helmet straps, and some breaks that benefit from rigid torso armor. There’s even one that historically involved trapping the opponent’s arm between the dome and the crest of your helmet: unarmored, you use the back of your neck (which works okay, IME) or your forehead (which doesn’t, again IME).

            Choking, maybe not.

        • Nornagest says:

          At least a few minutes if it’s a perfect choke, which it rarely is (it’s a tricky hold, and even advanced students sometimes have trouble with it). Things get a bit dicier if they’re older or have circulatory problems, though. Conventional wisdom in my corner of martial arts is to never choke anyone over fifty all the way out unless you’re willing to risk killing them. Or, if you’re over fifty, to tap out before you fall unconscious.

          That’s probably erring on the side of safety, though.

        • dndnrsn says:

          Google suggests that after about 6 minutes without oxygen, brain damage begins. It’s extremely unlikely that anyone’s going to hold a choke that long, even if they do want to inflict harm.

          One thing that’s always puzzled me is the reaction when an MMA fighter gets choked out. It’s way less bad for you than getting knocked out.

    • dndnrsn says:

      I don’t know if there are any cases of a neck crank or similar breaking a neck, but it can jack your neck up – anything from “off the mats for the next few days” to “lasting damage.” If it is possible to break a neck, it can’t be easy, because if it was easy then white belts going too hard on finishing a choke would have a death toll.

  22. johan_larson says:

    Let’s suppose you put parachutes on 100 young dudes and made them static-line jump under ideal conditions with zero training. How many would end up fine/hurt/dead? Anyone know?

    • CatCube says:

      Probably quite a few would end up hurt or dead, depending on parachute design. Modern skydiving parachutes are actual wings, and have to be flown to a landing in a downwind/base/final setup just like a light aircraft. This includes facing into the wind on final approach.

      I once accidentally turned the wrong direction off of my base leg to downwind (even now I don’t know what the fuck I was thinking). That was very memorable. The ground starts going by really fast, and after doing so I realized that my landing was going to end up being on the paved ramp, rather than the grass field I was aiming for. I fully expected to break one or both of my legs doing a PLF, but managed to only scrape up my hands getting dragged across the pavement, and got banged up and was sore for a couple of days. That was after knowing how to at least do a parachute landing fall, and realizing that I made a mistake in time to minimize the consequences.

      • gryffinp says:

        You say that modern skydiving parachutes are “actual wings, and have to be flown”. Do they really not make simpler parachutes that don’t particularly need to be flown? If such exist, I’d think we could give them to our rookie jumpers given the scenario’s call for “ideal conditions” outside of their lack of experience.

    • Statismagician says:

      According to this Norwegian study, at least 20 injuries, probably more and probably heavily weighted towards the ‘splat’ kind of injury.

      EDIT: I’m apparently blind; that’s the rate per thousand jumps.
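
      For concreteness, that per-thousand figure scales to the 100-jumper scenario like this (a rough sketch; the untrained per-jump injury probability is unknown, and presumably higher than the trained rate the study measured):

```python
# Convert ~20 injuries per 1,000 jumps into an expected injury count
# for 100 jumpers making one jump each. This just scales the study's
# trained-jumper rate to the scenario's group size.
injuries_per_1000_jumps = 20
p_injury = injuries_per_1000_jumps / 1000   # 2% per jump
n_jumpers = 100
expected_injuries = n_jumpers * p_injury
print(expected_injuries)  # 2.0 expected injuries at the trained rate
```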

      • CatCube says:

        That study also indicates that the “splat” kind of injuries are rare, not weighted towards. (Assuming by “splat” kinds of injuries, you mean a malfunction resulting in a nonfunctional chute, rather than screwing up the landing.)

        In civilian skydiving, most deaths occur under a perfectly good parachute. Malfunctions are very rare, and having both your main and reserve malfunction is even rarer. The kinds of people who skydive are thrill-seeking, and a great part of the fun is flying the parachute after it’s open; however, if they misjudge the distance to the ground while flying it, they will either make a low turn or accidentally catch the ground while close to it, and impact the ground or another obstacle with high vertical or horizontal speed.

        If you’re just freefalling to 2500′, then quietly flying your parachute to landing, skydiving probably isn’t much more dangerous than riding a motorcycle–I had an instructor claim it was safer (“Your jump is safer than the drive here”), but don’t recall the statistics offhand to be sure that’s true.

        • Statismagician says:

          Yeah – since the specification is zero training, and those figures are for at least somewhat-trained paratroopers, I was thinking they’d be pretty optimistic. Haven’t thought about it enough to make a useful estimate, though.

        • gbdub says:

          Paratrooper chutes don’t require the same degree of skill to fly though, right? They typically use plain round or squared round designs rather than the wing-like skydiving chutes.

          • bean says:

            Correct, although there are still techniques for making a safe landing and dealing with malfunctions. (I may have read both the military static line and free-fall parachute manuals.)

          • gbdub says:

            How much less likely are injuries if you’re not fully rigged up with a week’s worth of fighting gear? I thought part of the issue with paratrooper injuries was that the amount of extra weight taxes the load limits of the chutes (and the average ankle).

            My best guess is, assuming benign conditions, an unweighted static line jump with an untrained individual (who is at least smart enough not to screw with things too much) would, at worst, result in a twisted ankle or knee from a bad landing – it would definitely not be fatal unless the chute failed, which is pretty rare in static line jumps right? I mean, unless the person was deathly afraid of heights and died of heart failure.

    • sfoil says:

      How “ideal” is ideal? If they’re not carrying anything, using a basically idiot-proof/minimally maneuverable parachute like the T-10, in daylight with zero wind onto an endless expanse of powdery snow, I think the number of injuries is around zero.

      The number one cause of static line parachute injuries is hitting the ground the wrong way under a good chute, and the number one thing taught in static line training is how to hit the ground properly. If the drop zone is anything short of absolutely ideal — i.e. the ground is basically pretty hard — I would expect about 50% of the jumpers to get “hurt” (sprains to ankles and knees) with about 10% of them breaking bones (same). If they’re wearing helmets there won’t be any fatalities. If they try to maneuver (or have to because they start drifting into the trees around the drop zone), more will get hurt the way CatCube talks about, trying to turn too close to the ground.

      • johan_larson says:

        How “ideal” is ideal?

        Daylight, no wind, dropped from 5000 feet onto a grassy field.

        • sfoil says:

          If you have them chant “keep your legs and ankles together before you hit the ground” a few times I doubt more than 10% would get hurt.

      • Chipsa says:

        Current T-11 has a desired rate of fall of 5.8 m/s. This is about the same as landing from a drop of 1.7 m, or about 5½ feet. If you can jump down that far, you’ll probably be ok.

        This assumes the jumpmaster makes sure everyone is fastened into their harness correctly.
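
        That height equivalence is just the free-fall relation v² = 2gh; a quick check of the arithmetic (standard kinematics, nothing specific to the T-11 beyond the quoted descent rate):

```python
# Height of a drop that produces a given landing speed: h = v^2 / (2g)
g = 9.81                 # gravitational acceleration, m/s^2
v = 5.8                  # quoted T-11 descent rate, m/s
h_m = v**2 / (2 * g)     # equivalent drop height, metres
h_ft = h_m / 0.3048      # ... and in feet
print(f"{h_m:.2f} m = {h_ft:.1f} ft")  # 1.71 m = 5.6 ft
```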

    • Fossegrimen says:

      Depends a bit on how you define ‘zero’, but your description sounds a lot like my first jump in the army and none of us died. Few bruises and sprains.
      Of course, we had general army training from obstacle courses and stuff which meant we knew how to fall. Also a brief blackboard introduction on what to do.

  23. ugn says:

    Hello, I’m new to this site but I’ve found it very thought-provoking and it occurred to me that it might be a good place to ask a question that’s been bugging me:

    Is there a cogent counter-argument to making pharmaceutical sales illegal?

    One opposing narrative that I’ve found is roughly the “reduced sales would reduce R&D which would reduce innovation” but I don’t find that very convincing since I am taking no stance on price, just a stance against misaligned incentives. There’s also a study that suggests drug spending is not correlated to innovation but it seems to make some mistakes in terms of conflating countries’ HQ domiciles with their end markets / centers of R&D (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2866602/).

    I found one older post related to this topic but it didn’t seem to focus on the question of whether it should be legal or not… http://slatestarcodex.com/2015/02/17/pharma-virumque/ – maybe I missed the point.

    • Utility Monster says:

      Does “criminal gangs are worse than drug companies” count as a cogent argument? If not, how about “many fewer people die if the drugs they’re taking are what they’re supposed to be?”

      Like yeah, there are problems with drug companies and misaligned incentives. But if you make something illegal, not only are the people who do it criminals by definition, but it gets wrapped up with a bunch of other criminal activity.

      There’s a reason that the movement to decriminalize all drugs has traction. I don’t know how you end up swinging so far the opposite way.

    • ADifferentAnonymous says:

      Have you misstated your question, or are you proposing making it illegal to sell medicine? The argument against such a ban would be that medicine helps sick people and capitalism is a generally effective system for ensuring they can get it.

    • Cheese says:

      Do you mean sales as in ‘marketing, advertising and other sales associated costs’? I’m assuming you don’t mean making selling pharmaceuticals illegal as that is probably a bit ridiculous.

      If you mean advertising, well, that is an interesting question. It would be interesting to compare countries which allow direct-to-consumer advertising of prescription-requiring products (pretty much only the US I think) to those that don’t. You would however have to untangle the effects of that from the various subsidy schemes which would differ place to place. Advertising to doctors is obviously another point you’d have to look at – I’m unsure of the relative spend compared to consumer advertising.

      • ugn says:

        As per above, I meant to ask specifically about sales representatives… Advertising strikes me as more benign (more regulated).

        The study I linked to attempts to untangle some of those effects you mentioned.

    • AKL says:

      Without taking a position, some potential answers come to mind:

      (1) Prescribers are busy, and on net direct to prescriber sales and marketing might make them more informed.
      (2) More patients will receive treatment. Obviously this is not without downside but plausibly at least is net-good. And to the extent that patients are over-treated, likely only a bare minority of the effect is due to advertising / sales activity.
      (3) Many sales calls are about informing prescribers of patient assistance programs, compliance statistics, etc. E.g. “encourage your patients to register with our patient assistance team. It will save them 75% off the cost of their medicine and improve adherence by 50%.” Again these things are at least plausibly net positive.
      (4) Philosophically, should we outlaw otherwise-legal activities because [the authorities] determine them to be utility negative on net? Should it be illegal to advertise fatty foods? Illegal to cut down a tree in your yard if the majority of your neighbors object?

      There are of course reasonable arguments that pharma sales and marketing should be severely restricted or prohibited, but I think it’s far from a slam dunk (both on philosophical and empirical grounds).

  24. cmndrkeen says:

    Perception is Bayesian. When you read Scott’s post about the dangers of indoor CO2 levels, the air around you suddenly feels more stale and kinda suffocating.

    So too is perception about race and gender. When you read an essay saying that women are unlikely to achieve in math due to fundamental biological differences, that impacts your perception of them. If a woman and a man collaborate on a math project, you will ascribe more of the success to the man. If a woman applies to a job requiring serious math ability, you will view her resume with more skepticism.

    Is it at all surprising then when women’s groups feel it is threatening to women when an essay gives a biological explanation for the math achievement gap?

    Even if those essays are completely correct (and I don’t think they are), they harm women who are strong in mathematics by influencing people’s priors about them. This includes those women themselves, who will be less likely to believe in themselves and do ambitious work.

    One more thing: when people say that the reason you see fewer high-achieving women is because their abilities have lower variance, rather than lower mean, they seem surprised when this claim is viewed as an attack on women. However, the result of that claim is the same — that there is a powerful biological force keeping women out of math.

    • fortaleza84 says:

      I believe that your hypothesis is known as “stereotype threat.” As I understand it, the objective evidence supporting the stereotype threat hypothesis is pretty flimsy.

      • cmndrkeen says:

        I’m talking about medium/long-term goal setting and life planning, not short-term performance differences on tests due to being reminded of something you already know.

        • The Nybbler says:

          Do you have evidence, then, for the hypothesis in your second paragraph?

        • fortaleza84 says:

          I believe that “stereotype threat” — as the term is normally used — includes medium/long term effects.

          But if you prefer to use a different phrase, that’s fine. Let’s call it the Peter Pan Hypothesis. i.e. the belief that beliefs in inherent group differences are themselves responsible for a significant portion of observed group differences.

          So my question is this: What is the evidence for the Peter Pan Hypothesis? How would you test it? How would you measure it?

          Also, what is the best evidence against the Peter Pan Hypothesis?

          Is it at all surprising then when women’s groups feel it is threatening to women when an essay gives a biological explanation for the math achievement gap?

          It’s not a surprise to me — those women’s groups generally object to any observation which puts women in a negative light compared to men — even if those observations are true. Do they object because such observations are truly harmful? Or is it simply a matter of hurt feelings and rationalization?

      • Paul Brinkley says:

        I dunno about that. As soon as I read your comment, I immediately began to believe that it was flimsy.

        Which then makes me think it might not be flimsy.

      • DavidS says:

        I thought stereotype threat was “being told women are bad at maths makes them actually do worse at maths tests”.

        • Nancy Lebovitz says:

          My impression is that stereotype threat is just reminding people that they’re in a group that there’s prejudice against.

          There just might be ethical issues with exposing people to microaggressions (or macroaggressions) and then testing them to see whether they don’t do as well at whatever.

    • lvlln says:

      When you read an essay saying that women are unlikely to achieve in math due to fundamental biological differences, that impacts your perception of them. If a woman and a man collaborate on a math project, you will ascribe more of the success to the man. If a woman applies to a job requiring serious math ability, you will view her resume with more skepticism.

      This is a non-trivial empirical claim about reality that I’ve seen people throw around in various contexts. But I don’t think I’ve ever seen any empirical support for this claim or variants of this claim. One could just as plausibly posit that knowing about such fundamental biological differences would lead someone to value and laud the woman more, for having had to work harder than the man to get to the same place. Or it could lead someone to put their thumb on the scale in favor of the woman, in order to “right” what they consider a cosmic injustice. There’s just as much empirical support (i.e. none) for those claims as the one you make, as best as I can tell.

      • cmndrkeen says:

        The absence of empirical evidence here is hardly surprising, as this is extremely difficult to study!

        • lvlln says:

          The absence of empirical evidence here is hardly surprising, as this is extremely difficult to study!

          Agreed! Which is why we should refrain from making claims that have no empirical backing, and be highly suspicious of people who make such claims and then use those claims as justification for other actions.

          And in the meanwhile, it would probably be good to get rid of that absence of empirical evidence – that is, do that extremely difficult work of studying this issue and finding/producing empirical evidence to analyze, to lead us to actual empirically supported conclusions, without the expectation that they conform to our preexisting intuitions about what they should be.

          • albatross11 says:

            lvlln:

            Speculation is a part of how we get more information–if only by motivating people to check it out. And in practice, it’s hard for me to imagine that an equally speculative/abstract paper which predicted that women should outperform men in the hardest subjects would have the same problems. (Though maybe it would–it’s not like we have any kind of widespread survey of this kind of paper-spiking exercise that we can draw on to examine statistics!)

        • There is one fairly simple test–the relative ability of men vs women who make it through the filter.

          My sister went to Boalt (Berkeley Law School), I think in the late sixties. My memory of her account is that about ten percent of the students were women, and one year, out of the six top students (two in each of the three years), five were women. That pattern suggests that, for whatever reason, women had to be abler than men in order to get into Boalt.

          It should be possible to look for that pattern in other ways in other areas. Chess players are rated. If the sort of effect being discussed tends to keep women from going in for chess, one would expect the average rating of those who do to be higher than that of the men who do.

          • albatross11 says:

            If the overrepresentation of men relative to women in STEM fields is because of a wider variance in mens’ intelligence distribution (or other mental traits), then it’s interesting to ask why this only happens in STEM fields or math-heavy fields. It seems intuitively like you should need to be way out in the right end of the intelligent distribution to be a working biologist, but that field is much more gender-balanced than physics. And it’s not clear to me that you should need to be smarter to be an engineer than a lawyer, or an economist than a psychologist, and yet in all those cases, the math-oriented field is heavily male dominated.

            It seems to me that the most plausible single explanation is a slightly right-shifted distribution of math ability of men relative to women, rather than a higher variance. That would give you the pattern we actually observe–economics is more male-dominated than psychology, engineering is more male-dominated than law, computer science is more male-dominated than medicine, biology is more male-dominated than physics or chemistry, etc.

          • Matt M says:

            I don’t know enough about STEM to know if this is true – just speculation on my part.

            But it might be that in STEM fields, the methods of testing and evaluation are more objective and robust, such that those fields have been able to more credibly resist demands for equal representation as political forces have increasingly demanded gender parity.

            As a comparison, sports have done a great job resisting such demands because it’s clear and obvious that male athletes are superior to female ones. Politicians could try and force an NBA team to sign a female player if they wanted to, but their inferiority would become overwhelmingly obvious the second they hit the court. Therefore, the pressure doesn’t come.

          • The Nybbler says:

            It seems to me that the most plausible single explanation is a slightly right-shifted distribution of math ability of men relative to women, rather than a higher variance.

            If there is some threshold below which low ability does not matter, higher variance with the same mean and higher mean produce similar results. That is, if you need an (arbitrary) math ability score of 90 to be an economist, group A with a higher percentage of 90+s will have more potential economists than (similar-sized) group B, even if they also have a higher percentage of 10-s.
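
            This threshold point is easy to see in a quick simulation (a sketch only – the normal distributions, the 90-point cutoff, and the group parameters are arbitrary stand-ins for illustration, not real data):

            ```python
            import random

            random.seed(0)
            N = 1_000_000
            CUT = 90  # arbitrary "potential economist" threshold

            # Baseline group B; group A has the same mean but higher variance;
            # group C has a slightly higher mean with the same variance.
            b = [random.gauss(50, 15) for _ in range(N)]
            a = [random.gauss(50, 20) for _ in range(N)]
            c = [random.gauss(55, 15) for _ in range(N)]

            above_a = sum(x >= CUT for x in a)
            above_b = sum(x >= CUT for x in b)
            above_c = sum(x >= CUT for x in c)

            # Only the low tail distinguishes the two stories.
            low_a = sum(x <= 10 for x in a)
            low_b = sum(x <= 10 for x in b)

            print(above_a, above_c, above_b, low_a, low_b)
            ```

            Both the higher-variance group A and the higher-mean group C put several times as many people over the bar as B; they only come apart in the low tail, which is invisible when you look at elite professions.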

          • marshwiggle says:

            Agreed with Matt.

            As to biology vs physics, econ vs psych, engineering vs law, I’m really not sure those are great examples. I’m pretty sure the male dominated ones of those tend to require more intelligence at the high end. For that matter, it would be easier to switch from the male dominated one than in the other direction. I think you could find paradoxical pairings, but I don’t think the ones you listed work.

          • albatross11 says:

            MattM:

            Okay, but sports are a weird special case, because the differences in strength and size between women and men are huge. The differences in mental abilities, as best we can measure them, are quite small (a small verbal advantage for women, a small mathematical/spatial advantage for men).

          • Matt M says:

            albatross,

            Sports is useful as an edge case, but let’s walk through this.

            Imagine a society composed of two groups – the Supermen and the Plebs.

            Supermen are, on average, 10% better at everything than Plebs are. Smarter, faster, stronger, more sociable, all of it. I’m not a statistician, but 10% seems like a threshold such that it would manifest itself in a clear and visible way at the upper edges of society (corporate boards, sports teams, Nobel winners, etc. would all be over-represented with Supermen), while at the same time leaving plenty of room for everyone to personally know really smart and accomplished Plebs and really dumb loser Supermen.
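
            That 10% intuition can be eyeballed with a toy simulation (pure illustration – the IQ-style scale and the cutoff for the “upper edges of society” are made up for the sketch):

            ```python
            import random

            random.seed(0)
            N = 100_000

            plebs = [random.gauss(100, 15) for _ in range(N)]
            supermen = [random.gauss(110, 15) for _ in range(N)]  # 10% higher mean

            elite_cut = 140  # arbitrary bar for boards, Nobels, etc.
            elite_s = sum(x >= elite_cut for x in supermen)
            elite_p = sum(x >= elite_cut for x in plebs)

            # Overlap: the fraction of Plebs who beat the *median* Superman.
            overlap = sum(x >= 110 for x in plebs) / N

            print(elite_s / max(elite_p, 1), overlap)
            ```

            A modest mean shift yields severalfold over-representation at the elite tail, while roughly a quarter of Plebs still outscore the median Superman – matching the “everyone personally knows accomplished Plebs” part of the thought experiment.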

            Now let’s say that polite society decides that this is not actually the case. Supermen aren’t inherently better than Plebs at all. That entire notion was all a fictional invention of Supermen-supremacists to hold back and oppress the poor Plebs. There is no difference. The difference we see in outcomes is all attributable to discrimination, which must be reversed as soon as possible.

            Where would we expect to see this reversal manifest itself quickly, and where would it take longer? Presumably, it would manifest quickly in disciplines where success is somewhat arbitrary (and where the stakes in general are low), and take a lot longer in disciplines where objective measurements are easy (and the stakes are high).

            If you believe that Engineering is more objectively evaluated than Biology, you would, therefore, expect Engineering to resist such demands for reversal, while Biology complies more quickly and easily.

          • baconbits9 says:

            Another fairly simple test is to look at broad social trends as cultural pressure eased. The percentage of female doctors (a difficult and well compensated profession) has been rising for a while, and the percentage of women among college students is higher as well. If you want to suppose that it isn’t some innate trait keeping women’s participation from rising in the remaining fields, you have to posit extra misogyny specific to those fields.

          • idontknow131647093 says:

            @albatross

            Know that whichever it is, it is most likely a result of more objective standards being applied.

            Feminists are already openly arguing in favor of making STEM less objective because women do worse on objective tests. See, e.g.

            http://journals.sagepub.com/doi/pdf/10.1177/2332858417743754

          • Aapje says:

            Over-representation doesn’t have to reflect a difference in ability. It can also (partially or fully) reflect a difference in interest.

            For example, imagine a society where men are judged more on their status and money-earning, while women are judged more on looks. One would then expect men to sacrifice more to get status and a high income, making it seem that they are better at being CEOs, while in actuality they may just be better at “being willing to work 80-hour weeks for decades to achieve very high income and status.”

          • John Schilling says:

            If the overrepresentation of men relative to women in STEM fields is because of a wider variance in mens’ intelligence distribution

            The overrepresentation of men relative to women in STEM fields is at least in part because more men than women want to partake of most STEM fields. That’s going to make it very difficult to deconvolve other effects. And if you’re going to even try to pull the hypothesized greater male variability out of that morass, the last place to do it is at the barely-made-it margin where you are closest to the center of the distribution function.

          • idontknow131647093 says:

            @John Schilling

            If that were a major reason, it would be easy to see in statistics: if men dominate STEM because they prefer it, they will have a disproportionate share of the worst-performing STEM students.

            By way of example:

            Math School: 1000 students, 900 men, 100 women. If preference predominates, we would expect the bottom 100 students by GPA, admittance tests (SAT/ACT), and exit tests (GRE) to include >90 men. Over several student bodies this effect would be shown to be statistically significant.

          • cassander says:

            @idontknow131647093 says:

            >If that were a major reason it would be easy to see in statistics: If men dominate STEM because they prefer it they will have a disproportionate share of the worst performing STEM students.

            The people in STEM classes are self-selected. The people who do worst at it don’t take STEM classes to begin with, or fail out. Though I’d actually bet a fair amount that if you look at the rates of people who fail STEM classes, they would be overwhelmingly male.

          • The Nybbler says:

            @idontknow131647093

            If interest is independent of ability, that claim does not hold; if an interest-granting oracle randomly samples 900 from the male population and 100 from the female population, I expect the bottom 100 to be 90 men, 10 women.

            If interest is strictly dependent on ability and the populations have the same mean and variance, it is true.

            If interest is related to ability in some more complex way, that perhaps differs between genders, this test can tell us nothing.
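
            The first two cases are simple to simulate (a sketch using made-up normal ability scores and the 900/100 cohort from the example above – not real data):

            ```python
            import random

            random.seed(1)

            # Identical ability distributions for both populations.
            men = [random.gauss(0, 1) for _ in range(10_000)]
            women = [random.gauss(0, 1) for _ in range(10_000)]

            def men_in_bottom(cohort, k=100):
                """Count men among the k lowest-ability members of a cohort."""
                cohort.sort(key=lambda t: t[0])  # ascending ability
                return sum(1 for _, g in cohort[:k] if g == 'M')

            # Case 1: interest independent of ability -- random enrollment.
            random_cohort = ([(a, 'M') for a in random.sample(men, 900)]
                             + [(w, 'W') for w in random.sample(women, 100)])

            # Case 2: interest strictly dependent on ability -- only the ablest enroll.
            ability_cohort = ([(a, 'M') for a in sorted(men)[-900:]]
                              + [(w, 'W') for w in sorted(women)[-100:]])

            mb1 = men_in_bottom(random_cohort)
            mb2 = men_in_bottom(ability_cohort)
            print(mb1, mb2)
            ```

            In case 1 the bottom 100 hovers around the proportional 90 men; in case 2 the enrolled women sit near the 99th percentile while the men reach down to the 91st, so the bottom of the cohort is essentially all men.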

          • idontknow131647093 says:

            @cassander I don’t see how self-selection on ability would change the percentages at any level; in your situation, it would seem the population percentages would be similar at all rankings within a school.

            @The Nybbler I think you are right with regard to my initial statement, which leads me to revise it closer to what I wanted it to mean: if there is a kind of deterrent effect against women that causes them to be disinterested in STEM (or encourages men to enter it), as some feminist theories posit, this will most likely affect the marginal student (the classic case is a freshman engineer who graduates in another major, usually business, for men). Under that model, we would expect men to cluster at the bottom, because lower-performing men would either be more encouraged to sign up or to stay in, comparatively.

          • Thomas Jørgensen says:

            Women were quite deliberately chased out of some fields in STEM. Computer science, for example, had a lot of women at the start, but that fell off a cliff, and it was driven by internal politics more than anything else: the men in the field did not want to be working in a women’s occupation, because they accurately judged that this would do large amounts of damage to their prestige and earnings, so they set out to get rid of as many women as they could.

            That no longer holds, but if you ignore the historic impact of blatant and overt sexism, you will just not have an accurate picture of reality.

          • Thegnskald says:

            Thomas –

            That is quite a series of remarkable claims all wrapped up in a neat little bow.

            I am pretty sure you are just mischaracterizing the history of IT with regard to the now obsolete job of “calculator” (among other jobs that would now be characterized as data entry related), but if there is something more to the claim, please surprise me.

            ETA:
            The specific claims I’d like to see evidence of:
            Large numbers of women in early computer science (if it isn’t just going to be people who move punch cards, please don’t bore me with that)
            That men specifically tried to chase them out.
            That men specifically tried to chase them out for prestige-related reasons – this seems implausible on its face, since it suggests these mathematics sorts were politically savvy enough both to see the advantage in it and to cooperate to achieve that end.
            That the women involved were successfully chased out, because this seems implausible on its face as well, even granting all the rest. It just sounds like “Oh, those dainty women, they couldn’t handle the rough and tumble politics” – which seems like the sort of thing you might believe if you had met neither early-century working women nor the sort of women who end up in computing, the intersection of which sounds particularly unlikely to let themselves be quietly chased off.

          • bean says:

            I think idontknow131647093 is onto something. If there’s an interest gap between men and women of similar ability, then it seems logical to expect that the population who fails (say) the basic weed-out engineering course is going to be disproportionately male relative to the population of the course.

          • Protagoras says:

            I thought there were some statistics showing that men were disproportionately represented at the bottom of STEM fields, due to women being more likely to drop out of STEM majors than men if they got mediocre grades. Though a difference in interest levels is not the only thing that would explain this phenomenon (if I’m correctly remembering the studies that found this, and if the studies were accurate).

          • The Nybbler says:

            @Thomas Jørgensen

            That is an extraordinary set of claims, as Thegnskald points out. We know computer programming (which was very different, but still recognizably programming and debugging) was female-dominated in the ENIAC era. But the ENIAC programmers were selected at the tail end of WWII, when a lot of men were busy elsewhere. We know when the US Department of Labor started keeping statistics (IIRC late 1960s), programming was male dominated. Margaret Hamilton has indicated that the profession was male dominated in her time in it, which began in the early 1960s. As far as I know, no one has pinned down when the changeover happened, and certainly no one has conclusively demonstrated the reasons for it.

            The claim “Men chased them out to help the prestige and earnings of the profession” sounds like a mash-up of two strands of very modern feminist reasoning (“chased out” and “female-dominated professions lose prestige and financial value as a result of being female-dominated”). Furthermore, the profession didn’t become at all prestigious until much later — after the personal computer revolution, certainly.

          • Thomas Jørgensen says:

            I did over-egg the argument badly, but the solid facts are that it started majority-female, had healthy (and climbing! see those Bureau of Labor Statistics numbers) numbers of women in it until the PC era, and then female participation very abruptly fell off a cliff.

            Pretty much exactly when it became a generator of great fortunes. Which is where I start putting an incredibly hostile gloss on things, because this is not what biological factors look like – this is a cultural campaign of exclusion.

            The kinder read on things is that the entire profession of coder got run over by a marketing juggernaut. The personal computer sales pitch was incredibly gendered in the profession’s infancy, so a whole lot more boys than girls were handed one as teenagers, with predictable results; and the media portrayals that defined the popular conception of the computer geek ranged from “quite incredibly misogynist” (the Revenge of the Nerds films have so much rape in them – just so much rape) to “the only women in the movie are there to be explained at.”

            So an entire generation of women were informed in no uncertain terms that computing was a boys club. Resorting to biology to explain the ratio after that is, well, not necessary?

          • The Nybbler says:

            @Thomas Jørgensen

            but the solid facts are that it started majority female, had healthy (and climbing! see those bureau of labor statistics) numbers of women in it until the pc era, and then female participation very abruptly fell off a cliff.

            It started majority-female when there were literally six programmers in 1945. Even by 1950, programming occupations were not considered significant enough to mention by the Bureau of Labor Statistics. As I said, Margaret Hamilton mentions in an interview that the profession was male-dominated at least from 1959.

            According to the Statistical Abstract of the United States (1976 edition), by 1960 census we have a whopping 9000 male computer specialists (not broken down further), and 4000 female computer specialists. In 1970, 207,000 men; 51,000 women. In 1972, 259,200 men, 16,800 women. According to the 1982 edition of the Abstract: In 1981, 1,087,500 men, 38,500 women. This is the early PC era, with the IBM PC being released in the middle of 1981. One estimate is that only about 1.8 million PCs — total — had been sold by the end of 1981. These aren’t people who grew up with PCs; those won’t enter the labor force until the late 1980s.

            What we see is not men driving out women. What we see is a field expanding, but unevenly. The number of women doubles; the number of men quadruples.

            _Revenge of the Nerds_ was 1984; as I recall, it has rape by fraud (played for laughs), but hardly “so much rape”. Then there’s _Weird Science_, where it’s the nerds who are there to be explained to by a woman. And _Real Genius_ where you have a 3-man, one-woman ensemble. The idea that _Hollywood nerd movies_ drove women out of the field is even more unlikely than the idea that the men in the field did.

          • quanta413 says:

            If I had to eyeball it from teaching (not a very large sample size, very biased sample, etc.), I’d say men are perhaps a little more likely to be among my worse students in physics. But I think it’s probably either because men are somewhat less conscientious, or it’s just noise.

          • The Nybbler says:

            Oops, I read the wrong data from 1972 and 1981.

            1972 has 276,000 computer specialists, 16.8% women (46,368).
            1981 has 627,000 computer specialists, 27.1% women (164,900).
            1982 has 751,000 computer specialists, 28.5% women (214,000).

            At this point occupational categories change. If we follow the detailed occupations of “Computer Programmer” and “Computer Systems Analyst” (which made up “computer specialists” in 1982) and assume “Computer Systems Analysts, scientists” is essentially the same as “Computer Systems Analyst”, we get

            1983 719,000 total 30.7% women (220,700)
            1984 817,000 total 33.4% women (272,500)
            1985 893,000 total 31.8% women (283,700)
            1986 934,000 total 34.2% women (319,100)
            1987 974,000 total 34.5% women (336,400)
            1988 1,049,000 total 31.0% women (324,800)
            1989 1,127,000 total, 33.8% women (380,900)
            1990 1,199,000 total, 35.2% women (422,500)
            1991 1,221,000 total, 33.8% women (413,100)
            1992 1,243,000 total, 31.1% women (386,600)

            Thus through the PC era we see a rise at the beginning followed by fluctuations with no trend, with a possible drop starting the Internet era. I’ll stop there for now because I’ve been typing these by hand from scanned BLS reports, and I’ve probably made mistakes and will make more.

            I think the “driven out by mid-1980s Hollywood nerd movies” and “driven out by the PC era” ideas are not supported. Continuing through to the dot-com bust (which definitely drove women out, but wasn’t sexist) might show more, but I don’t think it’s going to substantiate men driving women out. Especially since I was in the industry at the time, not doing any woman-ejecting.

          • John Schilling says:

            Computer Science for example had a lot of women at the start, but that fell off a cliff, and that was driven by internal politics more than anything else:

            I thought that was driven by the first wave of computer science hiring occurring during the period when most of the male candidates were fully occupied with Extreme Nazi-Punching, and the second wave occurred during the culturally conservative 1950s.

            The men in the field did not want to be working in a womens occupation, because they accurately judged that this would do large amounts of damage to their prestige and earnings, so they set out to get rid of as many of them as they could.

            Citation very much needed for this explanation.

            For that matter, citation needed for women being “gotten rid of” at all, for their numbers “falling off a cliff”. My understanding is that the absolute number of women employed in computer science(*) rose almost monotonically through the 1940s, 50s, and 60s, and that only their relative participation declined as men joined the field even faster.

            * NOT hand-calculation in lieu of computers not being available yet, because we don’t need elaborate theories of discrimination to explain the calculators not all being rehired as coders any more than we do to explain John Henry never being employed as a steam-drill engineer.

          • 10240 says:

            But it might be that in STEM fields, the methods of testing and evaluation are more objective and robust, such that those fields have been able to more credibly resist demands for equal representation as political forces have increasingly demanded gender parity.

            I don’t think this is a reason. The same disparities also exist in countries where such political pressures don’t exist (e.g. Hungary), as well as in contexts where participation is purely a matter of choice, or depends on objective tests.

            Furthermore, while the assessment of the correctness of a result is the most objective in math, the importance of a result is just as subjective as in biology.

          If that were a major reason it would be easy to see in statistics: If men dominate STEM because they prefer it they will have a disproportionate share of the worst performing STEM students.

            Interest and ability are very hard to separate. If you are more interested in something, you are more likely to study and understand it properly, so you are going to be better at it. So if there is an innate difference in interest but not talent, once you have to understand complex math, it will translate to a difference in ability. Vice versa, if you are not good at something, there is a good chance you’ll lose interest.

          • Matt M says:

            only their relative participation declined as men joined the field even faster.

            Isn’t this what really matters, though? Why should the absolute number be the one we focus on, especially in a quickly growing industry, and especially during an era when female workforce participation in general was also growing very quickly?

          • Thegnskald says:

            MattM –

            I think you are responding to a response to the argument that women were chased out.

            Pointing out that the absolute numbers of women didn’t decrease is a strong argument against the idea that men moved in and pushed all the women out.

            The relative representation doesn’t matter to this specific discussion, even if it matters a lot to the broader question of women in STEM.

          • Matt M says:

            I suppose that’s fair if you take “chased out” very literally. I was thinking of it more metaphorically.

        • Matt M says:

          One way to be sure you won’t get any more empirical evidence is to demand that the issue never be discussed.

        • fortaleza84 says:

          If one assumes for the sake of argument that actual ability gaps do exist (not an outrageous hypothesis), it’s hardly a surprise that some people would come up with difficult-to-falsify hypotheses that explain away the evidence for such ability gaps.

          And stereotype threat is conviently difficult to falsify.

          • AnonYEmous says:

            And stereotype threat is conviently difficult to falsify.

            not to be that guy but actually stereotype threat studies usually fail to replicate and I don’t think anyone without a lot of political investment in progressivism takes it seriously any more

    • Matt M says:

      Perception is Bayesian. When you read Scott’s post about the dangers of indoor CO2 levels, the air around you suddenly feels more stale and kinda suffocating.

      So too is perception about race and gender. When you read an essay saying that women are unlikely to achieve in math due to fundamental biological differences, that impacts your perception of them. If a woman and a man collaborate on a math project, you will ascribe more of the success to the man. If a woman applies to a job requiring serious math ability, you will view her resume with more skepticism.

      I disagree with this comparison. Scott’s post about CO2 levels identifies many ways in which one might remedy the situation. You might feel stuffy in a closed environment, but you won’t feel stuffy in a large, well ventilated, open space – because Scott specifically tells you that these places are unlikely to be affected by this problem.

      Similarly, if you read a study that says that, on average women aren’t as good at math due to variability, that doesn’t tell you anything about the comparison between male author and female author, and any decent reasoner should be well aware of this qualifier and should apply it to their analysis – thereby not ascribing more of the success to the male than the female.

      I suppose it’s possible that this level of nuance in analysis might be beyond the very stupid, but it’s really not that advanced or nuanced. It’s not difficult to explain to someone that variability explains why there are few female math geniuses relative to male math geniuses, but that this does not mean that the female geniuses who do exist are inferior to the male ones. Just as “indoor space = CO2 = stuffy” does not mean that all indoor spaces have high CO2 levels and are stuffy.

      • cmndrkeen says:

        If you believe that biology dictates that it requires a 1/100 event for a woman to be the best mathematician in the world, will you not be skeptical if someone reports that the world’s best mathematician is female?

        If the math paper of the century is cowritten by a man and a woman, will your belief not impact how you perceive whose insights were most important?

        • Matt M says:

          If you believe that biology dictates that it requires a 1/100 event for a woman to be the best mathematician in the world, will you not be skeptical if someone reports that the world’s best mathematician is female?

          I’ll be as skeptical as the statistics indicate that I should be.

          It’s also about a 1/100 chance to flip a coin and have it land heads 7 times in a row. If I saw that happen, I might ask to examine the coin to see if it’s a trick coin or something. But weird random things happen frequently.
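
          (For reference, seven heads in a row is (1/2)^7 = 1/128, a bit under 1% – the arithmetic in one line:)

          ```python
          p = 0.5 ** 7
          print(p, 1 / p)  # prints: 0.0078125 128.0
          ```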

          Last year, I lost my car in what was supposedly a 1-in-500 year flood event. All the skepticism in the world ain’t bringing my car back.

        • Randy M says:

          If you believe that biology dictates that it requires a 1/100 event for a woman to be the best mathematician in the world, will you not be skeptical if someone reports that the world’s best mathematician is female?

          Yes. And rightly so.

          Skepticism, of course, is not the same as flat-out disbelief. And, upon confirmation, I will then be skeptical of the original belief.

          In each case, that skepticism only extends to a degree warranted by the strength of the evidence.

          • baconbits9 says:

            Depends on the distribution and how often people have claimed that a woman is the top mathematician. If the latter claim is rare enough, there isn’t much more reason to be skeptical of the woman claimed to be at the top than of any individual man named. If the claim is rare enough, it might actually make you more likely to believe it the one time you hear it.

          • Matt M says:

            Agree with baconbits.

            When I get hit with a “once in every 500 years” flood every three years or so, I do indeed start to get very skeptical.

            Not skeptical that a flood happened, but skeptical that the people who declared it to be supposedly so rare have any idea what they’re talking about.

          • Randy M says:

            True, it’s just one data point, so the skepticism would be small. To update your prior, you would prefer to have a list of as many previous top mathematicians as possible.
            But to learn that a particular outcome is the 1% is surprising. Even if you should expect to encounter 1% events periodically, the 99% is the way to bet.

          • Randy M says:

            @Matt M
            The difference is, in your scenario, we know for a fact that a flood occurred, so it is only the statistic that is questionable.
            Whereas in the mathematician scenario, we don’t really have any more reason to believe the study showing “top mathematicians are 99% men” than the subjective evaluation “X, a woman, is the top mathematician”, but we should consider that each is evidence, however sparse, against the other.

          • For a while, one of the world’s best chess players was a woman. The criteria for math aren’t quite as tidy, but I don’t think mathematicians would have a hard time recognizing a woman equivalent of Gauss or Von Neumann.

          • cmndrkeen says:

            Randy M: And that skepticism harms the person you are skeptical of!

            DavidFriedman: That’s not obvious to me – Georg Cantor killed himself because the mathematics community was dismissive of his findings.

          • Protagoras says:

            Hmmm? Cantor died of a heart attack at age 72, according to wikipedia at least. Where did this story of him killing himself come from?

          • cmndrkeen says:

            Protagoras: Oops, you’re right. Don’t know why I thought Cantor killed himself! Apologies, DavidFriedman.

            Still, he was hospitalized for depression in 1884 and switched from math to philosophy and literature for a long time due to intense criticism of his work.

          • cmndrkeen says:

            DavidFriedman: A better response to your argument (though not a perfect one):

            Gauss and Euler lived in time periods where women had almost no freedom, even rich women.

            In fact, during Von Neumann’s time, the world did produce Emmy Noether. She had an extremely difficult time getting universities to accept her, despite the support of David Hilbert. Noether’s Theorem is one of the most profound bits of physics I have learned, and something I still wrestle with (https://en.wikipedia.org/wiki/Noether%27s_theorem).

          • Randy M says:

            Randy M: And that skepticism harms the person you are skeptical of!

            No it doesn’t.

          • quanta413 says:

            Maybe it’s just my bias against the more pure forms of mathematics, but as successful as he was Cantor doesn’t strike me as nearly as important and influential as Gauss and Euler.

          • littskad says:

            @quanta413

            Cantor doesn’t strike me as nearly as important and influential as Gauss and Euler.

            Good Lord, you have high standards. That said, Cantor’s work is absolutely fundamental in the modern understanding of the infinite, and has unexpected applications in just about every area of math.

          • quanta413 says:

            @littskad

            Sure, Cantor’s work is important to mathematical foundations. But it’s not that my standards are high. It’s that Gauss and Euler are really high standards. I wish I’d die 1/100 as successful in my field as Cantor. I figure that’s about 1/10000 as important as Gauss or Euler. (Numbers just illustration of rough idea, not meant to be taken seriously.)

            Unless I’m missing something, Cantor’s range was much smaller than Gauss’s or Euler’s. It’s mostly foundational math, with some application outside its own area, but not that much (well… compared to Gauss or Euler). In the sense that you’ll not use it much if you’re not in those fields, not in the sense that it doesn’t lay logical foundations. Gauss and Euler both made major contributions to multiple branches of mathematics and physics. It’s hard not to use their work if you do any physical science or engineering, or large chunks of math.

            But like I said, maybe it’s my bias. I think set theory is interesting and logically important, but I don’t rate it as highly as graph theory, differential geometry, or physics individually, much less all combined.

            To be fair to Cantor, Euler and Gauss did live earlier. That has significant advantages in these sorts of comparisons.

        • 10240 says:

          If you believe that biology dictates that it requires a 1/100 event for a woman to be the best mathematician in the world, will you not be skeptical if someone reports that the world’s best mathematician is female?

          Not if I have no reason to believe that the people who evaluated her work were biased against men.

    • Matt M says:

      However, the result of that claim is the same — that there is a powerful biological force keeping women out of math.

      Also keeping them out of jail and homeless shelters.

      “High variability” in and of itself is not typically considered a desirable trait, at the individual level at least.

      • cmndrkeen says:

        A female professor does not care about this. She cares about whether she can get tenure, not the fact that some other woman somewhere is less likely to be homeless.

        A women’s group that cares about high-achieving women will not be mollified by your statement.

        • Matt M says:

          A women’s group that cares about high-achieving women will not be mollified by your statement.

          Well if high-achieving women were the only demographic that mattered, your concerns might be relevant.

          You seem to be focusing entirely on one aspect of this question, which is how this might harm high-achieving women, while completely ignoring how it might

          1. Benefit low-achieving women
          or
          2. Harm or benefit men

          Those things also seem relevant when asking the question “Should we ban discussion of this topic?”, do they not?

          • cmndrkeen says:

            When determining whether to ban discussion on something, a whole host of factors are important (including whether we want to start a precedent for banning discussions).

            All I’m saying is that essay harms someone, and it is unsurprising when groups or individuals concerned about that harm say so.

          • 10240 says:

            “It’s understandable that women’s groups suppress the paper because it’s in their interest to do so”, that’s a pretty low standard. Yes, it’s understandable, but the important question is whether it’s good, legitimate, or not morally repugnant. (A much more salient reason than yours that it’s in women’s groups’ interest to suppress such research if their goal is to increase women’s representation is that if people believe, rightly or wrongly, that the cause of women’s low representation in STEM is discrimination, then they are inclined to engage in affirmative action to increase women’s representation.)

    • The Nybbler says:

      This appears to be a re-invention of stereotype threat. Which has not been very robust to replication.

    • albatross11 says:

      cmdrkeen:

      In living memory, we had a whole social order that said that women were unsuited for science and professional work, that they were mostly unsuited to becoming doctors or engineers or lawyers. My mother in law has harrowing stories of how she was treated as a PhD student in psychology, and she’s not remotely alone. There were authority figures arguing that most women shouldn’t and couldn’t go into these fields. Many colleges wouldn’t even allow women into those departments.

      And yet, somehow, we’ve seen women reach parity or better across most fields, including medicine and law and accounting and biology and psychology. That’s happened over the last 50 years or so.

      If the massive social pressure and social messaging against women being successful in those fields didn’t keep them out (despite plenty of active harassment and opposition), why is it plausible that a fairly obscure mathematical paper would manage the trick?

      • cmndrkeen says:

        I think this is the best response my post has received so far. Your argument is that women shouldn’t care about the influence of these essays. What about if the dean of the school makes this argument? What about the fact that battling arguments of this kind is part of why women have managed to get into fields previously denied them?

        • baconbits9 says:

          His argument isn’t that women shouldn’t care about these things, it’s that they have demonstrated that they don’t.

          • Matt M says:

            At the risk of defending someone that I think might well be a troll, I think this analysis is missing something.

            While it’s true that particularly ambitious women have been “fighting the power” and succeeding in male dominated occupations for many decades, this doesn’t necessarily mean everything is fine.

            Perhaps there are women out there who excel in math, but are also not incredibly ambitious. Or perhaps they are very conscientious or shy or what have you. In a world with perfect gender neutrality, they’d enter math and be, well, not the greatest mathematician in the world, but pretty darn good. But because they don’t want to rock the boat, they go become a good biologist instead. Presumably, this is a sub optimal outcome that we’d like to avoid.

          • gbdub says:

            Part of what’s missing from the “obviously it’s sexism” argument is an explanation for why math/physics/engineering, of all places, are where anti-female discrimination has held out. The men who go into those fields aren’t exactly the alpha jocks you’d expect to be hardcore masculine stereotypers. Furthermore, math and physics at least do almost all of their practice in academia, which is dominated by social progressives.

            Absent that, some sort of difference in average / peak skill and interests seems to be a more plausible explanation. Basically, discrimination has gone down more or less equally across society, but women only pushed into the fields where they have the most skill and interests.

          • baconbits9 says:

            While it’s true that particularly ambitious women have been “fighting the power” and succeeding in male dominated occupations for many decades, this doesn’t necessarily mean everything is fine.

            I don’t think his argument was that particularly ambitious women were making it happen, the point made was that women had matched or exceeded men in some of these fields, and it isn’t simply women on the fringe of ambition making it.

          • Matt M says:

            it isn’t simply women on the fringe of ambition making it.

            How do we know that?

        • albatross11 says:

          My point isn’t so much that women shouldn’t care, it’s that I don’t think there’s any reason to think that allowing this kind of article to be published is going to discourage any substantial number of women from pursuing STEM careers.

          We have this worked example, where there were massive social messages from high-status people saying women shouldn’t (for example) go into medicine and law. Despite that, women reached parity in both fields a couple decades ago. That seems like pretty strong evidence that women can and will, in fact, ignore such social messaging when there aren’t actually rules keeping them out of those fields.

          In fact, right now, the high-status people are all telling women and girls that they can be whatever they want to be, that all fields are and should be open to them. It’s *really* hard to imagine how allowing people to discuss mathematical hypotheses about the cause of the male/female imbalance in STEM fields would have a lot of impact on women’s choices of field.

          And the other side of that is that we’re talking about suppressing research papers. Having the suppression of scientific research be a normal operation we do to accomplish social goals seems like a terrible idea, to me.

          A lot of useful information that comes out of science is very uncomfortable. Lots of people would be happy to suppress the most discomfort-inducing information. But then we end up blind, as a civilization.

        • 10240 says:

          What about the fact that battling arguments of this kind is part of why women have managed to get into fields previously denied them?

          Arguments that fewer women than men are good at X weren’t the big obstacle for women. False factual claims such as “no woman is good at X” were, as well as attitudes that were not justified by the (possibly true) factual claims (e.g. factual claim=”fewer women than men are good at X”, attitude=”no woman should be admitted”).

          Women didn’t succeed by suppressing true or plausible factual claims, or arguing that true claims were false, or arguing that plausible claims were false without evidence. They succeeded by arguing that false claims were, in fact, false, and by battling attitudes not justified by facts.

    • gbdub says:

      Is it at all surprising then when women’s groups feel it is threatening to women when an essay gives a biological explanation for the math achievement gap?

      If stereotype threat is true (evidence is apparently sketchy), aren’t women’s groups also at fault for priming women to believe that they will be under constant threat of discrimination, and that any career setback is likely due to rampant structural sexism?

      • cmndrkeen says:

        See my reply to fortaleza84. I don’t care about priming.

        • gbdub says:

          Yes you do, it’s what your whole original post is about. Call it something different if you want, but your response to fortaleza84 was not responsive to them, and certainly isn’t to my post.

          My argument applies equally to medium-to-long-term planning as it does to short-term priming, and that’s the only difference you offer between “priming” and “the suspiciously priming-like thing you are worried about.”

          • cmndrkeen says:

            I am arguing that the impact of perception matters in two ways:

            1) Others’ perception of you
            2) Your own self-perception, which I believe impacts ambition and the decision to invest in yourself

            Neither of those has much to do with performance on standardized tests when someone does or does not briefly remind you of your race/gender.

          • gbdub says:

            Neither of those has much to do with performance on standardized tests when someone does or does not briefly remind you of your race/gender.

            That’s how priming is studied but it’s not the only way priming and stereotype threat would work in the wild, if they are real effects.

            It’s obviously also not what I was talking about; my objection certainly covers the “impact of perception” in both of the ways you note, particularly 2.

            That’s about as charitable as I can be, either engage with arguments or don’t, but don’t play silly semantic games to avoid the question.

          • cmndrkeen says:

            Apologies, most people use the word priming to refer to something “measured” as a short term effect. See https://en.wikipedia.org/wiki/Priming_(psychology), where all the entries I bothered to read are short term.

            Let’s go back to the definition you used. Yep, if you are constantly concerned that you are under threat of discrimination, this is likely really distracting, and many feminists believe that treating women as victims is counterproductive.

            In fact, the popular “Lean In” movement focuses on ambition rather than victimhood. “Circles are a place where women can be unapologetically ambitious.” (https://leanin.org/circles).

          • gbdub says:

            Thank you. I’d also suggest the Wikipedia page on “stereotype threat”. I think that tracks pretty closely with your 2).

            Note that yes, a lot of the studies (which are by necessity short term) involve some sort of stereotype intervention followed immediately by a test; however, the actual effects are hypothesized to be more long term (no one really worries about the artificial scenario of the priming tests themselves; the whole point is that the tests are intended to extrapolate to real-world stereotype effects, which would by necessity be more medium to long term).

            Much of the weakness in evidence, as I understand it, is in demonstrating a link between short term priming and long term perception/stereotype effects.

            That is, the evidence for short term priming is stronger than the evidence for longer term effects due to “perception” changes caused by exposure to derogatory information such as a study showing lower peak female skill.

    • Nootropic cormorant says:

      It would be nice if we could collectively forget we ever considered women to be less likely to achieve greatness in mathematics and let everyone thrive according to their abilities unburdened with any preconceptions.

      Unfortunately, the world we inherit has women statistically underperforming men, and why this is so is a great question of our age of enormous political significance. We cannot afford to hamper the pursuit of truth with regards to this question, as it won’t stay unanswered, and a false answer can and will cause societal damage.

      • Aapje says:

        Unfortunately, the world we inherit has women statistically underperforming men

        By some metrics…

        There are also quite a few metrics where men underperform compared to women.

      • It would be nice if we could collectively forget we ever considered women to be less likely to achieve greatness in mathematics and let everyone thrive according to their abilities unburdened with any preconceptions.

        It wouldn’t be nice if what we were forgetting was true, since the false belief would lead to the conclusion that women were being discriminated against in mathematics and policies to eliminate that nonexistent discrimination.

        • cmndrkeen says:

          DavidFriedman: Whenever I hear this argument, it feels a lot like God of the Gaps.

          Sure, science has now disproven 99 crucial parts of the biblical story, counter to practically everyone’s firmly held beliefs, but the 100th? Haha, where’s your science for that one?? Therefore, God did it, and we should sit on our hands and bask in the warm feeling that everything is the way it should be.

          Sure, women overcame a widespread belief in biological inequality and achieved parity in 99 fields, but the 100th? Haha, where’s parity for that one?? Therefore, there’s an innate, insurmountable biological difference, and we should sit on our hands and bask in the warm feeling that everything is the way it should be.

          • Nootropic cormorant says:

            I know exactly what you’re talking about, all of the people disagreeing with me (always on the retreat in the face of surmounting evidence) also worship and sacrifice to the Gaps.

          • gbdub says:

            Sure, women overcame a widespread belief in biological inequality and achieved parity in 99 fields, but the 100th? Haha, where’s parity for that one?? Therefore, there’s an innate, insurmountable biological difference, and we should sit on our hands and bask in the warm feeling that everything is the way it should be.

            This is a straw man (or at least a weak man). More often, the arguments I see are more like “disproportionate participation alone is insufficient to demonstrate discrimination. Parity, near-parity, or even super-parity in similar fields in fact suggests that discrimination is unlikely to be the primary cause of the apparent disparity. Therefore, the causal factor is more likely some combination of differing interests and skill levels, which may have an innate genetic component”

          • a reader says:

            Parity, near-parity, or even super-parity in similar fields in fact suggests that discrimination is unlikely to be the primary cause of the apparent disparity.

            Especially if the more-than-parity in the other direction, the preponderance of women, can be predicted – as Scott observed:

            So this theory predicts that men will be more likely to choose jobs with objects, machines, systems, and danger; women will be more likely to choose jobs with people, talking, helping, children, and animals.

            Somebody armed with this theory could pretty well predict that women would be interested in going into medicine and law, since both of them involve people, talking, and helping. They would predict that women would dominate veterinary medicine (animals, helping), psychology (people, talking, helping, sometimes children), and education (people, children, helping). Of all the hard sciences, they might expect women to prefer biology (animals). And they might expect men to do best in engineering (objects, machines, abstract systems, sometimes danger) and computer science (machines, abstract systems).

            That article is a must read:

            http://slatestarcodex.com/2017/08/07/contra-grant-on-exaggerated-differences/

          • lvlln says:

            The reason the “god of the gaps” fallacy is a fallacy is because it presumes “god did it” as the default explanation for anything we don’t know, despite the lack of evidence for a god.

            So your attempt at creating a parallel between it and Friedman’s point just fails completely, because he’s not presuming that there’s an innate, insurmountable biological difference despite the lack of evidence for a difference. Neither is he calling for everyone to sit on their hands and bask in the warm feeling that everything is the way it should be.

            Ironically, the “god of the gaps” fallacy has a tight parallel to the very common argument that any gender disparities must be evidence of discrimination or systemic bigotry. Much like how “god of the gaps” presumes without evidence that some god exists who does things that we can’t explain, that latter argument presumes without evidence that, without bigotry, the genders would be identical in some given measure. In fact, this isn’t just evidence-less faith, it’s faith that specifically denies the current best scientific evidence.

          • The Nybbler says:

            You have perhaps seen this chart?

            This does not show a single exception or even a very few exceptions.

            The fact that removal of formal discrimination resulted in women’s participation in certain professions going to gender parity (or even domination by women, as with veterinarians) while other professions changed little if at all is evidence that there is some relevant difference between those professions with respect to women; that is not a “god of the gaps”.

        • Nootropic cormorant says:

          Exactly my point, although I spoke too obliquely to steer away from culture warring in open.

    • WashedOut says:

      When you read an essay saying that women are unlikely to achieve in math due to fundamental biological differences, that impacts your perception of them.

      The (now approximately 40-year-old) scientific understanding of the biological differences behind professional outcomes in STEM fields is that the difference exists mainly at the level of interest, rather than ability. The “achievement gap” you mention can thus be a reflection of fewer women entering the field, but the way you’ve worded it makes it sound like there are equal numbers of men and women in the field and the men are producing higher-quality work than the women, a claim which lacks support.

      The other half of perception-being-Bayesian is that people’s priors get revised upon contact with new evidence. Whatever latent ‘skepticism’ exists of a woman’s math ability should last up until the moment her aptitude is demonstrated. I would also caution against conflating skepticism with the less interesting but more plausible feeling of never having met/worked with a female math PhD before.

      • AG says:

        “Interest” does not exist in a vacuum, though. Consider representation on the screen. Do we claim that fewer Asians have an interest in acting? Perhaps individuals have been less interested, because their prospects were not good. There are many testimonials that they never imagined that such-and-such career could be a possibility for people who look like them, until they saw someone else do it.

        I don’t have the link now, but there was a study showing that young girls’ interest in STEM fields was most influenced by whether or not there was a woman STEM role model in their lives. Increased representation increases interest.

        (note that this is different from a claim of discrimination, which would be the bailey)

        • albatross11 says:

          Interest doesn’t exist in a vacuum, but I am somewhat skeptical of Hollywood representations being a major driver of differences in interests. It seems plausible to me that women might have an inherent tendency toward being more interested in people jobs than abstract-thing jobs, but that absolutely could be an artifact of our society that won’t apply always and everywhere.

          • AG says:

            I didn’t mean “we need more depictions of women scientists.” Minority actors were inspired to become actors by seeing other minority actors. Similarly, girls are more likely to become interested in STEM if an adult in their social circle (a parent, relative, family friend, etc.) is also in STEM. Although seeing lots of awesome lady scientists in the news would be cool, too. The change of narrative about Hedy Lamarr has been good.

        • WashedOut says:

          Consider representation on the screen. Do we claim that fewer Asians have an interest in acting?

          No, but people who are high in trait conscientiousness and low in trait openness would be dissuaded from pursuing a career in acting, especially in Hollywood. My understanding is that this is generally the case for Asians.

          • Tarpitz says:

            I would put a big chunk of it down to acting being in almost every way a staggeringly terrible career choice, and Asian cultures generally having stronger antibodies against same.

          • AG says:

            No, because the advice for a lot of Asian-American actors is that you have to go to Asia to have a career. There are plenty of dreaming hopefuls overseas, when they’re the majority demographic. It’s not an Asian cultural thing.

    • a reader says:

      @cmndrkeen:

      Perception is Bayesian. When you read Scott’s post about the dangers of indoor CO2 levels, the air around you suddenly feels more stale and kinda suffocating.

      The air around me didn’t. I took more care the next days to refresh the air in my room, but the air didn’t feel any different.

      So too is perception about race and gender. When you read an essay saying that women are unlikely to achieve in math due to fundamental biological differences, that impacts your perception of them. If a woman and a man collaborate on a math project, you will ascribe more of the success to the man. If a woman applies to a job requiring serious math ability, you will view her resume with more skepticism.

      That could easily be prevented without silencing science: make the people who evaluate candidates unaware of whether a candidate is a man or a woman, showing them only initials (that could also prevent race/ethnicity discrimination).

    • LesHapablap says:

      Essays that pose those sorts of explanations may harm the perception of women. But then so do essays that call to ban those essays. If I read essays about how we can’t publish research about sex differences because it might hurt women’s feelings, it changes my perception of them:
      1. sex-differences research must be fairly rigorous, because otherwise the objections would be against the research methods themselves
      2. if authorities in academia are willing to suppress research to help out women, then it is possible they are tipping the scales toward women during the hiring process, and perhaps some women don’t deserve their awards/tenures

      So even if you are right that this sex-differences research is a harm to women, there’s no way to fix it. Suppressing the research is also a harm to women and a much greater one, since it implies that academia is willing to suspend their integrity to support women.

    • DavidS says:

      I’m not sure what you think happens if you don’t publish the essay. If this sort of thing is a problem (others give good reasons to be doubtful) then surely we can only get round it by making sure people don’t know there’s a gender discrepancy in the first place, or by convincing them it’s there for reasons that no longer apply.

      If people think there’s a discrepancy due to discrimination, that’s surely more off-putting: people can know how good they are at maths compared to their peers, so can overrule the prior of ‘how likely am I as a woman to be good’ with more accurate data. Whereas they can’t do this for discrimination.

      It’s possible it would have that effect in terms of how others see them, but I suspect only on the margins especially if the field is male-dominated (which would fit with a ‘women tend to be less good, so fewer of them get in’ model).

      A belief that one sex is worse at something on average is only going to be an issue if people hold it but society also forces jobs in that area to go to both sexes equally. In that case the corollary is that sex is a good guide to competence in that field.

      But the better solution is probably not to force job distributions (even if you’re sure ability is equal, as a difference in interest or in another ability could mean that in one sex all the best maths people do maths whereas in the other they choose to do other things).

  25. Nicholas Weininger says:

    If anyone is familiar with the facts behind https://quillette.com/2018/09/07/academic-activists-send-a-published-paper-down-the-memory-hole/ I would be interested in any independent perspectives/other sides of the story. What is the author leaving out or misrepresenting if anything? In particular, if anyone has read the paper and can point to a substantive critique of the mathematical claims, that would be very interesting. Trying to combat confirmation bias here, and also cognizant that participants in these sorts of controversies cannot generally be trusted to give honest accounts. FWIW I am much more interested in discussion of the process/academic norms question here than of the more directly CWish question of whether GMVH is actually true for mathematical ability or not.

    • albatross11 says:

      That’s how I feel, too. Peer review (and academic publishing more generally) can be a very ugly, messy process, but the story recounted in the Quillette piece is not something I would ever expect to see happen. OTOH, we have only the author’s word for what did happen, so maybe the actual story is very different in some way that explains why the paper got withdrawn.

    • ajakaja says:

      This should help. Fields medalist Timothy Gowers thought the paper was bad and should never have been published in the first place.

      • Nicholas Weininger says:

        Thanks. From reading that post and its comments and the linked HN article, nobody comes off looking very good. AFAICT what seems to have happened is:

        1. Hill and his collaborators write a paper whose model rests on an assumption which they don’t justify and which is prima facie implausible, namely that “variability” is a heritable property of a subpopulation. Gowers gives a good account of why this is prima facie implausible. I wouldn’t be quite so quick to dismiss it as he is (and note that he also dismisses as implausible the idea that most men don’t have offspring but most women do, which as I understand it, and commenters point out, is actually the historical truth). But it absolutely requires serious evidence in order for the paper’s overall argument to be sound and they don’t provide any, and in general the paper does seem to me (a “dead” mathematician in the Erdosian sense of the word, with a Ph.D. to my name but no real research activity in 13 years) to be waaaay less rigorous than any “serious” mathematical journal should stand for.

        2. They initially attempt to publish the paper in the Math. Intelligencer which isn’t really a “serious” journal. This gets accepted and then the acceptance gets retracted for what really do seem to be discreditable political reasons; unsound though the argument may be, there is no basis in the paper that I can see for it to be regarded as fomenting or excusing bias or discrimination against anyone.

        3. Then, frustrated by that, they come up with a corrupt backdoor way of smuggling it into a real journal with the connivance of an axe-grinding editor.

        4. The rest of the editorial board of the real journal, upon discovering this scheme, understandably revolts and demands it be fixed.

        5. The fix is carried out in a ham-handed way that doesn’t really repair the journal’s editorial integrity (for which purpose they should have just fired the axe-grinding editor and put a disavowal note on the paper) but does allow Hill to continue to nurse his persecution complex.

        Ugh.

        • Nicholas Weininger says:

          Caveat to #2: we cannot be sure that the version of the paper accepted and then retracted from the Intelligencer was the same as that uploaded to Arxiv. It is sometimes the case that authors of inflammatory texts will revise those texts after initial criticism to be much more anodyne, and then make themselves look more martyrish by claiming the criticism was levied against the revised anodyne version.

          • Douglas Knight says:

            There are 9 versions available on the arXiv, two of them from before when he claims it was accepted to the Intelligencer. The commenter “Mathematician” similarly claims that version 2 is close to what was accepted.

        • cassander says:

          1. Hill and his collaborators write a paper whose model rests on an assumption which they don’t justify and which is prima facie implausible, namely that “variability” is a heritable property of a subpopulation.

          Why on earth is it implausible that variability varies across populations?

          • Nicholas Weininger says:

            Variability varying isn’t implausible. Variability being heritable in the sense the authors need it to be is. Gowers’ key point is: “Those poor individuals at the bottom of population P [the one assumed to be more variable] aren’t going to reproduce, so won’t they die out and potentially cause population P to become less variable?”

            Suppose, to take an extreme example, that in subpopulation P half the members are six feet tall and half are five feet tall, while in Q everyone is five foot six; that height is 100% heritable; and that for “selectivity” reasons nobody, from P or Q, succeeds in having offspring unless they are at least five foot eight. Then P is clearly more variable in height than Q, and the offspring of the overall population (P plus Q) will be drawn disproportionately, and in fact entirely, from P. But they will all be six feet tall, and thus clearly less variable than the prior generation! Now you can tell a just-so story where instead of a gene for height there is a gene for variance in height, and it is that which is inherited instead. But this is implausible enough to require unusually strong evidentiary justification, and as far as I can tell the paper provides none.
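            For anyone who wants to poke at it, the toy model above runs directly as written. Every number here is the comment’s own hypothetical (72″/60″ for P, 66″ for Q, a 68″ reproduction threshold), not anything taken from the paper:

```python
import statistics

# Toy model: heights are perfectly heritable, and only individuals
# at least 5'8" (68 inches) reproduce.
P = [72.0] * 500 + [60.0] * 500   # high-variance subpopulation: half 6'0", half 5'0"
Q = [66.0] * 1000                 # zero-variance subpopulation: all 5'6"

parents = P + Q
offspring = [h for h in parents if h >= 68.0]  # truncation selection at 68"

# P is the more variable subpopulation before selection, but every
# surviving offspring is 72 inches tall: the next generation's
# variance collapses to zero.
print(statistics.pvariance(P), statistics.pvariance(Q))  # 36.0 0.0
print(statistics.pvariance(offspring))                   # 0.0
```

            The point of the exercise is just that heritable *trait values* plus one-sided selection do not, by themselves, reproduce a heritable *variance*.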

          • cassander says:

            @Nicholas Weininger

            The trouble with your example is that we know for a fact that male height is more variable than female height, despite there being reproductive benefits to height. And this pattern repeats for just about every trait we can measure. Greater male variability isn’t a hypothesis, it’s a fact.

          • Nicholas Weininger says:

            @Cassander, I’m not claiming GMVH is false generally (though do note that its truth for one trait does not imply its truth for other traits). I’m claiming that the reasoning behind the authors’ hypothesized mechanism to explain it is unsound.

          • I haven’t read the article, but I think there is a simple response to your argument. If height is precisely controlled by genes and taller men have greater reproductive success, then eventually everyone will be tall. Also smart, handsome, … given similar assumptions.

            But we observe that not everyone has reproductively optimal characteristics. The obvious explanation is that there isn’t a way in which the genes can reliably produce tall men. There is a gene that, metaphorically speaking, tries to produce tall men and sometimes fails. The taller its target height, the worse the failures. Where the optimum is depends on how large the benefit is of being tall and how large the cost is of failing to be tall–in your simple story, being short. If the payoff to being tall is very large, it pays to try to produce very tall men even with a sizable failure rate. So the bigger the difference in reproductive success between successful individuals and unsuccessful individuals, the wider the optimal distribution is.

          • BlindKungFuMaster says:

            Greater male variability is due to the X chromosome. If you have one X, you get whatever is encoded on it. If you have two Xs, you get the average of what is encoded on those two. This averaging effect reduces variance.

            Now, if you look at what the X chromosome is enriched for, surprise, surprise, it’s stuff like height and cognitive function, which directly impact male reproductive success.

            Variability can also be a heritable property of a subpopulation if this subpopulation for some reason engages more in assortative mating. In that case it is not encoded in any genes, but in the distribution of genes over chromosomes and of chromosomes in genomes. If the assortative mating stops, recombination will break up this distribution and the variance will decline. But for the time being, it would be a heritable trait.
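            The averaging effect described above can be seen directly: for an additive X-linked trait, a female's value is the mean of two independent draws, and the variance of a mean of two draws is half the variance of one. A quick simulation sketch (the additive model and the unit-normal trait distribution are simplifying assumptions for illustration, not anything from the thread):

```python
import random
import statistics

random.seed(0)

def x_effect():
    # Hypothetical additive effect carried on a single X chromosome;
    # the unit-normal distribution is an arbitrary illustrative choice.
    return random.gauss(0, 1)

# Males (XY) express their one X directly; females (XX) average two copies.
males = [x_effect() for _ in range(100_000)]
females = [(x_effect() + x_effect()) / 2 for _ in range(100_000)]

print(statistics.pstdev(males))    # close to 1.0
print(statistics.pstdev(females))  # close to 1/sqrt(2), about 0.71
```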

          • BlindKungFuMaster says:

            “But we observe that not everyone has reproductively optimal characteristics. The obvious explanation is that there isn’t a way in which the genes can reliably produce tall men. There is a gene that, metaphorically speaking, tries to produce tall men and sometimes fails. The taller its target height, the worse the failures.”

            I don’t think that’s what’s going on. Height is highly heritable, so people actually get what their genome specifies. But, height isn’t an unalloyed good. There are always selection pressures to decrease height too. Maybe the tall guy doesn’t survive the famine. But because of the higher variability of male reproductive success it makes sense to throw the dice and occasionally have a tall son who gets all the girls, hopefully before he starves to death.

            So the average height of men would still be optimal in the average environment of his population. But the environment varies and therefore the optimal height varies too. And for men it has higher benefit to try to nail that optimal height.

          • Thegnskald says:

            How variability can be inherited:

            Genetic transference to the X chromosome. (Yes, we see genetic transfer between chromosomes. Google that shit.) Any allele that has a more neutral heterozygous expression will result in greater variability in men. If this is advantageous, this variability will spread. Boom. Inheritable variability. (Double points if one copy is good, one and a half copies are great, and two full copies will kill you. For men. For women the numbers would be more complex, but both “great” and “deadly” would be much less likely.)

          • aho bata says:

            @Nicholas Weininger

            There are two kinds of selection effects that can increase the proportion of a subpopulation that meets the more selective sex’s standards.

            One is selection of strictly beneficial genes, e.g. genes that raise general intelligence without any adverse effects.

            The other is selection of genes that might be beneficial but that come with an associated risk, e.g. genes that raise general intelligence but increase the risk of developing autism or some neurodegenerative disorder.

            As far as I can see the first effect would cause the less selective sex to become less variable over time, that is, they would become more uniformly good, while the second effect would cause them to be more variable — at the level of the genotype, not just phenotype (so you don’t need such things as “genes for height variability”), although any individual genes whose phenotype is “variability” would also be subject to this effect.

            If Gowers’ summary is accurate, Hill ignores the first effect, while Gowers seems not to understand the second or to be conflating it somehow with the first. How well Hill’s model translates to the real world depends on the proportion of genes that come with adaptive tradeoffs. I don’t know how tractable an empirical question that is, but I wouldn’t be surprised if it was high: as pointed out above, even height has tradeoffs if being short can win you the jackpot when the famine hits.

            Also, this apparently isn’t reflected in Hill’s model, but not all women find the same things attractive. People have idiosyncratic preferences; therefore, people can benefit from being idiosyncratic in random ways. Most of the time random idiosyncrasies will be deleterious or neutral, but they will be less heavily selected against in men if their only chance of success is to be in the 99th percentile in something.

          • @BlindKungFuMaster:

            I agree that an alternative explanation for a distribution of characteristics is that the distribution is superior to everyone having the same characteristics. My standard example is giraffes. If all of them have very long necks, a short necked giraffe will be able to eat from low branches and avoid the costs of carrying a long neck. So there is an equilibrium distribution of heights, in which the net benefit to all neck heights in the distribution is the same.

            The equivalent for intelligence could be a trade off between benefits of higher intelligence and costs, such as the calorie cost of a large brain. If there are some niches in which high intelligence pays enough to be worth its cost, some in which it does not, you could get a similar equilibrium.

            But that still leaves the possibility of the explanation I earlier offered, in which a characteristic in the phenotype is unambiguously positive, but the more of that characteristic the genotype is trying to produce, the less often it succeeds. And that gives a heritable distribution, which is what Nicholas was arguing was inherently implausible.

            All of which suggests that the proper response to Hill et al. is not to refuse to publish the paper but to publish it and let others read it and offer their criticisms.

          • quanta413 says:

            @aho bata

            How well Hill’s model translates to the real world depends on the proportion of genes that come with adaptive tradeoffs.

            Loosely speaking, probably all genes have a certain amount of adaptive tradeoff. Whether or not it’s a quantitatively relevant amount is a much harder question.

            For those who are curious about this specific topic, you can look up stuff on antagonistic pleiotropy. That’s one name the phenomenon goes by among biologists.

        • Viliam says:

          Unlike most people who quote Gowers, he himself acknowledges not being an expert on biology: “I am also far from expert in evolutionary biology and may therefore have committed some rookie errors in what I have written above.”

          The first thing that comes to my mind is that men have one copy of the X chromosome and women have two copies, so any trait that is (partially) encoded in the X chromosome will likely have different variability between the sexes. But, not being a biologist myself, I have no idea what traits exactly this refers to (other than men more likely being colorblind).

          • quanta413 says:

            Gowers is way, way off with his ideas about heritability of variation etc. He hasn’t a clue. See my post below.

            The fact that he couldn’t even come up with the example you did (which an educated academic ought to know) is not a good sign for how much he thought about this before posting.

            His caveat is just lame ass-covering.

        • quanta413 says:

          @Nicholas Weininger

          Not to dogpile, but with respect to (1) Gowers shows that he is clueless about biology. He shouldn’t have brought that up, yet as far as I can tell it’s his primary critique. He admits he may have made some rookie errors to cover his ass and, surprise! He did. His critique resembles many stupid critiques of the past claiming that biology can’t generate randomness but only deterministic traits. Generating deterministic traits is no harder or easier a priori than “random” traits or anything in between. If a gene raises the mean but also the variance of a trait, it could be selected for or against depending on how selection acts on that trait. Genes aren’t a blueprint to which organisms are made to tolerance.

          In addition to the many above examples of increased heritable variability due to men (essentially) lacking a full chromosome, I’ll give an extreme example of selection for a random mechanism. The ratio of sperm with a Y chromosome to sperm with an X chromosome is very close to 50/50. This ratio is the evolutionarily stable strategy in humans for well understood reasons. It’s not the only ratio you see across species. There are some species where a lot more females are born than males, although I can’t remember the name of the species off the top of my head. I vaguely remember the life history details, though, if you’re interested.

          EDIT: Just saw this

          Now you can tell a just-so story where instead of having a gene for height there could be instead a gene for variance in height and so it is that which is inherited instead. But this is implausible enough to require unusually strong evidentiary justification and as far as I can tell the paper provides none.

          No, it’s not that implausible because your entire model of how genetics maps to phenotype is way off. It’s pop culture level understanding. There is no “gene for height” or a “gene for variance in height”. Most genes are probably pleiotropic with respect to high level descriptions of phenotype. That means most genes affect multiple parts of phenotype. And a priori there is no reason not to suspect they influence both means and variances of traits. The mean is usually focused on, but that’s not because of biological plausibility. Your understanding of genotype and phenotype and how they are related is lacking. You are overconfident despite a lack of knowledge.

          EDIT EDIT:

          Honestly, the main reason to doubt that the story in the paper has any application to intelligence etc. is that it doesn’t fit our fragmentary knowledge of how intelligence is likely related to genetics (largely, intelligence is probably due to having fewer generally deleterious mutations), and given that the variance in intelligence differs among human subpopulations I have other doubts. Also I kind of doubt the average effect of women mating is selective for intelligence in and of itself, but that’s a much murkier question.

          EDIT EDIT EDIT:
          Sorry for the negativity. I thank you for posting a criticism even though I think it is naive, because it did make me think about why the paper didn’t seem very applicable to me. I skimmed the paper and thought it looked vaguely cute but not worth reading closely, but I hadn’t really thought about why until I had to respond to your post. Now I know it’s partly because I think that the topic the author was interested in isn’t described well by this sort of model.

          • There are some species where a lot more females are born than males, although I can’t remember the name of the species off the top of my head.

            Elephant Seals.

            The argument is due to Sir R.A. Fisher. The implication is that sex ratios will be equal if the cost of producing males and females is equal. Male elephant seals are much larger than female elephant seals, so the same argument implies more females than males produced.
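            Fisher's equal-investment logic can be written out as a two-line calculation: at equilibrium the population invests equally in each sex, so if a son costs c times as much to raise as a daughter, the stable birth ratio is c daughters per son. A minimal sketch (the cost factor of three below is purely illustrative, not a real elephant-seal figure):

```python
# Fisher's equal-investment condition: at equilibrium, total parental
# investment in sons equals total investment in daughters, i.e.
#   n_sons * cost_son == n_daughters * cost_daughter.
def equilibrium_sex_ratio(cost_son, cost_daughter):
    """Return (fraction_sons, fraction_daughters) at the equilibrium."""
    total = cost_son + cost_daughter
    return cost_daughter / total, cost_son / total

# Equal costs give the familiar 50/50 ratio.
print(equilibrium_sex_ratio(1, 1))  # (0.5, 0.5)

# If a son costs three times as much to raise (an assumed number for
# illustration), expect one son born per three daughters.
print(equilibrium_sex_ratio(3, 1))  # (0.25, 0.75)
```

The intuition: if sons were any more common than this, each unit of effort spent on daughters would buy more grandchildren, and selection would push the ratio back.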

          • quanta413 says:

            I didn’t know that example! That’s a very interesting one. Different reason too than the example I was familiar with.

            I was actually thinking of a different species of some sort of insect (or maybe arthropod) that mostly lives its life cycle inside a plant of some sort after hatching but eggs are dispersed in clusters between plants. Because of the high level of relatedness of individuals in one plant, the optimal sex ratio shifts to produce more females. Producing more males than needed is competing with yourself, in essence. Rather different than well mixed mating.

            EDIT: Just noticed a mistake I made in a higher-up post about “variance in intelligence differs among human subpopulations… other doubts bla bla”. That’s poorly thought out, because obviously the idea was that there was variance in variability among the subpopulations in the model, but I was thinking of different subpopulations that didn’t mate with each other until recently, which is a different issue than the subpopulations in the model.

          • BlindKungFuMaster says:

            “The argument is due to Sir R.A. Fisher. The implication is that sex ratios will be equal if the cost of producing males and females is equal. Male elephant seals are much larger than female elephant seals, so the same argument implies more females than males produced.”

            Ah, that’s interesting. I always assumed that the ratio would mirror the ratio of genetic material transmitted (after controlling for childhood death), so the only way to get more females would be to have a species where females contribute more genetic material to the next generation. But the cost to the mother is another factor.

            But wouldn’t that lead to a kind of antagonistic selection? If the male elephant seal figures out how to produce only male sperm, the females are out of luck.

            I should probably read a book. Figuring this stuff out on my own is bound to leave some pretty big holes. Any recommendations?

          • BlindKungFuMaster says:

            “Because of the high level of relatedness of individuals in one plant, the optimal sex ratio shifts to produce more females. Producing more males than needed is competing with yourself, in essence. Rather different than well mixed mating.”

            That also makes sense. If you shift to group selection, more females come in handy.

          • quanta413 says:

            @Blind Kung Fu Master

            Ah, that’s interesting. I always assumed that the ratio would mirror the ratio of genetic material transmitted (after controlling for childhood death), so the only way to get more females would be to have a species where females contribute more genetic material to the next generation. But the cost to the mother is another factor.

            But wouldn’t that lead to a kind of antagonistic selection? If the male elephant seal figures out how to produce only male sperm, the females are out of luck.

            You’re quite mixed up. Unless elephant seals are way different from humans (which I really doubt, because they’re also mammals, although apparently some mammals do use a different system…), sex is determined by a small fraction of DNA depending on which of two chromosomes the male randomly passes on (X or Y). The amount of genetic material being passed on isn’t relevant, although the amount passed on is basically half from each parent (slightly less DNA is from your father if you’re a male, actually).

            Males aren’t trying to make males, and males don’t breed true, so to speak. You need both sexes, so there’s no selection pressure for more males in order to benefit males. Having more female descendants is (almost) as beneficial to a male in this situation as it would be to a female. Not quite the same, since mothers bear more costs, but close. Or as beneficial to almost all of the male’s genes, depending how you want to look at it. If you’re male, daughters are just as closely related to you as sons.

            I should probably read a book. Figuring this stuff out on my own is bound to leave some pretty big holes. Any recommendations?

            What level do you need? On the one hand, what you wrote about sex determination is way off. On the other hand, you recognize that having more relatives in your environment as competitors and less nonrelatives is kind of like group selection (which is correct!), but most people wouldn’t recognize that.

            My area of specialization roughly speaking is evolutionary theory and mathematical modeling of populations. So I can recommend fairly high level texts on that. I’m less sure for beginner material because I took a rather strange route myself.

            But I think maybe you should learn some basic genetics too. Depends on your goals and current knowledge.

          • I should probably read a book.

            I’m sure it isn’t the best source, but I explain Fisher’s argument in Chapter 1 of my webbed Price Theory, under the subhead “Economics and Evolution.”

          • BlindKungFuMaster says:

            “You’re quite mixed up.”

            You are awfully quick asserting other people don’t know what they are talking about.

            Unfortunately you seem to have completely misunderstood what it is I was talking about. I spell it out for you:

            1) If you have a population with a stable 2:1 ratio of female to male, being a male is just better. If every kid has one dad and one mom, in this population males have twice the average number of kids.

            2) This means there is selection pressure to have more sons. Over time this selection pressure will ramp up the number of males and there will be a balance. This is the reason why usually there are equal numbers of males and females.

            3) This assumes that both sexes contribute roughly the same amount of genetic material. If however the species is triploid, and females contribute two chromosomes and males one of each type, then the 2:1 female-male ratio is stable, because while males are more likely to have kids, females still get to contribute the same amount of genetic material. I’m not saying that happens, I’m just saying this is one mechanism to get a lopsided male-female ratio.

            4) In the elephant seal example, there are more females because a female can have n>1 female babies with the same effort as having one male baby. This, as explained in 1), means that being male is just better, i.e. fitter, than being female. Because males aren’t paying the cost of a pregnancy and generally cannot count on having multiple babies with one female (turnover of alpha males can be pretty rapid), there is a strong incentive for males to sire these fitter sons. Because sperm determines sex, evolutionary changes in that regard are difficult for the females to counter. If there is only Y-bearing sperm, it is going to be a son.

            5) This is what I called antagonistic selection. A more common term is antagonistic sexual co-evolution. In this specific scenario males want to have sons and there is evolutionary pressure to “breed true”.

          • quanta413 says:

            @Blind Kung Fu Master

            Mea Culpa. I thought you were saying something much weirder. I’ve never heard of any species like (3) though, which is probably partly why I didn’t make the connection. Who knows though, maybe some plants do something like that. Plants are weird. Perhaps I shouldn’t post so late.

            With respect to (2) sure, but there’s also selection pressure in that population for females to have more sons as long as costs are equal. It benefits a female to have more sons in that population. A female with a novel mutation that led to sons with more sons would benefit roughly half as much as a male with a mutation for more sons (I’m guessing based on a heuristic). A female with a mutation that led to more sons immediately would benefit just as much as a male with a mutation with similar effect.

            With respect to (4) and (5) though I’m not sure which genes control the ratio of X to Y sperm. You’re obviously right that only males have sperm, but genes that affect the ratio of X to Y sperm may occur in both sexes. It is not obviously true that females can’t easily interact with the genetics of sex determination. That could only occur if all the genes controlling the ratio of sperm “sex” were on the Y chromosome. Even then, females could potentially develop ways of sorting sperm based on sperm sex.

            The ratio of X to Y sperm could be affected by meiosis, though, so my guess is a pretty large number of genes could have an effect, some of which females have. I see no reason why females with genes that caused an uptick in Y-bearing sperm wouldn’t benefit from having more sons with sons. Most females are no more related to a female than most males are, so it’s not obvious to me that it’s in any individual female’s interest not to have sons that have more sons. It would take a gene that increased the number of Y-bearing sperm spreading to a pretty large chunk of the population before it was bad for females that had that gene (unless elephant seals tend to mate with relatives). Only at that point would having sons that have more sons be bad for a female.

            In other words, my point about sexes not breeding true was that we should expect females to be “defectors” against other females as long as they don’t pay the cost of a gene for more males right now. Because roughly speaking a female isn’t more related to a random other female than a random male.

          • BlindKungFuMaster says:

            @quanta413:

            Now you seem to have completely dropped the assumption that sons are more costly for mothers.

            Yes, if the sex ratio is lopsided both females and males benefit from having the underrepresented sex in their progeny. You’ll get female “defectors” etc.

            But the whole point of the elephant seal example was that mothers pay an additional cost for sons. If a mother can have either three daughters or one son, and there are no further factors, you’ll get that 3:1 ratio. That is the equilibrium. With that ratio three daughters are as costly and as fit as one son is.

            But only for mothers …

          • quanta413 says:

            @BlindKungFuMaster

            No, I’m saying something different. If a gene affecting number of sons affects the balance of X to Y sperm, then even if that gene is in mothers, mothers who have it don’t pay the additional cost of sons because they don’t produce sperm. They only pay the cost once the gene is so common that most males have it. But at that point so do any mothers who don’t have the gene.

            That means that a mother with a mutation that causes her sons to have more sons because the gene affects the ratio of X to Y sperm is initially favored by natural selection if males who had more sons are favored by natural selection. Up until the point where the sex ratio is skewed too far and sons who had more sons would themselves do worse.

            This is distinct from a gene that say, caused a mother to sort Y sperm towards her eggs and X away or something to that effect. A mother would pay the cost of that gene and so would all her daughters, so it could be disfavored.

            Basically the reproductive success of any gene in an individual mother doesn’t care about the gene’s effect on other mothers.

            In reverse, a mutation that caused females to sort sperm inside them towards the optimal male to female ratio for a theoretical female is actually a beneficial mutation in a male because males don’t sort sperm. So males can also have genes that defect against the optimal male:female ratio for males as a group as long as those genes aren’t on the Y chromosome.

            Mutations on the Y chromosome are unique in that, because the Y chromosome is only in males, mutations in the Y chromosome that affect sex ratio are only favored if they increase the number of sons. I think this is true even if the mutation pushes the number of sons past what is optimal from a male individual’s perspective.

          • littskad says:

            @quanta:

            I was actually thinking of a different species of some sort of insect…

            It sounds like you were thinking of aphids.

          • quanta413 says:

            @littskad

            Thank you. Aphids seem like the right fit.

      • syrrim says:

        Fairly weak response IMO:

        Well I’m afraid that to me it doesn’t suggest anything of the kind. If females have a higher cutoff than males, wouldn’t that suggest that males would have a much higher selection pressure to become more desirable than females? And wouldn’t the loss of all those undesirable males mean that there wasn’t much one could say about variability? Imagine for example if the individuals in P were all either extremely fi