THE JOYFUL REDUCTION OF UNCERTAINTY

Open Thread 110.5

This is the twice-weekly hidden open thread. As the off-weekend thread, this is culture-war-free, so please try to avoid overly controversial topics. You can also talk at the SSC subreddit or the SSC Discord server.


580 Responses to Open Thread 110.5

  1. robirahman says:

    There will be a meetup this Saturday, September 22 at 7:00 in Washington, DC. The address is 616 E Street Northwest. For more information, check our Google group.

    • Rolaran says:

      There is also a meetup planned in Edmonton on September 22 at 1:00 p.m. Will be at the Garneau Remedy Cafe again (meeting on the second floor), address is 8631 109 St NW.

      • Viliam says:

        There will be a Less Wrong / Slate Star Codex meetup in Bratislava on October 9th at 5PM in Progressbar, Dunajská 14. (Links: LW, FB.)

  2. Robert Jones says:

I’ve been thinking (partly as a result of an earlier discussion on this blog about whether people are evolved to like forests) about the environment of evolutionary adaptedness (“EEA”). It seems to me that people sometimes hypothesise features of the EEA so as to explain whatever (supposedly) evolved trait happens to be at hand.

    People talk a lot about savanna or savanna woodlands (which is perhaps where the thing about forests came from). The suggestion is (broadly) that the Homo genus began with H habilis in the East African Rift Valley, when the rift produced climatic changes replacing the jungle with savanna, leading to the evolution of bipedalism.

I’m not sure that the timing works (the rifting began 22-25 mya, vs H habilis emerging c. 2 mya), and it looks like there’s some historical contingency at play, because (a) the conditions in the Rift Valley were favourable for the preservation of fossils and (b) Louis and Mary Leakey happened to do their seminal work there.

In any case, it has been variously (and contradictorily) suggested that H habilis (a) didn’t exist, (b) should not be assigned to the genus Homo, (c) is a variant of H erectus rather than a separate species and (d) is not ancestral to modern humans. If any of those are true, it doesn’t seem like much weight can be put on the fact that one particular hominin species evolved in a savanna environment.

H erectus ranged over a huge area: nearly all of Africa, as far east as Java and as far north as England (although there is debate about whether these should all be considered H erectus or whether they should be divided into some number of separate species). Its Wikipedia page states that it is uncertain whether the species arose in Africa or Asia, which needless to say is hopeless in terms of identifying the EEA.

    H heidelbergensis also had a large range (certainly if H rhodesiensis is rejected as a distinct species) and again it is unclear where in its range it emerged. Some early sites are in absurdly unexotic locations like Lowestoft and Boxgrove, but there is another early site in Ethiopia (if indeed that is the same species).

    H sapiens is now pretty much universally agreed to have evolved in Africa, but that’s a big place! Again, the species seems to have dispersed rapidly.

    The endurance running hypothesis seems pretty compelling to me, at least in terms of the existence of the adaptations (as opposed to their hypothesised advantages in scavenging and hunting). It occurs to me that if humans are evolved for long-distance running, then that might explain why the various human species have dispersed so quickly. Perhaps the range of individual bands was very large, so that the EEA would actually encompass a variety of ecosystems. The ranges of contemporary hunter-gatherers might be misleading in this respect, because they’re constrained by the presence of sedentary populations.

    • Robert Jones says:

      It occurs to me that a discussion of human evolution has sometimes fallen within the culture wars (e.g. the Scopes monkey trial). I don’t think it’s a hot topic for the purposes of SSC, but of course the above comment can be deleted if I’m wrong about that.

      Unconnectedly, the thought occurs to me that when running through Highgate Wood and the Queen’s Wood (which are remnants of the Forest of Middlesex), I might be crossing paths with my H heidelbergensis ancestors, although I suspect that the climate was sufficiently different 700 kya that it would not have looked very similar.

    • BlindKungFuMaster says:

      If you accept the endurance running hypothesis, shouldn’t you also accept that humans evolved to a significant degree in an environment in which running makes sense, i.e. a savanna?

      • Watchman says:

We’re evolved to run in all terrain – up mountains or through forests as much as savannahs. Indeed, savannah creatures seem prone to evolving into sprinters (think cheetah, springbok), although that’s an impressionistic view. Less impressionistic is the ability of large woodland mammals to do medium distances at least (deer and wolves, for example).

        • BlindKungFuMaster says:

          Persistence hunting can basically only be done in a savanna.

          Why would you run through a forest or up a mountain, even if for some reason the terrain does allow running (which wouldn’t be terribly common for either)?

          Running only makes sense if there is a lot of calories at the end of it.

          • Watchman says:

I regularly run on mountains and through forests (orienteering is a great sport); humans are well designed for this too. Considering savannahs are neither locally flat nor free of obstructive vegetation, there doesn’t seem to be a huge difference. So I’m not sure why hunting a goat or a deer through different terrain is any different from hunting an antelope on the savannah? There are a lot of calories at the end of each hunt.

          • BlindKungFuMaster says:

            You cannot run down deer in a forest. Hunting deer in a forest only makes sense with ranged weapons. Those come way later.

            For persistence hunting you need to be able to establish sight of the chosen antelope regularly. The occasional hill or copse isn’t a problem, but in a forest you would have to track as soon as you lose sight and that’s way too slow.

            You also need midday heat, because you basically hunt by overheating the antelope. Forests and mountains are too cool for that.

          • Watchman says:

My understanding of the endurance hunting model is that you’ve used a ranged weapon (and darts and javelins work well in forests) to injure the prey animal first. We might be able to run down a healthy animal, but as most prey animals run in herds or have offensive capability, that’s not a great strategy even on the savannah. Adaptations for tool use certainly predate the development of the human running style, as tool use is common to all the apes, so there’s no difficulty in thinking the running came after the weapons.

            Are we thinking of forests and mountains in the same way here? Mature temperate forest in the northern hemisphere tends to be relatively open as a high canopy reduces light to limit growth and lower branches therefore atrophy. Mountains are variable but few prey animals live far above historical tree lines, and those that do so still cluster on the lower slopes and not the bare rock and more commonly snowy bits. I don’t see either as notably harder than savannah to move across or even track on, although I am relying on the accounts of others for how savannah running goes.

          • AlphaGamma says:

            @Watchman on endurance hunting, there is a video made by the BBC (as part of David Attenborough’s Life of Mammals series) of a San bushman running down a kudu. He doesn’t injure it before it collapses.

          • Robert Jones says:

            @AlphaGamma, the difficulty is that people have suggested that the bushman was engaging in an uncharacteristic behaviour because the BBC wanted to film it.

            I believe that early hominins used sharpened sticks as spears, because they lacked the technology to attach a spearhead to a shaft. A sharpened stick seems like a terrible weapon, but perhaps you could at least injure a prey animal (causing it to leave a trail of blood) and then run it down?

            Edit: Or you could throw a stone, I guess?

        • baconbits9 says:

Savannah creatures also seem to go on long migrations, such as the wildebeest and zebra.

          • Watchman says:

Yes, but at walking-whilst-stopping-to-chew-up-some-nice-vegetation speed.

          • baconbits9 says:

            Are early hominid distance runners going at a greater speed than walking wildebeest and zebras?

            What endurance hunters have evolved in woodlands and mountains?

I have my doubts about most of these theories because it seems apparent that there is early intelligence selection well before there is bipedal selection. Humans and chimps diverged between 4 and 13 million years ago, and yet chimps ended up on the top end of animal intelligence, while bipedalism is less than 2 million years old.

          • Watchman says:

I think humans run faster than anything walks, yes. Especially if it’s grazing as it moves. We run pretty quickly, you know.

I’m not wedded to the theory, although above I’ve ended up suggesting it logically has to be a result of tool use to make sense. I’m also thinking that bipedalism is a precursor, not a cause. I think it’s plausible that we developed as endurance runners due to being able to injure prey but not kill it quickly, and due to having an energy-efficient mode of locomotion to exploit. I may be causing confusion by arguing this if you are arguing that endurance hunting does not explain bipedalism and high intelligence. Both traits were probably reinforced by endurance hunting if it was a normal mode of hunting, but I’d see them as prerequisites for humans to do this, not outcomes of doing it (otherwise there’s a fair chance wolves in the Americas would be intelligent bipeds).

          • baconbits9 says:

I think humans run faster than anything walks, yes. Especially if it’s grazing as it moves. We run pretty quickly, you know.

We run pretty slowly compared to other animals running, and endurance running is not done at top speed. It’s my understanding that migratory animals often do 20-30 miles a day (and perhaps more), multiple days in a row, during peak migration.

          • Robert Jones says:

            It seems highly plausible to me that humans on foot could follow a migrating herd at 20-30 miles/day. Ben Smith ran 401 marathons in 401 days. I have to wonder about children though: perhaps they were left behind?

            baconbits9 makes an interesting point about chimp intelligence, but it’s still the case that we see a dramatic and sustained increase in cranial capacity following the evolution of bipedalism.

          • Watchman says:

            Baconbits

            We run moderately slowly over the short and medium distances, but we can run further. And distance running is capable of covering more than twenty miles in less than half a day (or much quicker in Berlin this weekend gone…).

            The theory is not that we were running after migrating herds anyway. We were running after prey, probably wounded. As pack hunters I’d guess humans had territories in which they preyed on passing animals, so weren’t also migrating. Think how lions and wolves work.

      • Nancy Lebovitz says:

        So far as I know humans are the only hot weather persistence hunters and wolves are the only cold weather persistence hunters. Why is persistence hunting so rare?

        • baconbits9 says:

          What about Hyenas?

          • Nancy Lebovitz says:

I didn’t know about hyenas being persistence hunters (just spotted hyenas; striped hyenas are scavengers). Still, it’s a very rare strategy if only three species use it.

        • Watchman says:

I think the rarity of the mode is proven by the fact that where they overlapped, humans (very quick attack if using missiles, very long-range runners) and wolves (slightly slower, but with potentially more devastating attacks, especially in numbers, and better chasers over a medium range) have been plausibly argued to have cooperated. That suggests the strategy was effective enough that no competition was necessary and prey could be shared.

      • Robert Jones says:

The existence of adaptations for long-distance running seems compelling to me, but the hypothesised reasons (including persistence hunting) seem less compelling. This is partly because, as you say, the Wikipedia article falls into exactly the error I criticise in the top comment: it’s clear that we have a sequence of adaptations over the course of human evolution, including some specific to H sapiens, so they can’t be adaptations to a particular time and place. If we think that having larger semicircular canals and longer Achilles tendons were factors which gave H sapiens an advantage over H neanderthalensis, then they must have been relevant to the environment in which they were competing.

Besides, the Wikipedia article also suggests that the wooded savanna inhabited by early hominins included parts with dense vegetation, which would have allowed the prey to escape.

    • Watchman says:

Surely humans’ (a term used very generally here, as I’m not sure how applicable my comments are to non-Homo sapiens) key environmental adaptation is to enter a new environment, establish themselves as a, or the, apex predator, and then happily out-compete other animals? There are quite a few continents and oceans where no hominid evolution took place, and these show the adaptability of humans to new environments. With this in mind, any indication of environmental adaptation quite likely reflects general adaptability rather than a specific environment; it seems unlikely that our ancestors were environmental specialists, considering their ability to expand and thrive.

      • Robert Jones says:

        That seems to be supported by this paper, which says, “An alternative view, the variability selection hypothesis, states that large disparities in environmental conditions were responsible for important episodes of adaptive evolution. The resulting adaptations enhanced behavioral versatility and ultimately ecological diversity in the human lineage.”

  3. johan_larson says:

    The zombie apocalypse is here, it seems. Within the past hour, you have seen credible news reports of a major nearby city being overrun. Social media is full of videos of zombies. Officials in your town are advising everyone to stay inside and barricade doors and windows. You hear far more sirens than usual and occasional gunfire.

    What do you do?

    • idontknow131647093 says:

      Start filling things with water.

    • sandoratthezoo says:

      Well, I mean, I’d probably freak out and break down.

But if you mean what I should do, then I think it’s this: mine those social media videos. Get a sense for what kind of zombies we’re talking about here. If it’s relatively classic zombies, the answer is probably to find some place to hole up and wait it out; it’s hard for me to imagine the military just can’t deal with relatively slow, relatively fragile zombies. I suspect that they’ll be contained before they get to me.

      If they seem more magical than that, I guess get out of town on bicycle and then try to steal a car once I get far enough from town that the roads clear up.

    • johan_larson says:

      0. Update probability that we are living in a simulation.
      1. Arm myself. A hammer is the best weapon I have readily available, so that goes in the right hand. For the left, wrap a thick towel around the forearm and tape it in place. That should let me take on a zombie or two.
      2. Secure the perimeter. I live in a condo apartment building where the first floor has way too much glass to be securable. But the second story is only reachable through the elevators and two emergency exit stairwells. The emergency exits have strong lockable metal doors that open outward; secure enough for now. Get the building staff to shut down the elevators or just block the elevator doors so the elevators won’t move.
      3. Get organized. Coordinate with the rest of the building occupants to set up plans for security and endurance, with a “war room” in a known place and guards at the emergency entrance doors.

      • LesHapablap says:

Part of coordinating a group should be mandatory wearing of painter’s masks, so that anyone in the group who gets infected can’t infect anyone else. Certainly you’d quarantine newcomers to the group that way for at least 24 hours.

      • baconbits9 says:

        You are probably going to die of dehydration or starve to death. Getting out of town on a bicycle means being very exposed and carrying very little with you.

        1. Barricade doors and windows.
        2. Start filling things with water.
        3. Collect a few weapons, put one or two near each point of entry in the house.
        4. Find all the non perishable food in your house, don’t eat any.
        5. Start eating all the perishable food in your house.

        Part B.
        1. Collect all the things you can think that will help you survive. Tents, sleeping bags, food, flashlights, matches etc.
        2. Find any high value to weight trinkets (gold chains, rings) for possible post apocalyptic bargaining.
3. Acquire a four-wheel-drive vehicle with a full tank of gas.

        Part C.

        Figure out alternative driving routes out of the city.

        • sandoratthezoo says:

          I think that you meant to reply to me, not to Johan, as I was the one who mentioned bicycling out of town.

          My thought was that if you’re doing this early, you can bicycle out of a big town in advance of the biggest breakdowns of everything, then loot the suburbs/exurbs, get a car, and then head for the actual wilderness.

          Note that this is only for the scenario in which zombies seem like they aren’t containable by the military. In which case I doubt my ability to effectively barricade them away from me. They’re probably at minimum superhumanly strong, tough, and fast if the military can’t handle them easily.

          (Also: if the military can’t handle them, I’m not bothering with weapons unless they’re supposed to be for other humans. Like, if a squad of guys 15 years younger than I am with combat training, body armor, and fully automatic weapons can’t deal with zombies, I’m imagining that I can do it because I found someone’s .32? Or mall-katana? No.)

          • baconbits9 says:

The situation doesn’t give you many specifics; you don’t get to know for sure how bad the damage is. If you hop on a bicycle and ride out of town for the wilderness, how long would you survive once you got there? Combined with the risk of being attacked while on the bike, I don’t think you are increasing your odds of coming out of it alive this way. In the best-case scenarios (the military/police take care of it) you want to stay put; in a lot of the worst-case scenarios you want to accumulate as much as you can in terms of survival goods before moving on (as long as the government isn’t nuking its own cities to contain the threat, of course).

          • sandoratthezoo says:

            Johan established that social media is “full of videos” of the zombies.

    • Drew says:

Fill my bathtub and other containers with water, then worry about violence from looters.

      The Zombies seem like they’re in an impossible strategic position. To stack the scenario in their favor, let’s grant that they turned Every Single Person in San Francisco. The problem is that, once we’re aware of zombies, you have a force advancing on foot that’s armed with melee weapons.

      Even worse, they have no long-range coordination and can’t disguise themselves as locals.

Zombies have an advantage only as long as everyone is disorganized. After that, they’d be slaughtered by any organized resistance, and the clean-up would be moderately dangerous at worst. The actual danger is non-zombies assuming that society would break down.

      • Well... says:

Yeah, I never really got the deal with zombies. When I was a kid, a zombie was a deceased person whose corpse was animated by extreme hunger for human flesh, but without anything even close to the coordination and speed of a normal person. Then 28 Days Later came out, and being a zombie meant you had a rabies-like disease that put you in a permanent blind violent rage. I haven’t watched “The Walking Dead” or much of the other popular zombie stuff, but it seems pretty consistent that zombies are at least fairly dumb, and not competent enough to, say, load, aim, and fire a gun. Plus these same qualities make them rather conspicuous. (Well, not always, as Shaun of the Dead pointed out to comedic effect.) As such, the notion that there could be a zombie “apocalypse” in which more than a few hundred thousand humans died, perhaps during the first few days of confusion, has always struck me as incredibly implausible, even for horror/apocalyptic fiction.

        • Atlas says:

Have you read Max Brooks’ book World War Z? It’s a fictional oral history, in the style of Studs Terkel or Zinky Boys, of a zombie apocalypse that attempts to depict it “realistically”—that is to say, considering the ways in which economics, military strategy, politics, medicine, religion, media, et cetera, would be shaped in response to the crisis, but all told through the memories of often very ordinary individuals. I would say that overall it carries a paleo-style critique of modern middle-class capitalist life, favouring small communities of people with physical/tangible occupations who take risks to defend said community by being in the military. Am I mistaken in thinking that you’ve expressed interest in Kaczynski’s ideas in a previous thread? If you have, you might find the book of added interest.

I thought it did an excellent job of envisioning how dumb, lumbering zombies could pose an apocalyptic threat to humanity. I’ll refrain from elaborating further so as to avoid spoilers, but I think the book quite adequately answers the reasonable questions you pose.

          • John Schilling says:

            that attempts to depict it “realistically”—that is to say, considering the ways in which economics, military strategy, politics, medicine, religion, media, et cetera, would be shaped in response to the crisis,

            Realistically except for the bit where the average IQ of US military officers was reduced to about seventy, and every bit of actual US tactical and strategic doctrine was edited out of their brains even before the zombies got around to eating them.

            Regarding the OP's question: The first and probably last step is going to be, figure out how to stay out of the way of the United States Army for the next month or so. They'll probably have the zombie problem wrapped up by then, but win or lose they're going to be doing an awful lot of killing(*) and you don't want to be part of the collateral damage.

            * Or whatever we call it when you shoot zombies and they stop being a problem.

          • ADifferentAnonymous says:

Am I mistaken in thinking that you’ve expressed interest in Kaczynski’s ideas in a previous thread? If you have, you might find [World War Z] of added interest.

            I think that’d go great on the book jacket.

          • A Definite Beta Guy says:

I love World War Z, but the initial outbreak is kind of hand-wavey. It’s “explained” by massive government incompetence and the inability of media-addled and consumption-addicted people to organize, but we don’t live in a weird hybrid center-left/Fight Club critique of the world: we live in the real world. Classic zombies wouldn’t realistically be a threat to anything more than localized areas. They lack the intelligence and use of tools required to BE a threat. Even an incompetent government would rout them; any government too incompetent to rout zombies would have already been routed by rebels in its own nation.

            That said, it’s a really good book, it just requires some suspension of disbelief (which is a given with the genre).

Not only would you need military incompetence for a zombie apocalypse to happen, you would also need a lot of other improbable steps. Let’s say an outbreak happens in San Francisco, where the disease starts out during a concert for maximum confusion. Where are the police? Why haven’t they managed to contain it? Why aren’t people hiding out in their homes and apartment buildings where zombies can’t get them? Why haven’t they been able to drive away to other places? What about normal people with guns? Even assuming police incompetence and no guns, why aren’t people able to organize their own resistance to what are essentially mentally retarded people? How are these zombies going to spread this contagion across any other city, especially considering the geography of San Francisco? How does it spread so fast when they are incapable of using complicated machinery? At this point, we’re at the question of military incompetence. That’s a hell of a lot of steps, each with a tiny probability. A zombie apocalypse is essentially impossible.
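The “many steps, each with a tiny probability” argument can be sketched numerically. The step labels and probabilities below are purely illustrative assumptions for the sake of the multiplication, not figures anyone in the thread has proposed:

```python
# Illustrative sketch: if a full apocalypse requires every containment layer
# to fail independently, the joint probability is the product of the
# per-step failure probabilities. All numbers here are made up.
step_failure_probs = {
    "police fail to contain initial attacks": 0.1,
    "residents fail to shelter effectively": 0.2,
    "nobody escapes to warn other cities": 0.1,
    "armed civilians fail to organize": 0.2,
    "military response fails": 0.05,
}

p_apocalypse = 1.0
for step, p in step_failure_probs.items():
    p_apocalypse *= p

print(f"P(full apocalypse) ≈ {p_apocalypse:.2e}")  # prints: P(full apocalypse) ≈ 2.00e-05
```

Even with these deliberately generous failure rates for each layer, the product comes out around two in a hundred thousand, which is the shape of the argument being made above (the independence assumption is itself doing a lot of work, of course).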

          • baconbits9 says:

            @ Wrong Species

That depends on the cause and how the outbreak occurs. If the only mode of infection is being bitten by an obvious zombie, then it requires incompetence, but that raises the question of how it all got started (animal bite). If it’s a virus that can be contagious for some period of time before obvious zombie symptoms are visible, then things can go to hell quickly.

Also, if there is a deranged-looking man attacking people on the streets, the cops are getting called, which means there is a good likelihood that at least one, and maybe many, police departments will have infected members early on, and their first victims are likely to be cops. A well thought out zombie story could easily start with police departments falling apart.

          • Matt M says:

            The answer to most of your questions is that in fictional zombie outbreaks, the zombies are (for reasons unexplained) endowed with human-level intellect, a strong compulsion to search and destroy (beyond the needs of current food consumption) and even seem capable of organizing to at least a limited extent.

            Most fictional zombies behave like Terminators where any live human is treated as Sarah Connor. That enemy would be quite hard to stop/contain. More “realistic” zombies less so.

          • @bacon

            One deranged man can’t take out an entire unit with guns. Even if he somehow bit a few of them, zombies don’t use guns.

            And if the virus is contagious before showing symptoms, you would have to have some kind of megavirus the world has never seen before. The closest thing we’ve seen to a zombie outbreak in recent history is the Ebola outbreak and that involved governments far more incompetent than the US over a smaller population and that wasn’t nearly as apocalyptic as the typical zombie story. Even if, implausibly, this super virus did manage to infect everyone, it’s the virus doing all the legwork. It’s more of a viral apocalypse than a zombie apocalypse. And I’ve never seen a zombie story that worked like that.

            @Matt M

            Zombies with normal human intelligence aren’t really zombies under common terminology. Even so, you have the problem that it’s really hard to overthrow a government without highly organized resistance, a charismatic leader and an incompetent government. You still have the problem of how one zombie is going to bite enough people before the police can shoot them. The math just doesn’t add up.

          • Matt M says:

            But a human resistance has many more complex goals it has to counter-balance, up to and including self-preservation, that zombies do not.

            Modern zombie fiction often portrays the zombies as at least as physically capable, intelligent, and capable of organization as regular humans, but also with absolutely no desire other than to hunt and infect other humans, and no regard for their own self preservation.

It’s plausible to me that such a force could present quite a challenge. There are a whole lot of zombie movies where counter-balancing different goals is specifically what leads the survivors into a whole lot of trouble. They let the crying mother with her bitten daughter into their shack out of sympathy, then the daughter kills them all. IIRC the sequel to 28 Days Later presented a scenario wherein a soldier refusing to fire on an escaping infected child was basically single-handedly responsible for the fall of Europe.

          • baconbits9 says:

            One deranged man can’t take out an entire unit with guns. Even if he somehow bit a few of them, zombies don’t use guns.

            He doesn’t have to take out the entire unit, he has to infect one guy who then goes berserk in the police station while multiple other outbreaks are happening in the city.

            And if the virus is contagious before showing symptoms, you would have to have some kind of megavirus the world has never seen before.

They don’t have to be symptom-free, they just have to be free of “obviously a zombie” symptoms. If you have several days of symptoms that could be any of a number of other viruses before you start biting people, but are also contagious in that time, then by the time the first guy goes full zombie he can have infected an enormous (or tiny, depending on the transmission mechanism) number of people.

          • @bacon

Explain to me, in detail, how one man bites, or gets enough other people to bite, police officers covering every single police precinct without getting stopped. Also, how does a zombie police officer, without any weapons, manage to infect an entire station full of armed people? Even today a deranged police officer with a gun probably couldn’t kill everyone (see the Fort Hood shooting). Seriously, how does “we have reports of a man biting people” turn into “the entire city has been overrun by stupid, weaponless people”?

          • @Matt M

You still have the problem of how a group of people without any weapons is able to defeat a much larger group of people with firearms. 28 Weeks Later was a ridiculous movie that only highlighted the absurdity of the scenario.

          • Paul Brinkley says:

            @John Schilling:
            Realistically except for the bit where the average IQ of US military officers was reduced to about seventy, and every bit of actual US tactical and strategic doctrine was edited out of their brains even before the zombies got around to eating them.

            Have you ever read Day by Day Armageddon, by any chance? If so, how do you think it did by this metric?

          • baconbits9 says:

Explain to me, in detail, how one man bites, or gets enough other people to bite, police officers covering every single police precinct without getting stopped. Also, how does a zombie police officer, without any weapons, manage to infect an entire station full of armed people? Even today a deranged police officer with a gun probably couldn’t kill everyone (see the Fort Hood shooting). Seriously, how does “we have reports of a man biting people” turn into “the entire city has been overrun by stupid, weaponless people”?

In a world where no one has actually seen another zombie, a zombie-inducing agent is introduced. The first person is infected by someone intent on maximizing carnage from that single point, so you give it to a guy who is going to be on the subway during rush hour when it hits. He goes nuts in a crowded train car, attacking people indiscriminately and causing a panic. Eventually he is subdued, either by the passengers or the cops at the next stop, but dozens of people are injured, from the bumps and cuts of the general panic to the fully savaged person who got it first.

An hour later you have a few dozen cops and medical personnel around making sure everyone who needs to be treated is OK; statements are being taken and information recorded. Then a call comes out: some of the passengers who were transported to the hospital have started attacking hospital staff. Half of the cops jump into their cars just as the 2-3 people from the train who got minor injuries, including a skin-breaking bite, but didn’t need hospitalization go nuts and jump whoever is closest.

People start screaming and the remaining cops run over and start beating/pulling off the zombies, and a few of them get bitten. An hour later you have a dozen cops back at the precinct answering questions about what happened when 3-4 of them go nuts and attack the cop closest to them out of the blue; everyone fighting the attacker off is trying to be as gentle as they can (seeing as this is a colleague and a friend) and resisting using bullets, tasers or nightsticks.

As it stands, a bunch of your officers are down at the hospital sorting out the patients who attacked (have any of them been bitten?), 3-4 of your officers are already zombies who are friends of all the cops (holding them in cells? Strapping them down and rushing them to the hospital?), and between 3 and 20 more cops have just been infected.

Are some of these assumptions implausible? Yes, you probably won’t have a 100% infection rate for bites and a 1-hour incubation time, but you are also stripping out some possibilities, like a several-day incubation period during which patient zero has flu-like symptoms and is somewhat contagious, where the first attack is going to be followed by a mass of a dozen attacks a few days later.

          • Well... says:

            @Atlas:

Am I mistaken in thinking that you’ve expressed interest in Kaczynski’s ideas in previous threads?

            You’re not mistaken but I no longer have much of an interest in his ideas specifically.

            I’ll check out WWZ eventually. I’ve heard good things.

          • Jiro says:

            If you have several days of symptoms that could be any of a number of other viruses before you start biting people, but are also contagious in that time then by the time the first guy goes full zombie he can have infected an enormous (or tiny depending on the transmission mechanism) number of people.

            That’s not a zombie apocalypse, it’s a viral apocalypse. The fact that the virus turns you into a zombie instead of kills you is irrelevant in that scenario, since the transmission is coming mostly from the contagious pre-zombie period, not mostly from the zombie biting people.

          • @bacon

After several implausible assumptions, you still haven’t explained how the entire city gets overrun by zombies. Let’s say, just for the sake of argument, that 100 people were turned into zombies. Now that they’ve seen what’s happening, people have an idea of what’s going on. The internet tells me that San Francisco has over 2,500 police officers. Now how do 100 unarmed idiots manage to collectively defeat the entire San Francisco police department?

          • baconbits9 says:

            All zombie situations are implausible, so I’m not going to defend myself on that front.

As for the rest, I’ve explained how an illness could lead to a high infection rate among cops, where 10-30 of the first 100 victims could be police officers. If instead of one person you start with 100 people (say it’s a coordinated terrorist attack with a biological weapon), then you could have upwards of 1,000 cops infected the first day, with hundreds of those infections happening at precincts where stashes of weapons are and where they would normally coordinate from. There will be hundreds more cops at every hospital and at all of the attack sites trying to stop the zombies, calm crowds and figure out what is going on. So you aren’t going to start out with 2,500 cops lined up in full gear with weapons clearing the streets; you are going to start out undermanned, scattered and dealing with a total unknown, thousands of terrified callers with tons of incorrect information, and cops who have families at home they are steadily becoming terrified for and having to fight the urge to just go get them.

          • Lillian says:

            That’s not a zombie apocalypse, it’s a viral apocalypse. The fact that the virus turns you into a zombie instead of kills you is irrelevant in that scenario, since the transmission is coming mostly from the contagious pre-zombie period, not mostly from the zombie biting people.

This is how The Walking Dead does it. The pathogen that turns people into zombies spreads itself through normal human contact and has a long incubation period, giving it a ridiculously high infection rate. A fraction of the population doesn’t sicken and die from it, and these are our survivors, but they still carry it and will become zombies if something else kills them. So the twist is that zombie bites don’t actually turn people into zombies; they’re all already infected, and the bites just produce fatal septic shock.

Granted, the show is not particularly consistent about this premise and tends to fall into narrating as if zombies themselves were the primary carrier, but ostensibly they’re not. So while you could say that The Walking Dead is not really about a zombie apocalypse, it seems to me that as far as most people are concerned, a viral apocalypse that produces zombies is still a zombie apocalypse.

          • cryptoshill says:

            Came here to recommend Feed by Mira Grant, recommendation was made better by someone downthread, previous content was deleted.

I would think viral zombies have all the problems associated with a regular supervirus, plus the added problem that there’s now a huge immediate-death threat that requires combat to survive. Combat involves all sorts of people physically striking other people, blood, guts, gore, new locations that haven’t been sanitized, decontamination processes afterward, etc.

        • sfoil says:

          I really liked the way Half-Life 2 handled zombies. They weren’t actually much of a threat to healthy people and were pretty much nonexistent anywhere that had a functioning police force. But you’d occasionally find one stumbling around somewhere people had no reason to go, and they were particularly dangerous in places where masses of basically helpless people couldn’t get away: prisons, retirement communities, sanatoriums.

          • Nornagest says:

            …and they sometimes throw spiders at you.

          • ADifferentAnonymous says:

            Also, not an expert on Half-Life universe, but I know the zombifying agents are actively deployed as weapons. I like to think that they were developed with the assumption that they’d be causing planet-wiping contagious outbreaks and instead became a marginal-value weapon that probably never produced a fraction of the value they’d have gotten out of spending that budget on more guys with guns.

          • acymetric says:

            So…I feel obligated to jump in. Half-Life “zombies” weren’t actually zombies at all, it was more of a brain slug situation (1 headcrab attached to a human = 1 Half-Life “zombie”). They weren’t designed weapons, they were an existing species introduced to Earth as a form of biological warfare. The “spiders” thrown were the headcrabs evacuating their host once the host was no longer suitable (AKA you were killing the host via various Half-Life killing methods). There is no contagion because Half-Life zombies aren’t “contagious”.

        • johan_larson says:

          One zombie isn’t particularly scary, but a thousand zombies that are all around you sure are. Yes, you can dodge or fight one or even ten, but a thousand will get you in the end.

          Come to think of it, the trope plays to fears associated with being part of an extremely tiny minority. It’s like someone designed a horror-monster to freak out Jews.

          • Nootropic cormorant says:

In the original George Romero movies zombies represent stifling conservatism, but in most new ones they are more like a violent underclass riot.

          • beleester says:

            The question is how they get to a thousand before the army or police show up and start shooting, especially if they’re slow shamblers rather than the more modern fast zombies. And especially in a world where zombies have been in pop culture for decades, so everyone knows that if something won’t die, try shooting it in the head.

            A lot of settings introduce some alternate way for the virus to spread to get around this – maybe it’s airborne, or maybe it just happens to everyone who dies regardless of if it’s by zombie bite – or they just handwave away the authority figures noticing, like “the first few hordes got mistaken for flash mobs.”

          • A Definite Beta Guy says:

            The question is how they get to a thousand before the army or police show up and start shooting, especially if they’re slow shamblers rather than the more modern fast zombies. And especially in a world where zombies have been in pop culture for decades, so everyone knows that if something won’t die, try shooting it in the head.

It happened at the Taste of Chicago and spread really rapidly. The police presence there isn’t really capable of stopping tens of thousands of undead shamblers breaking out all at once. If they had prep time, yeah, no problem.

            Also, who said zombies have to be shot in the head? They actually have to be shot in the heart while saying “Ooga Booga!” Yeah, OUR culture thinks shooting them in the head works, but movie cultures typically don’t know that. If zombies broke out and were practically invulnerable except for one little trick, the zombies would do a HELL of a lot of damage in a localized area before they were under control.

          • Deiseach says:

            If zombies broke out and were practically invulnerable except for one little trick, the zombies would do a HELL of a lot of damage in a localized area before they were under control.

The original folkloric zombies can be released from their state (back to natural death) by feeding them salt. Imagine an outbreak where people raised on the Romero-invented version (and its imitators afterwards) think that you put a zombie down by shooting it in the head, and it doesn’t work: these are traditional zombies (and why would shooting in the head work when these are resuscitated corpses anyway? Their brains are not supposed to be working at all, even if you handwave some kind of virus!) and unless you know the proper ritual, they are not going to be stopped (then again, traditional zombies are not brain/flesh eaters, so there is that).

          • Nancy Lebovitz says:

            I appreciate the way zombies can be used for a wide range of symbolism, though on the whole, they’re used for fears of large numbers of the underclass.

            In Rachel Caine’s Working Stiff they’re part of a story about the horrors of employer-controlled health insurance.

            Gur znva punenpgre’f rzcyblre pbagebyf gur anabgrpu juvpu xrrcf ure sebz gheavat vagb n mbzovr. V jnf tbvat gb pnyy vg zntvp anabgrpu, ohg vg’f abg zntvp va gur frafr bs n snagnfl abiry, vg’f whfg abg yvzvgrq ol fpvrapr.

          • The Nybbler says:

            Even if shooting the zombies in the head doesn’t work, they aren’t much of a problem for modern society. Break their legs and arms (shooting works, or running them over with a vehicle, or a sufficiently strong person in armor with an iron bar if it comes to that) and they’re not much of a threat. Then you experiment on the immobilized zombies. If anyone remembers the folklore about salt, they’ll certainly try it, and now the zombies are pretty much done for.

        • beleester says:

          If you want plausible, genre-savvy zombies, I recommend Feed by Mira Grant. In that novel, the zombie virus is latent in everyone and triggers when you die for any reason (in addition to being bitten), which means that you can’t simply eradicate the zombie horde, all you can do is get things contained to a level where anyone who turns in a populated area is quickly put down.

          The novel is set after the initial outbreak (post-post-zombie apocalypse?), when order has mostly been restored, and everyone knows how zombies work, and they’re just sort of adapting to the new normal where you always carry a gun when you’re outside, you need a blood test to get into a house, and when you die, your friends are going to have to shoot you in the head and then cremate you. And they’ll do it, because they’re all very genre-savvy and know what happens when you refuse to shoot a zombie because they look like someone you used to know.

          Because of this, the novel is less of a zombie apocalypse novel, and more a political thriller that has zombies in it. It’s a very interesting take on the genre.

          • Well... says:

            I once envisioned a courtroom zombie drama (or maybe comedy), where the movie is set in a courtroom where our small band of heroes is on trial for murder after they went on what they thought was a zombie-killing rampage. That part of the story is told in a series of flashbacks. They’re on trial because an antidote to the zombie virus was eventually found, and (the prosecution claims) the people our heroes killed weren’t zombies, they were just victims suffering from this virus. (If you discount the zombie bit, not all the killings were strictly self-defense, as in most zombie movies.) The families of the slain are there in the courtroom and some of them testify.

          • albatross11 says:

            Well:

            ISTR John Schilling pointing out that this would make zombies much harder to deal with. Take the standard zombie apocalypse story, but with some expensive, difficult treatment that will cure half of the zombies–bringing them back to more-or-less who they were before they became zombies.

            Suddenly, instead of shooting lots of mindless zombies in the head with a clean conscience, you are shooting lots of sick people who might get better with proper treatment, including people you or your friends know and love. Or you’re trying to decide which of the mindless horde attacking your fortifications will be allowed to live and given the 50% effective but very expensive/difficult treatment. Or….

          • Deiseach says:

What’s fascinating to me is the evolution of zombies; they’ve undergone the same metamorphosis as vampires (werewolves were presented as sympathetic in the Hollywood movies from the start, with Lon Chaney Jr. as the unwilling victim of the curse), though more slowly, and have now become quirky heroines. See the show iZombie, where the lead is a doctor infected by the zombie virus; she needs to eat brains or revert to the typical zombie, and so she works in a medical examiner’s office to get access to brains and helps the police with murder cases due to her talents: when she eats brains (such as the brains of murder victims) she gets memory flashes and personality traits of the person whose brain it was. (She’s also cute, with pale skin and white hair being the only sign of her zombiedom: no rotting flesh or shambling corpse, this!)

            The Santa Clarita Diet is even more extreme in its transformation, where Drew Barrymore looks and acts just like an ordinary human still even after being zombified.

            So we’ve gone from the poetic re-imagining of the traditional Haitian zombie by Jacques Tourneur to the Romero shambling, violent undead with a mania for flesh through the evolution of “brains, brains!” to the zombies who are faster, stronger and maybe even smarter in a way than humans to the ‘cute monster’ version of the zombie.

          • werewolves were presented as sympathetic in the Hollywood movies from the start

            Much longer than that. Bisclavret, by Marie de France, is a 12th century story in which the werewolf is the hero.

            And in Egilsaga, written in the 13th century but possibly going several centuries farther back in oral tradition, we are told that Kveldulf, Egil’s grandfather, was said to be a strong shapechanger.

            His name means “Evening Wolf.”

          • beleester says:

            @albatross11:
            Without spoiling anything, the sequels to Feed (Deadline and Blackout) discuss exactly this problem.

          • cryptoshill says:

            +1 for this series.

One of the interesting aspects of zombie survival in the wild that this book brings up, which I don’t think was ever adequately addressed in other media: the cost of not keeping your rag-tag group of resistance fighters zombie-free was also astronomical. So why are groups of people running through hot zones and not bothering to decontaminate or even check for wounds? Some of the better sequences of the book involve decontamination processes, and it was quite refreshing.

      • baconbits9 says:

        The Zombies seem like they’re in an impossible strategic position. To stack the scenario in their favor, let’s grant that they turned Every Single Person in San Francisco. The problem is that, once we’re aware of zombies, you have a force advancing on foot that’s armed with melee weapons.

It is wrong to model it as an us-vs-them, army-vs-army fight. Such an event would probably start at numerous points, which would include many military and law enforcement people among the first infected. Besides the general chaos, members of the military will be heavily defecting to get home to help their loved ones. Discipline is going to be practically impossible in a lot of situations.

        • Matt M says:

          I think “The Last of Us” did a pretty good job of modeling a somewhat realistic zombie apocalypse scenario. Strict military discipline (up to and including shooting children) is able to secure the existence of a few zombie-free “quarantine zones” (but creates some problems of its own). Large swaths of the country are basically no-mans land – travel at your own risk (and don’t expect anyone to help you). Some survivalist camps exist in rural environments and manage to get along mainly because there aren’t any particular “hordes” of zombies to bother them (there aren’t “hordes” of anyone up in rural Wyoming, so yeah).

          • baconbits9 says:

Much like the “what happens if half the world’s population disappears” discussion, I think most zombie theories underestimate how fragile supply lines would be. You quarantine an area; where are you getting enough food to feed a perpetual warrior class for defense? Let’s say you need an acre of high-quality farmland per person, and your population is 50,000 people; then you have an enormous perimeter to defend. You can do some motte-and-bailey stuff where you push out, clear an area, and then plant or harvest, but crop management, pest control, irrigation?

            The military could set up some of these areas in the near term because they have stockpiles, but those stockpiles would run low pretty quickly.

          • Matt M says:

            My impression from the game (and what seems like a logical answer intuitively) is that population density will necessarily fall significantly. Even within the quarantine zones.

          • baconbits9 says:

Under most assumptions you should get a complete collapse. If you have specialization of labor you need a large population base, and a notable decline can ruin everything quite easily. If you just have farmers + military + vehicles, you need people to be mechanics and a supply of fuel for the tractors/military vehicles. If you don’t have tractors you probably need a minimum of 2:1 farmers to non-farmers (the latter including children, your military, mechanics, water purifiers) and probably more like 5:1.

The further population falls, the fewer specialized farming implements you can have and the higher the farmer to non-farmer ratio has to go. Quarantine zones will fall apart without the requisite discipline and number of dedicated military personnel.

          • John Schilling says:

You quarantine an area; where are you getting enough food to feed a perpetual warrior class for defense? Let’s say you need an acre of high-quality farmland per person, and your population is 50,000 people; then you have an enormous perimeter to defend.

            50,000 acres is conveniently just about a 50 km perimeter. So, one person per meter. I’m thinking they can hold against the zombies.

Even if it’s one actual guard per ten meters due to shifts and division of labor, automatic weapons plus razor wire count for a lot.
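As a sanity check on the arithmetic above, here is a quick sketch assuming a square plot (the best case for perimeter length); for a square, 50,000 acres actually works out nearer 57 km of perimeter, which doesn’t change the conclusion:

```python
import math

ACRE_M2 = 4046.86            # square metres per acre
area_m2 = 50_000 * ACRE_M2   # 50,000 acres of farmland
side_m = math.sqrt(area_m2)  # side of a square plot
perimeter_km = 4 * side_m / 1000

people = 50_000
guards_per_metre = people / (perimeter_km * 1000)
guards_at_one_per_10m = perimeter_km * 1000 / 10

print(f"perimeter ≈ {perimeter_km:.0f} km")               # ≈ 57 km
print(f"{guards_per_metre:.2f} people per metre")         # ≈ 0.88
print(f"guards at 1 per 10 m: {guards_at_one_per_10m:.0f}")  # ≈ 5,690
```

So even the less favourable square-plot figure leaves nearly one person per metre of perimeter, and only a few thousand guards on duty at the one-per-ten-metres staffing level.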

          • albatross11 says:

            If there is an organized force of soldiers with guns and under some kind of competent authority, and there is food anywhere that could be used to feed their population, I think the outcome is pretty obvious–the folks with the army are going to end up with the food. Longer term, they’ll also be holding secure territory where farming can happen again and they can become self-sufficient, but if there are food shortages early on, the guys with the guns and the organization are going to have it. And are going to be using it to feed their soldiers plus their dependents/civilian population.

          • Lambert says:

> Let’s say you need an acre of high-quality farmland per person

            On what basis? There’s less than an acre of land per person in India, farm or no.

            And, unlike the average Mexican, zombies don’t have access to ladders, so even a simple palisade wall ought to go a long way towards making perimeter defense easier, as could natural barriers like rivers or cliffs.

          • baconbits9 says:

            On what basis? There’s less than an acre of land per person in India, farm or no.

            Is all food eaten in Indiana imported to Indiana?

Total farmland in the US is about 2.5 acres per person, and the US is a net exporter of food, I believe; on the other hand, it takes a massive and highly advanced logistical system to work those farms, and it is likely that productivity would fall in the face of a zombie apocalypse.

          • liate says:

            @baconbits9

            India, not Indiana

          • liate says:

And according to this site, India’s “vegetable product” exports are at least a higher dollar value than its “vegetable product” imports, and the crops in question look like they could at least substitute for each other? Not an expert, and it seems that finding a more direct figure would take a lot more time and effort.

            Besides, India has quite a lot of not-high-quality-farmland land, such as natural parks and mountains and stuff, along with people in cities, so it at least seems plausible.

          • baconbits9 says:

            India, not Indiana

            Well that is an embarrassing misread!

          • Drew says:

            Why would farms be threatened? Or even particularly inconvenienced?

            Lone zombies aren’t a threat to an armed person who knows that zombies exist. And farming areas don’t have the population to support hordes.

            To pick a region, Marion County in Illinois has a population density of 69 people / square mile.

            If authorial fiat killed half the county, the other half would have a weekend’s worth of hunting. And then you’d need to be careful to lock your car doors when traveling.

The deaths-by-authorial-fiat might disrupt supply lines. But that just proves that authorial fiat is a risk. The people rising afterwards don’t add much extra risk.

          • The Nybbler says:

            Yeah, zombies aren’t like an army; they’re like stupid and slow but tough rabid dogs (in a world before rabies cures).

          • Matt M says:

            Right. And even the more intelligent form of zombies don’t seem to be intelligent enough to engage in a long-term war of attrition. In other words, they don’t prioritize isolated farms as targets in order to disrupt the supply lines of the larger cities. They go for the cities, because that’s where the brains are.

          • baconbits9 says:

            Farms don’t need to be particularly threatened.

            Why is the farmer working hard to ship food to your city? Because he is getting paid for it. What is he doing with that money? Buying fuel for his tractors, clothes, trinkets for his kids etc etc. All of these interactions bring him in contact with numerous other people and raise his exposure 10-20x.

There are things you can do as a city to quarantine against this risk, by having drivers drop off the trailers and have minimal interaction with people, or by quarantining them for longer than the known incubation time, but these don’t do much to mitigate the risk to the farmer. Yes, an armed man in a field shouldn’t be caught by a shambling zombie, but every mistake that is made (he takes a nap outside on a hot day, his daughter sneaks out to see a boy, someone gets drunk) increases their risk, which in turn increases the risk to the city’s food supply.

            @ Matt M

            When attacking a fortified position cutting supply lines is the first thing you do. If you are positing intelligent zombies then going after the farmers is step one, and step two is waiting for the city to empty out looking for food.

        • baconbits9 says:

          50,000 acres is conveniently just about a 50 km perimeter. So, one person per meter. I’m thinking they can hold against the zombies.

          Who is doing the farming here? Where is your water supply? Are you stationing people out in the open? Building shelters? Who is protecting the food/ammo stores? Who is delivering the food to the perimeter guards?

Defending the perimeter is trivial on its own, but it takes months to turn farmland into food; even quick-growing crops (which are low in calorie yield) take 6-8 weeks from planting under good conditions, and crops that can potentially be staples often require 3-5 months.

          And the 50,000 acres is good quality farm land, which isn’t going to have infrastructure to support 50,000 people, and will need to have all the processing equipment in place to turn your crops into food.

          • CatCube says:

            Some of these are done by the guards. Part of securing a perimeter is, well, maintaining the obstacles that form the perimeter. For larger types of works, you can have a small group of engineers, but they probably won’t be doing work at more than one site at a time, so you don’t need very many–they may simply be regular guards with extra training.

            You also don’t need to truck food out to guards manning a perimeter. They bring lunch with them, and they come back for the other two meals. That’s how we do it right now, manning perimeters of bases in Afghanistan. Granted, you don’t have MREs in this scenario, but sandwiches won’t be a bridge too far.

          • baconbits9 says:

            @ Catcube

What is the ratio of support troops to actual troops on the perimeter at any one time? If you run three 8-hour shifts on the perimeter with zero days off, then your 50,000 drops to ~16,000 at a time. If each guard needs one support person to feed, clothe, and house them, then you are looking at 8,000 guards available at any time for a constantly defended perimeter. If one farmer can feed themselves plus two other people, then you have 5-6,000 people on the perimeter at any one time.

These are very generous assumptions: none of your population is too old, infirm, sick or young to contribute at any time, and we haven’t discussed major issues like how we are providing water, heat and medical attention. When you get down to it you could maybe support 1,000 perimeter guards at any time at the top end, and probably significantly less than that.
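The shift-and-support arithmetic above can be sketched out; these are the comment’s own assumed ratios (three shifts, one support person per guard, one farmer feeding three people total), not real logistics figures:

```python
population = 50_000
shifts = 3           # three 8-hour shifts, no days off
support_ratio = 1    # one support person per guard
farmer_feeds = 3     # one farmer feeds themselves plus two others

# One third of the population must farm, leaving two thirds for everything else.
non_farmers = population * (1 - 1 / farmer_feeds)            # ≈ 33,333
guards_total = non_farmers / (1 + support_ratio)             # ≈ 16,667
guards_on_perimeter = guards_total / shifts                  # ≈ 5,556

print(round(guards_on_perimeter))  # ≈ 5,556, matching the 5-6,000 figure
```

Any further constraint (children, the elderly, medics, mechanics, water and fuel handling) only shrinks that last number, which is the point being made about the 1,000-guard top end.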

          • albatross11 says:

            Zombies don’t eat crops, so growing next year’s crops just requires that the farmers be able to keep farming. In sparsely-populated places that probably means securing their home against zombies (not all that hard) plus having extra people out as lookouts to shoot zombies who appear to interfere with the guys doing the farming. Everyone puts bars on their windows and locks their doors and only goes outside armed. If someone is running things in a given area, they’ll see that the available diesel fuel is rationed for the tractors, and probably provide armed men to help if the local farmers call. They will be collecting a lot of the harvest (part of the benefit of having armed men available), but the farmers aren’t going to be going hungry and neither will their families, and in a genuine zombie apocalypse, the farmers will be very damned happy to hear from the local national guard commander who’s taking orders from the governor and is keeping order by shooting zombies and looters as necessary.

          • baconbits9 says:

            Zombies don’t eat crops, but to effectively quarantine an area you have to limit people coming and going. This makes supply lines very difficult to navigate, and depending on the transmission mechanisms and incubation times will have many different issues.

You might be initially happy that the national guard is going around shooting zombies and looters, but when they start showing up and demanding that every human assemble in plain sight in the middle of the field while they haul off your crops to feed their outfit, these feelings will change. This is the expected outcome, because what are they going to do, pay you for your crops? Money is pretty much worthless without a place to spend it, and a quarantine area isn’t going to be letting farmers come on in to have a good time with the hookers and the blow once a month. So either the area has to be producing goods to trade with the farmers, or the farmers have to be in the quarantine zone, where the military is functionally trading protection for the food, which means defending an enormous area.

            If someone is running things in a given area, they’ll see that the available diesel fuel is rationed for the tractors,

Central planning is notoriously bad at determining the distribution of goods absent markets. Who is more likely to get first crack at the diesel: the farmers at a distance, or the military that is right there and is the strength of the guy in charge?

    • Murphy says:

      What type of zombies are they?

      Because it really changes the tactics.

Shuffling mall zombies: don’t really worry unless there’s like a million of them 20 yards away; anyone with half a brain can beat them with minor prep. This is why so many zombie movies have to skip to where the military has been beaten with little explanation, perhaps with the obligatory dead soldier hanging out of the top of a tank, like WHY DID YOU OPEN THE HATCH!!!! Reaction: buy popcorn.

FAST zombies (HL2 running zombies or 28 Days Later): pretty much the same deal, but I’d want to be further away from them. Still unthinking drones that are going to be easily dealt with given even minor coordination. Reaction: move further away and wait for the situation to be resolved.

      Human-level smart zombies who want to spread the brain slugs or zombie virus: at this point they’re less zombies and more pod people. But they’ve got human-level intelligence, so now it’s time to panic. Depending on the rules they may even be smart people willing to make suicidal charges in order to spread the infection, and/or may have other advantages of the undead. They’re fully capable of using firearms themselves; we may be fucked. Reaction: kiss your ass goodbye.

      Trauma zombies: shuffling mall zombies, but only from the neck down. They’re normal people with no control of their bodies, and the parasite has tendrils in their brain amping up their urge toward self-preservation, making them want to live. If they grab you, they follow normal shuffling-mall-zombie rules and turn you into one of them, but if you point a gun at them they shuffle at you pleading with you not to kill them (the body runs on automatic zombie rules), and they mean it; they’re just innocent passengers along for the ride, stuck on top of a zombie body. Reaction: invest in stocks of companies providing trauma counseling as a service, and companies selling steel cages suitable for containing loved ones.

    • proyas says:

      The zombie apocalypse is here, it seems. Within the past hour, you have seen credible news reports of a major nearby city being overrun. Social media is full of videos of zombies. Officials in your town are advising everyone to stay inside and barricade doors and windows. You hear far more sirens than usual and occasional gunfire.

      What do you do?

      If things went from “normal” to “zombies have taken over a major city and they’re also in my town” in less than an hour, then the zombie pathogen must be airborne or in the water supply, and/or the zombies themselves are highly aggressive and are easily overpowering and infecting new people.

      With that in mind, I’d get in touch with my immediate neighbors to create some basic understanding of how we might protect each other or coordinate our actions. I’d then make my house look uninhabited, so it’s a less appealing target: move my car out of the driveway and park it at the curb in front of someone else’s house, turn off all the lights, and keep watch by peeping through a window blind. Staying inside would also be a good idea, since my house’s HVAC system would offer some protection against an airborne zombie pathogen. I’d closely monitor the internet and radio for further information.

      Trying to flee in my car would be a terrible idea, since everyone else would assuredly be doing the same thing, leaving the roads completely clogged with easy targets for the zombies.

    • Here’s what a realistic “zombie apocalypse” would look like: a pharmaceutical company is working on some kind of drug, but due to incompetence (they somehow missed something in the animal trials) they end up causing a test subject to start biting people. He bites one or two people and the police are called to handle it. The company ends up going bankrupt. I probably miss the story entirely because of some new political outrage.

      • Matt M says:

        But “nuke raccoon city” is so much more interesting from a narrative perspective…

        • And I’m ok with that. I just think it’s ridiculous when people give extremely strained explanations of how something that is essentially impossible could happen. I’ve seen the same thing with the new Black Panther movie. Things don’t always need an explanation. Fantasy authors get that. We don’t need a scientific explanation of how Gandalf’s powers are actually compatible with our own world. Just accept that he’s a wizard and move on.

    • arlie says:

      Well that was a fun read, but I think pretty much everything has already been said.

      The one thing mostly unmentioned is that what a given individual does, or should do, is highly dependent on their capabilities and situation, as well as on the behaviour and capabilities of the zombies. At my age, I’m either hiding, in a group with competent defenders, or dead/turned 😉 Or most likely, watching the whole thing on news videos while police, military, and civilians of fighting age deal with the zombies. (I agree with everyone who points out that mindless-killing-machines are easy to eliminate, once past the initial surprise.)

      But given the scenario – I’m already hearing gunshots – either my local cops/neighbours are firing at figments of their own imagination and/or looters, or I’m unfortunately in the area of initial surprise. So sensible triage puts me in the quarantine zone. If I can’t find a non-zombie-attractive/defensible hiding place, stocked with enough food and water to outlast the zombies/last until those from outside come to clean up, I’m dead/turned. Given that I live in a normal suburban-style house (windows) and have very little time (incidents within hearing range), and no obvious refuge (+ no supplies there), I think I’m SOL.

      I might as well fill containers with water and generally go through the motions – it beats simply succumbing to panic – and might make a try at barricading myself in my basement (smaller windows, awkward to get through), but I still think I’m SOL.

      Of course this all assumes I’m at home. I have similar problems if I’m elsewhere, and fewer resources, except for the possibility of lucking into a good group of people who don’t spontaneously turn into zombies.

    • Tarpitz says:

      You go to the zoo and you get a lion. Stick a remote control bomb up its butt. Push the button on the bomb and you and the lion die like one.

  4. tayfie says:

    Since I doubt at this point there will be any dedicated space to discuss the incomplete adversarial collaborations, this is a top-level post for that purpose. Since some of the collaboration topics may touch on culture-war topics, I think it is best to keep this subthread to discussing what problems people had with the collaborations themselves. I wished to do this sooner, but I always found myself too busy, and by the time I got around to checking I would see the thread full of other discussion and figure my post would be too far down to attract much attention.

    The remainder of this post will be discussing my own attempt at the collaboration and I invite any of the other team members to make their own posts in reply.

    For background, my collaboration was with aarsitus on the topic of the causal link between social media and political polarization. I felt we got a good start on it. Our nominal positions were me arguing that social media was a significant cause of polarization and him arguing against. I have information to share on the topic that many here may find interesting, but that is a post for another time.

    If I had to gauge our progress, I’d say we were a third of the way to a presentable entry before losing touch about a month after beginning this endeavor. We had both spent several hours on research and had assembled an outline of what seemed like the relevant angles on the topic. Until that point, we had exchanged notes and ideas at least once per week throughout May. The proximate cause of our disruption and eventual abandonment of the topic was that we were both traveling at the beginning of June and could not work on it during that time. This interruption was fatal because, at least for me, it broke my chain of thought on the issues, and when I came back, I had lost interest.

    With most of the research done, I had already scratched my own intellectual itch for some idea of what was going on, and it was difficult to find motivation to synthesize and clarify these insights into something worthy of an entry on this blog. It has always baffled me how bloggers like Scott can continually produce so much high-quality content. I have never had the interest in writing that way, so I have never developed the skill. Do people find it fun? How? Anyhow, that’s getting off topic.

    I feel like the biggest issue for me was a simple lack of discipline to pick back up after a break. This is especially true for something that garners my attention in my free time only. It was something completely new to add to my schedule, whereas I think anyone who already writes frequently in their lives had a much better chance of completing this task. To that end, I suggest more of an effort be made to spread the idea among individuals who write frequently. Perhaps Scott himself could find another blogger that he respects and do a collaboration of his own.

    If the idea of adversarial collaborations remains strictly on the basis of who among the commentators is willing to volunteer, I think the best way to improve completion rates is to make the process more rigorous. All the teams were given enough rope to hang themselves, and many did. We got to choose our own topics, set our own working schedules, and choose our own scope. I would be most interested to hear the process that the teams completing the collaboration followed and think those plans should be posted with the next contest to give people a blueprint of how to handle things.

    • johan_larson says:

      I was struck by how long the essays of the successful finishers were. I suspect all four would have been accepted as undergraduate theses. If that’s the length it takes to satisfactorily answer a broadly worded question, it’s not too surprising that many teams did not finish.

      Perhaps it would be better to choose narrower questions and agree on a tractable length limit, such as 2500 words.

      • Mark V Anderson says:

        Yes, I was also a bit shocked at the length. I was very impressed that four good results came out of this process. I agree with Johan that smaller topics might be easier to complete, although the essays might end up just as long, just in more depth. The one criticism I had for all four essays was that each one only seemed to cover part of its topic. I almost wish we’d have follow-up groups take one of the completed essays and pull a much smaller sub-topic out of it, so we could get to the meat of the issues. But I’m not volunteering myself, so I can’t say I expect others to do so.

        @tayfie. Please do plan on doing a post on the research you did. And I hope your adversary does the same, so we might get a piece of what we could have gotten if you two had completed.

      • Robert Jones says:

        I agree they did seem long, but perhaps it would have required more work to make them shorter? I can see that the nature of the adversarial collaboration might generate a lot of text: since the participants start off far apart, they probably have to go over a lot of basic detail to find common ground.

      • baconbits9 says:

        Perhaps it would be better to choose narrower questions and agree on a tractable length limit, such as 2500 words.

        If the idea is to get a good quality collaboration as a result then I don’t see why you would limit the length. More ‘finished’ but lower quality collaborations aren’t better than a few really high quality ones.

    • aristides says:

      I volunteered to do an adversarial collaboration, but I dropped out, mainly because I underestimated the difficulty. I wanted to know the answer to my topic, porn’s effects, but the literature was larger and more detailed than I expected. It became obvious that researching my position, arguing about it, and writing up a collaboration would take more time than a small chance at the reward plus the satisfaction of knowing the answer was worth. In the future, I will only volunteer for a collaboration where I am already an expert on my position and am willing to publish under my own name. Being an expert already lowers the difficulty of the collaboration, and publishing under my own name gives me an extra incentive to finish, for prestige and job-related benefits.

      • Paul Brinkley says:

        I have a feeling that a lot of potential collaborators would have dropped out for the same reason – too much literature. (That’s pretty much why I never volunteered in the first place.)

        If that turns out to be accurate, then I think there’s some lemonade to be made here. Rather than adversarial collaboration starting from 0 kph, we could instead map the terrain – two adversaries outline all of the literature that would be relevant to a given topic, including an effort to cull it of duplicate information (and noting what was culled), vet it for provenance, etc. Then leave it to the next team with energy to spend.

        Alternately: make the teams bigger.

  5. doubleunplussed says:

    Can anyone recommend me a book on human social dominance? I’d like to understand more about what it is and what roles it plays in day to day life. I’ve been getting the impression that dominance is omnipresent in social interactions, and I’ve been trying to interpret things through that lens (mostly out of curiosity). But, I figure there is lots known about it that I’m unlikely to notice on my own, so if there’s a good book I’d like to check it out.

    Edit: I mean like, pecking-order dominance, i.e. group alphas etc. – not what comes up when you type “social dominance theory” into Wikipedia; that appears to be about groups dominating groups rather than individuals having dominance relations with each other.

    • Mark V Anderson says:

      Sounds interesting. I don’t have any examples, but I’d like to hear of some such books also.

    • Robert Liguori says:

      I’d recommend the first chapter of Keith Johnstone’s Impro. It’s not a scholarly treatise, but I found it a useful pattern-crystallizer for myself, and I know that it’s been recommended around the rationalsphere by others.

      • J says:

        I read it due to someone’s recommendation here, and it was literally life-changing: improv is now my favorite activity, and I’ve done it 1-2 times a week for months now.

    • Atlas says:

      Also interested, but would like to add a request for material dealing with this issue in hunter-gatherer societies, both historically and presently.

    • aristides says:

      I recommend The Power Paradox: How We Gain and Lose Influence. Power is not a perfect synonym for dominance, but my personal view is that power is a better lens than dominance for looking at social interaction.

    • maintain says:

      There’s Elephant in the Brain, which should be familiar to posters here.

  6. Paul Brinkley says:

    I’m looking for great moments in rationalist comedy.

    My first nominee: Dara O’Briain.

    • toastengineer says:

      What’s rationalist about that? That’s just Reddit Atheism.

    • James says:

      I’m not always a huge fan of Tim Minchin, but If I Didn’t Have You is nice, and pretty rationalist.

      He also has that one about bitching at a new age hippie chick called Storm at a dinner party, which is impressive and reasonably likeable, but feels a bit outgroup-bashy.

      • Nancy Lebovitz says:

        Not just outgroup bashy, but it also treats conventional medicine as absolute truth, which it isn’t.

    • baconbits9 says:

      Bill Burr is a guy that I think of as a rationalist, not because he approaches things from that bent but because he admits his biases and ignorance up front in a lot of his bits (and not in a generic self-deprecating comedic way, but specific to his own faults).

      • j1000000 says:

        I’ve definitely never thought of Bill Burr as a rationalist — certainly he admits his own faults but his viewpoints are basically conservative. I guess some rationalists may prefer that to, say, Nanette, but I’d think he’d have more appeal to the alt-right-ish people (which Burr is not) than rationalists.

        But big second to him being very, very funny, especially his first few Netflix specials (Why Do I Do This, Let It Go, and You People Are All The Same).

  7. Hoopyfreud says:

    I think that the poster who was doing engineering mechanics has dropped out, so I figured I could continue their work a bit. This is a bit off-the-cuff, so this one will be short; the rest is going to take a lot of thought to explain without recourse to math.

    Continued from here: https://slatestarcodex.com/2018/08/29/open-thread-109-25/#comment-663922

    Stress, strain, loading, bending
    We’ve talked a bit about the forces that are at play in structures; this is pretty standard physics. However, the applications in engineering are a bit specialized.

    Stress is a distribution of force over a cross-section; it is roughly equivalent to pressure. To be more specific, *normal* stress is the pressure-like kind: it acts perpendicular to the cross-section. Shear stress acts parallel to the cross-section, in its plane. All forces through a structure can be broken up into normal and shear components. Note that torsional (twisting) loads are basically shear loads in all cases, for reasons I’ll explain below.

    Stress is an important concept because objects have a limited capacity to bear loads, and that capacity is often linear with respect to cross-sectional area; this neglects some stuff, of course, but it’s actually a fairly decent approximation. Stress levels in a structure are compared to the *strength* of the material to determine whether (and by what margin) the structure can bear its loads without failing.

    Strain is… an interesting quantity. Essentially, it is the unitless percent deformation of the material; strictly speaking that’s not quite true, and a proper definition requires a bit of math, so I won’t give one. Besides, percent deformation is how we usually measure it. Strain is roughly proportional to stress up to the material’s elastic limit (not all the way to failure – ductile materials deform plastically first). This tells us how much a given load will stretch our material, or alternatively, how much load it takes to stretch it a given amount. The reason why torsion is effectively a shear loading is that the material *sees* it as one: an applied moment causes the material to experience shear strain, so we can understand what the material sees as a shear stress.
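    To make the stress–strain relationship concrete, here’s a minimal sketch of Hooke’s law for an axially loaded rod. The material numbers (a generic structural steel) and load case are illustrative assumptions, not from the post:

```python
# Hooke's law sketch: axial stress and strain in a rod under tension.
# All numbers are illustrative (a generic structural steel).

def axial_stress(force_n: float, area_m2: float) -> float:
    """Normal stress = force / cross-sectional area (Pa)."""
    return force_n / area_m2

def axial_strain(stress_pa: float, youngs_modulus_pa: float) -> float:
    """Strain = stress / E, valid only in the linear elastic region."""
    return stress_pa / youngs_modulus_pa

E_STEEL = 200e9       # Young's modulus, ~200 GPa for steel
YIELD_STEEL = 250e6   # yield strength, ~250 MPa for mild steel

force = 50e3                             # 50 kN tensile load
area = 0.001                             # 10 cm^2 cross-section
stress = axial_stress(force, area)       # 50 MPa
strain = axial_strain(stress, E_STEEL)   # 2.5e-4, i.e. 0.025% elongation
safety_factor = YIELD_STEEL / stress     # 5.0

print(f"stress = {stress/1e6:.1f} MPa, strain = {strain:.5f}, FoS = {safety_factor:.1f}")
```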

    Loadings – I touched on this under stress above, but I should be explicit – materials can be loaded with normal forces, shear forces, or torsion. Normal forces break down into tension (which stretches the material) and compression (which squishes it). Torsion produces effects a lot like those of shear forces, but loads the material in a fundamentally different way; this all comes out in the math, which has to do with the geometry of the material. Many materials (brittle ones like concrete and cast iron especially) can take more force in compression than in tension before failing. Interestingly, it’s possible to resolve the shear and normal stresses into *principal stresses*, where only normal stresses remain, using a coordinate transform. This is useful for evaluating the impact of loads on your structure, since a combined shear-and-normal stress state is hard to assess directly.
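    As a sketch of that coordinate transform, here’s the standard plane-stress principal-stress calculation (the Mohr’s circle formulas); the stress values are made-up illustrative numbers:

```python
# Principal stresses for a 2D (plane) stress state via the standard
# coordinate transform, i.e. Mohr's circle. Illustrative numbers only.
import math

def principal_stresses(sx: float, sy: float, txy: float) -> tuple[float, float]:
    """Return (sigma_1, sigma_2): the normal stresses in the rotated
    coordinate frame where the shear stress vanishes."""
    center = (sx + sy) / 2              # center of Mohr's circle
    radius = math.hypot((sx - sy) / 2, txy)  # circle radius
    return center + radius, center - radius

# A combined loading: 80 MPa and 20 MPa normal, 30 MPa shear.
s1, s2 = principal_stresses(80.0, 20.0, 30.0)
print(f"sigma_1 = {s1:.1f} MPa, sigma_2 = {s2:.1f} MPa")
```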

    I should mention that all of these types of loadings are very rarely point loads, like they are in high school physics; here we’re dealing with loadings that are often distributed unevenly over a surface, which can very easily be imagined by considering the force exerted on, for example, a dam; water pressure increases with depth. There’s a bunch of calculus we do to determine the loading, either directly or using an equivalent.
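    For the dam example, the calculus works out to a tidy closed form: linearly increasing pressure integrates to a triangular distribution whose resultant acts a third of the way up from the base. A quick sketch, with assumed water properties:

```python
# Equivalent point load for hydrostatic pressure on a dam face, per metre
# of width. Pressure p(y) = rho * g * y grows linearly with depth y, so the
# resultant is the area of the triangular distribution, acting h/3 above the base.

RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def resultant_per_width(depth_m: float) -> float:
    """Integral of rho*g*y dy from 0 to h = 0.5*rho*g*h^2 (N per metre of width)."""
    return 0.5 * RHO_WATER * G * depth_m**2

def centroid_above_base(depth_m: float) -> float:
    """The triangular distribution's resultant acts h/3 above the base."""
    return depth_m / 3.0

h = 10.0  # 10 m of water
print(f"resultant = {resultant_per_width(h)/1e3:.1f} kN/m "
      f"at {centroid_above_base(h):.2f} m above the base")
```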

    Finally, bending. Bending is what beams do in response to loads; deformation of beams is the most powerful modeling tool you can use on the back of a napkin, and (less sardonically) is the basis for most finite-element models. How bending works depends heavily on how the beam is supported and the kind of loads the beam is supporting, and most loadings are exhaustively tabulated. I’m not kidding; Roark’s Formulas for Stress and Strain is hundreds of pages.
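    As a taste of the tabulated results books like Roark’s collect, here’s the classic cantilever-with-end-load case, delta = P*L^3/(3*E*I); the beam dimensions and material are illustrative assumptions:

```python
# Tip deflection of a cantilever beam with a point load P at the free end:
# delta = P * L**3 / (3 * E * I), one of the standard tabulated beam cases.

def cantilever_tip_deflection(P: float, L: float, E: float, I: float) -> float:
    """Deflection (m) at the free end of a cantilever under end load P (N)."""
    return P * L**3 / (3 * E * I)

# Illustrative case: 1 m steel cantilever, 1 kN end load,
# rectangular 20 mm x 40 mm cross-section (I = b*h^3/12).
E = 200e9                   # Pa, steel
b, h = 0.020, 0.040         # m
I = b * h**3 / 12           # second moment of area, ~1.07e-7 m^4
delta = cantilever_tip_deflection(1e3, 1.0, E, I)
print(f"tip deflection = {delta*1000:.1f} mm")
```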

    Now we’ve talked briefly about the *quantities* involved in the engineering analysis of structures and how they interact; next time we’ll talk about dynamics and what drives structures to move.

    • CatCube says:

      I actually have the next one about 3/4 finished, but I got sidetracked by some other stuff the last few weekends. It’s a continuation of that statics post, but talking about how we distribute loads among building parts. I’ll try to make an effort to get it done tomorrow after work.

      One thing I was thinking about while reading your discussion of mechanics was using a pencil eraser, which is my go-to tool when I have to explain this in person. If you draw cross-sections on it, it helps to understand tension and compression, and to demonstrate bending. If you’re going to move on to dynamics, I can do a small post with some examples as well.

      • Hoopyfreud says:

        I’m happy to defer to you and chime in occasionally – I’m not a structures guy, so I’m ready to learn about interesting statics, even if it’s just an overview. Dynamics is more my jam, so I’ll consider following up when you’re done. Sorry for stealing your thunder!

  8. Andrew Hunter says:

    New hot take, I’m not sure if I believe it or not. Thoughts?

    I’m not sure if I support the death penalty for murder or not–there are many arguments on both sides here. I lean against, roughly because I’m not sure and I feel like mistakes in one direction are worse. But what I can endorse unconditionally is: capital punishment for jumping turnstiles. Ideally, implemented by any right-thinking citizen who sees you do it immediately pushing you into the tracks. Same for most forms of petty theft. (Paging Inspector Javert, I know, I know, but hear me out.)

    Here’s the reason: when you jump a subway turnstile, you set fire to a little bit of social order and communal responsibility. Now, I know people make mistakes; sometimes great rewards are just too tempting and you defraud a client, or rob a bank, or murder someone in a fit of passion–it’s not OK, but it’s understandable, and we must see some mercy in people driven by their human limitation. But when you jump a subway turnstile, you demonstrated that you could not hold back from burning the commons…for $2.75. You sold out your fellow man for three bucks. What will you do when offered the chance to do the same for $100,000 at much greater cost to me and my loved ones? I’m pretty sure you’ll take it–and I will stop you from ever having a chance.


    If you cannot be trusted on the little things, if you destroy the common fabric of society for barely any personal gain, if you cheat when it barely helps you, then I’m terrified of what you’ll do with actual power.
    And I want you out of our society, post haste.

    (No, I’m not going to implement this policy, but the idea seems very strong to me and hard to disprove in basic form.)

    • toastengineer says:

      You’re missing the fact that people have, like, emotions and shit. The guy who gives so few fucks about his hypothetical fellow man that he jumps the turnstile may well step in front of something dangerous to save a kid, because… his instincts don’t really tell him that jumping turnstiles is bad, but they sure do tell him so about letting little children die.

      Furthermore, the kind of person who does that is probably young and impulsive. People change as they mature, usually for the better. Killing someone for minor defection when they’re in their 20s with testosterone sloshing around their skull denies them the opportunity to realize their mistakes and be better and then contribute.

    • Hoopyfreud says:

      This is a flaming-hot cheeto take if I’ve ever seen one.

      Refutation that’s probably lazy, but at least marginally better than yours: you’re talking about a justice system that’s actively regressive. The greatest non-violent transgression of law in a homeless dude’s year is probably jumping the turnstile (albeit a regular one); rich people who aren’t dicks for the sake of it will never be in this position.

    • Atlas says:

      Strong agree that turnstile jumping is terrible, strong disagree that the case for selective capital punishment for turnstile jumping is unironically correct. (Not that you were necessarily saying it was.)

      The argument that someone who is willing to commit a crime for less material gain is more dangerous is indeed correct in my view. However, the relevant metric here isn’t the dollar amount to be gained from the crime, but the expected value/risk to reward ratio of the crime. $2.75 is a very small amount, but the risk of being caught and the punishment one can expect if caught are also very small. Thus, while turnstile jumping is a sign of poor character, because the risk of getting caught and the probable punishment are so much higher, fraud and robbery are likely signs of worse character. (As one would intuitively expect.)
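      To put toy numbers on that (all the probabilities and penalties below are invented purely for illustration):

```python
# Toy comparison of the expected value of two crimes, to illustrate that
# the relevant signal about character is risk/reward, not dollar gain alone.
# All numbers are invented for illustration.

def expected_net_gain(gain: float, p_caught: float, penalty: float) -> float:
    """Gain if not caught, minus the expected penalty if caught."""
    return (1 - p_caught) * gain - p_caught * penalty

# Turnstile jump: tiny gain, but a tiny chance of being caught and a small fine.
turnstile = expected_net_gain(gain=2.75, p_caught=0.01, penalty=100)

# Large fraud: big gain, but a real chance of prison
# (valued here at $2M of disutility).
fraud = expected_net_gain(gain=100_000, p_caught=0.5, penalty=2_000_000)

print(f"turnstile EV: ${turnstile:+.2f}, fraud EV: ${fraud:+,.0f}")
# With these numbers the turnstile jump is EV-positive while the fraud is
# deeply EV-negative, so committing the fraud anyway signals worse character.
```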

      • Atlas says:

        Additionally, while I’m not 100% confident in this (it’s late lol), I think the argument you relate perhaps ties harm caused and benefit gained together misleadingly. (If you don’t know what I’m saying, it’s ok because I don’t even really know what I’m saying.)

        That is to say, the more harm someone is willing to cause to innocents to achieve their goals, the more evil they are. Someone who is willing to jump a turnstile is willing to cause ~$2.75 worth of harm in return for $2.75 worth of gain, plus damage to communal norms. They might very well not be willing to steal $2,750,000 worth of retirees’ savings, however. So perhaps we might be rational in distinguishing someone who is willing to commit a great amount of harm, even for a great amount of personal gain, from someone who is willing to commit a much smaller amount of harm for a smaller amount of gain, even if the harm/benefit or risk/reward ratio is the same.

      • Ketil says:

        The argument that someone who is willing to commit a crime for less material gain is more dangerous is indeed correct in my view

        Moreover, the argument presumes that punishment doesn’t affect behavior. If everybody who jumps turnstiles is killed immediately, guess what: nobody does that anymore, not even once. Did you weed out any immoral types from the population? No. Enforcing behavior by threat of lethal violence doesn’t make people moral. The people willing to commit worse crimes (with less punishment) would still be around.

        • Matt M says:

          Well, I’m assuming someone arguing in favor of execution for turnstile jumping is not likely to also leave the punishment for every other crime exactly the same…

          • John Schilling says:

            England used to impose the death penalty for crimes down to theft of anything worth more than 12p; it’s not clear that they’d have executed turnstile-jumpers if they’d had the turnstile in 1800, but presumably the punishment would have been something quite draconian and well out of proportion to any economic benefit from jumping a turnstile. And maybe it would have been hanging.

            This conspicuously failed to make England into a peaceful utopia. And the same failure has been observed in many, many, many other societies that reasoned, “if we set really severe penalties for all crimes, all criminals will go straight, because Logic!” This doesn’t work. We know this doesn’t work. People proposing that we try this yet again, are at best ignorant and more often foolish or sadistic, and we should make sure they don’t get to put their schemes into action.

          • England used to impose the death penalty for crimes down to theft of anything worth more than 12p

            I do not believe that is correct. It took forty shillings to make theft a non-clergyable felony, hence capital. There were other conditions that could do it, such as stealing from an occupied house so as to put the residents in fear, but one shilling by itself did not make the offense capital.

            For minor offenses conviction did not require a jury trial and punishment was a week at hard labor, time in the stocks, public flogging, or other punishments intended mainly to shame.

          • Eric Rall says:

            It took forty shillings to make theft a non-clergyable felony, hence capital.

            I thought that even clergyable felonies were potentially capital for repeat offenders, since benefit of clergy could only be taken once. Or were you referring specifically to offenses that were capital for a first offense?

          • I was referring to offenses that were nominally capital. I say “nominally” because it appears that only a minority of those convicted of such offenses were actually executed. The rest were pardoned, pardoned on condition of agreeing to transportation, or pardoned on condition of enlisting in the army or navy.

            There is a long chapter on criminal law in 18th century England in the webbed draft of my current book project.

            Someone convicted of stealing less than forty shillings and not guilty of something else that made the offense more serious received a punishment less than execution.

          • John Schilling says:

            John Hostettler, A History of Criminal Justice in England and Wales, states that (capital) grand larceny started at 12 pence per an 1109 statute that persisted “for centuries to come” (page 52). Wikipedia also says 12 pence, citing Anne Glynne-Jones Holding up a Mirror: How Civilizations Decline and indicating that this persisted into the early 19th century. OTOH, I’m also finding unsourced references to 1699 and 1713 acts setting the value at 5 and 40 shillings for shoplifting and burglary specifically.

            As I believe you have noted before, the actual effect of this is not that the criminal population is scared straight by the sight of petty thieves being hanged for 12-pence thefts, but that judges, juries, and the entire criminal justice system starts turning a blind eye to the fact that anyone ever stole more than 11 pence at a time.

          • @John:

            Thanks.

            I was going on the 18th century situation, both because that’s the period I know a good deal about and because that’s the period of the “bloody code” which is usually described in this way.

            I’m dubious about the claim that 12 pence persisted into the 19th century, given that it was 40 shillings in the 18th century. But it could have been the rule in the 12th century.

          • John Schilling says:

            Yes, everything I can find that suggests 12p in the 18th or 19th century seems to be confused secondhand references to primary sources I can’t easily obtain. 12p in 1109 seems pretty solid; I’m guessing that it persisted on paper until 1699/1713, and the more careless writers missed the statutory update.

            Hardly matters what the value is, if the judicial algorithm actually implemented is to subtract 1p from that value and say “this is all we can prove was stolen, so petty larceny it is” :-)

    • BlindKungFuMaster says:

      I once sat in a car with a guy who drove in a very aggressive fashion, constantly cursing, speeding and driving up close. I was 15/16 and we were going to the national track-and-field championship. He was a sprint trainer and luckily I only ever interacted with him over these few days.

      A few years later he was convicted of over 200 counts of child abuse.

      So … I kind of get your point. But I’m pretty sure you would never catch that guy jumping turnstiles. There is a big difference between robbing the commons a little bit (and I don’t even think that predicts who will rob the commons in a big way) and showing this kind of aggression and lack of self-control when interacting with people.

      • Tarpitz says:

        I think you are still viewing people as far more comprehensible and possessed of grokable high level characteristics than they are. I predict that aggressive driving would correlate with child abuse only to the extent that both correlate with being male.

        • Mark V Anderson says:

          It is my impression that women abuse children more than men. Not sexual abuse, but when you look at all abuse. Certainly when I see abuse on the street, it is much more likely to be women doing it. Since this is CW-free thread, I will stop at that and not discuss reasons why. Oh I should say that I looked on the Internet and could not find confirmation of this — it is only my impression.

          • Viliam says:

            It is my impression that women abuse children more than men. Not sexual abuse, but when you look at all abuse.

            I worked shortly for an anti-child-abuse NGO, and according to their statistics, child abuse per gender of perpetrator was about 50:50. When it was divided into various subcategories, men had a significant majority in “sexual abuse” subcategory, but other subcategories were either balanced or women had a slight majority; so together it was about 50:50.

            It is probably useful to note that abusers are not only parents. Could be other family members, could be non-relative, and a nontrivial part of the male sexual abusers subcategory is “mother’s boyfriend”.

            By the way, this is from an Eastern European country; the numbers may be different in other countries.

            Certainly when I see abuse on the street, it is much more likely to be women doing it.

            The question is how representative “child abuse on the street” is of child abuse in general. People are more likely, e.g., to slap a child in public than to sexually abuse a child in public. Also, if they care about their image, beatings administered at home can be significantly more severe than beatings administered on the street. Etc.

    • Nootropic cormorant says:

      But if you introduce the death penalty you won’t catch those doing petty harm for petty gain, but people making an honest mistake, suicidal people, and people with infinite time preference.

    • dophile says:

      Confession: I have jumped a turnstile.

      I had an unlimited NYC MetroCard at the time. (For out-of-towners, that means I had prepaid some amount of money so that for the rest of the [week, month, year] I rode “free”, though my card could not be used more than once within a window of 15 minutes, give or take, to prevent its use by several people at once.) I hadn’t used my card at any time in the previous few hours, and when I swiped in the machine ate my ride but the turnstile remained locked. I had a train to catch arriving within the next few minutes. I jumped.

      Did I deserve to die?

      ———————————————

      Separate from all my other objections to this plan, how many “false positive” deaths per year in NYC do you think it would take to offset the benefits of this policy? Are you confident the number would be smaller than that?

    • pontifex says:

      How about the death penalty for trolling? One example of a proscribed comment would be advocating not using the death penalty for murder, but using it for turnstile jumping.

      • Nancy Lebovitz says:

        Hoping this isn’t too close to culture war: In Heinlein’s Friday, and possibly elsewhere, there’s a theory that a decline of public courtesy is a marker of a society going downhill. I consider this plausible but unproven. Thoughts?

        • Watchman says:

          Which courtesy is the question, I suppose. If not holding doors open for women (but not men) dies out, it might be a sign of social breakdown or of gender equality being a real thing. I’d suggest courtesies dying out is social change in action; whether that is society going downhill or not is probably an individual viewpoint.

          • Nancy Lebovitz says:

            I agree, it’s a pretty vague claim.

            I don’t think eating on the street is an important decline. Neither is men no longer being obliged to wear hats.

            On the other hand, increasing nastiness online might be an indicator.

          • quanta413 says:

            Is increasing nastiness online really an indicator though? We used to think that if going to the polls earned you an ass-whooping from partisans of the opposite party and you retreated, that wasn’t a sign of voter intimidation but instead that you didn’t have the necessary courage expected of a citizen.

            Modern Americans strike me as very polite compared to 19th century Americans.

            …I may be exaggerating somewhat about the first thing.

          • pontifex says:

            Hopefully my post didn’t come across as nasty. I was just responding to an obviously absurd comment with something equally absurd.

            I usually do enjoy Andrew’s comments, but this one struck me as pretty silly.

            I also don’t feel that strongly about turnstile jumpers. Yeah, they’re stealing our tax dollars, but only a very small amount. It would be bad if this kind of thing got normalized, but I think we can handle it through normal means. We got this. 🙂

    • Robert Jones says:

      I oppose the death penalty in all circumstances and therefore I oppose the death penalty for jumping turnstiles as a special case. The death penalty is prohibited by Protocols 6 and 13 of the European Convention, and I think we should uphold the Convention wherever possible (even if we disagree with specific provisions, which in this case I do not). I don’t think you have come close to making an argument that jumping turnstiles is so egregious that the death penalty is the only possible sanction.

    • Faza (TCM) says:

      If I understand you correctly, your reasoning is that a person who is willing to break social norms for a small gain is equally, or even more, likely to break them for a large gain.

      As an initial counter, I’d like to offer “hanged for a sheep as for a lamb” – if a petty evil like turnstile jumping gets you the death penalty, you might as well go big or go home. This has the pleasant air of a self-fulfilling prophecy.

      For a somewhat more nuanced view, let’s attempt an actual gain v. damage calculation.

      The gain to the turnstile jumper is the cost of fare as a percentage of their income for some period of time, e.g. fare/weekly wage. The damage suffered by the commons is the fare lost as a percentage of the total fare revenue for that same period, e.g. lost fare/weekly revenue. It should be pretty obvious that the gain to the turnstile jumper is pretty much guaranteed to be orders of magnitude greater than the damage to the commons, for any isolated instance of turnstile jumping. This is because an individual fare is a much greater portion of the jumper’s income than it is of the entire transit system’s.
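The ratio argument above can be sketched numerically. All figures here are invented for illustration (the comment gives no actual fare, wage, or revenue numbers): a $3 fare against a hypothetical $600 weekly wage and a hypothetical $100M weekly system-wide revenue.

```python
# Hedged sketch of the gain-vs-damage ratio argument.
# fare, weekly_wage, and weekly_revenue are all made-up illustrative values.
fare = 3.0
weekly_wage = 600.0             # hypothetical jumper income per week
weekly_revenue = 100_000_000.0  # hypothetical system-wide fare revenue per week

gain_ratio = fare / weekly_wage       # benefit to the jumper, as share of income
damage_ratio = fare / weekly_revenue  # harm to the commons, as share of revenue

print(gain_ratio)    # 0.005
print(damage_ratio)  # 3e-08
# The jumper's relative gain exceeds the commons' relative loss by ~5 orders
# of magnitude, which is the point the comment is making.
print(gain_ratio / damage_ratio)
```

The exact magnitudes don’t matter; any plausible numbers give the same qualitative result, because one fare is a far larger share of one person’s income than of an entire transit system’s revenue.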

      Aside: Around these parts, infants and senior citizens are completely exempt from mass transit fares, whilst certain groups are afforded reduced fares, so clearly the system doesn’t need everyone to pay the full fare.

      As long as turnstile jumping is restricted to singular instances, the damage suffered by the commons is so close to zero as to not really make a difference.

      Where things break down, is when turnstile jumping becomes the norm rather than the exception and this is the situation we want to avoid. As it happens, the easiest way to do so is by making turnstile jumping the more expensive option, by introducing a cost that is greater than the amount saved (not terribly difficult, given how little a single fare costs). It can even be implemented by individual citizens, via social ostracism until the turnstile jumper makes appropriate amends (acknowledging wrong and paying the fare). In other words: friends don’t let friends be a-holes.

      Granted, the above doesn’t really work in the kind of society we live in, where most people you meet are total strangers you’ll never see again. The kind of enforcement policies we have now are probably good enough to address this.

      As a bonus, let’s consider the costs and benefits of actually implementing the penalty you propose.

      On the benefit side, turnstile jumping is a one-time sport (assuming the presence of right-thinking citizens on the scene). There’s also the possibility of a deterrent effect (although what research we have doesn’t really support the idea that harsher punishments deter very much).

      There are some rather nasty cons, however.

      First of all, what you really want aren’t people who won’t dodge fares, but rather people who will pay fares (at least, if you want a mass-transit system). An executed jumper won’t dodge fares anymore, but neither will he pay any, so you stand to lose all other fares he might have paid in the future.

      Second, from a broader social perspective, a turnstile jumper inflicts some – albeit minor, as already shown – damage to society, but is also likely to confer some – probably much greater – benefit to society by working, spending money, paying taxes, supporting their family, etc. Your proposal appears, at first blush, to have signs of cutting your nose off to spite your face: the social losses from executing turnstile jumpers are likely to greatly exceed gains.

      Third, if we assume that any right-thinking citizen is qualified to carry out the punishment in situ, we are faced with the problem of false accusations. People are people, so we’re guaranteed to have cases of someone mistakenly or maliciously executing an innocent person and claiming they were a turnstile jumper. We could try to counteract this by making the penalty for false accusation of turnstile jumping the same as for the actual jumping – and apply the same penalty to false accusations of false accusations. I fear, however, that at this point we’ve stumbled into a Monty Python skit.

      Fourth, carrying out the penalty presumes the ability of the right-thinking citizen to do so, but ignores the possibility that the turnstile jumper is both athletic and wily. It is not inconceivable that a lawfully minded individual might find themselves being pushed onto the tracks with the jumper claiming that it was in fact they who jumped the turnstile. At this point we find ourselves in a different, but related, Monty Python skit, as the merits of the case are adjudicated by other passengers on the platform.

    • nadbor says:

      I feel the same way about griefers in online games.

      I mean, I can easily imagine a frame of mind that would induce someone to jump a turnstile, and it doesn’t feel like it takes a bad person to do it or is strongly predictive of more serious offences. If you do it I would guess that you’re young, thoughtless and in a subculture that considers this kind of thing cool. You’re probably not thinking of your acts in terms of stealing from the commons. There is a good chance that you will grow out of it.

      If you’re spending your time harassing other people online for no other reason than to watch them suffer, on the other hand … You’re an evil ‘moral mutant’ as far as I can tell. Terrorists at least have ideologies to justify what they do; you just like evil for its own sake. It’s like the online equivalent of torturing little animals – an early warning sign of psychopathy and not something a normal person would ever do. If you enjoy watching other people suffer, you have something seriously wrong with your brain, and I honestly think society would be significantly better off without your kind in it. I don’t advocate capital punishment or any kind of punishment for those people. I’d rather they quietly disappeared, and every time an individual appears with those mutant tendencies they would disappear too. Within a generation there would be peace, love and understanding all around.

      On the other hand I’m probably not being empathetic enough towards bullies. Maybe they aren’t all psychopaths. Maybe there *can* be extenuating circumstances. But since I don’t have a death note, it’s not very important that I figure this out.

      • Nootropic cormorant says:

        It seems to me that most people are capable of enjoying the suffering of others, but only if the suffering is slight enough (teasing, affectionate pinching), the victim deserves it (criminal punishment, revenge fiction, slapstick comedy) or the suffering is heroic in itself (Christianity, dystopian fiction).

        Thus I would think that malevolent sadism is due to improper socialization rather than drastic psychological differences at a low level.

        • BlindKungFuMaster says:

          “Thus I would think that malevolent sadism is due to improper socialization rather than drastic psychological differences at a low level.”

          Why would malevolent sadism be any less heritable than any other character trait?

          • Nootropic cormorant says:

            I’m sure it’s pretty heritable, at least in the sense that much of the individual variation can be predicted by genes, but it doesn’t mean that this trait isn’t formed (or rather not formed if cruelty is the default state) through interaction with the environment (which is itself affected by genetics), which leaves the possibility of being preventable through a proper intervention.

        • nadbor says:

          Reason tells me that you must be right about the role of socialization. Intuition disagrees.

          I get revenge, jealousy, righteous anger, playful teasing, slapstick, vicarious heroic suffering. I don’t get repeatedly killing a random noob’s character for no personal benefit whatsoever just to see him squirm. I don’t get it at all. Logically, it has to be the outcome of the same set of instincts that we all share, filtered through some specific life experience, but I can’t imagine any kind of life experience that would make me be like this.

          Griefing is like paedophilia this way. Theoretically it’s on the same spectrum of human behaviour I’m on. In practice it feels completely alien and incomprehensible. I find it as hard to believe that in a different life I could be a griefer as that I could be attracted to prepubescent children.

          • Nootropic cormorant says:

            Can’t blame you, there’s probably a reason why empathy with moral mutants is so difficult.

            Sadism is playing with people the way you would play with things: probing them to see their reaction, predicting the reaction, and getting pleasure from confirming your model of them. I imagine this is why cats torture mice too.

            I don’t know when children develop empathy, but remembering my childhood it seems clear that we aren’t born with it.

          • Nancy Lebovitz says:

            I’m sure there’s a lot of individual variation. Remembering my childhood, I wasn’t good at kindness, but I didn’t want to hurt people.

          • Tarpitz says:

            I would be willing to bet that griefers are overwhelmingly teenage boys. Teenage boys are moral mutants at a very high rate, but not many of them stay that way.

          • Randy M says:

            I think boys are more likely to play, and care about, games.
            Teenage girls grief on Facebook.

          • Nancy Lebovitz says:

            Is enough known about griefers that there’s a reasonable estimate of how many don’t age out of it?

      • BlindKungFuMaster says:

        There used to be legislation in Europe that allowed the execution of children if they showed signs of unusual cruelty. So there are precedents …

        I recently read an article about an institute for psychopathic youths. These kids were horror-movie-like creepy.

        • The Nybbler says:

          These kids were horror-movie-like creepy.

          To me the “horror movie creepy kid” is mainly a combination of two things — a flat affect (a common symptom of schizophrenia) and a conspicuously adult vocabulary and manner of speaking. Did these kids have both, or was it some other feature?

      • Nick says:

        Lots of people just like victimless destruction—I used to have days on my Minecraft server where I’d do a backup and then we’d all TNT everything in the most cataclysmic way possible. I think that, at least for some folks, there’s a disconnect between the destruction itself and the suffering it inflicts on the creators, the same way that for some, it’s hard to remember that what you’re yelling at online is another person.

        I used to argue with griefers on public servers when I was helpless to stop them. It distracted them a while, which is something, I guess. The most common response was “It’s just pixels.” (The single worst argument in the history of mankind.) Another was that it doesn’t really harm anyone, because eventually you’ll leave the server and someone else will come along to build anyway. Another was that they just enjoyed the outraged/vengeful responses—this could maybe function like an “If this is how you respond, maybe you deserve it after all” argument, but that’s post hoc. I wish I had a neat Mindhunter-esque story about an articulate griefer explaining himself, but I don’t; my impression was just of rationalization and failure to empathize.

        • Matt M says:

          “World” or integrated PvP in MMOs is an interesting case study in griefing.

          It’s not at all uncommon to see groups of 5-20 players in WoW camping a particular point wherein they habitually and constantly kill a much smaller (often lower level) group of players of the opposite faction. Even when the game modified its systems such that killing a lower level player gives you absolutely no gain whatsoever, people still do it – even in this highly inefficient form.

          If you were to take the Alliance/Horde conflict quite literally, the weak alliance soldier who manages to keep 5 high-level Horde warriors occupied griefing him is actually doing quite a service to the Alliance and is the clear “winner” in this activity, in any practical/logistical sense. But it certainly doesn’t feel that way when you’re on that end of things…

        • CatCube says:

          The problem is that griefing isn’t “victimless destruction”–you’re actively ruining the fun of others. I agree that there’s a lot of fun with what you described. A day of destruction, but permitted by the server admin, so there are backups and no permanent harm to anybody’s work.

          I recall a game in the very early ’00s. Attack, Retrieve, Capture, or ARC. It was a capture-the-flag type with a top-down view similar to the first GTA. This game was great fun, had a simple client, and a lot of games going on at the same time so it was easy to find a game at any time of day.

          It also was a relatively early online game and had no protections against hacked clients. These started to appear which gave you unlimited ammo, infinite health, and allowed clipping through stage barriers. After a little while (I think under a year), you’d only have three or four minutes of play before one of these guys would come in and start carpet bombing the entire stage and render the game unplayable. The game eventually died because you could never get a game going–the griefers completely destroyed all fun for those who wanted to play a real game. Even trying to declare some games as a free-for-all allowing hacks, and some as a “no-hack” to allow those of us who wanted to play a straight game went about as well as you’d expect.

          • Nick says:

            The problem is that griefing isn’t “victimless destruction”–you’re actively ruining the fun of others.

            Yes, to be clear, I don’t think griefing is victimless at all.

          • Ketil says:

            The problem is that griefing isn’t “victimless destruction”–you’re actively ruining the fun of others.

            Isn’t it just the old schoolyard bullying, but from a safe distance? Some people (perhaps we all do, deep inside?) get off on inflicting suffering on others.

            I’m sure there must be psychological explanations aplenty.

        • nadbor says:

          I totally get victimless destruction. Actually, less so now that I’m in my thirties, but I remember that I did get it as a kid. I distinctly remember fantasising about running around an empty supermarket toppling shelves and smashing things. My pre-teen years were one long quest to find stuff I could break, burn or blow up with firecrackers. A burned-out lightbulb or an old TV or any other household object about to be trashed was something I could violently break with great delight. I even had this game where I pretended the shrubs behind my house were some kind of enemy and I had to destroy them with my sword, played by a wooden plank. I went at it for hours. Good times.

          None of it was ever directed at any recognisable human beings though. I’m even uncomfortable beating people in a 1v1 game like chess (more than I am losing at one), let alone harassing some poor kid online until he quits.

          • albatross11 says:

            In college, I worked at McDonalds for awhile. I used to fantasize about destroying every one of those damned things that were beeping at me during some rush, when I was working grill trying to keep track of 20 things at once.

      • arlie says:

        On a bad day, I think one of the ways in which I am a moral/emotional mutant, is that unlike normal people, I don’t actually enjoy hurting others, which makes it more difficult (and less rewarding) to perform proper social dominance behaviours.

        Even Othering them doesn’t help – as soon as it goes from conceptual/theoretical to real individual cases, I’m no longer happy to see them suffer, however much I may think they deserve it.

        FWIW, this enjoyment of making others suffer seems particularly obvious in children – tweens and teens – just watch what they do to each other, and particularly to their own outgroups. Most adults seem to graduate to simply ignoring most others’ suffering, rather than actively seeking to cause it, at least most of the time.

        Of course it’s all tangled up with dominance – being able to have an effect on others is part of feeling/being seen as powerful, and the easiest impact to create is negative. Appearing at least somewhat dominant is self protective in many human circumstances; there’s also the implicit threat of “I could do this to you; better not attack me”. Thus cliques in junior high, with physical and emotional bullying.

        [On a good day, I figure that for most people, hurting others isn’t actually satisfying, and most of the nasty behaviour I see is either scared people striking out in an attempt to protect themselves, or insensitivity/obliviousness to suffering of people far from one’s normal social environment.]

    • John Schilling says:

      I think you have to add walking on the grass, otherwise you can’t be sure of getting Wesley Crusher. And that’s about the only redeeming feature this proposal has.

    • Matt M says:

      I don’t think it’s necessarily the cost that affects behavior here, but rather, the ability to make a direct and obvious connection to the victim.

      In other words, framing the issue as “If you’ll steal $3 then you’ll also steal $3000” is wrong. The turnstile jumper doesn’t see his offense as immoral because there’s no clear and obvious victim. Someone who is willing to steal $3 from the general tax fund by jumping the turnstile is not equally likely to steal $3 from a small crying child on the streetcorner, even if doing so results in an equal gain and is similarly easy to execute.

      Conversely, I think a lot of people would be willing to embezzle $3,000 from a government or corporation or some other large faceless entity, but still would not take the $3 from the small child.

      As a side note, I once dated a girl whose mother spent six months in prison for committing a six-figure corporate fraud against her employer. Everyone was shocked – as far as anyone knew she had never done anything illegal in her life. Her rationale was that it was so easy, she didn’t figure anyone would miss it.

    • baconbits9 says:

      Which is more likely to destroy the social fabric, turnstile jumping or allowing (encouraging!) people to shove alleged turnstile jumpers onto the tracks?

      Turnstile jumpers exist because social fabric exists, they simply feed off the peripheral opportunities created by a trusting society. As such they are a symptom and not a cause of decay.

    • Nancy Lebovitz says:

      Wells Fargo has treated its customers atrociously, and *that’s* where it put its compliance efforts?

      In any case, the idea of turnstile jumping as extremely bad is a sort of extrapolation which should be viewed with extreme suspicion—it’s making guesses about motivations and large-scale effects. I think there’s a lot to be said for only having laws against actual damage and highly probable damage.

    • Nancy Lebovitz says:

      Sidetrack: Approximately what proportion of people are able to jump turnstiles? I’m short and have never been athletic, and I have no reason to think it would ever have been a possible crime for me.

      • The Nybbler says:

        If you can lift yourself from an armchair by your arms, you can probably jump a turnstile. I suspect nearly all turnstile jumpers are men, with the higher upper body strength that implies. If you’re extremely short you might actually have to jump.

        Entering through the “emergency exit” (when someone else comes through on the way out — the “emergency” part is fiction, or if it doesn’t close properly) is also done.

        • Nancy Lebovitz says:

          I’m not sure what you mean by lift yourself from an armchair by your arms. If I rest my feet on the ground, I can do it. If my feet are off the ground, I can manage maybe an inch of lift. I’m sure I could have done better when I was younger and lighter.

          I’m 4’11”. My inseam (distance from ground to crotch) is 24″. I *might* be able to clamber over a turnstile, but it seems rather difficult.

          Sometimes I have trouble with the high chairs at some bars and restaurants. As I recall, some of the seats are as high as my hip or a bit higher.

          • The Nybbler says:

            I’m not sure what you mean by lift yourself from an armchair by your arms.

            Push down on the arms so your arms are straight and your rear end is hovering over the chair. That’s the basic turnstile-jumping move (except it’s started from a walk) — you put your hands on the sides (near where the card readers are), push to lift yourself up, and lift your legs clear of the turnstile. Probably have to swing a little (I’ve never done it, only seen it) to clear it. Being short would make it difficult because your arms would be in the wrong position; you’d have to be much stronger because you’d have to pull yourself up rather than push, unless you literally jumped.

          • Nancy Lebovitz says:

            When I squared myself properly on the chair, I could get more lift, but I’m going to see how high the turnstiles are.

    • broblawsky says:

      I once jumped a turnstile because my Metrocard was depleted, I was with my parents (who had already gone through), and I wanted to be on the same train as them (so that we could all be sure of getting to an event on time).

    • achenx says:

      My yard has a small wooded stream running through it. The yard is also directly on a fairly well-trafficked road. A few times a year I crawl around in the stream cleaning up litter.

      What I’m saying is, if you extend this policy to littering as well, then you have my support.

      • baconbits9 says:

        Oh yeah, people who toss their cigarette butts on the ground will be asked how often and long they have smoked for and then will be buried alive in an equivalent number of butts.

        • Nornagest says:

          I don’t think you could get enough volume. What do you figure a cigarette butt is, a cubic centimeter? Two? Three?

          Let’s say you’ve been a pack-a-day smoker for ten years. That’s 365 * 10 * 20 = 73000 cigarette butts, which works out per above to between 2.5 and 7.5 cubic feet. Might be enough to bury you up to the knees.
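Nornagest’s arithmetic checks out; a quick sketch of the same calculation (the 1–3 cm³ per butt is the comment’s own guess):

```python
# Verify the cigarette-butt volume estimate from the comment:
# pack-a-day (20 cigarettes) for ten years, at 1-3 cm^3 per butt.
CM3_PER_CUBIC_FOOT = 30.48 ** 3  # 1 ft = 30.48 cm, so ~28,316.8 cm^3

butts = 365 * 10 * 20  # ten years of a pack a day
low_ft3 = butts * 1 / CM3_PER_CUBIC_FOOT   # at 1 cm^3 per butt
high_ft3 = butts * 3 / CM3_PER_CUBIC_FOOT  # at 3 cm^3 per butt

print(butts)                                   # 73000
print(round(low_ft3, 2), round(high_ft3, 2))   # 2.58 7.73
```

So roughly 2.5 to 7.7 cubic feet, matching the comment’s “between 2.5 and 7.5”: knee-deep at best.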

    • The Nybbler says:

      This is part of the NYC shadow welfare system. The obviously poor get to use public transit for free (for buses, they may have to be the same tribe as the driver), either by begging for a swipe or jumping the turnstile. Anyone not obviously poor gets slammed with a ticket (even if they have a monthly and the turnstile was malfunctioning).

      • honoredb says:

        This.

        I wish I could generalize it more cleanly to other petty crime like littering, but it doesn’t seem to be particularly harried people who litter in NYC rather than expend time and attention to find a trash can.

        • Gazeboist says:

          Littering seems to be mostly a function of inadequate public trash cans, at least where I’ve seen it. Either because people are unwilling to go look further when all three cans in sight are full, or because they just toss the stuff on top of a full bin and it blows out a minute or two later.

          • albatross11 says:

            Or the modern version, where some genius distributes {glass, metal, paper, plastic} recycling bins and trash bins in random separate locations, so that when you have an empty bottle, there’s no visible correct place to put it.

    • Chalid says:

      The severely-underpunished NYC transport crime that I can think of is “blocking the box” – those people who let themselves be stuck behind traffic in the middle of an intersection when the light changes, thus blocking all traffic trying to flow in the cross street. This causes massive delays for nearly-zero benefit to the offender. That ought to be an automatic $1000 fine enforced by camera.

    • CatCube says:

      While obviously I don’t support the death penalty, I think a better case for “willing to tear up the social fabric for next to no gain” is made for metal theft. There, you have people who will tear the copper tubing out of an air conditioning unit, doing $6000 worth of damage for what, $60 in copper? Less? Same thing for the amount of damage that will be done when people pilfer copper wiring and plumbing for a house.

      At my job, we have a lot of remote sites, and we’ve taken to running all wiring underground and encasing the conduit in concrete. This is a substantial increase in construction cost, all to avoid theft of the wiring.

      I’m sort of in the mindset of Louis CK’s “Of Course, But Maybe” bit. Of course it’s terrible when you get electrocuted stealing $100 of copper from a transformer, necessitating a $50,000 replacement cost. But maybe the rest of society is better off without you running around.

      • Matt M says:

        There, you have people who will tear the copper tubing out of an air conditioning unit, doing $6000 worth of damage for what, $60 in copper?

        I was under the impression that most of these sorts of crimes are committed by drug addicts desperate enough that it’s safe to assume they aren’t really thinking through the economic costs and moral implications of their actions.

        Which isn’t to say that it’s okay – but that the high-level problem is drug addiction, not wire-ripping. Wire-ripping is just one of the lower-hanging fruits for people with no resources who are desperate as hell to get their next fix.

      • Viliam says:

        I think a better case for “willing to tear up the social fabric for next to no gain” is made for metal theft. There, you have people who will tear the copper tubing out of an air conditioning unit

        More antisocial examples would be stealing a sewer cover, or stealing a wire of a railway crossing semaphore. Unless one is retarded, it must be obvious how this creates a risk of random people dying.

        These are both real-life examples. My wife once almost fell into an uncovered sewer opening when we walked outside in the late evening. I noticed at the last moment, and had barely enough time to push her aside. Yeah, I know, we are taught from early childhood to look where we step, but people usually have a low prior probability for a deadly trap in the middle of a sidewalk.

        For these kinds of crimes, I think a mere financial penalty is definitely not enough. Even if no one was actually harmed (or if you simply cannot link a specific harmed person to a specific metal thief).

    • BBA says:

      Tangentially: A few of the European metros I’ve taken (Vienna, Munich, Prague) don’t have turnstiles, but just expect you to buy and validate a ticket before you board. There are random spot checks to make sure you have one (though I’ve never actually been checked). I don’t know if this is a high-trust/low-trust deal – is Prague really higher-trust than London or Paris, both of which have turnstiles?

      A similar system is used on some light rail/streetcar/tram/whatever systems in the US, mostly because the station and train layouts make direct enforcement impractical. In ultra-low-trust Russia there are turnstiles on buses.

      • Mark V Anderson says:

        Yes, this is how the train works in Minneapolis / St Paul. I have no idea how many cheaters there are, but it would not surprise me to find out that there were many many riders who don’t bother buying. I suspect the transit folks don’t want to know how many.

      • The Pachyderminator says:

        Yeah, the St. Louis metrolink is like this, but cops periodically appear to check tickets. I’d say the chance of getting away with a single free ride is more than 90% but less than 99%. I don’t know how much the fine is if you get caught, but it’s probably enough that buying a ticket every time has a smaller expected cost.

        • brmic says:

          I don’t know how much the fine is if you get caught, but it’s probably enough that buying a ticket every time has a smaller expected cost.

          In my experience this isn’t true anywhere. I’d estimate buying tickets to be twice as expensive on average.
          At first and second glance this seems counterintuitive, as it creates odd incentives and it should be possible to hire more checkers until the probability of being caught is so high that cheating is no longer cost effective.
          I assume that the reason it empirically doesn’t appear to work that way is a combination of (a) a desire to avoid hassling regular commuters and (b) a decision to escalate to criminal charges for repeat offenders instead of trying to bring the cost up to the gains through increased inspections. This also maintains the norm/norm-breaking framing of the issue, whereas otherwise the arrangement might slide towards an alternative payment method.
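
          For concreteness, the expected-cost reasoning above can be written out as a toy calculation. Every number here (ticket price, fine, inspection probability) is an illustrative assumption, not any real system's fares:

```python
# Toy expected-value comparison: buy a ticket every ride vs. never buy one.
# All numbers are illustrative assumptions, not any real system's fares.
ticket_price = 2.40   # price of a single ticket (assumed)
fine = 60.00          # penalty if caught without a ticket (assumed)
p_check = 0.02        # chance of being inspected on any given ride (assumed)

# Expected cost per ride for a fare dodger:
expected_dodge = p_check * fine   # ~1.20, about half the ticket price

# Dodging pays off whenever p_check is below ticket_price / fine:
breakeven_p = ticket_price / fine   # 0.04, i.e. a check roughly every 25 rides
```

          With these made-up numbers, dodging costs about half as much as paying, which is the kind of gap being described; closing it would require inspecting roughly one ride in twenty-five.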

          • Ketil says:

            Also, the cost of checking is probably quite a bit higher than the increased revenue from fines or fewer free riders, so there are likely diminishing returns that make stricter controls not worthwhile. Most people are honest, are risk-averse, abhor the embarrassment of being caught, and perhaps just need a little reminder from time to time that there is a risk involved.

          • albatross11 says:

            Lately I’ve ridden on a lot of trains in Belgium and the Netherlands. It seemed like my ticket got checked about 1/4 rides, and the probability of getting checked was much lower on very short rides. At one point I went into Amsterdam and I think there was some extra charge I was supposed to pay for congestion pricing, which I didn’t realize until the train had started. I suppose I’d have gotten some kind of hassle (at least paying the extra charge, maybe with a fine) if anyone had checked my ticket, but nobody did.

          • AlphaGamma says:

            @albatross11: Were you taking the Intercity Direct train? I think that’s the only one into Amsterdam which charges a supplement, because it’s supposed to be a “premium”, high-speed service. EDIT: And you only need the supplement if you go over a particular stretch of the route.

            If you had been caught, you would have had to pay a very small fine (€10 total, compared to the €2.40 cost of the supplement or the usual €50 fine plus the cost of the ticket if caught with no ticket at all).

        • Matt M says:

          It’s like this in Houston as well (although nobody takes the train in Houston)

          Interestingly enough, if you really want to game the system, you can purchase tickets on your phone via an app, and simply never “activate” them until you see a transit cop coming towards you (easy to spot in their big yellow vests) to check your ticket, thus getting the best of both worlds.

          • CatCube says:

            In Portland, they’ll check the time you activated the ticket and hem you up for activating it just before checks.

            I don’t know the final outcome after it goes before a court, but I watched a cop write a ticket under these circumstances.

      • AlphaGamma says:

        Have certainly seen random checks in the Netherlands (some, but not all, stations have turnstiles) and Poland. In the latter case, the inspector wasn’t even vaguely formally dressed- he looked like a metalhead in a band T-shirt and cargo shorts, and rode the tram for a couple of stops before taking his ID and terminal out of his bag. He caught some people.

        Meanwhile in Budapest, they dispense with turnstiles but have several burly shaven-headed men standing in the entrance in a line spaced closely enough that it would be very difficult to run past them, checking everyone’s ticket. This was the case at most if not all of the metro stations I went through.

        And heading back to London, the North London Line used to be a suburban train line. It was fairly dilapidated but was known universally where I grew up as the “free train”- there were no turnstiles, and the ticket machines were almost invariably broken so if you did meet an inspector you would just be able to buy a ticket at the normal price without being fined. My usual route to the cinema was only one stop on this line, so I almost never did meet an inspector. Now, of course, the North London Line is part of the Overground and has turnstiles.

        As you can probably gather from this, most fare-dodging in London is on suburban trains (or buses) rather than on the Tube.

        Finally, Paris has, or at least had, the Mutuelle des Fraudeurs- a group of people who paid into a monthly “insurance” fund to pay the fines of those caught fare-dodging. I think this was recently made illegal…

      • rlms says:

        My assumption/understanding from seeing this in Germany (with apparently no spot checks) was that very few natives bought tickets and the system was actually funded through tickets bought by tourists plus taxes.

        • Tandagore says:

          I’m very skeptical of this. It will probably depend on the city (Berlin will have a much higher number of “Schwarzfahrer” [fare dodgers] than probably anywhere else), but I am fairly sure most people have tickets. I’m based in Vienna (so not Germany, but close enough, with similar systems), and most people buy a ticket for the whole year, which costs €365, or cheaper long-term tickets if they are students/disabled/etc. What is true is that single tickets and other short-term tickets are much more expensive per ride than the tickets intended for residents, and that the purpose of the pricing is to hurt/tax tourists (a single ride costs e.g. €2.40).

        • Matt M says:

          As a tourist in Munich, it certainly appeared to me that I was the only person actually bothering to buy tickets at the train stations before getting on.

          Although I figure perhaps the locals had some sort of membership or app or something?

          • Elephant says:

            Even as a tourist, in most German cities, it’s better (convenient, cheaper) to buy a tourist pass than to buy a ticket each time at a station. It does feel a bit weird at first to get on a train with no “external” proof that I paid. For a few days I felt like waving my receipt around… I’m sure no local buys a ticket every day, rather than buying a pass.

    • Brad says:

      For anyone unaware of the context, including maybe Andrew, the Manhattan DA recently announced that they would refuse altogether to prosecute turnstile jumpers. Previously they had put out a memo saying that they wouldn’t prosecute most turnstile jumpers, but apparently they still got too many for their liking.

      While it is still possible for the police to issue a desk appearance ticket, the police aren’t meter maids, and if the DA’s office is going to refuse to prosecute their misdemeanor arrests I guess they figure that there is no point in making them. I’m sure our terribad mayor also encouraged the police in that direction.

      All of which means that sometime in the past year the police have totally given up on enforcing the law that says you need to pay to ride the subway. Paying is now optional as far as consequences go. As a result, turnstile jumping has increased significantly.

      If I can read between the lines, I think I agree with the underlying sentiment. Seeing someone jump a turnstile induces rage in me. And contra what someone says above it isn’t solely or even primarily obviously street homeless that are doing it.

      • albatross11 says:

        Is this a general retreat from the whole “broken windows” idea of enforcing laws against minor crimes as a way of signaling that the law is in force in a given area?

        • Brad says:

          Yes. And while I don’t know if it will lead to more serious crimes it is certainly leading to more broken windows. Further it isn’t at all clear to me what the NYPD and Manhattan DA are doing with all the extra time on their hands. Certainly I haven’t seen them returning money since they now have less need for manpower.

      • xXxanonxXx says:

        Have you seen the people who swipe an empty card and then step over the turnstile while pretending to struggle with a snagged backpack or purse? I was so impressed by it the first time I almost forgot to be angry that I was paying $120 a month like some chump while they rode for free.

      • Matt M says:

        To refer to some discussion above, I think the primary purpose of having rare spot checks isn’t to influence the criminal or the destitute, but to simply remind the “regular folks” (who have no particular desire to break the law and who can easily afford tickets) that yes, you are expected to pay.

        To loudly broadcast a policy like this, or to have no spot checks ever, is to essentially concede the point and to suggest to everyone the opposite – “No, you are not expected to pay.” At which point I think the average person flips from “I should probably pay, even though I could probably get away with not paying” to “Why should I pay? They don’t seem to care whether I do or don’t.”

        If I were a New Yorker, I would probably start jumping until this policy is reversed.

        • The Nybbler says:

          If I were a New Yorker, I would probably start jumping until this policy is reversed.

          Hasn’t happened yet. It’s surprising how law-abiding New Yorkers can be sometimes. As I said, the emergency exits really aren’t used for emergencies only; they even disconnected the alarms a while ago. But I was in the Penn Station 7th Avenue subway station, and because of interminable construction the stairwells to reach Penn Station proper were backed up, the platforms were packed, and the nearest outside entrance had only an emergency exit. A couple of people looked at that exit and decided not to take it. I suspect this is because unlike most such exits, it was a large glass door rather than a barred iron gate, so it didn’t match anything on the “rules to ignore” list.

          I took the exit; it wasn’t alarmed and I was able to get to Penn Station proper by going outside and back in.

        • Andrew Hunter says:

          I am pretty sure the Manhattan DA would reverse his policy after a New Yorker article blamed techbro fare evasion for subway failures, and asked loudly why someone like me gets away with such disregard for society.

          • Viliam says:

            Let’s think out of the box for a moment…

            The decision to not prosecute turnstile jumping is sexist, because… it is easier for men to jump over the turnstiles. Yeah, that.

            Anyone here in the business of writing clickbait articles? Or perhaps let’s make a petition against this sexism, and share it on social networks.

      • Viliam says:

        the Manhattan DA recently announced that they would refuse altogether to prosecute turnstile jumpers

        If they refuse to do their job properly, how do they still have one?

        What is the point of having a rule if you refuse to enforce it? Either enforce the rule, or remove the rule! Otherwise you only encourage contempt of rules in general.

        A rule that is not actually enforced is, in effect, a non-asshole tax. You pay the tax if you are a civilized citizen, and you are free to ignore it with impunity if you are an asshole. (A concept analogous to the “asshole filter”.) Definitely not a way to maintain a civil society.

    • IsmiratSeven says:

      Even under the auspices of provoking a counter-argument, it’s a little disconcerting that you can’t figure out the counter-argument on your own.

      “You jaywalked to save thirty seconds, I bet you’d disembowel orphans to save half-an-hour”, “you offered someone a puff of weed to feel a little better, I bet you’d inject Fentanyl into a little old lady’s eyeball to make her feel a lot better”, “you’re playing your music at slightly-louder-than-average-but-still-generally-acceptable-levels in your bedroom, I bet you’d recreate the Oklahoma City Sonic Boom Experiment”.

      Even the general premises that are required to form the argument in the first place are inane outside of Thought Experiment World, where every crowd-sourced Guess 2/3rds of the Average game ends with a 0/1 result. People just don’t think like that, and even getting to the “tragedy of the commons” argument necessitates that the jumper view their action as “a little bit wrong” rather than “zero wrong”.

      The “tragedy of the commons” argument, while not itself necessarily malicious, often ends up a refuge for bureaucrats, insurance adjusters, and Kantian busy-bodies who take personal affront to every pebble thrown in every lake, because, goodness, what if everybody threw a pebble in the lake.

    • Viliam says:

      I assume you would like Singapore. They have harsh penalties for chewing gum, annoying people with musical instruments, flying a kite that interferes with traffic, spitting in public, feeding pigeons, connecting to other people’s Wi-Fi, littering, graffiti, or jaywalking. When an American brat was caught stealing road signs, as a legal punishment he received four strokes of the cane (the kind that leaves physical scars), despite the USA officially complaining about the extreme punishment.

  9. Deiseach says:

    I imagine ye all have seen the latest announcement from SpaceX? Whatever about flying a civilian tourist round the moon, somebody run the engineering on this one for me, because I can’t see how the hell they’ll get this to escape velocity without breaking asunder or turning the crew/passengers into strawberry jam:

    The two-part BFR, as described by Musk before his private-lunar-voyage announcement, will consist of a 157-foot-tall spaceship sitting atop a 191-foot-tall rocket booster. Together, such a system would be 35 stories — taller than the Statue of Liberty.

    And he’s making it out of a new carbon-fibre composite, because it needs to be reusable:

    To be able to launch, refuel in orbit, endure months of flying through space, land on Mars, leave that planet, and safely return to Earth — then do all that over again — the BFR can’t be an ordinary spacecraft.

    That’s why Musk is planning to build the entire spaceship “primarily of an advanced carbon fiber,” he said in 2016.

    Also the “we’ll refuel in mid-air just like they do with planes” is making me go “Hmmmm”. There are so many ways this can go wrong, and the article having experts calmly stating that “People are gonna die when we set out to colonise Mars, that’s just how space travel is” (while it’s true) is not very reassuring.

    Genius or insanity? You tell me, I am not remotely qualified to have an opinion on this! It’s astoundingly cool and I’m very happy that any kind of space programme is happening again, and they must think it will work because they’ve started working on it in their Big White Tent, but I really need someone to explain to me like I’m five about this 🙂

    • bean says:

      I’m not optimistic about this. John Schilling has described the BFR model as being equivalent to demanding a ship that can go from Paris to St. Louis direct, instead of the sane solution of transshipping at Le Havre and New Orleans, and he’s right. I’m not sure what Musk is thinking.

      The refueling is the one part that probably isn’t insane. Transferring fluids between docked spaceships hasn’t been done AFAIK, but it’s not conceptually hard. Rendezvous and docking in orbit is well-established science, and the connector design for this isn’t that difficult.

      • ana53294 says:

        What would be the middle step? After the moon, I mean.

        • bean says:

          The moon? I’m confused.

          The sane way to do an architecture for Earth-Mars transport is to have three vehicles. One flies from Earth’s surface to LEO, another flies LEO-Mars Orbit, and the last flies Mars Orbit-Mars Surface. Those are very different environments, and have very different design drivers.

          • James C says:

            While I sort of agree, the issue is, unless you’re planning on building these things in orbit, 2 and 3 will have to be launched from Earth anyway. Now, I’d favour the building in orbit option but if you’re going for quick then just having the same vehicle for all stages kind’a works. We’ll see if it’s practical, I guess.

          • bean says:

            But we’re not dealing with quick here. If this was an Apollo-style charge for Mars, without serious intent of staying long-term, I’d agree that you’d be looking seriously at trying to cut development costs you wouldn’t get to recoup in operation. But Musk is trying to build a sustainable transport infrastructure, so that’s out. 2 would probably be assembled on orbit, and doesn’t have to carry launch engines (heavy) or a serious heat shield (aerobraking will probably produce a much lower heat load). 3 can probably be launched in a single piece, but it doesn’t need engines capable of launching from Earth, or life support for an Earth-Mars transfer.

          • The simple solution is to build a ship capable of getting to orbit and hire some Moties. When you reach LEO they take the ship apart and rebuild it as a ship designed to get to Mars. When you get into Mars orbit, …

          • John Schilling says:

            but if you’re going for quick then just having the same vehicle for all stages kind’a works.

            If you’re going for quick and you don’t care about economics or efficiency, we know how that works. And where it leads. Six flags, twelve sets of footprints, fifty years of nothing. Meh, don’t care, why bother.

            When Christopher Columbus set out to explore a strange new world, he used three second-hand freighters purchased at the local seaport. The story of the human race on Mars, on the Moon, the Asteroids, as anything but flags and footprints, begins when someone realizes they can do it with three second-hand space freighters.

            Elon Musk’s bankruptcy should put a fair number of second-hand space freighters on the market, along with a half-complete BFR that should best be left for a museum.

          • Dan L says:

            @ John:

            Excellent post, though I’m a little more bullish on SpaceX in the short term.

            What kind of design would you expect for a dedicated interplanetary craft? I’m not convinced yet that anyone has a sufficiently reusable LH2 engine, though Blue Origin seems to be trending in that direction.

            Edit: whoops, this was supposed to be in response to your post on the BFR/BFS

          • John Schilling says:

            The 1962-vintage RL-10 demonstrated twelve flights worth of reusability in the DC-X program, and that was nowhere near the limit despite having only a shoestring budget to do the necessary modifications. The next generation of the RL-10 is rated for fifty flights at 200 seconds each, and I’m pretty sure that’s still nowhere near the limit.

            But for a proper Mars ship, the chemical rockets are a secondary propulsion system in any event. You use them for about a km/s of delta-V on each departure burn, to exploit the Oberth effect, but the main propulsion is going to be a few megawatts’ worth of solar-powered ion or plasma thruster; we’re already testing 100 kW versions that could be clustered to provide the necessary power if we somehow can’t scale them up.
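
            The “exploit the Oberth effect” point can be illustrated with a quick back-of-the-envelope calculation: the same burn buys far more orbital energy when performed at high speed, deep in a gravity well. The orbit altitude below is an arbitrary illustrative choice, not a specific mission profile:

```python
# Sketch of the Oberth effect: the same delta-v adds far more orbital
# energy when applied at high speed (deep in a gravity well).
# Numbers are illustrative: a ~400 km LEO, not any specific mission.
import math

mu = 398600.0               # Earth's gravitational parameter, km^3/s^2
r = 6778.0                  # orbit radius for a ~400 km LEO, km (assumed)
v_leo = math.sqrt(mu / r)   # circular orbital speed, ~7.7 km/s
dv = 1.0                    # departure burn, km/s (the figure quoted above)

# Specific-energy gain from a burn of dv at speed v: (v + dv)^2/2 - v^2/2
gain_at_leo = v_leo * dv + dv**2 / 2   # ~8.2 km^2/s^2
gain_far_away = dv**2 / 2              # 0.5 km^2/s^2 at near-zero speed

ratio = gain_at_leo / gain_far_away    # same propellant, ~16x the energy
```

            So a 1 km/s chemical kick from low orbit is worth roughly sixteen times as much energy as the same propellant spent out in deep space, which is why you burn the chemical rockets on departure and leave the slow cruising to the ion drives.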

            Then you’re going to need a sizeable crew habitat, ideally spun for gravity. Long-duration life support. Radiation shielding, at least to the extent of “storm shelters” for solar particle events. Long-range communications, and other avionics well beyond the norm for launch vehicles. Self-repair facilities. Possibly an aerocapture heat shield. And a truss to hold all of this together, with provisions for whatever cargo you are carrying along with the people.

            It will probably look something like this, because Andy Weir is better at designing spaceships than Elon Musk(*). And you do not want to compromise the design by trying to squeeze it into something that launches from Earth in one shot.

            Sometimes, the cargo will include a Mars lander/ascent vehicle, but you’ll want to carry those to Mars as rarely as possible and reuse them as often as possible; they won’t be a standard part of the Mars transfer vehicle.

            * And Gwynne Shotwell is better than either of them, but she only has so many “tell the boss he’s wrong” cards to play.

          • Lambert says:

            What’s actually causing these Hydrolox engines to fail?
            If it’s the heat in the combustion chamber itself, that’s one thing, but if all the damage is due to the stresses of launch and reentry, that won’t be a problem for interplanetary travel.

          • John Schilling says:

            They’re not actually failing; the people buying them are just saying “we can’t imagine ever wanting to do more than N flights and rocket engine tests are expensive, so just test to N + margin and call it a day”.

            To the extent that there is parts wear that suggests the engines eventually will fail, I believe that’s mostly in the bearings, etc, of the turbopumps. If you don’t need 50:1 thrust:weight ratios, and for this application you can get away with less, the pump hardware can be substantially beefed up.

          • Dan L says:

            @ John Schilling:

            The 1962-vintage RL-10 demonstrated twelve flights worth of reusability in the DC-X program, and that was nowhere near the limit despite having only a shoestring budget to do the necessary modifications. The next generation of the RL-10 is rated for fifty flights at 200 seconds each, and I’m pretty sure that’s still nowhere near the limit.

            Yup, I called out BO for a reason – on many dimensions, I view New Shepard as a successor to the DC-X. Will be interesting to see what they go with as an upper stage for New Armstrong. “We can’t imagine ever wanting to do more than N flights and rocket engine tests are expensive, so just test to N + margin and call it a day” definitely explains some of the behavior I’ve seen, I appreciate the perspective.

            But for a proper Mars ship, the chemical rockets are a secondary propulsion system in any event. You use them for about a km/s of delta-V on each departure burn, to exploit the Oberth effect, but the main propulsion is going to be a few megawatts’ worth of solar-powered ion or plasma thruster; we’re already testing 100 kW versions that could be clustered to provide the necessary power if we somehow can’t scale them up.

            Last I checked, the X-3 was slated for its 100-hour test in April – do you know if that ever happened? (Apologies if the answer is in one of the papers on the linked page, I’ll get around to reading them this weekend.) Between that and the planned VASIMR test, it does look like there’s a lot of interesting stuff finally hitting TRL-5.

            That said, I’m still skeptical about manned electric propulsion happening anytime before 2030, certainly not before the first man on Mars. Same goes for most Cycler designs. Maybe I’m just being overly cautious, but I distrust low-force systems in the absence of infrastructure on both ends of the trip.

            …the Hermes is pretty cool though.

          • John Schilling says:

            The 100-hour X-3 test was pushed back to October. And unfortunately, there’s no vacuum chamber capable of testing it at its full 200 kW design power anywhere on Earth.

            I agree that building the Hermes by 2030 would be a stretch. But so would building a Mars-capable BFR. Elon is predicting 5 years for just a cislunar BFR, and his schedules always have to be understood as using Martian years.

      • Random Poster says:

        Transferring fluids between docked spaceships hasn’t been done AFAIK

        It has been done. About once every three months for the last forty years, actually. The first time was in 1978, when Progress 1 refueled Salyut 6, and since then there have been roughly 160 cases of one spacecraft transferring liquids (both fuel/oxidizer and potable water) to another.

        Also, even NASA has understood since the 1960s how to get liquid out of a tank in zero gravity; if you know how to do that, then it doesn’t make all that much difference whether said liquid goes to a rocket engine or to a different spacecraft.

        • bean says:

          It has been done. About once every three months for the last forty years, actually. The first time was in 1978, when Progress 1 refueled Salyut 6, and since then there have been roughly 160 cases of one spacecraft transferring liquids (both fuel/oxidizer and potable water) to another.

          Oops. My bad. For some reason, I hadn’t considered water or thruster fuel for stations.

          Also, even NASA has understood since the 1960s how to get liquid out of a tank in zero gravity; if you know how to do that, then it doesn’t make all that much difference whether said liquid goes to a rocket engine or to a different spacecraft.

          This, I’m not so sure about. Handling liquids in zero-G is still not trivial. In the 60s, NASA was using ullage motors to settle the propellant before pumping it out, and that’s not feasible here, particularly because you’d have to keep the thrust on while pumping. I know about PMDs, but those tend to be for small tanks. (I was looking at them in conjunction with cubesats, so I don’t know for certain what the state of the art is for large tanks.) This is of unprecedented size, which makes things harder, but I’m certainly not suggesting it’s impossible.

    • beleester says:

      A 348-foot-tall rocket is about the same height as a Saturn V (363 ft), so the scale is right around what we should expect for Earth-Moon trips. I can’t comment on the actual design (how many stages and where to refuel them) – if bean says it’s nuts I’ll believe him.

      • Deiseach says:

        Yeah, but the Saturn V was in three stages where the ratio was two stages of propulsion to the one stage of “contains the crew”; this BFR is about half-and-half, so the whole “yeah we’ll burn up all our fuel just getting off the ground but never mind, we’ll refuel in mid-air!” is the part causing me concern. I think you might be able to get it into low-earth orbit, but after that? I have no idea. I’d like someone who can do the maths to run the figures for what you’d need to achieve escape velocity for a stage that big and if the thrust generated would smoosh any crew/passengers, because that seems to be a sticking point.

        I mean, there’s a reason SF has spacecraft being built in orbit at spacedocks, because it’s easier to get the parts up there and assemble them rather than assemble it on the ground and then try to get it out of orbit but what do I know?

        • bean says:

          Wait. Who’s talking about mid-air refueling? The BFR is a fairly typical two stage to orbit design. Once in orbit, it refuels from another BFR that went up with fuel instead of payload, and then flies to Mars. The delta-V for the upper stage is ~6.25 km/s from wiki numbers (maybe 6.39 km/s for the Mars transit if they only run the vacuum engines) which isn’t absurd for a Mars insert, although it’s not exactly blistering, either. I don’t have the data for the lower stage to do delta-V, but it’s not ludicrously undersized for putting the upper in a place to get into orbit. Refueling will be expensive, but not impossible. And the thrust isn’t likely to be the end of the world, either. I wouldn’t be surprised if they shut down the three atmo Raptors to keep that under control when burning for Mars, leaving aside throttling, which is complicated.
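
          Those delta-V figures are easy to sanity-check with the Tsiolkovsky rocket equation, dv = Isp · g0 · ln(m0/mf). The mass and Isp numbers below are rough 2017-presentation-era values, used purely as illustrative assumptions:

```python
# Sanity check on the quoted ~6.3-6.4 km/s via the Tsiolkovsky rocket
# equation: dv = Isp * g0 * ln(m0 / mf).
# Masses and Isp are approximate 2017-presentation figures, used here
# purely as illustrative assumptions.
import math

g0 = 9.80665        # standard gravity, m/s^2
isp_vac = 375.0     # vacuum Raptor specific impulse, s (approximate)
m_dry = 85.0        # ship dry mass, tonnes (approximate)
m_prop = 1100.0     # propellant load, tonnes (approximate)
m_payload = 150.0   # payload, tonnes (approximate)

m0 = m_dry + m_prop + m_payload   # wet mass with full tanks
mf = m_dry + m_payload            # mass after burning all propellant

dv = isp_vac * g0 * math.log(m0 / mf) / 1000.0   # ~6.4 km/s
```

          With those rough inputs the equation lands right around 6.4 km/s, consistent with the figure quoted from wiki; running the lower-Isp sea-level engines for part of the burn would shave it down toward the lower number.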

          • Deiseach says:

            Wait. Who’s talking about mid-air refueling?

            Okay, this is why I needed someone to look this over for me. Glancing at the diagram I thought it was mid-air refueling, looking at it more closely now it says after the BFR gets into earth orbit, then they launch more rockets to refuel it:

            The ship will be nearly out of fuel by that point though, so SpaceX plans to launch nearly identical tanker spaceships to meet up with the first one in orbit. A series of rendezvous at about 17,500 mph would refill the spaceship’s tanks with liquid methane fuel and the liquid oxygen required to burn it, though this may take about a dozen tanker flights.

            I honestly don’t see how this is supposed to be more cost-efficient than the current model; they’re emphasising the whole thing can be re-used, but if you have to launch a dozen tankers to refuel one rocket, surely that sends the costs up?

            And can you really do it in earth orbit like that? The whole concept is fantastic, but is it workable? I know: they’ll have to do it to find out, but that is going to cost a large fortune. It’s amazingly bold, but is it a dead end? Once you get the thing out of Earth’s sphere and on the way, that’s fine (though landing on Mars is a whole other headache, not to mention the “and then we make our own fuel on Mars to send it back to Earth”), my problem is getting my head around “can they get this thing off the ground and into orbit in one piece at the start?”

          • bean says:

            I honestly don’t see how this is supposed to be more cost-efficient than the current model; they’re emphasising the whole thing can be re-used, but if you have to launch a dozen tankers to refuel one rocket, surely that sends the costs up?

            It’s not the most terrible idea. Sending the fuel up on the same rocket means that you need a much bigger rocket. That drives up initial purchase cost, while the cost of a marginal flight is, in theory, pretty low. Using the same engines for the upper stage and Mars insertion stage makes a demented kind of sense, particularly if you’re trying to use the same vehicle for everything. John’s/my preferred architecture is going to do the same thing, refueling the transfer vehicle from Earth, at least at first. (ISRU from the moon is a good idea, but not something to count on immediately.)

            And can you really do it in earth orbit like that? The whole concept is fantastic, but is it workable? I know: they’ll have to do it to find out, but that is going to cost a large fortune. It’s amazingly bold, but is it a dead end?

            Actually, that part’s fairly low risk. We’ve been doing rendezvous and docking since the 60s. It’s about as well-known as anything in space travel. SpaceX might still be able to screw it up, but on-orbit refueling has been floating around for a long time. It’s essentially just docking and fluid transfer. I’m sure there will be problems with that (handling liquids in zero-G is hard) but I’d judge it as “fairly straightforward engineering problem” rather than “conceptual hurdle”. I’m more concerned about needing to make an upper stage/Mars Transfer Vehicle/Mars Lander/Mars Ascent Vehicle all in one.

            my problem is getting my head around “can they get this thing off the ground and into orbit in one piece at the start?”

            Probably. He’s pushing state of the art fairly hard, but it’s not completely unreasonable.

          • John Schilling says:

            looking at it more closely now it says after the BFR gets into earth orbit, then they launch more rockets to refuel it:

            Yes, that’s definitely part of the plan, and it’s the part that mostly makes sense. Reaching Low Earth Orbit with two stages is quite straightforward; the classic Atlas, Titan, and Saturn rockets of the Space Race era could all do that; modern launch vehicles can usually get halfway to geosynchronous orbit on two stages. Specifically, Elon Musk has demonstrated that his Falcon rockets can do that.

            So, getting to Low Earth Orbit with a third of a tank of fuel(*) for the upper stage, pretty straightforward as rocket science goes. If you can do that, you can launch three rockets to Low Earth Orbit with a third of a tank each, and some zero-G ballet later, have a fully-fueled rocket ready to go to Mars or wherever.
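
            A quick sanity check on this arithmetic with the Tsiolkovsky rocket equation. This is a minimal sketch; the Isp, dry mass, and tank capacity below are round illustrative assumptions, not actual BFR figures:

```python
import math

G0 = 9.81    # m/s^2, standard gravity
ISP = 375    # s, assumed vacuum Isp for a methalox upper stage (illustrative)

def delta_v(dry_tonnes, prop_tonnes):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    return ISP * G0 * math.log((dry_tonnes + prop_tonnes) / dry_tonnes)

DRY = 150.0    # t, ship structure plus payload (assumed)
TANK = 1100.0  # t, full propellant load (assumed)

dv_arrival = delta_v(DRY, TANK / 3)   # a third of a tank left after reaching LEO
dv_refueled = delta_v(DRY, TANK)      # after tankers top the ship off in orbit

print(f"third of a tank: {dv_arrival:,.0f} m/s")
print(f"refueled in LEO: {dv_refueled:,.0f} m/s")
```

            With these made-up numbers the refueled ship has roughly 7.8 km/s on tap versus about 4.5 km/s for the ship that just reached orbit, and a Mars trip with landing and margins wants considerably more than the smaller figure. That gap is the whole case for the zero-G ballet.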

            It’s not going to be as easy as Musk thinks, but it’s doable. It’s just not very efficient, because it insists on the foolish paradigm of a spaceship that takes off from Earth and lands on Mars.

            The LOX-Methane rocket Musk wants to use, while ideal for Mars landing and ascent and tolerably good for Earth launch, is almost the worst choice for flying from Earth orbit to Mars orbit. But, one ship, one engine, which is guaranteed to be the wrong engine for part of the trip.

            The high-thrust engines he needs for Earth launch, the heavy heat shield he needs for Earth atmospheric entry, the landing gear, and the special adaptations because Mars isn’t Earth, are all dead weight on the trip between the two planets (which he’s doing with an inefficient engine).

            The voluminous crew habitat and the long-duration life support system he needs for the trip to Mars, are an even bigger useless burden for the launch from Earth. And the landing on Mars. No, you can’t justify this by saying it has to be launched from Earth anyway. He’s explicitly planning on making BFR reusable, so while he has to launch that stuff once and can simplify by launching it in pieces, he’s doing it the hard way by launching and landing it every single time, fully assembled.

            Then there’s the bit where he insists on not building at least a fuel depot in Low Earth Orbit to support the refueling, which means he has to have all three BFRs in the same orbit at the same time, launched in close sequence and with little tolerance for snafus because the cryogenic fuel and oxidizer will be boiling off from day one. A starter-level depot doesn’t need to be much more than a BFR upper stage stripped down and modified, and will almost certainly pay for itself by the time he launches a dozen Mars missions. He’s planning hundreds.

            And then there’s the bit where fitting all this functionality into one BFR, means making this rocket really Fing B. And everybody who has tried to build a new ROUS(**), has found themselves besieged and befuddled by problems nobody has ever experienced before and which are solvable only at great cost in time and/or money.

            But, ultimately, it’s just engineering and the math looks good. If he can afford to throw enough money at it for enough years, he can make it work. One thing we don’t have any decent math for is calculating how much time and money that is – except that it will be much more than it will be for people who pursue more efficient transportation architectures. And it may be more than Elon can afford even if he isn’t worried about beating the competition.

            * And oxidizer, pedants.

            ** Oh, come on.

      • James C says:

        Glancing at an orbital transfer map, Mars has a higher delta-v requirement to reach, but you can save a lot of fuel with aerobraking maneuvers if you design your spaceship right. It’s probably a little more expensive to do Earth-Mars-Earth than Earth-Moon-Earth, but that’d depend a lot on your mission structure, in-situ refueling operations, and whether the mission is manned, as the difference in living requirements between the missions is huge.

        Tldr, theoretically the right size of rocket but with a hundred or so devils in the details.

    • baconbits9 says:

      I have nothing to add on the technical side, but where is the profit in this? Unless the idea is to guilt/shame the government into funding missions to mars by making it technically feasible I don’t see the finances paying off no matter how well they do it.

  10. Scott Alexander says:

    How likely is a world where people have an Industrial Revolution before/without developing gunpowder? Before/without handheld guns, even if cannons exist?

    • Protagoras says:

      I lean toward not very likely in both cases. On the former, the industrial revolution was at least preceded by and arguably was largely caused by improvements in economic and social organization which made widespread investment in large-scale projects more feasible. A strong case can be made that progress in military organization spurred by the need to adapt to the new weapons was a big factor in the development of the strong centralized governments which enforced consistent property rights and laid the foundations for sophisticated economies. On the latter, handheld guns seem to have appeared at pretty close to the same time as cannons; it’s just that the early ones were so poor that their presence didn’t make a lot of difference. Hard to see what would have prevented someone from noticing that such a thing was possible.

      • kokotajlod@gmail.com says:

        Was the Roman Empire not a sophisticated economy with a strong central government that supported consistent property rights? When the IR happened in western europe, how consistent had property rights been over the past century? Was there no century during the history of Rome in which property rights had been at least that consistent?

        • Protagoras says:

          No, it wasn’t, better than at any time previously, and no, in order. The Roman Empire left a tremendous amount to local authorities to handle in their incredibly diverse, confusing and incompatible ways, at all periods of its existence (like any ancient empire).

      • Lambert says:

        Agree (with Protagoras).

        The Industrial Revolution was driven by the incentives of capitalism. I can’t see a feudal society allowing those kinds of technological and social changes. And Feudalism relied on an elite warrior class of heavy cavalry. The ability of a peasant with a cheap musket to best a highly-trained knight in costly armor changed everything.

        The only way I can see a firearm-less industrial society happening is if they never became feudal. Perhaps if stirrups etc. were never invented or heavy armor never became a thing.

        Rudimentary handheld guns actually preceded cannons in China. (and cannons as siege weapons never took off like they did in Europe, due to stronger fortifications in China)

        Edit: the IR might well have happened without gunpowder, but it would have looked rather different. Slower and more piecemeal.

        • Deiseach says:

          What’s interesting to me is that everyone agrees China invented gunpowder and from there it disseminated into the Islamic and European spheres. China had early firearms (the Wikipedia description of firelances made from paper and bamboo makes me wince, no surprise they later used metal) and indeed developed cannon.

          And then they stuck there, while the Europeans refined, improved and basically ate their lunch when it came to artillery and hand weapons. There’s an interesting discussion here which blames the Manchu (as a former ‘barbarian’ dynasty not wanting the peasants and disgruntled non-Manchu to get their hands on weapons that would equalise the power imbalance) and the lack of need to engage in an arms race: China was the 800 lb gorilla in the region and didn’t need better tech, whereas in Europe your next-door neighbour was just as good as you were and a potential threat, so having bigger and better boomsticks was to your advantage.

          I don’t know how much of that is a good explanation, though; the idea floated about the relative stagnation of Chinese culture also seems to have something to do with it (though again, why? The Chinese are not and were not stupid). And the real question here is, I think: does the innovation shown by the development of firearms also imply the kind of innovation that leads to an Industrial Revolution (something China also does not seem historically to have had, prior to contact with Western influences*)?

          So I think you can have an Industrial Revolution without firearms, if you have the kind of promotion of invention and the competition spurring on research and improvement of technology and science. Without that spirit of discovery, it doesn’t matter whether you have guns or not, you’re not going to make paradigm changes.

          *Description of 17th century Jesuit ‘cultural exchanges’ has me going “But surely the Chinese could have worked this out already for themselves? I mean, translating Euclid???”

          The Jesuits introduced to China Western science and mathematics which was undergoing its own revolution. “Jesuits were accepted in late Ming court circles as foreign literati, regarded as impressive especially for their knowledge of astronomy, calendar-making, mathematics, hydraulics, and geography.” In 1627, the Jesuit Johann Schreck produced the first book to present Western mechanical knowledge to a Chinese audience, Diagrams and explanations of the wonderful machines of the Far West.

          …Jan Mikołaj Smogulecki (1610–1656) is credited with introducing logarithms to China, while Sabatino de Ursis (1575–1620) worked with Matteo Ricci on the Chinese translation of Euclid’s Elements, published books in Chinese on Western hydraulics, and by predicting an eclipse which Chinese astronomers had not anticipated, opened the door to the reworking of the Chinese calendar using Western calculation techniques.

          • baconbits9 says:

            Isn’t it generally accepted that Europe had periods of stagnation and decline (or at least low growth) that lasted several hundred years?

          • broblawsky says:

            Composite bows have always seemed like a pretty good explanation for the stagnation of Chinese firearms technology. In dry climates, they’re better for most purposes than any firearm built by Europeans up until roughly the early 19th century.

          • engleberg says:

            The Chinese had crossbows for a long time. I don’t know if anyone can reliably tell ‘fire arrow’ from ‘rocket arrow’ in ancient scrolls. But if sulphur was available cheap, and iron was expensive, I could see Chinese going from crossbows to spigot-launched rockets like the WWII PIAT without guns being more than experiments.

          • cassander says:

            @baconbits9

            the “dark ages” is a concept that turns out not to be very accurate, at least not as is traditionally imagined. There was definitely a post-roman slump in literacy, material culture, etc, but it was a lot shorter than is usually thought: the decline doesn’t set in until as late as the 7th or 8th century, and europe starts to have a larger, richer population than the roman era by as early as the 10th.

          • baconbits9 says:

            @ cassander- true, but the argument for the Chinese not developing guns isn’t “well they went into the dark ages” it was more like “they progressed more slowly than you would expect” (if I understand correctly).

          • cassander says:

            @baconbits9

            I wish you’d asked me this question in a month, because I just downloaded this book.

            Guns are a non-obvious use of gunpowder. Early gunpowder was an incendiary, and “advancing” it to the point where it became a quasi-explosive took a long time and happened largely by accident. Not having read the book yet, my understanding is that the Chinese kept up fairly well until relatively late in the game, at least through the ming dynasty. This book makes the case that the absence of siege warfare for the Chinese limited their need to advance the technology past that point, which eventually came to bite them in the ass.

          • albatross11 says:

            Is it that technological / scientific progress was stalled in China, India, MENA, etc.? Or is it that technological / scientific progress was moving along at its normal pace in those places, but just hit the gas hard in some parts of Europe and its colonies?

          • baconbits9 says:

            I know very little about China’s history, but it strikes me as strange that the country with the largest fortification known to man would have not needed or developed siege warfare.

          • cassander says:

            @baconbits

            The Chinese had tons of fortifications, but they weren’t fighting the people in them. They were fighting steppe barbarians, quasi steppe barbarians, and peasant rebellions, not “peer competitors” to use modern parlance.

            By contrast, European warfare in the same period becomes an almost constant stream of sieges.

        • cassander says:

          Peasants with spears could always kill knights, if the peasants were disciplined about it. The dominance of cavalry was a consequence of small weak “states” that couldn’t afford standing armies, not technology. Gunpowder changed things by increasing the capital intensity of war and making small states less viable, not by making peasants able to kill knights.

          • Lambert says:

            Was Early Modern warfare really more capital-intensive (compared to labour-intensiveness)?

            Plate armour became useless once it could be penetrated by firearms. There’s more capital in a breastplate and helm than there is in a musket.

            Also, training spearmen to hold fast in the face of a cavalry charge is a much more difficult task than teaching a musketeer how to operate adequately. (or teaching longbowmen, for that matter)

          • cassander says:

            Was Early Modern warfare really more capital-intensive (compared to labour-intensiveness)?

            It was more capital intensive than previous forms of warfare, definitely. By the early 16th century, cannon were an essential part of warfare. They were expensive, defending against them even more so, and the massive amounts of gunpowder they could use weren’t cheap either. The breastplate/musket side of things was relatively small change.

            Also, training spearmen to hold fast in the face of a cavalry charge is a much more difficult task than teaching a musketeer how to operate adequately. (or teaching longbowmen, for that matter)

            It’s certainly easier than teaching them to use a longbow, but I’m not so sure about the spear. If your musketeers can’t reload their weapons while being shot at or run away the second cavalry start charging, they aren’t much good. Muskets required a fair bit of discipline to use effectively, just like pikes.

          • mustacheion says:

            I only have very limited training (as a child, in boy scouts) with weapons, but in my experience training to use a gun is trivial compared to training to use a bow. And I am under the impression that crossbows, and maybe even sufficiently powerful longbows were capable of penetrating even the heaviest wearable armor that could be manufactured at the time long before firearms were powerful enough to do so.

          • cassander says:

            @mustacheion

            Using a gun on a range isn’t hard at all, even a muzzle-loading musket. Using a gun in sync with 500 other people, after a cannon ball just took out a half dozen of your buddies and the enemy cavalry is forming up to charge, is considerably more difficult.

          • Matt M says:

            Using a gun in sync with 500 other people, after a cannon ball just took out a half dozen of your buddies and the enemy cavalry is forming up to charge, is considerably more difficult.

            IMO this type of thing falls under “general military discipline” which will be highly necessary regardless of the specific circumstances of combat (including the technological age and the weapons one is employing)

            The question is whether, in addition to that sort of thing, certain weapons require extra training in order to use effectively, and you can count me in the class of people who say “yes, that is almost certainly true”

            Particularly given that in say, the American Civil War, a huge proportion of the soldiers would have brought their own rifles, that they were already familiar with using (for hunting purposes) for several years. It strikes me that the average English peasant never had any particular occasion to train with a longbow aside from “you have been selected for longbow training so you can fight in the King’s army”

          • Plumber says:

            “…It strikes me that the average English peasant never had any particular occasion to train with a longbow aside from “you have been selected for longbow training so you can fight in the King’s army””

            Matt M,

            Longbow training was required by law.

            An analogous situation today would be for all able bodied men to be in the National Guard. 

            Anyway, I find the stories of the Battles of Crecy and Agincourt, where English speaking yeoman archers beat French speaking armored Knights, heartwarming.

        • Ketil says:

          What about medieval city-states? Trade-oriented, mostly non-feudal, competitive, literate. Surely they could develop technology, regardless of the state of peasants in remote provinces?

          And there are other weapons that allow peasants to fight knights effectively: pikes and crossbows spring to mind. Perhaps what stalled the IR so long was the second Lateran council, which in 1339 prohibited the use of crossbows against Christians?

          More: who were the people inventing and developing technology? I haven’t really investigated, but it seems to me they were typically creative and industrious souls from well-off backgrounds. Perhaps a major factor driving the IR was a class of merchants and tradesmen with enough wealth that their offspring could afford the luxury of developing their trade? Feudal lords would increase their income by draining their peasants, and would send any scholarly sons off to a monastery to copy old manuscripts in order to buy themselves some heavenly cred. It wouldn’t occur to them to think about better ways to drain an iron mine of water or weave textiles. Regardless, I think a mercantile middle class might arise without gunpowder.

          • AlphaGamma says:

            AFAIK the Second Lateran Council (of 1139, not 1339)’s prohibition on crossbows isn’t quite what it is often stated to be. It does not refer to crossbows, only to the “art, which is deadly and hateful to God, of ballistarii and sagittarii”. These are clearly two different types of troops armed with missile weapons: sagittarii are archers, while ballistarii could be crossbowmen but are usually translated as slingers.

            So, according to the usual English translation, the Council prohibits the use of essentially all missile weapons against Christians. There is no indication that this was at all honoured- and in fact, some commentators have said that the relevant Canon is actually a prohibition of archery contests, in line with the same Council’s ban on jousting.

            Another proposed explanation is that at the time of the council, the Papacy was at war with King Roger II of Sicily (who was excommunicated at the council). Roger used large numbers of Muslim mercenary archers. So by prohibiting the use of missile weapons against Christians, the Council allowed the Pope’s allies to shoot at Roger’s (largely-Muslim) army, while anathematising his Christian officers if they ordered their archers to shoot back…

      • albatross11 says:

        If somehow gunpowder wasn’t workable (say, sulfur was really rare and hard to get), I expect that:

        a. People would have developed alternative weapons, as the pre-firearms weaponry in Europe was improving over time. (Armor got a lot better, for example.) Maybe they’d have found some other chemical explosive that worked well enough to take the place of cannons and hand weapons, maybe they’d have developed along very different lines (steam-engine-powered catapults, say). But there were large rewards available for developing better military technology in Europe, and that would have continued.

        b. The rest of the industrial revolution would have been workable anyway. Guns aren’t all that important a part of the story. For example, you could learn production lines from mass-producing crossbows and mail shirts.

        c. Europe conquering a lot of the world probably funded a fair bit of the industrial revolution, but I suspect better military organization + whatever technology they came up with instead of gunpowder weapons would have been enough to conquer the locals in most places.

        d. More generally, I think the Civilization style technology tree is a bad way of looking at the history of technology. Way more choices were possible than we pursued, everything is path-dependent as hell, and if we could run history back a thousand years and try again, I suspect we’d often see *very* different paths and eventual kinds of technology.

        • Europe conquering a lot of the world probably funded a fair bit of the industrial revolution,

          I wonder if that’s true. The first big influx was gold from the New World. That came into Spain, which wasn’t where the industrial revolution happened. It enriched Spain at the cost of other parts of Europe, since the effect was to inflate everyone’s money, but I don’t think it made Europe richer in any relevant sense.

          As best I can tell, it’s an open question whether the British Empire ran at a profit, as Orwell assumed, or at a loss.

          • ana53294 says:

            It enriched Spain at the cost of other parts of Europe, since the effect was to inflate everyone’s money, but I don’t think it made Europe richer in any relevant sense.

            Did it, though?

            While Spain did dominate the world for a while, the people in Castile, Aragon, Catalonia, Andalusia, and other parts were not that much better off. There were multiple bankruptcies of the Crown, massive inflation, hunger, rebellions, limitation of religious freedom, etc. And all of that was financed by American gold. Maybe if Philip had had to finance expensive wars in Germany for the Holy Roman Empire by taxes in Spain*, the whole silly thing would have been stopped earlier by a rebellion in Spain that this time succeeded (there were several, and Portugal’s did). Instead, they used American gold to oppress the Spanish people. On the whole, American gold was not used for anything useful (roads, bridges, paying off debt, making the roads safer). It was instead used on a multitude of empire-building silly nonsense such as the Thirty Years’ War.

            There is a reason for the term resource curse.

            *He did, but then he also had the American gold to suppress any rebellion.

          • albatross11 says:

            Yeah, this is a good question. It seems like increasing trade networks + more resources fed into having enough wealth to build up industry. But I don’t know how the balance worked out for actual colonialism or conquests. Except that the European conquest of the Americas (and later, Australia) was a huge win. Maybe not immediately for the industrial revolution, but definitely in terms of huge numbers of Europeans getting to move to places where there was lots of land and not so many hungry mouths to feed and room to expand for several generations.

          • Matt M says:

            huge numbers of Europeans getting to move to places where there was lots of land and not so many hungry mouths to feed and room to expand for several generations.

            I always thought this was the predominant theory that explained why, in the end, it was England who benefited primarily from the new world and not Spain.

            Spain made a calculated error in thinking the best thing to do with the discovery of the new landmass was to extract gold (and other resources) from it – while the English focused on colonization and settlement for their own sake.

            Surely the piles of gold coming over helped Spain in the short term, but in the long term, resulted in unsustainable policies that led to ruin.

          • cassander says:

            @Matt M

            Spain made a calculated error in thinking the best thing to do with the discovery of the new landmass was to extract gold (and other resources) from it – while the English focused on colonization and settlement for their own sake.

            You’re ascribing too much intentionality to both countries in this case. English colonialism was carried out largely outside the control of the english government, at least at first. And the spanish didn’t set out to find gold; they found lots of gold at almost the same time they found the indies, with conquistadors operating under absolutely minimal direction from spain. And even if they had wanted to do differently, they probably wouldn’t have been able to, because north america wasn’t suitable for spanish-style resource extraction and latin america mostly wasn’t suitable for english-style settlement. You’ll note that english colonies in the caribbean had a lot more in common with spanish caribbean colonies than either had with north america, or coastal latin america.

          • Ketil says:

            There’s no point in a textile mill if your market is your local village and your wool supply is the local sheep. What conquest, and especially colonization, did was increase trade. This meant an influx of raw materials, but also routes out to larger markets. Britain was already a major wool cloth exporter in the 1600s, which I expect they leveraged to build up cotton import, manufacture, and export.

          • cassander says:

            @Ketil

            when you look at European trade in the period in question, colonial trade is a tiny fraction of the overall effort. The trading companies are sending a few dozen ships a year to the indies, but they’re sending hundreds to haul pickled fish and timber around. Colonial goods were higher value, but even when you get it down into dollars and cents, the colonial trade is dwarfed by more boring stuff. The technology developed for long-distance trade unquestionably aided mundane trade, but I don’t think you can cite the colonies as truly expanding markets all that much.

        • Nornagest says:

          Maybe they’d have found some other chemical explosive that worked well enough to take the place of cannons and hand weapons

          Nitrocellulose or nitroglycerine seem most likely. All you need for them is plant products and nitric acid, the latter of which doesn’t take any rare elements and was known as early as the 13th or 14th centuries. In our timeline they weren’t invented until the 1800s, but there’s no technical reason why it couldn’t have happened much earlier.

          If you have nitrocellulose and some kind of stabilizing additive, you have smokeless powder. But you need much better metallurgy for a gun based on smokeless propellants than on black powder (they’re far more powerful per unit volume), so maybe weapons would have evolved in a different direction if smokeless powder had been invented first.

          • John Schilling says:

            All the easy ways to make nitric acid require sulfuric acid, so if we’re handwaving away gunpowder by making sulfur hard to come by, you don’t get nitric acid and nitro explosives until late in the industrial era.

          • But you need much better metallurgy for a gun based on smokeless propellants than on black powder

            Why can’t you just use a smaller volume of propellant to get the same effectiveness with the same metallurgy?

          • sfoil says:

            Smokeless powder burns much faster than black powder, so for any given projectile weight smokeless charges reach peak internal pressure faster than their black powder equivalent. The slope of the pressure curve (impulse) is the main problem, and the only way around it is to have tiny charges (limiting you to pistols).

            The other problem — that I’m a little more shaky on in physics terms — is that I think the relationship between chamber pressure and projectile velocity isn’t linear because of the bullet’s inertia and friction. This contributes to the impulse problem above, but it also means that — wild handwaving here — given a smokeless powder that burns 3x faster than blackpowder, you need more than 1/3 as much smokeless powder to reach the same velocity as the blackpowder counterpart.

            You should be able to get around this somewhat by using low-mass, low-friction projectiles. But the benefits of toning down rifle cartridges weren’t noticed until way, way after the IR.

            Edit — here’s an illustration someone made about why you should be wary of loading smokeless powder in antique revolvers, though I have no idea how reliable the measurements actually are:
            http://img.photobucket.com/albums/v495/Driftwood_Johnson/Pressure%20Curves/pressure_curve.jpg

          • Nornagest says:

            Why can’t you just use a smaller volume of propellant to get the same effectiveness with the same metallurgy?

            It’s internal pressure, not muzzle energy, that matters w.r.t. barrel weight and metallurgy. And a more energetic propellant packed into a smaller volume means higher maximum pressures for the same energy.

            There are things you can do to mitigate this somewhat, like forming your propellant into larger grains, but you’re starting from a higher baseline.
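
            The burn-rate point in this subthread can be illustrated with a toy lumped-parameter interior-ballistics sketch. The physics is deliberately crude (linear energy release, ideal gas, no friction or heat loss) and every number is made up for illustration:

```python
def simulate(burn_time, e_total=3000.0, v0=3e-6, area=5e-5,
             m_bullet=0.01, gamma=1.25, barrel=0.6, dt=1e-7):
    """Toy interior ballistics: the charge releases e_total joules linearly
    over burn_time into the volume behind the bullet; the gas pressure
    (ideal gas, P = (gamma - 1) * U / V) accelerates the bullet down the
    barrel. Returns (peak pressure in Pa, muzzle velocity in m/s)."""
    x = v = t = 0.0
    p_peak = 0.0
    while x < barrel:
        released = e_total * min(t / burn_time, 1.0)
        # energy still in the gas = released energy minus bullet kinetic energy
        u = max(released - 0.5 * m_bullet * v * v, 0.0)
        p = (gamma - 1.0) * u / (v0 + area * x)
        p_peak = max(p_peak, p)
        v += (p * area / m_bullet) * dt
        x += v * dt
        t += dt
    return p_peak, v

fast = simulate(burn_time=1e-4)  # "smokeless-like": all energy in 0.1 ms
slow = simulate(burn_time=1e-3)  # "black-powder-like": same energy over 1 ms

print(f"fast burn: peak {fast[0]/1e6:.0f} MPa, muzzle {fast[1]:.0f} m/s")
print(f"slow burn: peak {slow[0]/1e6:.0f} MPa, muzzle {slow[1]:.0f} m/s")
```

            Both charges deliver the same total energy and muzzle velocities in the same ballpark, but the fast burn dumps its energy before the bullet has opened up any volume, so the peak pressure the breech must contain is far higher. That peak, not the muzzle energy, is what sets the metallurgy requirement.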

          • John Schilling says:

            Smokeless powder burns much faster than black powder, so for any given projectile weight smokeless charges reach peak internal pressure faster than their black powder equivalent.

            There are double-base rocket propellants that are basically the same stuff as smokeless powder, but take as long as a minute to release their energy. And, conversely, fine-grained black powders used only for priming because they will burst most any barrel. Chemistry + morphology + inert diluents gives you enough trade space to work with, I should think.

          • sfoil says:

            You know more than me about rocket propellants, but it seems it would take an awful lot of diluent to get any “smokeless powder” down to black powder pressures. There would have to be some way to ensure the mixture was and stayed homogeneous so my barrel doesn’t blow up because of lumpy distribution. I don’t know if it’s reasonable for a knowledge & manufacturing base to exist with that level of understanding and control of chemical formulation while still being stuck with pre~1870 metallurgy.

        • cassander says:

          c. Europe conquering a lot of the world probably funded a fair bit of the industrial revolution,

          I don’t think this is really true, at least not in a classic exploitation model. As David Friedman points out, only the Spanish got rich by sheer exploitation, and if anything it retarded their industrialization. It was the Dutch and English that started off the revolution, and, in the period in question, they mostly conquered outposts from other Europeans. That’s not to say that colonies weren’t economically important (they certainly were), but they were important because they expanded networks of trade, improved specialization, all that fun stuff, not because the Europeans were able to plunder native wealth to build factories.

      • baconbits9 says:

        A strong case can be made that progress in military organization spurred by the need to adapt to the new weapons was a big factor in the development of the strong centralized governments which enforced consistent property rights and laid the foundations for sophisticated economies.

        I think this is putting the cart before the horse: you need a stronger economy to support new technology and weapon-making, as well as to support such a government. An agrarian revolution of some kind has to already be underway to allow the technological change to take hold.

        • Deiseach says:

          From a very scrappy recollection of a pop historical series, what seems to have helped the Industrial Revolution get started in Britain was the use of coal as a fuel. Now instead of wood or charcoal, you had this new high-energy (relative) source of heat. Tiny snag – you need to pump water out of your mines. Lots of water. Need big pumps to do this. Make steam engines to do the pumping, and you’re starting to get places – instead of having to rely on human and animal labour, you have machine power. This lets you mechanise production and reduce labour costs and need for skills – you don’t have to be an experienced weaver, you just have to learn how to operate the factory loom, and even women and children can be employed. Tie in that with the canals, where you can get your coal to your factories and get your factory products to large markets, and things begin to take off.

          (Looks like John Green has a quickie run-through which comes to the same conclusions though he trips over himself not to be “Euro-centric” and goes “India and China totally could have had their own Industrial Revolutions except the Brits had coal near the surface easy to mine”. Yeah, sure, John. Entirety of China had no easy coal but tiny by comparison Britain was picking it out of their morning porridge.)

          • baconbits9 says:

            Tiny snag – you need to pump water out of your mines. Lots of water.

            I think this is true at a certain point, but early coal mining and use relied on exposed seams that didn’t require much actual mining. It isn’t until you have the right combination of easily accessible coal, plus technological growth from using that coal, that you get to a demand point where aggressive mining is profitable/possible.

          • Watchman says:

            But coal itself wasn’t a new discovery. Surface coal had been burnt in Britain at least since the High Medieval period. I’d need to check, but I’ve got a feeling that most of the accessible seams were given to monasteries to help keep them going. If so, this offers another possible cause of the industrial revolution in England: monastic land was sold off by Henry VIII after the dissolution of the monasteries. The subsequent great estates were probably less effective at exploiting resources but were, unlike monastic estates, prone to division and sale, so the land gradually came into the hands of those who needed to exploit resources effectively. And when coal became valuable, this enabled investment and competition between multiple landowners, which meant mining to get maximum revenue.

            It’s a theory, anyway…

          • albatross11 says:

            I think coal mines were early places for the use of steam engines (really low-efficiency ones, but you’re at a *coal mine*, so fuel isn’t so hard to come by) and also for railroad tracks/cars. That seems like a pretty important combination.

          • ADifferentAnonymous says:

            The theory I’ve heard is that urbanization drove demand for coal. When people are all spread out on farms, they can go cut down trees when they need to burn something, but a big city quickly uses up the nearby timber, and coal, being more compact, is easier to ship long distances.

    • bean says:

      I don’t see handheld guns not showing up if cannon exist. “Hey, what if we make this small enough that one man can carry it?” is not a complex thought. I can’t say as much about what would happen if gunpowder hadn’t been invented.

      That said, weapons don’t seem to have been a serious driver of the industrial revolution. In a lot of ways, the modern military-industrial complex, where the military and industry were working together to keep the military on the cutting edge of technology, didn’t show up until the second half of the 19th century. Obviously, the weapons of, say, the American Civil War were influenced by the industrial revolution, but to the naked eye, they weren’t that different from the weapons of the Revolutionary War. (The biggest difference IIRC was rifling being fairly cheap and easy.)

      At sea, where I’m more familiar with things, the leading edge of warship weaponry in 1860 was fairly similar to that of the Napoleonic Wars, only bigger. That changed a lot over the next decade, though. The civilian world was the primary driver of the development of the marine steam engine, although the RN at least was not as conservative as it’s usually portrayed to be. Iron ships were resisted by the RN (not entirely unreasonably, although that’s a story for another time) and the first generation or two of ironclads were armored with fairly normal iron from civilian production.

      • Nornagest says:

        The biggest difference IIRC was rifling being fairly cheap and easy.

        Cost was probably a factor, but rifling had been common for a while, especially in the civilian world — it was especially useful for hunting, where accuracy was important and rate of fire was less so. The big difference between Civil War-era and Revolutionary War-era rifles was that loading the former was much faster — muzzle-loaders were still most common, but now they used asymmetrical projectiles that could easily be rammed down the barrel but expanded when fired to engage the lands.

        Previously, rifles had used a symmetrical ball that tightly fit the barrel, and so had to be slowly and painfully wedged down it — a much harder job than loading a smoothbore musket. They also tended to leave residue in the barrel, leading to problems with fouling.

        • bean says:

          AIUI, the development of the Minié ball was only part of the reason for the wide adoption of rifling. (Which was not limited to small arms, IIRC.) The other aspect is that rifling is a nontrivial amount of extra work (going from comments in the Safehold series), which meant that smoothbore muskets were significantly cheaper.

    • Nabil ad Dajjal says:

      Disclaimer: I am not a historian, a lot of this is based on what I understood from Civilization and Capitalism.

      As far as I know, gunpowder was only ever invented once. It might have been invented eventually if whoever originally invented it hadn’t done so, but given that it wasn’t reinvented during the centuries it took for it to spread to Europe that might have been a very very long time.

      The industrial revolution, as I understand it, wasn’t dependent on any singular invention but on a slow process of improvements in milling in Europe. There were recognizably modern factories powered by water wheels before the first practical steam engine. And key technologies like the steam engine were reinvented several times each.

      So I could see a hypothetical world where gunpowder was never developed in China but the industrial revolution happened at roughly the same time in Europe. European or American chemists would eventually discover gunpowder but there could have been time in between.

    • Watchman says:

      I’m going to go with my default answer to this sort of question and say, on the observable evidence, about zero chance.

      Note that gunpowder is not a known precondition of an industrial revolution, which at base needs an agricultural revolution and the prior conditions for capitalism, neither of which requires a powder that goes bang and can be directed as a weapon. But assuming a reasonable preponderance of the required materials, our evidence suggests gunpowder can be discovered at least one and a half millennia before an industrial revolution. So if we’re dealing with creatures with the ingenuity of humans (I’d argue a precondition for the preconditions of an industrial revolution), it seems pretty well certain to be discovered and to spread, as it has in the historical record.

      Looking at this functionally, there are three broad areas of use for gunpowder: making pretty colours and loud noises; moving large objects (including castles and things like that); and killing people. The reason gunpowder developed is that it beat any existing technology in each of these fields over time. The only logical way that gunpowder would not be developed, therefore, would be where magic (apparently good for these activities as well, according to my bookshelves) is present and widely available; even then, gunpowder weaponry would perhaps be useful for non-magic users, albeit it might not be possible without the earlier applications.

      So, to quantify my almost-zero chance: it assumes that there are suitable materials to make gunpowder in sufficient quantities (presumably the case for any planet similar to Earth) and that magic does not exist (magic being not particularly likely). If we think a hive mind or the like could manage an industrial revolution, then perhaps the chances would be higher.

      • Matt M says:

        The only logical way that gunpowder would not be developed therefore would be where magic (apparently good for these activities as well according to my bookshelves) is present and widely available

        Finally, someone answers the age-old question of “Why didn’t they just shoot Voldemort with an AK?”

    • helloo says:

      One thing people have not mentioned – metallurgy.

      Remember that originally gunpowder was used not in metal cannons or such but in bamboo rockets/fireworks. And there were no “bamboo guns” that I’m aware of.

      There are quite a few light novels that presume that if someone was sent to a fantasy world or back in time, they could just whip up a gun with knowledge of gunpowder and a blacksmith, but it took a while before the ability to create a barrel strong enough to withstand the explosion (and not crack or warp) came about. And while the knowledge of gunpowder surely spurred such advances, it’s not improbable to imagine a world where that never came to pass.
      If we assume a world that is so metal-poor, or that otherwise did not develop the metallurgy that could create cannons or guns, then… the Industrial Revolution would probably also be rather troublesome for them, but hardly impossible.
      There aren’t any specific functions of guns/cannons that can’t be replicated by other technology. Bombs can still exist to break fortifications, and crossbows can function fine to match the ranged capabilities of a gun.

    • The Nybbler says:

      I don’t think gunpowder is in any way necessary for an industrial revolution. But the stuff’s not hard to discover, so if you don’t have it yet, the systematic thinking required for an industrial revolution is going to provide it early on. I can see going straight to guncotton, perhaps.

      I think cannons without handheld guns would be less likely. The limit there is metallurgy, and you probably DO need that to get your industrial revolution going.

    • James C says:

      I’d say it’s definitely possible; much of the industrial revolution was an outgrowth of social and financial changes rather than military ones. To my understanding the era of the gunpowder revolution pre-dated the economic revolution by a century or so. Now, you can argue that paying for all those horrendously expensive cannon and muskets was what drove the economic restructuring that led to the right conditions for industrialization… but that goes on forever. We have, after all, just one data point on spontaneous industrialization. At a guess, you could have that degree of economic sophistication without firearms, but firearms are a likely early outgrowth of the sophistication.

      If you’re willing to be flexible on ‘people’ to include non-humans, though, I’d say definitely. We live in a pretty narrow range where combustion is effective without being too effective and anything out of that band is going to have a hard time with firearms until their knowledge of chemistry is significantly beyond that of Renaissance Europe.

    • sfoil says:

      I don’t think gunpowder was necessary at all for the Industrial Revolution. While the need to manufacture high-quality precision metal parts for firearms was pretty important for developing machine tools, in a world where firearms didn’t exist they could just as easily have gone into manufacturing crossbows or steam cannons (liquid-fuel firearms?). Gunpowder looks like it was only discovered once — it’s entirely possible that never happens, and it never shows up until chemistry is understood much better post-IR.

      I think the best case that gunpowder had anything to do with the IR is that cannon manufacture drove foundry development in a way that bell casting didn’t because while huge bells are cool they aren’t as useful as something that can knock over a castle. I don’t think there’s any plausible world in which cannons exist but handheld firearms don’t.

    • cassander says:

      Gunpowder is a weird technology. Gunpowder doesn’t lead directly to guns: guns are a surprisingly non-obvious use of gunpowder, which was used in literal firearms for centuries before you got proper guns. Now, once you have an industrial revolution, you’ll get explosives, which will lead you to guns pretty quickly even if it’s not with gunpowder, but there’s no real technological reason that you need gunpowder.

      That said, I think this understates the social importance of the gunpowder revolution. Dramatically increasing the capital intensity of war was important in establishing larger and more stable states. Guns helped establish the concept of technology and improving technology in a way that didn’t exist previously, which was hugely important to the industrial revolution. It raised the status of mechanics and tinkerers and created huge demand for better sources of fuel. I really do think gunpowder and the changes it wrought, if perhaps not strictly necessary, certainly were a huge accelerant for the industrial revolution.

      • sfoil says:

        I’m trying to think of ways that could have occurred without gunpowder. Steam-pressurized pneumatic guns maybe. Motor transport came too late to make sense, but powered water transport didn’t. Maybe even railroads.

        Discovering high explosives before deflagrating gunpowder doesn’t seem plausible but if it occurred it would mean launching bombs from catapults and the like.

        • cassander says:

          Pressurized systems were not unknown in the pre-industrial world. Greek fire was shot out of hoses with some sort of pressurization system, the details of which are debated.

        • engleberg says:

          If we’d bred a bamboo with walls tougher than ironwood we’d have got piping for air and water as early as the late stone age. Maybe no harder to breed than Egypt breeding the pharaonic chicken.

    • John Schilling says:

      If you’ve got humans, and especially if you’ve got humans + livestock, then you’ll discover potassium nitrate. If you’ve got organic matter generally, you’ll discover charcoal. If you’ve got humans + surplus wealth of the sort an early industrial society will generate, you’ll have pyromaniacal quasi-chemists mixing together every possible combination of not-vanishingly-rare compounds with known incendiary properties to see what happens.

      That leaves sulfur. If you can tweak the geology so that sulfur (including pyrites, etc.) is vanishingly rare throughout your civilization’s domain, then you can probably postpone the discovery of gunpowder until late in the industrial age. And then they’ll probably go straight to the nitro explosives and smokeless powder. As others have noted, you can probably get a full industrial revolution with “only” steel and steam and coal and machine-woven textiles and all the rest.

      • The Nybbler says:

        Leaving out sulfur loses you a lot of ores, including galena and most copper ores. Going to be tough to get to the bronze age, never mind the industrial revolution. I think copper oxide ores are more difficult to smelt… and use sulfuric acid in the process, though maybe there’s some older process which works for them.

    • ilikekittycat says:

      If there are no cannons, the doctrine of using castles for tactical defense/strategic offense never becomes obsolete: you can just build a huge thing wherever you need to keep control, and unless some state has a whole siege apparatus to contest your position, you’re good. Castles never stop being the primary seat of civil administration (in the real world they got replaced by star forts/artillery forts, which were just military structures without real civil power), and outside of maybe Holland/Italy you won’t see mayors and town councils and that kind of government start to become powerful in its own right.

      The aristocracy never has to deal with the bourgeoisie getting rich and powerful enough to start challenging them; they can just ride out and plunder them back down to size as necessary and go back to living in safety.

      No bourgeois reordering of society means no capitalist mode of production and, finally, no industrialization. At least in our timeline.

      • albatross11 says:

        Could we use steam engines + catapults/trebuchets or compressed air to make siege engines that could knock down old style castle walls? I suspect so.

      • James C says:

        This seems to be more an argument that Europe wouldn’t industrialize, rather than it not being possible. Castles were nowhere near as prevalent outside of northern Europe, even in places as close as Italy the city remained the predominant unit of political organisation.

        • ana53294 says:

          Spain, which is definitely in the south of Europe, is full of castles built during the Reconquista. Other European countries did not have Moorish invasions, but I am not sure cities were that important throughout southern Europe.

    • ADifferentAnonymous says:

      Pop history says that feudal Japan effectively banned firearms as a weapon of war until Commodore Perry made it clear why that was a bad idea.

      Cursory research suggests this is oversimplified if not outright false, but the idea stands. Maybe the first place industry developed could be a place that intentionally shunned gunpowder weapons?

    • idontknow131647093 says:

      I think the only realistic scenario would be one where sulfur is not accessible by the same levels of mining skill that are required for Copper, Iron, etc. Gunpowder is really easy to make compared to accurate casting and forging of iron/steel that was needed for the industrial revolution.

  11. Ninety-Three says:

    In Charles Stross’ The Traders War, we see an alternate history Earth where the industrial revolution never happened and America is operating at a strange Victorian+ level of technology. The most remarkable part of this is they have managed to invent nuclear bombs despite still operating on the corpuscular theory of light. Given how much work it was to develop nuclear weapons even with all the correct physics, this struck me as a bit of a stretch, but is it really?

    Once you have some enriched uranium sitting around, I could imagine noticing that it reacts violently when you put too much of it in one place, the reaction is more violent the more of it there is, and that seems sufficient for a team of dedicated engineers to try putting more and more of the stuff together, explosively compressing it and so on until they’ve got an atom bomb. But “Once you’ve got some enriched uranium sitting around” seems like a tall order. How much physics would people need to know in order to identify uranium enrichment as something not only possible but worth bothering to do?

    • bean says:

      Lots of problems here.
      1. An accidental atomic bomb isn’t really plausible. Even if you did get a bunch of HEU for some reason, you’re going to see something more like the Demon Core accidents than an actual bomb if you make the pile too big. Those are irritating, but not likely to point to a bomb.
      2. I don’t think you’d have any reason to enrich uranium to the required level without knowing about nuclear weapons. Uranium enrichment is hard.
      3. How do you even know what enrichment is if you don’t understand isotopes? I’m reasonably sure that the necessary experiments to understand atomic structure also disprove the corpuscular theory. Not a physicist, but physics is fairly tightly coupled.
      4. You can’t do enrichment on Victorian tech. I looked into this a while ago, and even gaseous diffusion requires serious manufacturing for the diffusion barriers themselves. Issues there will compromise your enrichment factor, and that blows the size of the plant from “incredibly massive” to “no way at all”.
      5. What does Victorian+ tech without an industrial revolution mean? Victoria came to the throne in 1837, and by that point, the industrial revolution was well under way. The more traditional Victorian era is at least 20-30 years later, and by that point, they had ironclads. (And other things, but I think in naval eras. You get my point.)

      • John Schilling says:

        You can’t do enrichment on Victorian tech.

        You might be able to do graphite-moderated natural-uranium breeder reactors and once-through pyroprocessing to extract plutonium from the spent fuel elements. But it would be very expensive, very dangerous, and very much not going to happen unless you have a sound theoretical and experimental basis for understanding what you’re going to get out of it. So, no atom bombs based on fundamentally wrong theories of particle physics.

        If you somehow freeze engineering at a Victorian level for a century or two, maybe the scientists would give you that theory and then the Crown funds the London Project for the glory and power of the British Empire. But then, why is it that whatever magically freezes the engineering then stands aside and lets you do this, when all the lesser feats would offer more immediate payoff?

        • bean says:

          Granted that Pu bombs are in theory viable with Victorian tech. I’m well aware of the process (I’ve seen a Pu breeder reactor up close), but skipped it for time and because the OP didn’t mention it.

    • Andrew Hunter says:

      As a pedantic point, while they missed the industrial revolution, it’s not clear they don’t have modern atomic theory or relativity.

  12. bean says:

    Naval Gazing finally has come back to auxiliary ships, looking at the proliferation of types built to support the Pacific offensive in WWII.

  13. DragonMilk says:

    I am experimenting with marinades!

    What are the best (in terms of forgiveness/ease of execution/commonness of ingredients to tastiness) recipes you know for the ziplock bag technique for:

    Steak
    Pork cutlets
    Chicken

    I like to experiment, and tried making steak for the first time in a pan on the stove. I put soy sauce, pepper, oil, and garlic powder in a bag, and the garlic powder ended up burning in the frying pan for a very blackened steak. Lesson learned!

    • A Definite Beta Guy says:

      The absolute easiest are those pre-packaged ones that Lawry’s makes. They come in containers that look like salad dressing and are reasonably cheap on sale. 😉

      I don’t marinate much anymore (I prefer dry brines), but I follow SAF.
      S – Salt
      A – Acid (wine or vinegar usually)
      F – Flavor (whatever flavor you want to infuse)

      I particularly like chicken in red wine. I take a whole leg quarter and throw it in a bag with a crap ton of red wine, a lot of salt, and the standard trinity of celery/carrot/onion. You’ll marinate it for 8-24 hours, then throw it in a Dutch oven and sear on high heat. Then you’ll add in MORE red wine, new celery/carrot/onion, garlic, and some rosemary/thyme/bay leaves, and simmer for 1.5-2 hours.

      For quick marinades, I pound out some chicken breast and throw it in some Italian dressing with some extra salt and vinegar for a few hours.

      I love rosemary and thyme, so I would throw those in a lot of different marinades. You don’t get a TON of flavor, so I always add more when I actually cook them, but it’s pretty delicious.

      Soy sauce is a great addition. Specks of dry spices, as you now know, can very easily burn when subjected to high heat.

      If you want to do fried chicken, marinating in buttermilk is essential, and incredibly easy. You throw it in buttermilk. Come back in 24 hours, rub it in your flour/spice concoction, and then throw it in some shortening in a cast iron skillet.

    • Matt M says:

      I often did something very similar to yours that seemed to work well. Something like 3 parts vegetable oil, 1 part soy sauce, with a generous sprinkling of salt and garlic pepper.

      If steak, I would substitute the pepper with a generous sprinkling of McCormick’s Montreal Steak seasoning, which is some great stuff.

    • Nornagest says:

      Chicken adobo. It’s dead simple: skin-on chicken, plus peppercorns, bay leaves, and plenty of garlic. Marinate for at least a few hours in equal parts soy sauce and vinegar (use cane vinegar if you can find it, otherwise white or distilled is fine), a bit less than half a cup each per pound of meat. Then simmer covered for half an hour, uncover, and cook until the sauce is reduced.

      Especially well suited to situations where you have to feed a lot of people, since the ingredients are cheap and it scales up pretty much indefinitely. Serve with rice.

    • dndnrsn says:

      Season a steak with your favoured steak spice. Toss it in a container where it can lie flat and then dump enough Worcestershire sauce (Worcestershire sauce is one of the products where the name brand is better; the ingredients list on the store brand stuff is usually pretty junky-looking) over it to mostly cover it. Let it sit for a couple hours, flip it over, another couple hours. Take it out and cook it. If you cook it on a barbecue, you can pour the leftover sauce over it.

      EDIT: If you’re not cooking on a barbecue, the extra stuff should be tossed, because it’s got meat juice in it now, and if you pour it over the steak it will gunk up your pan.

      • Nornagest says:

        If you’re not cooking on a barbecue, the extra stuff should be tossed, because it’s got meat juice in it now, and if you pour it over the steak it will gunk up your pan.

        The gunk is a resource! Recently I’ve been making a lot of pan sauces for my steaks and other pan-fried meats. The basic procedure is to remove the meat when it’s done, pour off most of the fat if there was a lot of it, and then throw in the leftover marinade, whatever spices and finely chopped aromatics you feel like, and enough extra liquid (stock or wine work well) to just flood the pan. Simmer until reduced, stirring constantly; optionally, mix in butter or flour or both to thicken. It’ll convert the crunchy bits on the pan into a delicious sauce, which you can pour over the meat before serving, and make the pan much easier to clean.

        Works especially well if you’re cooking on cast iron, which I’ve been doing a lot of lately. You can do it for roasts, too. As an added bonus, it makes you look like a gourmet chef to anyone you’re feeding it to.

        • A Definite Beta Guy says:

          This requires clarification. Are you talking about using the leftover bits in your pan AFTER cooking or re-using your marinating liquid?

          Your marinating liquid is contaminated. It touched raw meat. You basically need to boil it. Your pan drippings can be added directly into whatever sauce you are making.

          Also, suggestion on your sauces. I used to add the flour and reduce directly in the cast iron, but you might find better results using a saucepan. It sucks because you need to clean a dish, but it’s much more precise (IME). Heat up a saucepan, make a roux in it, and then add your pan drippings (along with chicken stock). I’d recommend giving it a shot and seeing if you like it. I had enough bad pan sauces in the cast iron that I wanted something more consistent.

          • Nornagest says:

            Are you talking about using the leftover bits in your pan AFTER cooking or re-using your marinating liquid?

            You can just do the former and it’ll work, but I like doing both. I hate to waste delicious meat juice.

            You basically need to boil it.

            Which is why you’re reducing it. The word’s a little imprecise, granted — you can reduce sauces by simmering without bringing them all the way to a boil — but if it’s not boiling at least temporarily it’ll have a hard time dissolving the fond, and anyway a cast-iron pan that’s hot enough for frying will immediately boil the quantities we’re talking about.

      • DragonMilk says:

        What sort of things do you put on at the end of cooking vs. in marinade?

        For the longest time I thought Worcestershire sauce was like a better A-1 steak sauce to be put on after the steak is done.

    • SamChevre says:

      Classic lemon pepper chicken.

      1 part good olive oil
      2 parts freshly squeezed lemon juice and grated zest
      lots of coarsely ground black pepper
      1/2 tsp salt per pound of chicken

      Add chicken thighs or leg quarters, marinate between an hour and all day; all day is better.

      Cook under a HOT broiler, and baste with some of the marinade.

  14. dndnrsn says:

    RPG thread: Orc Inflation and its pernicious effects.

    Through all editions of D&D through 2nd ed AD&D – so, in all TSR versions – orcs are basic opponents; the platonic ideal of the 1st level party (Cleric, Fighter, MU, Thief or whatever) can take on 4+ orcs in an encounter that will probably be middling in difficulty.

    In 3rd ed, suddenly enemies have stats, and the orc gets tougher, because now they’ve got an attack bonus of +4 due to HD and STR instead of an attack bonus of +1 due to HD. They’re now CR 1/2, meaning 2 orcs is a moderate challenge for the default 1st-level party. In 4th ed, it’s more complicated, because 4th ed threw in all sorts of poorly-thought-out and dissociated rules. An orc “drudge” is a “level 4 minion”, while a warrior is a level 9 minion. Minions always go down in one hit but may have powerful attacks depending on level, may be hard to hit, etc. A 1st-level party is not meant to be tangling with orc warriors, and even the drudges have decent to-hit and AC. In 5th ed, much like 3.5, an orc is Challenge 1/2: 2 of them is a moderate threat to a 4-person party. So an orc in WotC versions of D&D is at least 2x stronger than an orc in the TSR versions.

This is bad because it contributes to the miscalibrated expectations regarding D&D levels. Heroic fantasy characters might do things like take on a half-dozen orcs each. For a party of 4 to take on 24 orcs in 5th, that means a party of 13th or 14th level (at least, the DMG p82 says that a medium encounter is 2.2k xp for a 13th level party, 2.5k for a 14th, and orcs are 100xp ea). However, in terms of character abilities, survivability, and especially magic, something in the area of 5th or 6th level fits a lot better for most heroic fantasy. A 14th level PC is really powerful.
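The level arithmetic here can be checked against the figures quoted from the DMG (a rough sketch using only the numbers cited above, not the full encounter-building rules, which also apply multipliers for large groups of monsters):

```python
# Orcs per medium encounter, using only the budgets quoted above:
# DMG p82 medium-encounter budgets (XP) and the orc's XP value (CR 1/2).
medium_budget = {13: 2200, 14: 2500}
ORC_XP = 100

for level, budget in medium_budget.items():
    print(f"Level {level} party: ~{budget // ORC_XP} orcs")
```

The 22-25 orc range these budgets buy is what puts a 24-orc fight at 13th or 14th level.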

Especially magic. A 14th level wizard can teleport a group to anywhere on the same plane, a 14th level cleric can resurrect the dead. Most fantasy games do a really, really bad job of reconciling the world building with the amount of magic that tends to be available in fantasy games: far greater than one finds in most heroic fantasy novels (a 13th or 14th level D&D party makes Frodo and co look like a bunch of wimps as far as magic goes). The more people who can do magic running around, and the higher their average level, the more your game becomes either some kind of weird magepunk game where communication and travel are very easy, kings are brought back from the dead, the rich can escape the plague, and war is about wizards on giant enchanted birds raining fireballs on enemy cities while golem-powered tanks wait for the Druid Corps to arrange a deal with the water nymphs so the river can be forded (which is all cool, but hard to do!) – or it’s fundamentally unbelievable because, despite the possibility for those things all clearly being there, it’s still some kind of medieval pastiche or whatever.

If orcs are around 1-for-1 with 1st level PCs, or a little weaker, then a 5th or 6th level party can fight a couple dozen orcs no problem. If orcs are tougher, then to have PCs do the sort of thing that heroic fantasy characters are expected to do – fight a bunch of individually weak enemies – the PCs have to be of a level where in other ways they are considerably more powerful than heroic fantasy characters ordinarily are. To be able to fight a couple dozen orcs, you also have to be able to teleport across the world and bring people back from the dead, and you have scads of HP: there’s zero chance that one of those orcs will get a lucky hit and really hurt you, unless you were already badly hurt.

    So, in the end, raising the toughness of a fairly common enemy by 100% throws a whole bunch of other things out of whack. It truly was a mistake to leave the orc standard and enter the unpredictable world of fiat orcs.

    • Unsaintly says:

A major problem with this essay is that you are taking CR seriously. In reality, Challenge Rating has never been an accurate assessment of monster power or threat. In 3.5, an orc has 5hp and 13AC. A typical fighter is going to have at least a +4 to hit and +3 damage, and is very likely to have better than that. Similarly, the fighter will have at least 16 AC. Even an unoptimized level 1 fighter is more than a match for a single orc, and an optimized fighter could likely handle 3-4. A wizard can cast a Sleep or Color Spray spell that takes down 2-4 orcs with a single action, provided they won initiative or started at least 65 feet away from said orc.
      5e orcs have 15hp, but are otherwise the same. The increased HP means they are harder to kill in a single action, although Sleep or Color Spray remain effective. However, no competent 1st level 5e group will be even moderately threatened by two orcs, barring the sort of swingy crit luck that mars low level D&D play (and would still apply even if orcs were significantly weaker).

Your D&D 4e example seems to misinterpret 4e as well. A first level party absolutely can fight orcs, they just fight level 1 minion orcs instead of level 4 or 9 or 30. 4e often provides multiple example levels for enemies, so that you can pick how dangerous you want said enemy to be to the group. If you want to fight orcs at level 1, have the party attacked by a band of 8 level 1 minions led by a level 1 brute or something. And your level 14 party can be attacked by dozens of level 1 minion orcs if you wish, or by fewer higher level orcs if that better suits your needs.

      In the end, I don’t think things are so different as you are presenting them. Even in first edition, from what I can tell (I don’t have the BECMI books available anymore so I’m going by AD&D) an orc had 1 hit die and dealt 1-8 damage. A first level fight against four orcs could easily result in a dead character with a little bad luck, especially given that hp at first level was rolled. This is one reason why goblins and kobolds exist, to be the trivial opponents at first level that are much less likely to kill a player when fought in moderate groups.

      • dndnrsn says:

If I’m wrong to be taking CR (or whatever) seriously (which I’ll cop to; I haven’t run anything using any sort of D&D except retro clones in more than a decade) – well, that’s a deeper problem in the game: the system the designers have presented over three and a half separate editions as “this is how you make balanced encounters” doesn’t even do that! CR’s had its own pernicious effects (the idea that encounters should be balanced, which got converted by popular opinion into “all encounters should be level-appropriate”, which kinda sucks the fun out of the game – if you know that a GM doing their job is supposed to be providing encounters you can swing, you will think less strategically in terms of what fights to pick and when to run) and it doesn’t even do what it says on the box?

Regarding 4th ed: in the MM, there are no 1st level orcs. There are over a dozen stat blocks, but the lowest-level minion is the level 4 drudge, and the lowest-level non-minion is the level 3 skirmisher. If a GM wants to use a 1st level orc, minion or not, they’re going to have to stat that up; they can’t use the book of monsters they paid fifty bucks for.

        • Unsaintly says:

Regarding 4th ed: in the MM, there are no 1st level orcs. There are over a dozen stat blocks, but the lowest-level minion is the level 4 drudge, and the lowest-level non-minion is the level 3 skirmisher. If a GM wants to use a 1st level orc, minion or not, they’re going to have to stat that up; they can’t use the book of monsters they paid fifty bucks for.

          This is one of the advantages of having consistent enemy math. Once example orcs are printed to give you an idea of what their powers and stat offset would be it becomes trivially easy to run those orcs at whatever level your game calls for. The book you paid fifty bucks for shows you the expected abilities of various monster types, and you can scale the numbers up or down with a few seconds of additional work. This means that you don’t have to wait until you’ve “earned” the fun of fighting extraordinary monsters, and it means that if the game calls for it you can always fight orcs without lazily carving through a hundred per action.

          But that aside, even if you refuse to stat up additional monsters beyond what is provided for you, there are a lot of monster manuals and other sources for monsters. Included in these other materials are two example first level orcs.

          • dndnrsn says:

            I’m not averse to statting up new enemies, but I’m considerably more invested in RPGs than the average player or GM. The 13-year-olds playing D&D for the first time shouldn’t have to do much tinkering with the rules after they’ve spent 200 bucks tax included of their parents’ money, basically. It’s not that it can’t be done, but it’s that – why not just include the Orc Nobody stats in the plain-jane MM, like every other version?

          • Randy M says:

There was a reason it wasn’t done – Orcs, by then, weren’t seen as the starting point for adventures. Possibly it is because 4th transitioned to a 1-monster-per-PC paradigm, away from the 1-monster-per-group paradigm. Possibly, because of the cultural changes we discussed elsewhere, Orcs were seen as slightly stronger than town watchmen.

The interesting thing about your complaint is that if you set four 4E level 1 PCs against 1 level 5 Orc (non-minion, just a brute or whatever) it would probably be a balanced encounter – if a bit boring (and assuming you didn’t use the MM1 Orcs; took them a while to get it right, certainly).

To me your complaint sounds like “Why is there no level 5 drow in 4E?” Which, for all I know, is a complaint you do have; presumably one can make a drow equivalent to a first level fighter in some other edition. But anyway, the designers had a certain progression in mind, and it doesn’t mean the game is broken or bad for RP because they thought of the typical Orc monster as being stronger than the typical goblin monster and weaker than the typical ogre monster.

            (However, I do kind of like 5E flattening the level progression and tightening the accuracy. I’d like to use monsters by the book for a wider period of play, especially since my campaigns don’t tend to go for more than a few levels.)

          • dndnrsn says:

            I don’t know how powerful drow are supposed to be; I’ve seen everything from “they’re cool so of course they’re badasses” to “here is another flavour of elf.” 5th seems to go with the latter.

            Orcs have always been stronger than goblins and weaker than ogres, but those are relative to other monsters. Orcs being weaker or stronger isn’t really a 4th edition problem, it’s a 3rd/d20 problem – an HD1 monster becomes stronger when they have STR and CON bonuses instead of “it’s got +1 to hit and it has 1d8 HP; whaddaya want.”

            My issues with 4th are not about how strong they saw orcs or whatever as being primarily, although that might feed into the larger problem that 4th feels different from other editions in a much larger way than normal. My issue with 4th is that it failed to fix a lot of problems with 3rd, worsened a bunch of bad tendencies in 3rd, and added some new problems, foremost the dissociation.

EDIT: But these problems are separate from inflation of enemies leading to inflation of PC levels creating setting consistency problems due to magic. To crap on 5th while leaving orcs out of it: a random knight from the generic NPCs bit of the MM has 50hp. If Sir Whoever has 50hp, then a PC fighter capable of taking on a couple Sir Whoevers (Sirs Whoever?) without much trouble (even fairly gritty fantasy novels will have some cool swordsmen around who take on multiple skilled opponents and win) will have to be higher level than that, and then the wizards and clerics he’s palling around with will be reversing time and forgiving the sins of humanity or whatever.

          • Randy M says:

            Orcs being weaker or stronger isn’t really a 4th edition problem, it’s a 3rd/d20 problem – an HD1 monster becomes stronger when they have STR and CON bonuses instead of “it’s got +1 to hit and it has 1d8 HP; whaddaya want.”

            Hmm, ok, maybe I crossed the streams there.

            Point of reference, when you play older D&D (or 5E, afaik) and they say a monster has “1d8” HP, do you give them 8 HP, or roll a d8 and give them that many?

          • dndnrsn says:

            Giving monsters and NPCs max HP would be unfair to PCs, who have to roll (I give PCs max HP at first level because I’m not that hardcore) but having an individual count for every enemy is too much of a hassle. I just give them average, rounded up. So 1HD is 5, 2HD is 9, 3HD is 14, etc. I’ll roll for any particularly important monsters or NPCs.
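That averaging rule amounts to rounding N × 4.5 up for d8 hit dice; a minimal sketch (the function name is just for illustration):

```python
import math

def avg_hp(hit_dice, die=8):
    """Expected total of `hit_dice` rolls of one die type, rounded up."""
    return math.ceil(hit_dice * (die + 1) / 2)

print([avg_hp(n) for n in (1, 2, 3)])  # [5, 9, 14]
```

The `die` parameter lets the same rule cover d6 hit dice for older editions, if you want it.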

        • Perico says:

          Regarding CR in 3E, it may be the designers’ intent to use CR as a guideline for balanced encounters, but I think it’s safe to say that it didn’t work as intended. There is huge variance in power level across same level monsters (and, for that matter, same level PCs), and this only gets worse as you level up.

          As for 4th edition, it is true that there are no level 1 orcs because level 1 is reserved for pathetic monsters such as Kobolds and Goblins. Orcs in the Monster Manual average a more respectable level 4. But that doesn’t mean you can’t have level 1 adventurers fighting orcs. Per DMG guidelines, you can take monsters within 2-3 levels of the characters’ level (or up to 3-5 levels apart for hard encounters, though in practice the to-hit math tends to break at that point, so I wouldn’t recommend it). You can take 2 Level 4 Orc Berserkers and 1 Level 3 Orc Raider from the Monster Manual, without modification, for a perfectly legal and reasonable encounter for level 1 adventurers. If you don’t mind throwing minions into the mix, you can have players fight 11 level 4 Orc Drudges.

Finally, I have to disagree with the idea that ‘balance’ should be a dirty word. At the end of the day, I think it’s a valuable tool for the DM. A ruleset that tells you ‘this is how you make balanced encounters’ also happens to tell you ‘this is how you make fairly easy encounters’, as well as ‘this is how you make insanely difficult encounters’. I think it’s a great thing when players can find themselves in a very dangerous encounter – provided this was the DM’s intention to begin with, and that the campaign’s expectations had been set accordingly. I’m not so fond of scenarios where inexperienced DMs can unintentionally set up a killer encounter while following the official guidelines. Or, for that matter, cases when a supposedly threatening boss is dispatched in a single turn not due to the players’ ingenuity but because the magic system eats big dumb guys for breakfast.

          In other words, I think you’re right that not all encounters need to be level-appropriate. But same-level encounters do need to be level-appropriate, or the DM’s job becomes much harder.

          • Randy M says:

            Are you the author of the Square Fireballs blog?
            That was fantastic.

            (I’m ashamed for only now making the connection)

          • dndnrsn says:

            So, balance in the sense of “this is how you eyeball how powerful an enemy is” is a good thing. It’s a good ability for a GM to have. However, it doesn’t take a CR-type system – even if it works as advertised – to do this, just a familiarity with the game’s system, with what spells do, etc. A CR-type system might work against this by fooling a GM into thinking something is a fair challenge when it’s actually a TPK, when if they ignored CR and looked at the monsters’ abilities they’d realize that.

However, I’d question the degree to which “designing encounters” is a good thing. I’ve found that when the basic assumption in the game is that encounters are planned to be challenges that the PCs will, by definition, overcome, the players behave differently – they are more likely to resort to combat, more likely to trust in battle tactics over larger strategies, etc. When there’s no such assumption baked into the system, and no guarantee that overwhelming odds won’t be faced, players tend to get more creative.

            As a GM, I find it’s a lot more fun for me when instead of being under pressure to make sure everything works, more decision-making power is placed in the hands of the players (they know where the low-risk low-reward stuff is and where the high-risk high-reward stuff is, and it’s up to them) and wandering monster tables (random dragon attacks could happen to you!)

            EDIT: Was a lack of balance a problem in 2nd ed? Out of all the things where 3rd ed was a major fix – I’m just old enough to remember 2nd, and it was a mess – trying to quantify balance seems like something they didn’t really need to do; what problem was it solving?

          • Perico says:

            Yep, that’s me.

          • Perico says:

            Re: designing encounters. Looking back, I don’t think I ever ran a random encounter while DMing 4E. This was due to a combination of personal preference, running canned adventures, and fairly short sessions – we usually had time for a single fight and some role playing, so preparing encounters in advance worked for us.

            It shows my bias in this matter that I can’t even tell, without looking it up, whether the DMG even included rules for random encounter generation. I mean, it must have, because D&D, but I’m trying to remember them and drawing a blank.

            That said, other than the system philosophy slightly discouraging them, I can’t think of a reason why random encounters wouldn’t work in that version of the game. You would make combat a bit less satisfying, but the gain in spontaneity should probably make up for it, if you’re into that sort of thing.

            For what it’s worth, my ideal system would be one where you could generate randomly and without effort, say, a hard encounter for level 6 adventurers involving orcs. And with enough variety that this was somehow significantly different from the same level encounter with orcs that you just had. 4E fell short in the variety department with off the shelf monsters, in my opinion (and the fact that monsters were only usable within a relatively narrow level range didn’t help)

          • dndnrsn says:

            3rd had a system for random encounters, but it was based around CRs, I think. I don’t think any rules change is the reason for the decline in their use; in practice they were probably largely vestigial by the end of 2nd ed’s lifespan, and at least some published adventures/campaigns must have been assuming they weren’t being used earlier than that.

            I’ve enjoyed completely unbalanced random encounters – when paired with reaction rolls, it’s very rare the PCs are going to run into something that instantly attacks them. Random encounters aren’t just combat – you run into friendly stuff; it’s a good way to come up with NPCs more interesting than the usual. They make it harder for the GM to railroad. On the other hand, they can be a liability if you’re not setting out to do a sandbox, outside of very specific random encounter tables set to a given task. But right now I’m on a sandbox kick so it’s all good.

    • Nabil ad Dajjal says:

      There are two big problems with your reasoning here. One is a system-specific problem with how you’re interpreting challenge rating and one is a conceptual issue with what kind of game D&D is.

      In an adventuring day, whether that’s literally one day or not depending on whether you use the optional rest rules, a first level party is expected to be able to overcome 300 XP worth of enemies per PC. An orc is a CR 1/2 enemy, which translates to 100 XP each or three orcs per PC per adventuring day. After that one day, the adventurer will reach second level and be expected to overcome 600 XP worth of enemies, or six orcs, per adventuring day. After that second day, the adventurer will reach third level and be expected to overcome 1,200 XP worth of enemies, or a dozen orcs, per adventuring day.
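The per-day arithmetic above, sketched with the quoted budgets (a rough illustration using only the numbers in this comment, not an official table):

```python
# XP budget per PC per adventuring day, as quoted above, and the
# number of CR 1/2 orcs (100 XP each) that each budget buys.
daily_budget = {1: 300, 2: 600, 3: 1200}
ORC_XP = 100

for level, budget in daily_budget.items():
    print(f"Level {level}: {budget // ORC_XP} orcs per PC per day")
```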

The other issue is that D&D really isn’t designed for the kind of pitched battle you’re envisioning. A third level character can fight a dozen orcs over the course of a(n adventuring) day, but fighting a dozen orcs all at once is, as you point out, considered a deadly encounter for up to a fifteenth level character. Wading into hordes doesn’t work well in most editions of D&D because the action economy favors the enemy, and 5e’s bounded accuracy makes it downright suicidal. Adventurers in D&D don’t stand out on open fields facing overwhelming numbers; they go into dungeons and fight one room full of enemies at a time (up to and including a dragon). If you want to run a large battle in D&D it needs to be broken up into waves of smaller skirmishes over the course of a day.

      • dndnrsn says:

        I don’t mean take on 6 orcs in a spherical cow battle on a flat plane; more a running battle from room to room where the PCs take advantage of the terrain and blah blah blah. But not “one room at a time” either. More like, you’re in hilly woody terrain, you just stumbled into a platoon of orcs, what do you do? The PCs are probably going to run away a little, set up an ambush, that sort of thing. Not multiple skirmishes over the course of a day, but faced with a couple dozen orcs, they should be able to take them on if they can do that in 2 or 3 chunks.

        • Nabil ad Dajjal says:

          A couple dozen in 2-3 chunks, taken literally, is 8-12 orcs at a time. What you’re asking for is that parties should be able to handle two to three times their number in enemies right out of the gate.

          That’s not an unreasonable expectation for fiction in the fantasy genre but it’s going to cause a lot of issues for an ongoing game where you’re expected to continue to grow in power over time.

D&D creates tension through attrition of resources. And as the adventurers get more experience and treasure their resources continuously grow. If the starting point is that they have the resources to handle dozens of orcs, that both rules out nearly any lesser challenge and sets the bar much higher for higher level characters. For the same reason that it’s hard to write Superman well, it’s hard to DM for a party which can casually wipe out an orc warband.

          • dndnrsn says:

            Not out of the gate – this was probably 3 or 4 levels in, and they got very lucky. Out of the gate, the campaign started with a fighter running into a cave and getting killed by kobolds with slings.

          • Nabil ad Dajjal says:

            Ok so then that still works fine for 5e.

A fourth level party of five should be able to fight ten to fourteen orcs at a time with “a slim chance that one or more characters may die,” or more with the possibility that “the party risks defeat.”

            Two to three encounters of ten to fifteen orcs puts you right in the range of what a group of adventurers should be able to handle before they need a short rest. And it’s pretty much exactly your target of a couple dozen.

            (If I seem dogmatic or rigid about CR, pacing, and the adventuring day it’s mostly because the underlying math is actually surprisingly good at creating dramatic tension. The main problem for me is that it’s an “adventuring day” instead of an adventuring week, but that’s what the Gritty Realism variant is for.)

      • ManyCookies says:

        but fighting a dozen orcs all at once is as you point out considered a deadly encounter for up to a fifteenth level character.

        The way I’ve seen hordes handled is “Only three to five enemies can effectively gang up on dude in melee, they don’t have enough room to maneuver/shoot arrows through with more”.

        • Nabil ad Dajjal says:

          Your DMs have been much nicer than me.

          Part of why I’ve never liked the “one at a time” rule in action movies is that it’s hard to believe that an enemy is seriously trying to kill you when they don’t act like it. If a bunch of orcs are just milling about randomly in the background watching you have a three-on-one duel then it makes me think they aren’t very invested in winning the fight. I’d rather face a smaller group which will pull out all the stops than a larger group that fights half-heartedly.

    • Randy M says:

      This is bad because it contributes to the miscalibrated expectations regarding D&D levels. Heroic fantasy characters might do things like take on a half-dozen orcs each

      And yet you dislike minions. (insert shrug ascii)
      I can’t tell if this is a parody of some other essay or a serious complaint. I think you are saying that the harm here is that players will expect to be able to kill dozens of orcs after gaining levels but having orcs calibrated for 4th level pcs instead of first will confuse them.

      But it sure seems to me like an Orc in Lord of the Rings would thrash a careless starting hero. They are stronger and tougher than other races. Why should you ever expect to take greater numbers down other than through clever tactics?

      Also, PCs are demonstrable proof of the variability of intelligent characters, with powers that grow through experience; why shouldn’t you have them face a variety of orcs?

Deal with your players’ misaligned expectations with descriptions, or use goblins to make them feel powerful.

      • dndnrsn says:

        I dislike minions for the same reason I dislike 4th in general: dissociated mechanics. The minion’s ability to go down in one hit doesn’t represent anything in the game world, it represents a game design decision made to allow large numbers of opponents in a complex system. Minions don’t take damage from missed attacks, again for game reasons, and this doesn’t represent anything in the game world either. You as a player will be making decisions as a player based on stuff that your PC has no way of knowing about unless they are self-aware that they are in a game; I think this really, really hampers roleplaying.

        And, it is a serious complaint. My complaint isn’t that PCs are somehow going to get confused. My complaint is that if “serious badass” (capable of taking on a half-dozen low-level but not trash opponents in combat) is met at level 14 instead of level 7, in a party of serious badasses there will be at least a couple with abilities that radically change the world building of the setting. Players want their characters to be serious badasses; it’s better if they can get that at level 7 than level 14.

        • Randy M says:

          All games have abstractions.

          It sounds like the genre conventions have shifted and you don’t like where they’ve ended up. Fair enough, but you don’t play recent games anyway.

          D&D magic doesn’t really look like magic from any myth or much contemporary fantasy fiction. True enough.

          • dndnrsn says:

            4th edition went extremely hard with the dissociated mechanics, though; it’s not really abstraction. Abstraction is “instead of futzing around with gold pieces, I’m going to give all of you a Wealth modifier” like various d20-system games did. Abstraction doesn’t produce the same problem as dissociation: it still represents something in the game world in a way that a person within that world could comprehend without requiring the knowledge that they’re in a game.

            It’s also not a genre convention that’s shifted – 4th did it, and the public reaction to 4th was terrible (to the point that Pathfinder, a clone of 3.5th, outsold 4th for a while). WOTC junked most of the innovations of 4th when they did 5th. I’m actually probably going to play 5th soon, but flipping through it, one thing I know I’ll have to do if I want to run it is restat various monsters and enemies to be weaker, so the players can feel like badasses at 5th level and I don’t have to deal with a world in which either I explain away the ability to resurrect the dead or figure out all the ways magic would change the world.

          • Randy M says:

            It’s also not a genre convention that’s shifted

            Nah, that referred to orcs not being throw away mooks any more.

            Abstraction doesn’t produce the same problem as dissociation

            Not seeing how this isn’t a somewhat arbitrary label for things you don’t like.

As far as your favorite incantation, dissociated mechanics, I don’t think that really applies. Why can the PCs not be expected to realize the dude holding the banner can be taken out in one hit? How is that different from a 5 HP Orc when you deal 1d6 + 5 damage or whatever it is that allows a 1e fighter to mow through a 30’x30′ room full of them?

            I’ll have to do if I want to run it is restat various monsters and enemies to be weaker, so the players can feel like badasses at 5th level

            Or, you know, don’t use monsters that are supposed to be big hulking brutes who live for combat as mooks. Goblins ain’t good enough for you?

          • dndnrsn says:

            Not seeing how this isn’t a somewhat arbitrary label for things you don’t like.

            It’s absolutely not arbitrary; have you read the essay I linked? There’s plenty of stuff I don’t like that’s not dissociated. There’s a few things I like that are dissociated.

            As far as your favorite incantation, dissociated mechanics, I don’t think that really applies. Why can the PC’s not be expected to realize dude holding the banner can be taken out in one hit? How is that different from a 5 HP Orc when you deal 1d6 + 5 damage or whatever it is that allows 1e fighter to mow through a 30’x30′ room full of them?

Because the level 9 minion is an absolute badass in other ways – he has the stat block of a serious opponent, except that the minion rule applies. There’s no way for the PCs to know the mechanics of the game they live in. Take the 1HD town guard with an average 4-5 HP: the 1st level fighter swinging a longsword for 1d8+2 damage knows that he can reliably deal with a guard in one or two sword blows. The level-whatever minion with a really powerful attack who goes down in one hit regardless – how does your 1st level fighter explain that? He can’t unless he’s genre-savvy.

            Or, you know, don’t use monsters that are supposed to be big hulking brutes who live for combat as mooks. Goblins ain’t good enough for you?

            The change in genre – it came over from Warhammer by way of Warcraft, right? That orcs went from “like a human, little bit tougher, little bit dumber” to “the average orc deadlifts 500 you nerd” is probably due to that.

          • Unsaintly says:

            Dissociated mechanics is a bad label for things. Why is Wealth (to specify which Wealth system, I’ll pick “a modifier on a d20 roll that you use to determine if you can afford a thing, and how easily”. There are many Wealth systems, but this discussion will apply more or less to all of them) just abstraction, when an opponent being a Minion is dissociated? Or even the archetypical example of a non-magical Daily Power?

One answer to this question, and the one most frequently given, is a character’s ability to understand what is happening in the game. However, this explanation falls apart when examined even a little. You could easily explain a minion dying in one hit as being poor luck on the minion’s part (this hit happened to catch its guard down) or by these monsters being weaker than normal. Similarly, a daily power isn’t some black box button a fighter can only press once. It’s something that’s difficult to use, and the circumstances for using that particular maneuver happen only rarely, for example. This brings me to an important point when discussing mechanics: you are not bound by the same flow of time as the characters.

            The world of your D&D game is not real. You are not controlling the people within that world in real-time, the DM is not manipulating an actual world and describing what’s inside it. You are all describing a world, and can establish things out of sequence while still having the characters within the narration act and react in a way consistent with the final sequence. From within the narration, a fighter doesn’t kill an enemy in one hit because they were a minion. An enemy the fighter managed to kill in one hit gets labeled a minion. The circumstances to use that fighter’s daily power don’t mysteriously happen only once per day when the fighter decided to use the power. The fighter decided to use the power because those circumstances happened. And those circumstances happened because the player declared that they happened.

As a final point, I would like to note that I am objecting to your claim that public reaction to 4th edition was terrible, as well as the implied claim that 5th edition’s removal of 4th edition’s improvements was based on anything resembling sound design principles. However, since that is a completely separate discussion, I’ll leave it shelved for now.

          • Nabil ad Dajjal says:

            Do people want to fork the discussion on dissociated mechanics?

            I agree that it’s a useful way of describing a failure of verisimilitude but it seems a bit off topic here.

          • Randy M says:

            Because the level 9 minion is an absolute badass in other ways – he has the stat block of a serious opponent, except that the minion rule applies. There’s no way for the PCs to know the mechanics of the game they live in.

            There is no reason why this can’t be an accurate representation of the fiction the game is set in. An opponent that is easily dispatched but nonetheless presents a lethal threat (where lethal means has a chance to inflict some small portion of HP damage) in the heat and confusion of combat sounds like–well, it sounds like a grunt armed with a weapon, honestly.

            The level whatever minion with a really powerful attack who goes down in one hit regardless – how does your 1st level fighter explain that? He can’t unless he’s genre-savvy.

            The monster manual is not an encyclopedia; sorry if you get dissociated merely prepping for the game, but any GM who threw a level 10 minion at the players and said it was cool that it wrecked them because it was balanced by its 1 HP was doing it wrong. Monster stats are relative to a PC of its level. Yes, the fiction breaks down if you minionize high-level threats and present them to 1st-level PCs. No, this is neither how the game was played or presented, nor is that a good argument against using minions.

            re: orcs,
            Why would anyone be scared of these guys? Was it different in the LotR text? I think those movies had a bigger impact than — well, actually probably not than WoW, but sooner, anyway.

          • dndnrsn says:

            Dissociated mechanics is a bad label for things. Why is Wealth (to specify which Wealth system, I’ll pick “a modifier on a d20 roll that you use to determine if you can afford a thing, and how easily”. There are many Wealth systems, but this discussion will apply more or less to all of them) just abstraction, when an opponent being a Minion is dissociated? Or even the archetypical example of a non-magical Daily Power?

            One answer to this question, and the one most frequently given, is a character’s ability to understand what is happening in the game. However, this explanation falls apart when examined even a little. You could easily explain a minion dying in one hit as poor luck on the minion’s part (this hit happened to catch its guard down), or by these monsters being weaker than normal. Similarly, a daily power isn’t some black box button a fighter can only press once. It’s something that’s difficult to use, and the circumstances for using that particular maneuver happen only rarely, for example. This brings me to an important point when discussing mechanics: you are not bound by the same flow of time as the characters.

            That you can come up with post-hoc justifications in-world for it doesn’t make it not dissociated. The fighter’s daily power is, in fact, a button the fighter can only press once: the evidence for this is that it can only be used once a day. There’s no relation to the circumstances in the power’s rules, simply the restriction that it’s only usable once per day. 4th edition starts from the game’s mechanics and you have to justify what happens in the world of the game based on those mechanics; I much prefer the opposite, where you start from the world of the game and interact with the mechanics based on that.

            The world of your D&D game is not real. You are not controlling the people within that world in real-time, the DM is not manipulating an actual world and describing what’s inside it. You are all describing a world, and can establish things out of sequence while still having the characters within the narration act and react in a way consistent with the final sequence. From within the narration, a fighter doesn’t kill an enemy in one hit because they were a minion. An enemy the fighter managed to kill in one hit gets labeled a minion. The circumstances to use that fighter’s daily power don’t mysteriously happen only once per day when the fighter decided to use the power. The fighter decided to use the power because those circumstances happened. And those circumstances happened because the player declared that they happened.

            This sounds like the ethos of a storytelling game applied to 4th edition’s mechanically-justified design decision. Every other edition of D&D was the other way around: it described a world, and tried (often rather poorly) to model that world. When I play a roleplaying game, I try to inhabit my character and behave accordingly. When I play 4th edition, I find myself yanked out of that, adopting a mindset that feels much more like playing a board game: making decisions on the basis of the rules of the game as I know them, without them being connected to the game world (there’s nothing in the rules of Pandemic that makes it a game about fighting outbreaks of disease in the modern day rather than fighting outbreaks of communism in the Cold War; that stuff’s all flavour). There’s nothing wrong with board games, I enjoy board games, but 4th is an unhappy medium, because it’s got all the complexity of modern D&D and so plays slowly (whereas currently-popular board games tend to be graceful).

            EDIT: Yeah, we should fork this, and @Unsaintly if you want to discuss the 4th/5th business, sure, start a thread.

            @Randy M: having enemies who can deal out tons of damage but go down in one hit no matter what is one way to model that, but I think there are considerably better ways. 2nd or 3rd level PCs fighting a bunch of orcs models it fine. Heroic fantasy usually doesn’t have “hero gets taken out in one hit by a nobody” as a trope – which is modelled well by a PC with 17+ HP no longer facing the threat of an enemy doing 1d8 damage taking them out with a really lucky critical.

            EDIT: bah, the 1st level fighter shouldn’t be fighting the high-level guy. Mistake there. It should have been “how does your fighter explain that” or similar.

            EDIT EDIT: Also, I think Warhammer and especially 40K had a big impact – Warhammer orcs used to be smallish, then the 40K Orks got swole, and when the new (by new I mean new 15 years ago) Warhammer line came out, their orcs got swole too. I don’t know where kids get their minis these days, but back in the day, even if you didn’t play Warhammer, when you wanted minis for your D&D game, you bought some Warhammer figurines.

          • quanta413 says:

            I’d be interested in a discussion of dissociated mechanics.

            My basic claim would be that if you want a game where stories like those of fantasy literature emerge from a simulation-style reading of the rules, then no edition of D&D from 3rd on will work. I haven’t played 1st or 2nd, but I’d be very surprised if those actually worked either. It might work for a very short time for certain specific elements of certain very specific books, but by and large it doesn’t.

          • John Schilling says:

            Basically no edition of D&D can recreate Tolkienesque fantasy, or the earlier swords-and-sorcery version, because those all posited rare-magic worlds with no magic-using protagonists, whereas including any mechanistic magic system that even allows for wizard PCs almost invariably drives the game to simulate a high-magic world. That world nonetheless has castles and knights and peasants because reasons, don’t look too closely here, but OK.

            That’s not what “dissociative mechanics” means. Dissociative mechanics means, OK, the game simulates a high-magic world rather than Tolkien or Howard, but it does so implausibly because there are rules that massively constrain the players but which are utterly divorced from the experience of their characters – even for characters in a fantastic high-magic world.

          • quanta413 says:

            I guess I misunderstood the complaint from dndnrsn’s examples. I read the essays linked now (including the updates), and I think the author has mostly just internalized any bizarre mechanics of earlier editions and is blind to how fundamentally weird it all is if you take it as literally as the author takes daily powers.

            I have a hard time imagining playing 3e or 4e in a manner where every constraint on me as a player made sense to the character in their world. The world is underspecified (as you say, D&D simulates very high fantasy; that basically means it simulates itself and things it inspired), and you can’t assume even simple details of our world transfer over. When I play and read about people’s games, though, it’s obvious that (A) they don’t play the game in a way that avoids significant dissociation, and (B) they paper over abstractions in ways that wouldn’t make sense in-world if characters were to react to the in-game results of the abstraction.

          • dndnrsn says:

            @quanta413

            OK, I’m going to answer this below; someone’s got to make a move and start a new high-level comment.

          • Le Maistre Chat says:

            @John Schilling:
            Basically no edition of D&D can recreate Tolkienesque fantasy, or the earlier swords-and-sorcery version, because those all posited rare-magic worlds with no magic-using protagonists, whereas including any mechanistic magic system that even allows for wizard PCs almost invariably drives the game to simulate a high-magic world. That world nonetheless has castles and knights and peasants because reasons, don’t look too closely here, but OK.

            It’s been argued that Gandalf stats out as a 5th level 1E or B/X PC (other than being allowed to use a sword).

            This passage from the Moria sequence of The Fellowship of the Ring also looks like a single round of D&D combat with a mix of 4-5th and 1st level characters, assuming you run an edition with morale checks:

            “Legolas shot two through the throat. Gimli hewed the legs from under another that had sprung up on Balin’s tomb. Boromir and Aragorn slew many. When thirteen had fallen the rest fled shrieking, leaving the defenders unharmed, except for Sam who had a scratch along the scalp. A quick duck had saved him; and he had felled his orc: a sturdy thrust with his Barrow-blade. A fire was smouldering in his brown eyes that would have made Ted Sandyman step backwards, if he had seen it.”

            (26 orcs appeared. Legolas won initiative and stood still to shoot two arrows, felling 2 1HD orcs. An orc tried to jump Gimli and failed his attack roll. Gimli went next and killed him. The Ranger and human Fighter were tanking and felled 9 because they were entitled to 1 attack per level against enemies of 1HD or less. Orcs more than 5 feet from the tanks tried to flank and kill squishier targets, and they would have started succeeding if Level 1 Samwise’s attack roll hadn’t dropped a 13th orc, triggering a morale check that failed.)

    • beleester says:

      In addition to what everyone else has said, you’re focusing way too much on Orcs. Goblins fill the 1/4th CR niche (with Hobgoblins at 1/2 so you can mix in some beefier leader-types), and Kobolds are 1/6th CR if you really want to go quantity over quality.

      To me, it makes sense that orcs fill the niche of human-equivalent enemies – they’re typically portrayed as “stronger and dumber humans”: human-sized, equipped like human fighters, capable of military organization. Four orcish warriors vs four human fighters should be a fair fight. And since a fair fight has a high risk of death for both sides, it makes sense that a fight that won’t kill your PCs involves fewer orcs than that.

      What you’re really complaining about is two things – first, low-level characters are “realistically” fragile and don’t really fit the heroic fantasy mold, and second, that magic scales up much faster than fighters do, so by the time you’re in the heroic fantasy range the wizard has learned Fireball and made the horde of orcs irrelevant. But “orc inflation” has nothing to do with it.

      • dndnrsn says:

        I don’t mind that low-level characters are fragile; at first level in what I’m currently running, they were sweating over fighting kobolds. But “heroic fantasy” is found a lot lower than level 10 or 15 or whatever – a 5th level fighter with 50 HP can take a couple critical hits from an opponent doing max 10 damage a hit and not be out of the fight, which seems plenty heroic to me.

        I don’t actually think wizards outscale fighters that quickly, provided that wandering monsters, random encounters, etc aren’t ignored or abandoned. Those serve to keep the PCs from adventuring at their leisure: it’s a lot easier to wander into the dungeon and fire everything off if the chance of running into something on the way out is taken out of the rules (of course, the GM could always spring something on you; in my experience, GM fiat tends to be kinder than the rules saying there’s a 1/6 chance of running into something every 10 minutes or whatever) and it’s a lot easier to tailor your spell load out to the challenge if you can do a bit of recon and take a nap without monsters showing up outside the camp, etc.

        (Fireball is a lot less of a game-changer when it’s not treated as a 2D effect: in the original game, it states that “In a confined space the Fire Ball will generally conform to the shape of the space (elongate or whatever),” which makes throwing it around underground an excellent way to accidentally toast the party.)

        • beleester says:

          Making fireball a fixed volume rather than a fixed-radius spread has an appealingly realistic feel, but I’m not sure how you could apply that on the tabletop without doing an annoying amount of math. It sounds good if your fight is set in a standard 5-foot-wide hallway (although that has its own problems, like the “kobold conga line”), but what happens when that 5-foot-wide hallway is connected to a larger room? Does the fireball elongate down the hallway, then resume expanding in a spherical shape when it hits the room? What if the hallway has corners or branches?

          If old-school D&D actually had rules for that, I can understand why 3.5 boiled it down to “20-foot radius spread.”

          EDIT: Also, you could sidestep all of those worries by preparing Lightning Bolt, which works even better in those confined 5-foot-wide hallways.

          • dndnrsn says:

            Yeah, it’s a real pain in the ass. What we’ve done so far is rule that after you turn a corner volume is doubled, then quadrupled, etc. So if you cast a fireball into a corridor with a 10′ ceiling, each square is 250 cubic feet, but when the blast turns a corner, each square is now 500, if it turns again, 1000, etc. Otherwise, a 20-radius sphere (which has a volume of over 33k cubic feet, people who understand math assure me) will fill up a lot of space in a dungeon.

            I like this because it makes a fireball man-portable siege artillery (and fireballs are devastating in the open) rather than a default wizard attack.

            (Old school D&D didn’t really have rules for it, it just said “oh yeah hey and it fits the space” and left it at that; other editions didn’t explicitly rule that it was a flat 20′ radius blast effect, but everyone kinda went with that)

            EDIT: On the other hand, OD&D sleep is grotesquely OP, even with the most restrained possible reading of the spell’s effect (it’s really unclear); every edition seems to have toned sleep down a little bit more. Old-school D&D at low levels seems to be a lot of “wizard hits ’em with sleep, party cuts throats.”

    • John Schilling says:

      In 3rd ed, suddenly enemies have stats, and the orc gets tougher because now they’ve got an attack bonus of +4 due to HD and STR instead of an attack bonus of +1 due to HD. They’re now CR 1/2 – meaning 2 orcs is a moderate challenge for the default 1st-level party.

      As others have pointed out here, and we’ve discussed before, the CR mechanics need to be treated as loose guidelines, and the rules don’t offer much guidance on how to do that. But, in 3.5e, I’ve never had much trouble putting orcs against 1st-level PCs at 1:1 or even 2:1 under the right circumstances.

      Yes, Orcs got stats. Those include (check SRD), yep, INT 8, WIS 7, and CHA 6. Which means they don’t have tactics, and they don’t have leaders who can keep them in the fight when they start taking casualties. Usually, I have them do a frontal assault into whatever the players can come up with to deal with a frontal assault by 4-8 orcs, which for reasonably capable players should be able to kill 1-2 of them fairly quickly. Then they back off, shout, posture, throw a few javelins, then the Orc leader (retro-designated to be the strongest survivor) stands forward to challenge the entire party while the rest stay just out of reach behind, and when the leader falls the rest flat-out run away. Players seem to find this adequately challenging and rewarding.

      Kobolds and goblins skirmished with a similar lack of tactical aptitude, and I used hobgoblins sparingly when I needed inhuman mooks who would fight smart and disciplined.

      The one thing I did find problematic was that the stat bonus to damage would occasionally let an orc warrior in melee one-shot anyone other than a straight fighter, which yes can be healed before PC death but still causes problems.
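
      On a d20, the jump from +1 to +4 is a 15-percentage-point swing in hit chance. A sketch (the AC 15 target is an assumption for illustration, not a stat from the SRD):

```python
def hit_chance(attack_bonus, ac):
    """d20 attack roll: natural 20 always hits, natural 1 always misses."""
    hits = sum(1 for roll in range(1, 21)
               if roll == 20 or (roll != 1 and roll + attack_bonus >= ac))
    return hits / 20

# Against a hypothetical AC 15 front-liner:
print(hit_chance(1, 15))  # 0.35 - the old +1-to-hit orc
print(hit_chance(4, 15))  # 0.5  - the 3e orc with HD and STR bonuses
```

      Combined with the STR bonus to damage, that extra accuracy is most of why the 3e orc punches above its old weight against 1st-level parties.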

    • dndnrsn says:

      It is also possible that I am wrong about the cause of the miscalibration of PC levels, and the problem goes back earlier. Based on reading early D&D and on anecdotes about the early history and so on, it appears that there was an intention for PCs past a certain level (usually hovering around 10th) to be less adventurers and more lords. Like, you’re a level whatever fighter, congrats, you get a castle. It makes sense, it was originally a mod for a wargame.

      However, most people aren’t wargamers, and most people don’t want to play Super Kingdom Manager either. So, you end up with this situation where your 15th level characters are still murderhoboes, because murderhoboing is more fun than collecting taxes.

      • Matt M says:

        murderhoboing is more fun than collecting taxes.

        They aren’t mutually exclusive!

        • johan_larson says:

          Just imagine if the mayor of Los Angeles were also an LAPD SWAT officer, and actually occasionally went on no-knock raids. That’s the modern equivalent of a lord who also goes dungeon delving. Probably not very practical.

          • Matt M says:

            I was thinking more along the lines of “You can become exempt from federal taxation if you declare trial by combat and defeat the President in a bare-knuckle street fight.”

          • albatross11 says:

            Are there historical examples of anything like this? Lords went off to war, at least, and sometimes that would involve putting down bandits or peasant uprisings or local incursions of hostile neighbors (who might map to orcs in the minds of the lord and his followers/subjects). Probably not so much treasure-hunting in an old abandoned castle rumored to be haunted by some kind of terrible boss-level monster and its mooks, though.

          • dndnrsn says:

            “We’re pretty sure those guys have automatic weapons. This is a job that calls for… the mayor!”

            So, yeah. The basic issue is that if you use D&D for nothing but adventuring, you get King So-and-So going out and delving into dungeons at 10th level.

            Advance further, and it’s Jesus, Hanuman, Gilgamesh, and Hercules teaming up to fight evil. This is a very cool setup, but it doesn’t work if they’re adventurers who do nothing else, if it’s implied that there’s a decent number of people with these abilities, etc.

            You’ve got PCs doing stuff that’s meant for level 3 or level 5 characters, except they’re using level 15 characters. It causes the world to stop making sense, unless you go full Spelljammer or whatever to reconcile the powers available with the world.

          • johan_larson says:

            Are there historical examples of anything like this? Lords went off to war,…

            There’s a theory that King Tut, the pharaoh, was killed by being thrown from and run over by a war chariot. Some Maya rulers are also depicted in military gear, particularly when young. But it’s hard to tell whether they were principal combatants, generals leading from the back, or ceremonial commanders-in-chief.

          • Randy M says:

            Are there historical examples of anything like this?

            King Richard Lionheart going to crusade comes to mind. But I don’t know how much spine bisecting he personally did. Commanding an army in the field is closer to the old D&D way of graduating to running a kingdom than to dual-classing in murder-hobo and throne-warmer.

          • Matt M says:

            Do we have to go back that far?

            Didn’t George Washington start as soldier, graduate to general, then become President (offered King but refused), and then, while President, personally lead the army westward to suppress a literal tax revolt?

          • Deiseach says:

            Do we have to go back that far?

            Not at all, in the 17th century you had Charles I and his famed nephew Prince Rupert of the Rhine fighting in the English Civil War, then later in the same century both William of Orange and King James leading their armies on the battlefield at the Battle of the Boyne in 1690. For modern-day examples of the British royal family there are:

            (a) Prince Andrew served in the Falklands
            (b) Prince William also had the traditional military training and service, though because of his position (heir to the heir to the throne, they tend to be nervous about putting the direct succession into situations where they might be seriously injured or killed, unlike the spare – see Andrew and Harry) he didn’t see active service on anything near the frontline
            (c) Prince Harry did two deployments to Afghanistan

            There still is somewhat of an expectation that male royals will have more than a merely notional or decorative role as heads of regiments and will at least go through officer training and have some kind of service under their belt, even if it’s only on training cruises and the likes. Even Prince Edward, who absolutely hated the very notion, had to do at least one year with the Royal Marines (though he dropped out four months into the year-long training and never went back).

          • Evan Þ says:

            @Matt M, President Washington ceremonially led the militia on their march through a few friendly towns, but he returned to Philadelphia long before they crossed the mountains to face the actual revolt. Entirely aside from his personal safety, I suspect Washington’s health wouldn’t have tolerated an actual campaign at that point.

          • Protagoras says:

            Also, more relevantly, in traditional legends of heroes as well as in more recent fiction, captains and lords and kings do a lot more of their own fighting and leave a lot less to their armies than was historically the case. Surely D&D is more based on such stories than on history.

          • albatross11 says:

            Leading armies in the field is a job you’d expect various local lords, counts, dukes, barons, and kings to take on. It’s in the job description, even though maybe the king’s an old man so he’s mainly relying on his sons to do the actual leading men into battle with swords part. And that would (I think) include putting down uprisings, incursions, bandits, etc.. But probably not going on a dungeon crawl looking for loot and glory and XP.

          • Le Maistre Chat says:

            “We’re pretty sure those guys have automatic weapons. This is a job that calls for… the mayor!”

            Ah, so you’ve played Final Fight.

  15. proyas says:

    If the U.S. hadn’t entered WWI, could the British and French have won by themselves? Discussions of this counterfactual always focus on whether or not the U.S. Army’s battlefield contributions tipped the balance, and skeptics will point out that the British and French managed to halt the German Spring Offensive in July 1918 before significant numbers of U.S. troops had arrived, but to what extent did that owe to increases in U.S. materiel aid and to the boost in Franco-British morale that came from the knowledge that American troops were eventually coming? Does anyone know of any scholarly books or papers that quantify increases in U.S. exports of weapons, food, fuel, and other supplies to Britain and France that followed the American declaration of war against Germany in April 1918?

    Assuming that the U.S. hadn’t entered the war, and exports of war-making materiel to Britain and France had stayed at 1917 levels, could the latter have been just weak enough to succumb to the German Spring Offensive?

    • James C says:

      Nothing scholarly off the cuff, but I’m going to say that the British and French still win but it takes longer.

      The biggest question is whether the Spring Offensive could have worked. That very much depends on to what degree we’d accept it as having ‘worked’. If we want it to reach Calais and cut off significant Allied forces, then there’s probably a world where a weakened and sufficiently undersupplied British and French army concedes that territory. If we want it to win the war… well, the French are sure as hell not giving up at this point of the war, and the British haven’t lost a single square inch of their homeland. Neither is surrendering, not after so many dead and with their entire foreign policy riding on divvying up slices of Austrian and German territory post-war. What would happen would be a redrawing of lines, a calling-up of fresh men and equipment, and a rebuilding of the trench works twenty or thirty miles closer to Paris.

      So, to win, Germany needs not only to achieve breakthrough success on the Western Front, but to repeat the feat. Multiple times. Until France is forced to the negotiating table and Britain bows out of the war for lack of any significant presence in a theatre threatening Germany. I can’t see that happening. The Spring Offensive as it was gutted the German army, and even in a world where it achieved its strategic goals it would still have involved horrendous loss of life. There was practically no German army left in 1918 after the offensives failed. Just half-starved old men and boys holding a line carved out two years before by the dead. Maybe, maybe, there was still one more offensive in the German army that was forestalled by the American arrival, but there weren’t two. There wasn’t enough to win the war.

      I’m sure someone better informed on the topic can add more, but from everything I’ve studied Germany could not win the war in 1918. I’m not convinced, despite all the ups and downs of the WWI, that they could have won past October 1914, but that’s another very lengthy topic.

      • bean says:

        Well said. Basically, the problem was that Germany was always running on borrowed time, and conditions at home were getting steadily worse. I think they were hoping the defeat of the Russians would solve this, but it doesn’t seem to have worked. Ultimately, the British blockade of German trade actually worked. It just wasn’t obvious.

      • idontknow131647093 says:

        It really depends, IMO, on your definition of “America not being in the war”. It’s fairly inarguable that America was supporting the French and English long before its declaration of war, and Wilson was really obsessed with the European conflict. When we look at the progression of the war, even with large American materiel support, Germany was able to knock out major enemies in 1916 and 1917. Italy and Romania were recruited to the Allied side and Germany fairly quickly routed both. Russia left the war in late 1917 and was forced to sign an embarrassing treaty, making it a one-front war for Germany.

        It wasn’t merely the blockade by England that deprived Germany materially, it was the US’s alliance with the blockade as well. Without that France and England would have been nearly as materially deprived.

        There were consistent movements in England and France, right after Russia dropped out, calling for an armistice that would have restored something like the pre-1914 status quo. However, the prospect of America’s entry convinced those in power to wait and try for a victory instead of a tie.

      • Eric Rall says:

        There was practically no German army left in 1918 after the offensives failed. Just half starved old men and boys holding a line carved out two years before by the dead.

        That’s overstating the case a bit: Germany took a lot of casualties in the Spring Offensives, and those casualties were disproportionately among their best front-line troops, but the Entente took a lot of casualties as well, and Germany still had a substantial army on the Western Front when the offensives ended.

        There were three things that killed the German position relative to the Entente in the first half of 1918:
        1. The home front: the blockade was really hurting Germany, and it had become a lot tighter with American entry into the war. Meanwhile, the UK and France could buy American food and materiel on credit.

        2. The German army got their asses kicked in the Second Battle of the Marne. Before that, they’d given roughly as good as they’d gotten, casualty-wise, but at 2nd Marne they took about 40k more casualties (dead/wounded/captured) than the Entente and had about half their artillery captured. This, more than the previous battles, was what broke Germany’s remaining capacity for offensives: the casualties hurt, but the loss of the artillery was devastating, since any effective offensive on the Western Front required an overwhelming local superiority of artillery.

        3. Germany, Britain, and France were all scraping the bottom of the manpower barrel (IIRC, France was the worst-off of the three, and Britain the best-off, but all three were in pretty sorry shape), but about 2 million American soldiers arrived in France over the course of 1918. The AEF was about 15% of total Entente strength in July 1918 and about 30% in Nov 1918. Without American reinforcements, I’m pretty sure Germany still would have outnumbered the Entente on the Western Front even after 2nd Marne (assuming historical losses in that battle).

        The AEF played a notable role in all of the Spring Offensive battles, mostly by manning quiet sectors of front to free up British and French divisions to form the reserve that contained the German breakthroughs (especially earlier in 1918), but also by directly participating in the battles (especially in 2nd Marne, where 8 of the 58 Entente divisions were American).

        Maybe, maybe, there was still one more offensive in the Germany army that was forestalled by the American arrival, but there weren’t two. There wasn’t enough to win the war.

        I think there were probably two, maybe three, offensives left in the German army without the AEF. And without the ticking clock of the American force build-up, Germany could have taken more time to rest, resupply, and prepare for those offensives and make them count more. They’re still at a disadvantage due to the blockade, but Germany only has to win big once to change the game. Capturing a major logistics hub like Amiens or Hazebrouck would force a large-scale withdrawal, which would be demoralizing and would probably require leaving a lot of supplies and heavy equipment behind. Germany would also have opportunities to kill or capture rear-guard units covering the retreat, and the process of retreating would open up big gaps in the line; without the AEF, there wouldn’t have been many reserves left to seal them.

    • cassander says:

      Could they have won? Probably, if they kept at it. Would they have won? I think that’s more questionable. If you’re the rulers of France and the UK, 1917 is a terrible year. The French army is revolting and Russia is collapsing. The only bright light is the US joining up, providing huge amounts of money and food, and the promise of millions of men. Petain puts down the mutinies by promising the men there would be no offensives until the Americans show up. Without that promise, I think a negotiated peace in 1917 is quite possible.

      If that doesn’t happen, then contrary to what James C says, I don’t think they have to make the spring offensive work. The spring offensive happened because the Germans knew they needed to win before the Americans showed up with too many men. Without that rush, the Germans can take their time and extract more resources from their eastern conquests to keep the home front fed. They have an army and a people who feel like they’re winning, not ones who feel they’re making a last desperate throw of the iron dice. I think you get a wider dissemination of storm-troop tactics and a later attack that is larger and less desperate, and while it might not take Paris, it might be enough to force the French to the table, especially since they’re not feeling materially or morally boosted by a powerful new ally.

      • proyas says:

        I wholly agree that, without the U.S. joining the war, the Spring Offensive would have been delayed and would have been better executed. Of course, exploiting the newly captured Russian Empire territories for food and other resources would have meant shifting mass hunger from Germany to the East. It’s another debate altogether as to whether those captured foods and resources could have offset the mounting privations of the British blockade. I think time would have still been against Germany, and it wouldn’t have been able to sit behind the Hindenburg Line waiting for the Allies to give up–a decisive offensive against the Allies was needed to force them to the peace table.

        • cassander says:

          > It’s another debate altogether as to whether those captured foods and resources could have offset the mounting privations of the British blockade.

          I don’t think they need to offset them entirely. Remember, in actual history the Germans did keep fighting through most of 1918, and when the collapse came it was due as much to morale as to strictly material factors. You just need to provide enough to keep up the impression that the war was being won, and without America signing up, and with a more measured spring offensive, that becomes a much easier sell.

          > I think time would have still been against Germany, and it wouldn’t have been able to sit behind the Hindenburg Line waiting for the Allies to give up–a decisive offensive against the Allies was needed to force them to the peace table.

          Sitting behind defenses is not how the German army operated. They would have attacked, there’s no doubt, and they very well might have gone for a spring-offensive total victory in one go, because that army loved its haymakers. But the whole context of the war is different at that point. The French army might still be in revolt or quasi-revolt, and there’s no American food flowing in to the home front. I can very much see the French making the same sort of decision the Germans made in 1918: throw in the towel now and hope for good terms rather than fight it out to the bitter end and risk domestic revolt and total defeat.

    • John Schilling says:

      Germany’s doom was sealed on the plains of Armageddon, when the heavens opened to rain fire and thunder and smite the unbeliever.

      We can certainly imagine Germany holding the upper hand in France while turning the spoils of Brest-Litovsk into a breadbasket that would forestall starvation and hunger at home. But Germany alone vs. the Western Allies (and don’t forget Italy in that calculus) is not a winning proposition in the long or even medium term. And it is so obviously not a winning proposition that I don’t see France surrendering even if the Spring Offensive does reach Paris.

      For Germany to prevail, she needs allies. And as formidable as the Austro-Hungarian and Ottoman Empires have proven in SSC Diplomacy games, they were clearly on the ropes by 1917 and it didn’t require American intervention to push them over the edge in 1918. Their failure leaves a great big hole in Germany’s defenses, against an Italian Army and British Expeditionary Force flush with victory. Central Powers vs Triple Entente may have been something resembling a fair fight, but the Triple Entente picked up Italy to at least partially compensate for the loss of Russia, while the Central Powers lost two of their three powers to no gain.

      Unless the French do surrender after losing Paris, or the British decide to let Germany off with a draw after decisively defeating both of her allies, either of which is possible but a long shot.

      • idontknow131647093 says:

        You do know Italy was routed in 1917 and effectively dropped out of the war, right?

        • John Schilling says:

          I did not know that, and am confused as to how a nation that “effectively dropped out of the war” in 1917 then sent an army of a million and a half men to conduct a decisive offensive in 1918.

        • Eric Rall says:

          You do know Italy was routed in 1917 and effectively dropped out of the war, right?

          That’s a pretty big overstatement. Caporetto was a huge defeat for Italy, and it did knock Italy onto the strategic defensive for most of a year, but they still had an army in the field that tied up German and Austrian forces to guard against it, they were able to stop an Austrian offensive cold in mid-1918, and as John pointed out, they were still able to mount effective offensives towards the end of the war, when the Central Powers had been weakened by another year of blockade and by the combined losses from the Spring Offensives and the Hundred Days.

      • Eric Rall says:

        And it is so obviously not a winning proposition that I don’t see France surrendering even if the Spring Offensive does reach Paris.

        I’m not sure. I’ve heard claims that there was quite a bit of panic among the Entente leadership when it looked like Operation Michael was going to take Amiens, with Petain (and possibly Haig as well: it’s been a while and I don’t recall all the details) expecting that suing for peace would be necessary if Germany took and held Amiens and forced the British and French armies to retreat in different directions.

      • Andrew Hunter says:

        Does your calculus change if “America does not enter the war” is strengthened slightly to “America continues to trade with Germany, and does not allow British interception of American vessels”?

        • James C says:

          Well, that might lead to a shooting war between Britain and America. If that happens and the US joins the central powers then it’s a very different calculation.

          • Protagoras says:

            Surely Britain would recognize that having America join the central powers would be a disaster for them. So it seems unlikely that they would escalate to a shooting war if America refused to cooperate with the blockade; more likely they would start seriously considering a negotiated peace.

      • Le Maistre Chat says:

        Germany’s doom was sealed on the plains of Armageddon, when the heavens opened to rain fire and thunder and smite the unbeliever.

        It’s really unfair that World War I has been reduced in popular memory to “really sad trench lines in France” and “Czar Nicholas lost somehow; were trenches involved?” when there were battles in the vicinity of Troy and friggin’ Armageddon.

        • John Schilling says:

          Yeah, and that was the third battle of Armageddon. I was promised trumpets, and the end of the world rather than just a second-rate empire.

          Still pretty awesome, as battles go. Basically Operation Desert Storm carried out with wood-and-fabric biplanes, horse cavalry, Rolls-Royce armored cars, and a supporting appearance by Lawrence of Arabia. Yeah, keep your trench warfare. If someone offers me a one-year tour in World War I, that’s where I’m serving.

      • bean says:

        Germany’s doom was sealed on the plains of Armageddon, when the heavens opened to rain fire and thunder and smite the unbeliever.

        That was also what Fisher called the coming naval battle during his tenure at the Admiralty. He was very specific about setting Jellicoe up to command. His timing was somewhat off, but Jutland played as big a part as Megiddo in the eventual victory.

    • Related question: assuming the US doesn’t show up but Germany is still incapable of winning, how long would it have taken Germany to completely break down in terms of manpower and resources? At what point would it have been physically unable to continue the war?

      • cassander says:

        We know from World War Two that there really is no such point.

        • I was under the impression that WW1 was worse from the perspective of having brought countries to their economic breaking point.

        • Eric Rall says:

          Up until fairly late in WW2, Germany put a higher priority on the standard of living on the home front compared with WW1, often to the detriment of war production and manpower mobilization. This was a deliberate political calculation by Hitler, based on his belief that Germany’s defeat in WW1 had been driven by the collapse in civilian morale that had fed the 1918 German Revolution.

      • Eric Rall says:

        If civilian morale holds out, maybe the winter of 1919-20. Germany got a lot better at rationing and managing the agricultural sector after the infamous Turnip Winter (1916-17), but was still critically short on food until the blockade was lifted in 1919. The high estimate for deaths in the winter of 1918-19 (after the armistice, but before the blockade was lifted for food) is around 100,000: a hell of a lot of deaths, but not catastrophic by WW1 body-count standards. But hunger is cumulative, so a fourth hungry winter in a row could have been catastrophic, especially if Wave 4 of the Spanish Flu (which historically hit Germany really hard in Spring 1920) were to show up on schedule.

        • Chevalier Mal Fet says:

          Ah, but without American intervention, the Spanish flu would never have spread beyond Kansas (if you buy the Kansas-origin explanation). That alone would save millions of lives and hell, on net would probably have been a good thing even allowing for the hundreds of thousands of extra dead a longer war would have produced.

    • proyas says:

      So far, no one has answered my bolded question: Does anyone know of any scholarly books or papers that quantify increases in U.S. exports of weapons, food, fuel, and other supplies to Britain and France that followed the American declaration of war against Germany in April 1917?

      I also tried researching this subject and found nothing online. It looks like this is a gap in the scholarship about the war, and it would make for a solid history Ph.D. thesis.

      • Eric Rall says:

        “The War with Germany; a statistical study” by Col. Leonard Ayres. It’s a compilation of statistics about the American contribution to WW1 from the declaration of war through the signing of the Treaty of Versailles, put together in 1919 for the US Army General Staff.

        Chapters 4-7 look like they’re probably the most relevant to your question. I don’t think it’s a direct answer, since it’s focused on official military and merchant-marine activity and doesn’t (so far as I’ve seen in skimming it) really do a before-and-after comparison of the private arms trade, but it’s got month-by-month numbers of how many men, rifles, cannons, aircraft, bullets, shells, etc got made in the US and shipped across the Atlantic in whose hulls for the full 1917-1919 span of American involvement in the war.

  16. Nabil ad Dajjal says:

    Is sex addiction real?

    I’m aware that some psychiatric disorders can cause people to compulsively have sex, like some bipolar people during a manic phase. So the idea of a pathological compulsion to masturbate or have sex sounds reasonable on its face.

    But every time I hear stories about sex addicts it trips my BS detectors hard. The typical case I hear about takes one of two forms. The first is a middle aged married man with a seemingly typical male libido and below-average faithfulness who gets caught in an affair and tearfully “confesses” that he has a totally-legit illness which forced him against his will to have sex with younger more attractive women. The second is a younger man with a seemingly typical male libido who gets caught watching porn by his hardcore Christian / feminist wife or girlfriend and promises to get therapy for his sex addiction after she threatens to break up with him.

    So what’s going on? Am I just too cynical and unempathetic and these guys are actually addicts? Is it a real condition that people cynically over-diagnose? Or is it entirely BS?

    • baconbits9 says:

      The first is a middle aged married man with a seemingly typical male libido and below-average faithfulness who gets caught in an affair and tearfully “confesses” that he has a totally-legit illness which forced him against his will to have sex with younger more attractive women.

      I don’t think this is a charitable interpretation. The popular examples of “sex addiction” are not just men who got caught cheating; they’re men who were discovered to have had dozens or more partners. Tiger Woods admitted to 120 different women over 5 years of marriage, which plausibly sounds different from a guy who had a 5-year affair with his secretary or a guy who had a one-night stand.

      • Matt M says:

        This. I would also add people who do such things repeatedly, even when very ill-advised, even when they have a hell of a lot to lose.

        In addition to Tiger Woods being a good example, Bill Clinton immediately comes to mind. I mean, the women he went after were young but not particularly attractive and the consequences of his behavior were pretty devastating – yet he never seemed particularly inclined to stop.

        Weinstein might be another example – although his behavior was predatory enough to probably involve some sort of power trip motivation as well.

        Basically, a man who takes advantage of available opportunities every once in a while in low risk situations = normal male libido. A man who regularly makes great effort to create those opportunities even in high risk situations = some sort of mental issue.

        I’d compare it to the “functional alcoholic” who might get really drunk on weekends, even to the extent of blacking out, but has never been violent or committed a crime or had it affect their job vs a real true blue alcoholic for whom drinking causes all of those sorts of negative consequences.

        Almost every male is a “functional sexoholic.” Only a certain few allow it to really destroy the lives of themselves and those around them.

      • Nabil ad Dajjal says:

        Maybe, that’s entirely possible.

        That said, if I was as famous as Tiger Woods was in his prime and didn’t particularly respect my marriage I can easily imagine myself having sex with 20 women a year. Even as a single overweight graduate student I was able to have sex with 10 women the year before I got together with my girlfriend, so if anything that’s very conservative. Maybe that makes me a sex addict too.

        I guess I’m looking for something that a regular guy wouldn’t do given the opportunity. Maybe that’s the wrong way to think about it.

        • Matt M says:

          It’s been a while, but IIRC wasn’t Tiger Woods basically employing madams to arrange his encounters with porn stars well in advance?

          “Opportunity” is a tricky word. Everyone as rich and famous as him has the “opportunity” to structure their life such that their off-work hours are devoted almost entirely to having sex with a revolving door of women.

          Few seem to actually do it that way.

          Even as a single overweight graduate student I was able to have sex with 10 women the year before I got together with my girlfriend … I guess I’m looking for something that a regular guy wouldn’t do given the opportunity. Maybe that’s the wrong way to think about it.

          It sounds as if obtaining a girlfriend changed your habits. Unless you think you aren’t a “regular guy” anymore, this would seem to be proof positive that you are categorically different from Tiger Woods. He continued this sort of behavior despite having a hell of a lot more to lose than you presumably do (no offense, of course).

          • Nabil ad Dajjal says:

            No offense taken. I’d be delusional if I thought otherwise.

            So if I understand you correctly, you’re saying that it’s not the desire which is pathological here but the inability to restrain it. A regular guy might fantasize about being Wilt Chamberlain but a sex addict is the one who doesn’t have the self-control not to act on that desire. Does that sound right?

            If so it doesn’t sound like an addiction so much as habituation. You can tell there’s a difference between e.g. a pothead who doesn’t have the impulse control to stop from lighting up every morning and a smoker who is chemically dependent on nicotine.

          • Matt M says:

            Sort of yes. I’d reiterate the concept of “functional alcoholic” and clarify that I would say such a person is not actually addicted to alcohol at all.

            To me, something being an “addiction” implies that it becomes so central to your life that you behave in very damaging and irrational ways in order to get it.

            And a bit of that is relative. I think every male on Earth has, at least once, done something at least a little bit damaging and irrational in order to get sex.

            Woods and Clinton are sex addicts because their sex-seeking behavior is so incredibly high-risk that it goes well beyond what is expected of any normal adult male. But don’t get me wrong – I fully agree with you that there are a lot of “normal” people who get caught in “normal” sexual behaviors who claim “sex addiction!” for various self-serving means.

            It ends up being a little counter-intuitive in a way. If you get 1 DUI, you’re just a scumbag who deserves severe punishment. If you get 10 DUIs, you’re probably an alcoholic who needs some form of treatment.

          • raj says:

            What exactly did he lose? Reputation?

          • Matt M says:

            Woods?

            Reputation, a huge divorce settlement, a lot of sponsorship deals I believe, and while it’s hard to prove a direct causation, that seems to be the exact moment his career went into a tailspin from which it has not yet recovered (and likely never will).

          • Nancy Lebovitz says:

            I’m not convinced that Clinton’s affair with Lewinsky was especially high-risk. It took extreme bad luck for him to get into trouble.

          • Matt M says:

            Well, it was hardly his only offense. It seems like he “got away” with a lot of high-risk sexual behavior before he actually got caught with Lewinsky.

            And man, no matter how careful and secret you think you’re being, an act that could cost you the Presidency of the United States seems like the ur-definition of “risky.”

            ETA: And at the risk of seeming crass and cruel here, the fact that Clinton mostly got involved with plain women probably says something about the nature of his behavior. Nailing porn stars in Vegas is one thing, but Clinton risked it all to diddle a random chunky intern with a cigar in between conference calls…

          • dndnrsn says:

            Concerning sex addiction, this sounds about right. If Woods had been a single guy, it would have been, holy shit, look at that guy, can he reel them in or what? Or take a gay guy who uses Grindr a lot: he might be having plenty more sex than Woods – it’s only a problem for him if he’s cheating, taking unnecessary risks, etc.

            However, concerning alcohol – I quit drinking some months ago, and while I was certainly functional I was alarmed at my inability to stay off the sauce if I had any.

          • idontknow131647093 says:

            The thing about Clinton is I don’t think he’s a sex addict (ditto Weinstein); he’s a power seeker who thinks he can get away with anything. Look, he didn’t just have sex with women, he raped at least one woman, and the majority of the “plain janes” he did while Governor/President were underlings. He got off just as much on the thrill of getting away with it.

          • noddingin says:

            To idontknow131647093 and others:

            This is a no culture war thread. You are making statements about Clinton contrary to fact. Replying as they deserve would put us over the edge into culture war.

            You’re doing major sideswiping.

    • Nancy Lebovitz says:

      I saw a story or two about men who were financially self-destructive, back when you had to pay for porn.

    • Randy M says:

      It seems like it is as real as other behavioral addictions. Hard drugs seem uniquely able to hijack brains and cause major physical pain when withdrawn, which makes sense given the uptake of neurologically affecting chemicals. This is the central case of addiction.
      Then there are behavioral addictions, which are all basically failures of self-control. Given that particularly egregious consequences are still insufficient to deter some people from certain behaviors, it’s clear that some temptations strike some people much harder than others. It’s a spectrum, and the people on the ends seem to do things no reasonable person would.
      However, considering it as an illness, like a seizure that ensues from a malfunctioning brain, offers people justification for behavior that we expect them to be able to refrain from. In some cases it may be an easy excuse that allows them to avoid making difficult stands against destructive behavior.

      Now, if you’ll excuse me I seem to have fallen prey to a bout of Akrasia here.

    • raj says:

      I think this is accurate. Also the addiction narrative provides a way for a woman to give a man a second chance while saving face (since there is a social norm to punish defectors in relationships i.e. cheaters by breaking up with them immediately, but that is costly to both parties).

    • Watchman says:

      You know sex addiction is not gender specific, right? Women have only been mentioned in this conversation as victims/those who forgive/passive sexual partners…

      • Randy M says:

        Technically if it is an addiction, isn’t the person under the compulsion the victim? I mean, isn’t that why the terminology is used? Granted it still doesn’t paint the person in a very favorable light, but better weak than wicked, I guess.

      • Nabil ad Dajjal says:

        In my experience it’s exclusively men who are called sex addicts. I have heard a lot of these stories over the years and never once with a woman.

        I’m not saying that it can’t happen. I don’t even know if this is a real thing or not, so how would I judge that? But I am saying that that’s not how it’s talked about.

        • A1987dM says:

          Yeah because the corresponding term for women is “nymphomaniac”.

        • quanta413 says:

          I vaguely remember an example in news with a woman, but she supposedly only behaved that way after a severe head injury or something.

          It was very different from what Nabil ad Dajjal is talking about, which is the sort of behavior by some men I often suspect is an excuse made up after getting caught.

        • Tarpitz says:

          I once listened to a BBC talk radio section on the subject. The guest who most sticks out in my memory was a middle-aged woman who described a condition of constant, overpowering obsessive thoughts and urges such that attempting to assuage them barely left time to sleep or eat, much less actually function as a person. Other cases were less extreme, but far closer to that than to male celebrities sleeping with 100-200 partners in a year.

    • yodelyak says:

      Not a scholar of “addiction” but my pop-sci definition looks for four things:
      1. Clinically significant harmful effects of unusual/excessive behavior
      (I think there are three sorts of effects that usually rate: job, health, and family. Naturally there’s the example of a heroin addict in withdrawal so painful it includes convulsions (that’s going to be clinically significant health effects), but you can also just point at someone who cannot sleep, and will go days un-rested, unless they’ve had 3 shots or more of alcohol (not sleeping is clinically significant), or someone who can only do any work when chain smoking, or someone who e.g. destroys their marriage or loses custody of their kids by way of compulsive behavior.)
      2. subject has really really tried, but subject has failed to quit,
      3. subject was aware of the scale of the harm (not sure if you really need this, but it’s usually how you know that when subject says they’ve “really really tried” to stop, they really really have tried–if they’ve noticed they are coughing up blood, and they still can’t quit smoking, that’s a damn good sign there is something going on other than just that they like cigarettes. Another example is someone addicted to pornography who risks getting caught whacking off at work.)
      4. Usually with ego-damaging effects. (“I’m a bad father” / “I’m a terrible human being” / “I’m lazy and worthless” / “I’m a pervert” etc.)

      The distinction that might persuade you re: addiction to porn and cheating sex is this: yes, there is a set of middle-aged men who have never sought treatment for their cheating until their wives catch them at it. You can explain the existence of this set wholly by saying “motivated lying.”
      But there is also a set of middle-aged men who seek treatment for cheating without ever getting caught by their wives, because they can tell they aren’t even enjoying themselves, and are terrified of the damage they’re going to cause once they are caught. (I’m not a clinical therapist, and my experience with this category is limited to one guy I knew in college who a couple of times drunkenly asked me for help with a “problem” he was having, because he was afraid his girlfriend/fiancée would find out about it. But outside my own experience, I think a *lot* of men and women get private help from priests/rabbis/therapists, and my prior is that if you ask a priest/rabbi/therapist “is [sex] addiction a real thing” they will answer in the language of their own faith.)

      Another distinction that might help you is that some people would rather have the reputation of having been willfully bad, but capable of deliberate reform, than accept the label “addict” which smells of helplessness.

      Which would you rather own: a dog that, if you make it really mad, will deliberately take a big gleeful dump on your pillow, or a dog that simply is uncontrollably addicted to pooping on plush surfaces its human sleeps on? I might not feel good about my choice–the latter dog can’t help its behavior–but I’d probably take the former, because I can at least try and arrange to not piss that dog off, or can make sure to give it treats enough to stay in its good graces or whatnot. Plus the former dog is probably pretty fun to be around on days it isn’t mad at you, whereas the latter dog may be consumed with guilt and misery.

  17. helloo says:

    It’s been mentioned before that, as powerful as nuclear/fusion bombs are, it is very unlikely that they would cause human extinction, and Nuclear Winter is mostly a myth.

    However, there ARE things capable of creating something similar.
    The most well-known is a large asteroid hitting the Earth, which features in numerous stories, plans, and prevention/alerting agencies.
    Another is a supervolcano eruption – but while their existence and danger are becoming better known, they never seem to have gotten the same response as asteroid impacts.

    Why?

    Though it is also very hard to predict when they will happen, unlike asteroids it is quite easy to identify where they might occur.
    Sure, one of them underlies one of the larger national parks and is the source of a lot of tourism, but this is a possible civilization-ending event.
    And unlike asteroids, there have been a handful of them in human history (and not prehistory either) which have caused widespread effects.
    Or perhaps that’s partially why? Perhaps humans just fear unknown improbabilities much more than known tragedies?

    • The Nybbler says:

      1) An asteroid is the most popular theory of how the dinosaurs went extinct. Everything to do with dinosaurs gets a +3 boost to public awareness.

      2) SF writers can come up with believable (if not credible) ways of stopping an asteroid. Not so much the Yellowstone caldera.

    • Matt M says:

      My first hunch is that there’s no Hollywood-friendly theory of how they might be neutralized/prevented.

    • Well... says:

      Has anyone written in detail about what a supervolcano eruption would be like, and how exactly it would end civilization? I could see that being pretty compelling reading, and if there’s somewhere in there where a hero could realistically eke by and survive, then you have a great story.

      • Mark V Anderson says:

        Well, Turtledove wrote the trilogy “Supervolcano,” which is about Yellowstone erupting and dramatically changing the climate. I don’t know how scientifically accurate it is, but it makes sense that it could change the climate over decades and maybe centuries, since the eruption of Tambora, not a supervolcano, is said to have caused a “year without a summer.”

    • albatross11 says:

      I suspect the nuclear winter thing also got more attention/traction because it was something we had some power over. We could make decisions that would make large-scale nuclear war[1] happen or not happen, whereas it’s not so clear what we could do about a supervolcano. We could also try to push an incoming object off its Earth-smacking trajectory, assuming it’s not too big. (If the object’s the size of the moon, I’m pretty sure we’re screwed no matter what we do.)

      [1] Which probably wouldn’t drive mankind to extinction, but would kill billions of people directly and indirectly.

      • Well... says:

        How much did Carl Sagan have to do with the nuclear winter thing becoming a thing? I kind of remember a whole section of a Cosmos episode dedicated to it.

        • I suspect the nuclear winter thing also got more attention/traction because it was something we had some power over.

          How much did Carl Sagan have to do with the nuclear winter thing becoming a thing?

          These are related. Nuclear Winter became a thing because of a massive publicity campaign that started before the relevant scientific work was out and subject to criticism. The reason for that publicity campaign, presumably, was that, whether or not nuclear war would cause nuclear winter, nuclear war was obviously a bad thing, which was a reason to persuade people it would cause nuclear winter whether or not it was true.

          • helloo says:

            Not exactly – some of the biggest proponents were scientists tasked with doing that scientific work.

            The results of the TTAPS model (of which Sagan was one of the authors) were made “mainly for public consumption. When they announced their results in 1983, it was with the explicit aim of promoting international arms control.” link

            Also, war itself is a bad thing, but this was an attempt to make it an existential threat (remember that by 1980 MAD had been a term for a while).

      • Nancy Lebovitz says:

        We probably can’t deflect something the size of the moon, but the more time you have, the better the chance of deflection. Something the size of the moon presumably won’t be good at sneaking up on us.

        This question is probably worth some numbers.

    • andrewflicker says:

      I think it’s an imagination problem. It’s trivially easy to imagine a big enough rock flying through space that it cracks Earth like an egg when it hits; it’s harder to imagine that the ordinary volcanoes we’ve seen and read about can also scale up to an extinction-level event. It also smacks of a “self-cause,” which is always harder to imagine.

    • And unlike asteroids, there have been a handful of them in human history (and not prehistory either)

      There have been destructive volcanoes, but the most recent supervolcano eruption listed in the Wikipedia article was 26,500 years ago.

  18. baconbits9 says:

    For those who were interested in my blog on macro economics and investing, I’ve posted my 2nd and 3rd installments in the past few days. Blog link here.

  19. dndnrsn says:

    RPG thread (split)

    @ quanta413

    (in reference to link and link)

    I guess I misunderstood the complaint from dndnrsn’s examples. I read the essays linked now (including the updates), and I think the author has mostly just internalized any bizarre mechanics of earlier editions and is blind to how fundamentally weird it all is if you take it as literally as the author takes daily powers.

    This isn’t about bizarre mechanics, it’s about stuff that’s “outside-in,” if that makes any sense. 3rd was a lot less bizarre than 2nd, I think. But both were attempts to model a world. 3rd did it a lot better than 2nd (its flaws were mostly that various elements of it were too complicated and the crunch splatbooks ran wild and free, which made it more complicated) but in both a lot of stuff clearly started from an attempt to simulate something to some degree. They didn’t necessarily do a good job, but from the perspective of a hypothetical inhabitant of this magical world, what to us is a rule would be an attempt to quantify and simulate something real. While saving throws were a horrible system (for the un-2nd-initiated, everybody had six or whatever different saving throws, pertaining to different overly-specific categories, and they went down as you got better, and you had to roll high; the chart was in a different section of the book from the classes) they represented that as you got better as an adventurer, you toughened up, got better at getting out of the way of things, etc. The system was bad and stupid but it was an attempt to simulate something.

    In comparison, a lot of the abilities in 4th ed start with mechanics and are an attempt to provide a certain tactical combat experience. If you have the Nifty Strike daily, which attacks with Dex against Reflex and does triple damage on a hit, and its description is something about how you get past their armour with your agility and cunning – what does this represent in-game; what does it mean to your character that they have this thing that they can do only once a day and the description of which gives no in-game-world explanation of why? The reason the wizard has spell slots is that it’s an attempt to model a series of fantasy novels in which spells actually took up real estate in your head or something like that; Vancian magic models something that within the game world is real. In the game world in 4th, there’s these invisible walls in decision-making, and it means that decisions must be made by the player without the character’s imagined decision-making playing a role.

    I think that a major goal in roleplaying games is to make your decisions in-character, especially in high-stakes situations where the metal meets the meat. It’s part of the things that pen and paper gaming can do that other media can’t, so I think that if you’re going to go to the trouble of doing pen and paper gaming (other media are much more convenient, for starters) it’s something you should work towards. It’s harder to make those decisions in high-stakes situations like combat when you the player are making decisions based on elements of the rules that aren’t matched by elements of the game world. I think that if everyone sorta behaves as though the game world is a real place, it helps with that, in addition to various other benefits to the game.

    3rd edition, especially as the line went on, became more and more focused on heavy-crunch tactical combat. There’s various reasons for why this happened but that’s a story for another time. The complexity of 3rd made this a time-consuming affair; it wasn’t unusual to have one or two hour combats (whereas if you are using a more streamlined system, a one-hour combat would be reserved for really big combats or combats with a lot of powers flying around).

    4th seems to have been designed around trying to deliver the best possible version of that tactical combat battle-mat experience, and to be far more devoted to design of balanced set-piece encounters than 3rd. I don’t think the juice was worth the squeeze; tightening up the tactical combat wasn’t worth the loss of verisimilitude/immersion and ever-more-increased focus on combat (put another way, decreased ability to do other things that previous D&D editions had done). Any speed that was gained was then offset by the habit of making encounters more difficult by, among other things, slapping more HP on enemies. Various concessions to it being an RPG instead of a board game weaken the tactical combat, too – tactical combat board games tend to play much faster, for starters.

    I have a hard time imagining playing 3e or 4e in a manner where every constraint on me as a player made sense to the character in their world. The world is underspecified (as you say, D&D simulates very high fantasy; that basically means it simulates itself and things it inspired), and you can’t assume even simple details of our world transfer over. When I play and read about people’s games though, it’s obvious that they (A) don’t play the game in such a way that there isn’t some significant dissociation and (B) they paper over abstractions in ways that don’t make sense in world if characters were to react to the in-game results of the abstraction.

    Could you describe what you mean in A? The biggest knock on 2nd in this regard that I can think of is the utterly arbitrary stuff, dissociated not even for reasons of game balance but because Gary Gygax wrote it that way for no apparent reason. Dual-classing versus multi-classing comes to mind. The biggest knock on 3rd in this regard is probably that it started trying to quantify difficulty and started to create the norm that the GM’s primary job was to deliver finely-tuned combat encounters meant to be a challenge but fundamentally winnable (which I think alters the world in a way that makes in-character decision-making harder: if you know that this thing you just heard about is something that you are guaranteed to have a shot at, you know that based on nothing in-world, but rather because you know you are playing a game where that’s the goal).

    With regard to B, abstractions and dissociation are different things. Abstraction is measuring money by Wealth modifier as opposed to counting dollars. Someone inside the imaginary game world could still tell you that someone with Wealth +5 is halfway between +0 and +10; he’s not going to know the modifier they have, but he knows how rich they are or aren’t. Maybe he knows the actual dollar amount and in simulating this imaginary world we don’t bother to give an actual dollar amount. A dissociated wealth system would, I don’t know, let someone of a certain level of wealth buy one expensive thing a day, one less-expensive thing an hour, and any number of cheap things. Why can’t she buy two expensive things a day? Because the rules say she can’t, because the Rich Person class has to be balanced against the other classes and against the enemies.

    • Nabil ad Dajjal says:

      This is a good explanation and I agree with it.

      One thing that I will quibble about though is the idea that Vancian magic isn’t dissociated in the same way that 4e style per day or per encounter abilities are. It makes no sense that fighters have Dragon Ball Z style super-attacks which they can only use once a day, but it also makes no sense that wizards can’t sit down and re-memorize a spell they’ve just cast. The explanation in the game is post-hoc rather than logically emerging from the fiction.

      The reason we still tolerate it is that spellcasting needs some sort of limit in order to be balanced, and the other options aren’t great. I like the idea of ritual magic but magic item crafting in 3.X proved that time, gold, and experience costs don’t adequately limit spellcasters. Mana or spell points are a viable option but fundamentally change how magic is supposed to work in the setting. Casting based on skill rolls just wasn’t viable in 3.X and I haven’t seen anyone make a system which takes advantage of 5e’s bounded accuracy to try to make it work.

      While 4e is definitely guilty of proliferating dissociated mechanics, there’s still a ways to go before they’re totally excised from D&D.

      • dndnrsn says:

        I haven’t read the books, so I don’t know how faithful “yeah you can only do this once per 8 hours of sleep” (it’s always been inconsistent whether it’s once per day plus you need a rest, or whether you can just sleep 8 hours, blow all your spells in an hour, sleep another 8 hours like a magical cat) is to Jack Vance.

        The reason that it teeters on the edge of dissociation in D&D is probably because it’s never really explained in one way. Does casting one fireball tire a 5th level mage out mentally until they get some sleep, and when they level up, it’s like someone going from lifting a given weight five times to ten times? I’m pretty sure the explanation varies.

        • Nornagest says:

          I have read Vance’s books, but they aren’t very clear about that sort of thing either. As a writer, he seemed a lot more interested in filling the page with baroque details than explaining exactly how his worlds worked mechanically. And only maybe a dozen spells get cast in them across all six Dying Earth stories.

      • Nornagest says:

        Basing casting on a scarce resource tied to roleplaying seems to work pretty well. A couple of examples:

        Casting in Call of Cthulhu burns Sanity, which is a resource that everyone starts with a lot of, but which is tough to regain and tends to erode just from running around and shooting at Things That Should Not Be. To make matters worse, it also erodes from reading the books (usually bound in human skin and coming with titles like De Vermis Mysteriis) that your spells came from in the first place. So if you don’t want your caster to spend all his time talking to the voices in his head — faster, I mean, than all Call of Cthulhu characters naturally do anyway — you really want him to pick his shots.

        2E Storyteller games have a lot of different systems for spellcasting or its equivalent depending on what you’re playing, but often you need to burn Willpower to get off one of your bigger abilities. You don’t usually have a lot of Willpower, and the easiest way to get it back in that system is by giving in to one of your character’s vices in a gameplay-significant way. If it doesn’t affect play, it doesn’t count. So you can only get your characters so stressed out before they need to e.g. self-medicate or vent at an NPC they should be making friends with.

        • Nabil ad Dajjal says:

          Call of Cthulhu’s system sounds interesting but wouldn’t really work for D&D or most of the fantasy genre for that matter. If I was running a game set in a setting like Conan’s Hyperborea it would be fantastic though.

          The storyteller system, on the other hand, is a good example of dissociation between the “fluff” and “crunch.” How is taking drugs or yelling at some guy related to casting a spell? Would the character be able to explain why his magic is fueled by his smack habit but his buddy only gains magical power by yelling at strangers? It’s a purely metagame concept which doesn’t correspond to anything in the game world.

          • Nornagest says:

            The basic idea doesn’t seem dissociated to me at all. Willpower doesn’t work like “mana” in some systems, for one thing — it represents focus and dedication, you need it for especially difficult spellcasting, but you can also use it for other things, like resisting mind control or gaining a temporary bonus on a roll (kinda like “hero points” in recent versions of D&D), and sometimes you’re even forced to. In character, if you’re out of Willpower points you’re feeling too stressed, distracted, or mentally exhausted to do whatever you were trying to do, and taking drugs or yelling at some guy would be how your character unwinds.

            The “needs to be significant to gameplay” caveat is somewhat dissociated, but I think it’s reasonable to prevent metagame abuse of the mechanic.

          • beleester says:

            Willpower represents your focus, drive, ability to give 110%, and so on. If you’re fed up and stressed out, then you can’t push yourself as hard. Even if you’ve got a really good reason to push yourself, like a giant demon bearing down on you, you can only dig deep and find hidden reserves of strength so many times before you’re just completely worn out.

            Why does one guy de-stress by yelling at people, while another de-stresses by doing an unhealthy amount of drugs? You should be able to figure that one out for yourself.

          • brmic says:

            Alternatively assume the characters power is drawn from their Id (in a Freudian sense) and the more they deny their baser instincts the less they’re able to draw upon the power working through them.

          • Plumber says:

            “Call of Cthulhu’s system sounds interesting but wouldn’t really work for D&D or most of the fantasy genre for that matter. If I was running a game set in a setting like Conan’s Hyperborea it would be fantastic though…”

            Conan’s Hyperborea setting is what D&D is supposed to be!

            Let me quote from Dungeons & Dragons, 
            Book 1: 
            Men & Magic

            “…These rules are strictly fantasy. Those wargamers who lack imagination, those who don’t care for Burroughs’ Martian adventures where John Carter is groping through black pits, who feel no thrill upon reading Howard’s Conan saga, who do not enjoy the de Camp & Pratt fantasies or Fritz Leiber’s Fafhrd and the Gray Mouser pitting their swords against evil sorceries will not be likely to find Dungeons & Dragons to their taste. But those whose imaginations know no bounds will find that these rules are the answer to their prayers. With this last bit of advice we invite you to read on and enjoy a “world” where the fantastic is fact and magic really works! 
            E. Gary Gygax 
            Tactical Studies Rules Editor 
            1 November 1973 
            Lake Geneva, Wisconsin”

            When D&D doesn’t do Conan well, D&D isn’t doing itself well (and, while I’ve never played them, 3e/3.5/4e and Pathfinder look like Marvel’s Avengers-ish abominations!)

            Give me Appendix N, and down with “the rest of the fantasy genre”!

            I loved playing old D&D, but I walked away from D&D for decades with the publication of 1985’s Unearthed Arcana (an ABOMINATION!), and I found Call of Cthulhu the easiest FRPG to GM (“Keeper”) besides the 1977 “bluebook” Basic rules Dungeons & Dragons, as it’s very intuitive.

            Close to the CoC rules was 1981’s Stormbringer, which I’ve found to be the truest to the Swords & Sorcery setting of any game rules yet.

            The magic system was based on summoning and attempting to control Demons and Elementals.

            It was BADASS! 

            I play the 5e version of D & D now and….

            ….it’s kinda lame (better than no game though), and the kids (twenty and thirty-somethings) that I play with don’t know the old tales, instead they talk of Dragonlance and video games that I’m ignorant of, and they know Tolkien, but none of them have read any Poul Anderson. 

            It’s LAME! 

            I hate change.

          • Nornagest says:

            D&D, even OD&D, isn’t just trying to be Hyperborea, though. I’ve heard it said that “the fighter’s playing Conan, the thief’s playing the Gray Mouser, the magic-user’s playing Turjan of Miir”, which is a decent way of putting it — no idea who the cleric or the druid or the bard are supposed to be, but never mind.

            Anyway, to do that, you need to build a world and a set of rules that Conan and the Mouser and Turjan can all coexist in without eclipsing each other, and that demands certain compromises with the source material, which of course was written to highlight its protagonists’ skills and adventuring styles. Making those compromises while preserving each class’s feel and playability isn’t an easy problem, and every edition of D&D has its own points of friction between them — granted, some worse than others.

          • Plumber says:

            “….I’ve heard it said that “the fighter’s playing Conan, the thief’s playing the Gray Mouser, the magic-user’s playing Turjan of Miir”, which is a decent way of putting it — no idea who the cleric or the bard are supposed to be, but never mind that…”

            @Nornagest,

            My best guess is that the cleric is Peter Cushing as Van Helsing, the druid is Christopher Lee as Lord Summerisle, but the bard?

            Yeah, I got nothin’

          • Nornagest says:

            Van Helsing gets you Turn Undead, wisdom as primary stat, and about the right level of fighting skill, but everything else about the class is still inexplicable. Why’s there a prohibition on edged weapons, even if you’re playing a priest of Armok, God of Blood? How come every village priest can invoke miracles? Why do miracles use the Vancian model? And that’s not even getting into the nitty-gritty of the spell list, which is deeply weird in places.

            I haven’t seen either version of The Wicker Man, so I’ll have to take your word on Lord Summerisle.

          • andrewflicker says:

            A lot of good bards are playing a version of Robin Hood- a dashing rogue with noble blood and a penchant for singing and inspiration.

          • Le Maistre Chat says:

            @Nornagest: Besides turning undead and casting spells, I assume Gygax let a Cleric do anything a monastic knight unit from medieval wargaming could do (wear armor, use shields, etc).
            The spell list is a general idea of what Saints could do: Cure Wounds, Purify Food & Water, Protection From Evil, Detect Evil & Magic, let their Light shine, Bless, Speak With Animals, Cure Disease, Remove heathen Curses, Neutralize Poison, Dispel Evil, Commune… turns out it was a much shorter list in OD&D. A couple come from Exodus: Insect Plague, Sticks to Snakes…

            It’s pretty weird for the same character to be a Knight of the Hospital, St. Nicholas Thaumaturgos, and Van Helsing, but you can see what he was thinking.

          • Nornagest says:

            I really wanted Pathfinder‘s Oracle class to be a squishier divine caster. It’d be great to play a prophet or miracle-worker who wasn’t also a combination Templar and vampire hunter, especially if they were suffering from their connection to the divine as much as they were benefiting from it. There’s some real roleplaying meat there, and it’s not territory that’s been covered well in any previous edition. Instead I got a spontaneous-casting Cleric with some fluff abilities. Oh well.

            Pretty typical of Pathfinder’s variant casters, though; I had similar issues with their Witch and Inquisitor.

            (Also, apparently the snake episode in Exodus made such an impression that Gygax used it twice: sticks to snakes and the snake staff.)

          • engleberg says:

            The average party has a thief in front, some fighters, a cleric, a magician. The average US Army fire team has a point man, some riflemen, a medic, and a radio guy. D and D came from war gaming, then Gygax listened to his customers and said sure, you can be Conan, Elric, the Grey Mouser, whoever.

      • Randy M says:

        I think D&D should leave per-day abilities behind for all classes. They won’t, but it would make it easier to balance encounters and adventures for various classes and levels–or unbalance them in a conscious way, if you prefer.

        On a related topic, I think they should change how they balance fighters and wizards. Casters get stronger exponentially as they level, while fighters get stronger linearly. Different editions adjust the formula slightly, maybe adding a +2 or x2 to the fighter to attempt to compensate, or starting wizards off with a -2, but eventually it becomes evident that the casters have different power functions.

        This is true even if you use wandering monsters and so on. The things high- (and often mid-) level wizards can do are just worlds apart from what fighters can do. This might emulate some genre fiction, but not all of it, and there are plenty of stories to tell about magic that is subtler–and if you want the world-shaking spells, make them rituals requiring meaningful sacrifice, long preparation, or teamwork, like they often are in genre fiction.

        But, as I’ve said elsewhere, there’s something to the complaint that classes are too similar-feeling if we take the 4E approach. What if instead of making the power level of wizards P=Level^2, it made it P=combat rounds^2? In other words, let the wizards play differently, but take the progression from target dummy to demigod and fit it into the standard encounter time? Say, wizards spend five rounds gathering power, not idle but having minimal impact on HP totals (say they use their standard action to meditate and a minor action to cast a cantrip), then let out a huge burst that does, say, 50 damage to a foe. Meanwhile, the Fighter was doing 10 HP damage each round, drawing attention, etc.

        What do you think the reaction to this paradigm would be? Would martial characters be seen as nothing more than bodyguards? Would it look like wizards were dramatically nerfed? Or is it rpg blasphemy to approach design from standpoint of making the math fun rather than making it feel like controlling Gandalf & Conan?
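        A minimal sketch of the arithmetic in that proposal (Python). The 5-round charge, 50-damage burst, and 10-damage-per-round fighter are the hypothetical numbers from the comment; the 2-damage-per-round cantrip chip while charging is my own illustrative assumption, not from any actual edition:

        ```python
        # Cumulative damage under the proposed "charge-up wizard" model:
        # 5 rounds gathering power (only cantrip chip damage), then a
        # 50-damage burst on round 6, repeating the cycle.
        # The fighter deals a flat 10 damage every round.
        # All numbers are hypotheticals for illustration only.

        def fighter_damage(rounds: int) -> int:
            return 10 * rounds

        def wizard_damage(rounds: int) -> int:
            total = 0
            for r in range(1, rounds + 1):
                # Burst every 6th round, otherwise a minor cantrip.
                total += 50 if r % 6 == 0 else 2
            return total

        for r in (3, 6, 12):
            print(r, fighter_damage(r), wizard_damage(r))
        ```

        Under these assumed numbers, the two break roughly even at the end of each six-round cycle (60 vs. 60 by round 6) but the fighter is far ahead mid-cycle, which is one concrete way of seeing the “bodyguard” worry: in any fight shorter than a full cycle, the wizard barely contributes.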

        • quanta413 says:

          I think equalizing the rough overall value and scaling of classes is the way to go. It’s that or have all fighters transform into warlords or wizards at a certain level.

          Scaling that hard with combat rounds is probably too much. But maybe wizards could be much more easily interruptible or could require the setup you talk about to make big effects. Maybe rather than scaling with combat rounds, wizards act less often but with more broad battlefield level effects.

          If you heroically manage to equalize the classes in combat, I think that then reveals how much more interesting the casters tend to be out of combat.

          4e made some late attempts to deal with this by introducing martial practices, which were mechanically like rituals but were a way of giving martial types explicit rules for doing things like making disguises, taming a mount, scouting a large area, or forging papers. It wasn’t a perfect solution, but I think it was a good rough direction to go in.

          I think the skill system on its own doesn’t have quite the amount of depth needed.

        • Plumber says:

          The more I learn of it, the more 3.x sounds like “The caster edition”, and “Not for me”.

          FWLIW, if I’m invited to play five sessions WotC 5e D&D is my choice, but if I’m invited to play fifty sessions TSR D&D is my choice (assuming 2e AD&D plays like 1e AD&D, I already know B/X and RC play close enough to oD&D).

          If spell-casting is powerful, make it dangerous to the caster and/or what the caster loves.

          Elric had a very powerful magic sword….

          ….which had a habit of consuming the lives and souls of Elric’s loved ones.

          Make magic have a price, or make it NPC-only (the Pendragon model), or make everyone a caster (the Ars Magica model).

          In the old Stormbringer game, instead of casting spells you summon demons and elementals to make magic. For more powerful magic you have to summon more powerful beings, and they need to be persuaded!

          Couldn’t demons just decide to eat you up yum-yum or rend your psyche and soul instead?

          Damn straight! 

          What part of “secrets man was not meant to know” didn’t you understand?!

          Practicing magic is a dangerous act, otherwise every Tom, Rick, and witch Hazel would do it!

          Magic as tool box “Levels to move the world” is LAME!

          Magic should be more like fire, specifically hellfire!

          Yes you may boil your tea (and incinerate your enemies!), but you run the risk of dooming yourself.

          Now that’s genre!

          5e D&D seems pretty balanced at low levels, and every class is too OP at high levels, which makes me want to start at 1st level again regardless.

          I have no interest in the “build mini-game”, nor in playing a superpowered “god wizard”, I just want to role-play a Fafhrd, Gray Mouser, Robin Hood, or Sinbad-like “guy with sword”, and having to even think of “power levels” sounds like a chore.

          Other than having so many tables, why would I want to play 3.x D&D?

          In the 0e/1e D&D games I played, Magic-Users were weaker than Fighters until you got to rarely-played levels, but that was known early. While in theory Magic-Users became the most powerful characters (it was even suggested in the rules:

          1974 – Dungeons & Dragons Book 1: Men & Magic,
          (Page 6)
          “Magic-Users: Top level magic-users are perhaps the most powerful characters in the game, but it is a long hard road to the top, and to begin with they are very weak, so survival is often the question, unless fighters protect the low-level magical types until they have worked up….”

          IIRC, in practice Mages were so weak that no one I knew played them long. We only did it when we rolled badly or (briefly) wanted a challenge, so I never saw any Mages past second level that weren’t NPCs at my usual tables. I can very much remember how in the ’70s and early ’80s it was hard to get anyone to play a “Magic User” (even when the Intelligence roll was higher than their Strength), simply because at low levels they had the least they could do (and the lowest hit points).

          Most everyone played “Fighting-Men” to start, but those few who played for “the long game” found that “Magic Users” vastly overpowered other classes at high levels. Thematically and for “world building” it made sense: magicians should be rare, and “the great and powerful Wizard” should be more fearsome than the “mighty Warrior”. But as a game? Having separate classes each doing their unique thing is more fun, and always hanging in the back while another PC does everything isn’t. Anyway, it was such a long slog before a Magic User PC became less weak than the other classes that if they survived to become powerful it seemed like a just reward in old D&D.

          I’d describe it as Robin Hood with Friar Tuck turning into Doctor Strange at the end.

          What is right, true, beautiful, and proper on the other hand is after Hristomilo casts his wicked spell, he meets Fafhrd and the Gray Mouser’s blades (Yes, my ideas of how D&D should be pretty much begin and end with  Leiber’s works)!

          Old TSR D&D also had different classes gain levels at different rates for the same XP, and “homebrewing” that is easier than balancing each level. But ideally (to communicate to players what the “power level” is), classes of the same level should deliver not “power” per se but roughly equal spotlight time, and ideally the same should be true of the “power points” used in HERO and GURPS.

          • Randy M says:

            You might like Warhammer. Any magic spell has a chance to go very, very badly.

          • dndnrsn says:

            2nd ed in theory wasn’t very different from 1st ed; in practice, the game’s “culture” had changed by then. It’s as much about the adventures getting published as the actual rules. I started playing in the late 90s, with 2nd ed, and had no awareness of the kind of play that’s associated (at least in idealized imagination) with the 70s and early 80s. What people call “old school” now. The rules for random encounters and enemy morale and all that jazz were still in the books, but published adventures were written as though they didn’t happen, which leads me to believe that most people were ignoring them.

          • Plumber says:

            “You might like Warhammer. Any magic spell has a chance to go very, very badly”

            Randy M,

            Sounds badass!

            Any “magic system” in which spellcasters may wind up like René Belloq and the Nazis did in Raiders of the Lost Ark meets with my approval! 

        • dndnrsn says:

          Wizards are more powerful than fighters even early on, depending on how broken sleep is in your favoured edition. Of course, they’re very squishy. So it’s up to the fighter to defend them.

          If I ran another fantasy game right this second, I’d run 5th and swipe the spells from d20 CoC. No sanity rules, but spells cost their user ability damage (not drain; it does regenerate). No magic-using characters. No divine or healing magic; the only magic is eldritch abomination stuff. Give out some extra skill choices to make up for the “smart guy” classes no longer being available.

          • Plumber says:

            “….If I ran another fantasy game right this second, I’d run 5th and swipe the spells from d20 CoC…”

            @dndnrsn,

            That sounds AWESOME!

            D&D with the CoC magic system?

            I like it!

            An idea that’s well worth stealing, er, being inspired by.

          • dndnrsn says:

            The d20 CoC magic system was an attempt to square the circle and make a classless, casting-these-messes-you-up magic system for the d20 system. It actually worked pretty well, far better than the attempt to do a really simple class system so CoC could be played with classes and levels.

          • Nornagest says:

            I liked d20 CoC’s class system better than d20 Modern’s. It was clunky and shoehorned and didn’t play well, but at least it tried to be a system and serve a purpose — as best I can tell, d20 Modern had classes because d20 was expected to have classes and for no other reason. Everything about it would have been better if they’d just dropped the whole pretense.

          • dndnrsn says:

            d20 Modern‘s classes were also pretty unbalanced relative to each other, I seem to remember.

    • Randy M says:

      The system was bad and stupid but it was an attempt to simulate something.

      I think our difference is that I prefer mechanical elegance to clunky simulation.
      Not that it has to be either/or, of course.
      Anyway, it’s easy enough to make justifications for non-simulation abilities, like a barbarian rage being a daily power. It’s just that the justification might differ from day to day, or encounter to encounter within a day.

      In the game world in 4th, there’s these invisible walls in decision-making, and it means that decisions must be made by the player without the character’s imagined decision-making playing a role.

      I think that a major goal in roleplaying games is to make your decisions in-character, especially in high-stakes situations where the metal meets the meat.

So, I can see where you are coming from, I guess. In a world where there is no justification for abilities (that is given to you; as said, it’s easy enough to justify) perhaps it’s harder to role-play the MacGyver-esque problem solver of previous editions because some things might be jarring. “I want to yell Come-and-get-it, but I can’t actually make them come and get it because I already did that this morning! What’s even going on?!”
      But if you look at it like “I’m playing a character with a certain set of defined powers with some limitations that may be negotiable but won’t always click with the world. But how I use them, and why I use them, are up to me!”, you are still role-playing. In a more interesting way to me than the excluded example, maybe because I find exploring the psychology of ostensibly human characters more interesting than exploring the physics of a made up world that’s probably not consistent or reliable beyond what makes sense to the DM anyway.

      And if the short-cuts make the game easier to run, they may well empower the DM to improvise and prepare on the fly, creating more freedom to role-play. I’m not really sure that’s the case here, but it is a point against detailed simulationism.

      Any speed that was gained was then offset by the habit of making encounters more difficult by, among other things, slapping more HP on enemies.

      Monster design in 4E started out pretty poor and got pretty good (offer not valid for all levels). Also, it helps here, just like with prior editions, to consider the morale of the opponents–they don’t have to fight to zero.

Speaking of which, here’s a disassociated mechanism for you–in 4E, you can decide for any attack that reduces an opponent to 0 HP if you want it to be fatal or not. Or rather, it simulates fiction rather than any consistent world. I think most DMs would house rule out certain abilities and rule that, no, using your Thousand Cuts of Hell cantrip is not going to just leave that minion pining for the fjords.

      Abstraction is measuring money by Wealth modifier as opposed to counting dollars

I don’t think you want to keep sticking with this example. What does it mean when Mr. Wealth +3 goes to buy passage on a ship? Does his wealth modifier decrease? If not, this seems disassociated. Do you make them roll? How is that a simulation of anything, unless they have a joint checking account with a wastrel?

      End of the day, I have come around to the “Classes are too similar; why do martial dudes act like casters?” argument, partially for variety reasons, but also because it could ease people into the RP (though I haven’t seen it be a problem in practice, at least not consciously). I’d make plenty of changes to 4E if I were to design my perfect system. But as is it works pretty well for me.

      • dndnrsn says:

        I think our difference is that I prefer mechanical elegance to clunky simulation. Not that it has to be either/or, of course.

        I think that 3rd was too far in the direction of making things complicated. There were too many modifiers to keep track of, and too many choices in character creation and advancement. This slowed things down and rewarded optimization too much, in my opinion (I think optimization is often bad because it funnels characters into a few archetypes and kills serendipity, and because it often entails rewarding That Guy for learning the rules really well).

Anyway, it’s easy enough to make justifications for non-simulation abilities, like a barbarian rage being on a daily power. It’s just that the justification might differ from day to day or encounter to encounter within a day.

It doesn’t run into the problem with the stuff where, yeah, you can only do that once a day (why can you only rage once a day? I don’t know, why wouldn’t you deadlift three times a day or whatever?). It runs into problems with things where there are obvious arguments for it working differently. Example: PHB p121, “Sand in the Eyes” – why can only rogues throw sand in someone’s eyes, why can they only do it once an encounter, where does the sand come from? It’s not an attempt to model throwing sand in someone’s eyes, but rather it’s a power the rogue gets that lets them, once an encounter, blind an enemy for a turn upon hitting them, and it’s flavoured as throwing sand in their eyes. It doesn’t depend on sand being present or on the enemy having eyes – that’s in the title and the flavour text, but it’s not present in the mechanics.

        The alternative is to have rules for resolving what happens when a PC throws something in someone’s eyes, and then when the PC says “when I get up off the floor pretending to surrender, I’m bringing my hands up, and I chuck the sand I just picked up in the guard’s face” they make a DEX-based touch attack against the guard’s flat-footed AC if they get surprise and blah blah blah. 3rd would likely make it too complicated. Earlier editions might not have rules, forcing the GM to make a ruling on the spot (which some people like) or it might have rules that were just weird. 5th would probably have a nice simple system but might fall prey to the math problems I hear lurk in that version of the rules.

So, I can see where you are coming from, I guess. In a world where there is no justification for abilities (that is given to you; as said, it’s easy enough to justify) perhaps it’s harder to role-play the MacGyver-esque problem solver of previous editions because some things might be jarring. “I want to yell Come-and-get-it, but I can’t actually make them come and get it because I already did that this morning! What’s even going on?!”

The problem with supplying justifications is that, as one of the linked essays notes, it amounts to coming up with house rules. Beyond resulting in inconsistency from table to table, it also throws the supposedly-important balance out of whack, because you’re changing the mechanically-defined powers by adding restrictions or limitations to them (eg, if we say that the throw-sand power can only be used when there’s sand and when the opponent has unprotected eyes, then we’ve made it considerably weaker).

        But if you look at it like “I’m playing a character with a certain set of defined powers with some limitations that may be negotiable but won’t always click with the world. But how I use them, and why I use them, are up to me!”, you are still role-playing. In a more interesting way to me than the excluded example, maybe because I find exploring the psychology of ostensibly human characters more interesting than exploring the physics of a made up world that’s probably not consistent or reliable beyond what makes sense to the DM anyway.

        In that case you’re hanging roleplaying on a tactical combat system that kinda disrupts the suspension of disbelief – for starters, your characters’ tactics in combat will change over the course of an encounter or day based on the powers they have available to them at any given moment; you can’t say “you know what, my dude is an idiot and so his primary strategy is to stand there and shout for enemies to come fight him one by one and he’ll do that over and over unless there’s a pressing reason not to” because he can only do that once an encounter or once a day.

        If the GM is good, the world will be somewhat consistent and reliable, because inconsistency and unreliability mess with decision-making both at the character and player levels.

        And if the short-cuts make the game easier to run, they may well empower the DM to improvise and prepare on the fly, creating more freedom to role-play. I’m not really sure that’s the case here, but it is a point against detailed simulationism.

        A focus on encounters that are intended to be individually designed – instead of a focus on encounters that are frequently randomly determined – runs counter to improvisation. There were a lot of problems with 3rd, especially late in its life, but 4th didn’t really solve them.

        • Randy M says:

          Example: PHB p121, “Sand in the Eyes” – why can only rogues throw sand in someone’s eyes, why can they only do it once an encounter, where does the sand come from? It’s not an attempt to model throwing sand in someone’s eyes, …

This is true, but it really stems from defined abilities carrying an implicit (or sometimes explicit) rule that the move can’t be attempted without that ability.
          4E does allow stunts and has rules for them, though they aren’t simulationist rules, but rules for what level of effect is appropriate.
          But that doesn’t make your point less true that “Sand in the Eyes” power is not attempting to model throwing sand but attempting to give the rogue something interesting to do that is within a martial striker ability set and then apply rogue paint over it.

          you can’t say “you know what, my dude is an idiot and so his primary strategy is to stand there and shout for enemies to come fight him one by one and he’ll do that over and over unless there’s a pressing reason not to” because he can only do that once an encounter or once a day.

Sure you can. But he’ll only tend to find idiots that fall for it once per day. 😉 (And by the way, I was referring to a specific power that generated this kind of complaint a lot, which actually works to drag ~all~ nearby enemies toward the fighter so he can keep them within OA range.)

          In that case you’re hanging roleplaying on a tactical combat system that kinda disrupts the suspension of disbelief – for starters, your characters’ tactics in combat will change over the course of an encounter or day based on the powers they have available to them at any given moment

Different strokes, I guess. It makes sense to me that the characters will adjust their tactics based on many different factors, some of which will be explicitly accounted for, and some of which will not. The world as a place where every encounter can be met with the same tactics… I don’t think that is the intent with other editions of D&D either, hopefully, but “Trip, Power attack, rinse, repeat” doesn’t strike me as having great verisimilitude either.

          The problem with supplying justifications is that, as one of the linked essay notes, it amounts to coming up with house rules.

          No, that’s not what I meant. The rule is, “Come and Get It” works once per encounter (I was wrong about 1/day, it isn’t so disassociated as I thought). It’s the justification that is inconsistent.

          A focus on encounters that are intended to be individually designed – instead of a focus on encounters that are frequently randomly determined – runs counter to improvisation.

True, but 4E encounters don’t need to be designed. Pick a number of monsters of equal level, and the PCs will have an interesting fight that they will probably win. Bam. Monsters work out of the book, no need for cross-reference or considering if your low CR monster has damage resistance or some save or die effect (as I hear 5E is prone to).

Random encounters don’t work as well because 4E characters are harder to kill (so the resource depletion is less likely to matter) and fights take longer (unless you make intelligent morale considerations for creatures), and designed fights, where you consider the enemy tactics and terrain effects ahead of time, are just so much more fun than random ones. But I don’t think a random fight in 4E is any less fun than in other editions that don’t have the interesting but generally easy-to-run (barring, as always, epic levels) tactical aspect.

          a lot of problems with 3rd, especially late in its life, but 4th didn’t really solve them.

Now that’s a different thread, and one I’m not really experienced enough to want to wade into. But 4E did attempt, and to a degree succeeded in, addressing some complaints with 3rd, such as caster supremacy, the 5-minute workday, the need to plan characters over many levels, the difference between optimized and non-optimized characters (which it later basically undid), the need to build complex NPCs to oppose the players at later levels, the need to cross-reference several books for monster powers, etc.

        • dndnrsn says:

          This is true, but it really stems from defined abilities being implicit (or sometimes explicit) rules that they are not able to be attempted without that ability.
          4E does allow stunts and has rules for them, though they aren’t simulationist rules, but rules for what level of effect is appropriate.

I think it results in some things being too rare and some things being too common. Everyone swinging off a chandelier once per combat is just odd, especially when you can only stab an enemy in the back of the leg once a day.

Different strokes, I guess. It makes sense to me that the characters will adjust their tactics based on many different factors, some of which will be explicitly accounted for, and some of which will not. The world as a place where every encounter can be met with the same tactics… I don’t think that is the intent with other editions of D&D either, hopefully, but “Trip, Power attack, rinse, repeat” doesn’t strike me as having great verisimilitude either.

          I’ve acknowledged 3rd had problems: the excess of options led to some optimal paths that overwhelmed other tactical factors.

          No, that’s not what I meant. The rule is, “Come and Get It” works once per encounter (I was wrong about 1/day, it isn’t so disassociated as I thought). It’s the justification that is inconsistent.

          For some things, it’s inconsistent but wouldn’t have a mechanical impact. For others it would.

          • Randy M says:

I think it results in some things being too rare and some things being too common. Everyone swinging off a chandelier once per combat is just odd, especially when you can only stab an enemy in the back of the leg once a day.

I think the abstraction necessary to make complex physics workable in human brains, quickly and explicitly, will make some things too rare and others too common. In real life, does swinging on a chandelier convey an advantage? Well, probably never, really, but let’s go with sand in the eyes. It depends on many things–the texture of the sand, the height of the combatants, the wind, whether they are expecting it, what both are armed with (does the defender have a shield? Can the sand-thrower re-arm quickly?), the actions of other combatants, and so on. You can’t model all these, and an attempt to make the option relevant only sometimes through simulationist rules is very tricky and prone to falling over into over- or under-powered. And you can’t make a ruling for every possible clever move.
You could wait for the player to decide they want to try the move, then give a judgement based on modeling all the factors–but that’s quite difficult, and probably cashes out into letting it work now and then. Whoops, you’ve just accidentally created a disassociated martial encounter power!

            I’ve acknowledged 3rd had problems: the excess of options led to some optimal paths that overwhelmed other tactical factors.

            Eh, the point wasn’t to make a dig at 3rd, but to say that players needing to come up with new tactics increases, rather than decreases, verisimilitude, even if you can’t explicate all the factors why they should, similar to my sand in the eyes stunting example.

          • Civilis says:

You could wait for the player to decide they want to try the move, then give a judgement based on modeling all the factors–but that’s quite difficult, and probably cashes out into letting it work now and then. Whoops, you’ve just accidentally created a disassociated martial encounter power!

I think some of the issue comes about because some of the factors are subjective, often those dealing with the players, the narrative, and the game, and this is especially the case with martial powers, where what is being done isn’t normally impossible. No matter how accurately he chants the magic words and waves his hands, “Lucky” Lightfoot the Thie… Rogue isn’t going to cast a fireball. On the other hand, all the players in the game have likely seen the Conan films and a couple of Jackie Chan movies and think that fighting like that is something they could do.

            There’s nothing that says Sir Xavier Pendable, Paladin of Light physically can’t throw sand in someone’s eyes just like “Lucky” Lightfoot can. However, our esteemed Paladin isn’t likely to be the sort that thinks of fighting dirty, even if his player is. And even if he did just watch “Lucky” throw sand, he’s not going to have the training to spot when a good opening to potentially blind someone is, know how to do it properly, be able to quickly think of what is on hand to throw, and how to hide the tells so that what he’s doing isn’t obvious. You could spend a page detailing all the die rolls necessary to give him that one in a million chance on something that would be statistically highly unlikely to help the party to do, but you can’t do that for every stunt. For the Rogue, most of those die-rolls are effectively gimmes, so the stunt becomes statistically useful, with the “how often you can use this” taking the place of the rarity of the opening the enemies grant you for the sake of narratives. (In the very desperate case where having the Paladin throw sand is one of the best options for the character despite having little chance of success, you can make up the rolls on the fly or just substitute ‘it works if you roll a natural 20’; see the story of the dwarf in full plate failing a climb check and falling off a cliff*).

            One difference between 4e and other D&D editions is 4e tends to treat the monsters and the characters with different rule sets, whereas in other editions powers are balanced because the enemy is likely built from the same toolbox. To think of how fair a power is, don’t just think about how realistic it is, think what the player would say or do if the enemy was able to use the ability against the player at will. It’s perfectly possible that a fighter could get caught up in the moment and get too focused on a prominent enemy, especially one that’s just daring him to attack; it’s not going to happen more than once a fight.

            *Note: stolen from the internet; did not happen to me.
            Context: the party is traversing a mountain pass in a blizzard. DM has everyone roll dice to see if they fall. Dwarf in full plate fails the check.
            DM [to Dwarf]: A gust of wind has caused you to fall off the cliff. What do you do?
            Dwarf: I flap my arms real hard.
            DM: Seriously?
            Dwarf: It’s not like I got better options.
            DM: Roll.
            Dice: [Natural 20]
            Party: …
            DM: Roll again.
            Dice: [Natural 20]
            Party: !?
            DM: With an astonished look on their faces, the party beholds a miraculous sight! A dwarf in full plate is rising up from the cliff in defiance of natural law by flapping his arms insanely hard.

          • dndnrsn says:

            @Randy M

You could wait for the player to decide they want to try the move, then give a judgement based on modeling all the factors–but that’s quite difficult, and probably cashes out into letting it work now and then. Whoops, you’ve just accidentally created a disassociated martial encounter power!

            Eh, honestly, with the kind of D&D I favour – retro clones or old versions – if they’re in a situation where they can throw sand in someone’s eyes, I’d probably let them take a DEX check to give the enemy a -4 penalty (the same penalty as one gets for a foe who’s invisible but you know their location) until they can clear their eyes out. You can’t always throw sand in someone’s eyes – it would probably require the drop on an enemy, or at least a non-hostile enemy who’s not already guarding themself. If the next session everyone is trying to huck sand in everyone else’s face, it’s too powerful and I’d need to adjust. Having it as a defined power creates the same problem: it’s something that’s going to happen every fight, or every day, as the players check off their powers.
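That kind of ad hoc ruling is easy to sketch out mechanically. A minimal version, with hypothetical numbers — roll-under DEX as in many retroclones, and the -4 matching the invisible-foe penalty mentioned above:

```python
def sand_in_eyes(dex_score, roll):
    """Roll-under DEX check (a common retroclone idiom): the attempt
    succeeds if the d20 roll is at or under the character's DEX score."""
    return roll <= dex_score

def attack_bonus(base_bonus, blinded):
    """A foe with sand in their eyes attacks at -4 -- the same penalty
    as fighting an invisible opponent whose location you know."""
    return base_bonus + (-4 if blinded else 0)

# A DEX 14 rogue rolls an 11: the sand hits, and the guard
# (normally +2 to hit) now swings at -2 until he clears his eyes.
```

The point of the sketch is that the whole ruling fits in two lines, so the GM can adjudicate it on the spot and tune the penalty later if everyone starts hucking sand.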

            Eh, the point wasn’t to make a dig at 3rd, but to say that players needing to come up with new tactics increases, rather than decreases, verisimilitude, even if you can’t explicate all the factors why they should, similar to my sand in the eyes stunting example.

The more you define powers, the more that tactics will be based around learning your powers, optimizing your character build, etc. I think this was a problem in 3rd (an optimized character played by someone with zero tactical acumen is going to perform better than an unoptimized character played by someone who is good at tactics) and it’s a problem in 4th. The answer to “when do you throw sand in someone’s eyes?” should be “when it’s possible and will help you” not “as often as you can use the power.”

            @Civilis

            One difference between 4e and other D&D editions is 4e tends to treat the monsters and the characters with different rule sets, whereas in other editions powers are balanced because the enemy is likely built from the same toolbox.

            This isn’t the case. Up to 3rd, enemies didn’t follow all the same rules as players: if you crack the 2nd ed MM, it won’t tell you a kuo-toa’s DEX, whereas the 3rd edition MM does.

          • quanta413 says:

            I’ve acknowledged 3rd had problems: the excess of options led to some optimal paths that overwhelmed other tactical factors.

I don’t think that’s really the main problem with 3e. 3e is horribly broken with just the core books.

            It’s more like a complete lack of good paths for many classes. A fighter is very weak in core. Too many big monsters physically kick his ass, and he has to sink all of his feats into a single trick (like the spiked chain tripper) to get an effect comparable to what a wizard of comparable level gets by basically waking up in the morning and casting a spell.

            And the fighter’s vaunted endurance isn’t really much of a factor because his hp is still finite.

It’s not like a spellcaster has to twink to be strong either. Even though the wizard is really strong and gets the most love/hate, he’s hard to play because you’re drowning in options of hideous complexity but have fewer spells per day than a sorcerer and no back-up plan like a druid. It’s usually easier to play a druid or sorcerer or cleric.

          • Plumber says:

While I really don’t like how “high magic” 3e/3.5/4e/5e/PF is (at least in TSR D&D you didn’t race up the levels), a main beef of mine with 3e/3.5/PF is that there’s no equivalent “training wheels class” that’s as easy to play as a 5e WD&D Champion Fighter or any TD&D Fighter, and I’m especially irritated by having to choose “Feats”.

            When I’ve asked for suggestions at a 3.5 devoted forum I was deluged with suggestions of spell-casters and non-PHB classes, which I don’t want to play!

            The 3.5 Fighter forces me to choose Feats, and the Barbarian to deal with “Rage”.

            No thanks!

            AFAICT, a 3rd level Ranger in 3.5 WD&D is simpler to play than a 5e WD&D Ranger, so that’s a plus for 3.5, but both the 3.5 and the 5e Ranger scale up the complexity considerably after 3rd level. 

            I agree that 5e is more complex than many adherents argue it is, but having the Champion Fighter sub-class and, most importantly, having Feats be optional makes it easier for me to keep playing after 3rd level than 3.5.

Going both Ranger and Rogue could extend upwards the levels of how long I could play 3.5 without options fatigue and cognitive overload, if I could understand the 3.5 rules better, but at my age patience is scarce.

I do agree that a Paladin 2/Ranger 2/Rogue 2 in 3.5 would probably be simpler to play (if one just accepts not Sneak Attacking ever) than a Battlemaster Fighter 6 or most of the 5e classes (the 5e Paladins can be surprisingly simple if one only “Smites” and doesn’t cast spells, but AFAICT the 3.5 Paladins don’t have that option).

But the thing is the class that is simple to play in 5e (Champion Fighter) is obviously simple, but may still contribute to the party’s success, at least to 10th level, and if you want high complexity and options in 5e the Wizard is an obvious choice (and as obvious for me to stay away from).

A simple “training wheels class” for 3.5 is not obvious to me; I just know that with Feats the Fighter is not the “go to” class for me that it is with every other version of D&D.

            3.5 lacks and needs “training wheels” (which I don’t see in Pathfinder either).

            Oh, and as a side rant, most “fixes” that I see for the Champion in the 5e devoted Forum’s don’t work for me, and they just look like “Battlemaster 2.0’s”, and I really hope that a 6e doesn’t go in that direction.

If I could learn 3.5/PF I could easily double the tables available, but when I asked “Why don’t I just play Redgar?” at a 3.5-devoted forum the majority there said “no”, and many even suggested playing a 3.5 Warlock instead, as if my wanting to play “a character like the Errol Flynn version of Robin Hood” somehow suggests playing Faust!

            If you want to play a bow and sword wielding PC, then 5e just looks easier to learn to me.

            If you want to play a spell-caster on the other claw, then both 3.5 and 5e look too complex for me, and I suggest playing 0e/1e or B/X instead.

Most of my FRPG experience was in the 1980’s (many games, but AD&D, Call of Cthulhu, D&D, and Traveller were probably the ones I played the most).

For those who grew up with 3.5 and/or PF I imagine it’s easier (just as 1e AD&D is relatively easy for me despite being pretty wonky), but for someone (me) who played a lot of RPG’s from 1978 to 1992, but none between 1993 and 2015, PF just doesn’t seem that easy.

Mostly I see a “sink or swim” attitude now, with lots of “maybe you’re just not smart enough” comments at forums, and co-players of the “pick up” 5e games don’t seem to understand that the game wasn’t always this complex.

With 5e I can suss out for myself a “training wheels” class and just play, and while I have encountered some obnoxious “how dare you reduce party effectiveness by not utilizing maximum optimization” types, other players told me “nah, it’s close enough”, and I could just play, but with 3.5/PF I’m repeatedly told that I “can’t just play” and “have to study and make a build first”, which is completely backwards from how I remember things: one would play first, have fun, and then decide to study the rules further.

            “Homework first, then you get to play” doesn’t look like a way to grow the hobby to me.

Despite its share of gatekeeping “How dare you not optimize!” players, 5e is a welcoming rules system for new players (where it fails most is in encouraging new DM’s).

            There is supposed to be a revision of Pathfinder soon, and I hope they have a training wheels class in the new core rules, or better yet I hope that the tables playing 3e/3.5/PF switch.

          • quanta413 says:

            @Plumber

            I agree that 3e (and I’d add 4e too) are mostly horribly complicated to play. Especially for someone who doesn’t want to spend hours building a character and ensuring they know how to use the character to perform to spec. I’ve never played 5e.

            For 3e and 4e, I often asked my players (who were often new) what sort of character they wanted, then I’d build it for them with their input on what they wanted in world terms. Then I’d explain any specific things they needed to know. I rolled pretty free form too.

            “Homework first, then you get to play” doesn’t look like a way to grow the hobby to me.

            I agree with this completely. The level of mechanical detail in a lot of RPGs is too high to be fun for most people. That sort of thing is better handled by a computer.

            The advantage of a tabletop RPG over a computer is the ability of the DM to adapt to player actions and respond to basically anything on the fly.

            For the sake of actually getting to play tabletop D&D, I wish there was more focus in the rules for handling this well and in an engaging manner and less focus on combat builds and tactics.

And if you think the homework is bad for a player… christ, DM’ing is even worse (which I see you kind of touch on). I almost always DM because no one else wants to. It can be fun, but the homework makes player homework look breezy.

          • Plumber says:

            “Eh, honestly, with the kind of D&D I favour – retro clones or old versions….”

            @dndnrsn,

            Would you be my DM?

            Please!

            (Assuming that you mean pre-3e)

          • Plumber says:

“….if you think the homework is bad for a player… christ, DM’ing is even worse (which I see you kind of touch on). I almost always DM because no one else wants to. It can be fun, but the homework makes player homework look breezy….”

            @quanta413,

            Before I played D&D, I DM’d it using the sublime 48 pages of the 1977 “bluebook” Dungeons & Dragons Basic rules (I was ten years old).

            It was just more inviting than most later versions (the ’81, ’83, and ’91 “Basic” rules come close though).

Here’s a PDF peek.

          • dndnrsn says:

            What I’m using is, I suppose, most like OD&D or B/X. I cut my teeth on 2nd ed, played most of 3rd/3.5th, tried 4th briefly, and mostly abandoned D&D for Call of Cthulhu for more than a decade. I ended up deciding to run a retroclone and was positively surprised; I thought I didn’t like D&D when what I really didn’t like was excessive rules complexity, really long combats, and level scaling to the PC (as opposed to the geographic area in-game, like dungeon level).

            What I’d like most would be 5th without the bounded accuracy, which weirds me out a little, and with the same attitude towards game balance as I have. There’s a set-it-and-forget-it fighter build in 5th that’s way simpler than building a fighter in 3rd ed.

        • Matt M says:

          where does the sand come from?

          The pocket, obviously.

      • dndnrsn says:

        Monster design in 4E started out pretty poor and got pretty good (offer not valid for all levels). Also, it helps here, just like with prior editions, to consider the morale of the opponents–they don’t have to fight to zero.

        Some of the previous editions (I can’t remember when it came in; maybe 1st ed AD&D?) had defined morale rules. They mostly got vestigial and then disappeared entirely (did 2nd ed still have them? I can’t recall them getting used by the time I started gaming), probably because being able to determine by fiat whether enemies run or fight makes it easier for the GM to guide the story.

I also gather that 4th ed received a lot of patching. This isn’t a judgment on the game so much as on the design process: it says a lot that so much of the out-of-the-box game had to be patched. Even if you like 4th, it’s hard to deny that Wizards really, really fumbled the design, playtesting, rollout, etc.

        Speaking of which, here’s a disassociated mechanism for you–in 4E, you can decide for any attack that reduces an opponent to 0 HP if you want it to be fatal or not. Or rather, it simulates fiction rather than any consistent world. I think most DMs would house rule out certain abilities and rule that, no, using your Thousand Cuts of Hell cantrip is not going to just leave that minion pining for the fjords.

        Which creates the problem that if it’s supposed to be a mechanically determined ability, suddenly it’s been weakened mechanically: it can’t be used to leave an enemy alive. Knockout rules are a constant problem, though, and I don’t blame them for trying something.

        I don’t think you want to keep sticking with this example. What does it mean when Mr. Wealth +3 goes to buy passage on a ship? Does his wealth modifier decrease? If not, this seems disassociated. Do you make them roll? How is that a simulation of anything, unless they have a joint checking account with a wastrel?

        OK, d20 Modern has it like this: his wealth modifier decreases if the ticket’s purchase DC is higher than his wealth modifier, or if it’s higher than a given threshold; the character has to roll unless he can take 10 or whatever. It’s an abstraction of the money available to him: credit, cash, etc. I can see how it’s a bit weird that one week you can’t afford something and the next week you can, and how the explanations are post-hoc (“last week your boiler broke and you had to pay to fix it”) – yeah, I can see how that feels a bit dissociated, and I could see how someone could define it as that.

        However, the reason it’s there is to abstract it away: otherwise a game set in a believable modern world would require a lot of constant annoying little transactions. Before you go out and fight vampires in the secret war against the supernatural, better make sure you gas up and pay your rent! Money is important, but it’s important in what it enables you to do; tracking it would create huge hassles.
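
        A rough sketch of that Wealth check as described here – the d20 roll and the fixed threshold of 15 are my assumptions for illustration, not the official d20 Modern text:

```python
import random

def wealth_check(wealth_bonus, purchase_dc, threshold=15):
    """Abstract purchase check, per the description above (numbers are
    illustrative assumptions, not the official d20 Modern rules text).
    Returns (succeeded, new_wealth_bonus)."""
    roll = random.randint(1, 20) + wealth_bonus
    success = roll >= purchase_dc
    # Wealth only erodes on big purchases: a DC above your current
    # bonus, or above the fixed threshold. Small buys are free noise.
    if success and (purchase_dc > wealth_bonus or purchase_dc > threshold):
        wealth_bonus -= 1
    return success, wealth_bonus
```

        The point of the erosion rule is exactly the abstraction argued for above: routine expenses vanish into the noise, and only purchases big enough to matter leave a mark on the character.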

        The dissociation in 4th ed is in the core of the game: combat. It’s not “this isn’t important enough that tracking it improves the game, so you can do it once a day; let’s shut up and move on to fighting vampires” – it’s a combat system built around dissociated powers, it’s scaling difficulty by slapping extra HP on enemies, etc.

        • Randy M says:

          I also gather that 4th ed received a lot of patching. This isn’t a judgment on the game, but on the game design, it says a lot that so much of the out-of-the-box game had to be patched. Even if you like 4th, it’s hard to deny that Wizards really, really fumbled the design, play testing, rollout, etc.

          I think it comes from a variety of factors. More and more complicated effects, more concern with game breaking abilities, more accessibility in the general audience to on-line content made errata a possibility, and more openness in design leading to admitting mistakes.
          Don’t forget, 3rd edition had a 3.5 update that changed things.

          Which creates the problem that if it’s supposed to be a mechanically determined ability, suddenly it’s been weakened mechanically

          Yeah, but in practice it’s an itty-bitty problem, since if your move is going to kill a dude that matters, you are probably in control of the situation. And it introduces a good moral tension into the game. But it’s a judgement call.

        • Nornagest says:

          2E did have morale rules by the book, but I’ve never seen them used in play.

        • dndnrsn says:

          @Randy M

          I think it comes from a variety of factors. More and more complicated effects, more concern with game breaking abilities, more accessibility in the general audience to on-line content made errata a possibility, and more openness in design leading to admitting mistakes.

          The story I’ve heard is that 3rd got more play testing, and for 4th they didn’t just hand the rules to people, they had them play through situations intended to test the rules systems.

    • arlie says:

      Yes. 4th edition combat reminds me somewhat of a collectible card game. It’s all about when to use your one big move.

      It’s also notable that it feels like game play is identical for all classes – each has the same pattern of abilities (one big move available daily; a smaller big move available once per fight; minor abilities….). Whereas in prior versions, the physical classes had endurance, and the magical classes had only burst power, but so much *more* power in that one burst.

      I find 4th edition great for pickup games, mostly focussed on combat. But for campaigns, I acquired Pathfinder, once enough time had passed that it was unlikely that the average potential player would own the earlier editions of AD&D.

      My personal favorite remains what I call 1.5 – AD&D 1 with Unearthed Arcana and perhaps some other add-ons.

      I haven’t played 5. I’m more of an ex D&D player than an active one, and those I do play with (rarely) generally own the earlier books.

    • Le Maistre Chat says:

      (for the un-2nd initiated, everybody had six or whatever different saving throws, pertaining to different overly-specific categories, and they went down as you got better, and you had to roll high; the chart was in a different section of the book from the classes)

      “Why does my character sheet have a field for Save vs Death Ray? I thought this was going to be a medieval fantasy setting. Are there going to be mad scientists with death rays in the dungeon? I hope the genre shift wasn’t supposed to be a surprise…”

    • quanta413 says:

      When I said bizarre there, I meant in an outside-in way. I was no longer misunderstanding after reading the essay.

      What I meant by (A) is stuff like HP.

      What is your character reacting to in-world if you decide he should rest because HP is low? Various editions have described HP as capacity to take injuries, or skill, or gumption, or luck (adding more and more flavortext to the abstraction with each edition). But none of those make any sense as in-world explanations unless you admit that D&D’s reality is very strange. HP rarely affects your character’s functioning in the world until it drops to 0. Whether you are at 1 HP or full HP, your character’s available actions are essentially unaffected, so how does it have anything to do with injuries or gumption as we would understand them? If it’s luck, why does boosting Constitution increase HP? If it’s skill, why does my HP deplete over the course of a battle? If it’s a combination, which?

      Your fighter should essentially be unaware of their HP until they are disabled or dying unless they have an HP counter in their head or there is some spell that tells him his hp value. In the entire world, people should behave approximately as if they have two states- on or off. And they should have only a very rough idea of when they’re going to cross from one state to the other.

      I’ve never seen a player behave as if HP wasn’t something like stamina or injury even though if they were roleplaying without metagame knowledge, they should behave approximately the same at 1 HP as they do at full (Exceptions for 4e where they can be bloodied).

      Now you could say that your character knows when he got hit and he keeps a careful tally, but what in world knowledge lets a higher level fighter know he has more HP than a lower level one or than an equal level wizard? How would he know he needs a cure moderate wounds instead of a cure light wounds?

      The whole idea doesn’t map well if you inspect it a little, but people still tend to have their characters drink potions or rest at appropriate game times and just paper over it as a vague abstraction related to injuries or stamina or luck or the phases of the moon in world. And this doesn’t faze players even slightly.

      Well, it fazes a few obsessive people. I did find some people discussing how maybe a less dissociated system could be made (although they didn’t use those words). But by and large the whole issue is totally ignored.

      I don’t feel that daily powers require me to use more metaknowledge to create in world behavior than I already did when I had to determine when to drink a potion. Or determine that if I wanted my character to be better at picking locks or deciphering arcane scrolls in the world, then the fastest route to that in world time is probably to go stab owlbears.

      Daily powers are just adding to the amount of metaknowledge I have to use. But I’m already using so much metaknowledge in any D&D that I don’t find the complaints at all convincing.

      I could go on mechanic by mechanic, because I hardly think HP is alone in not mapping well to the world (if you assume the world is even slightly like ours). Maybe I’m missing some really genius explanation of how HP fits into the world that isn’t in any of my RPG books, but if it’s so important that HP really fit into the world, why isn’t that explanation there?

      • dndnrsn says:

        HP is a weak point and always has been. It is abstracted to the point of bordering on dissociation, and it’s inconsistent: if it represents your ability to take punishment short of lethal damage (and that’s what your character gains as they gain experience), yeah, why do spells and potions that knit flesh and bone restore that?

        However, it’s a weak point that’s very hard to fix. Location-based damage slows things down and increases the chance that PCs get killed by lucky hits (as does more realism in general). There are systems where you take wounds that give you penalties (instead of someone at 1/10 HP being as spry as someone at 10/10) but you have to keep track of it, and it often means that if your PC is hurt you get to sit around doing nothing for weeks or months (realistic, but D&D isn’t realistic).

        The d20 version of Star Wars split your HP into one that never went up, that was your actual physical state, and one that represented your ability to roll with punches, etc, which went up. Mooks didn’t have the latter. I think criticals went directly to the former. It was supposed to make combat more “cinematic” (the not-really-health HP restored very quickly) but in practice it made combat more lethal for PCs, because suddenly enemy criticals weren’t just doing double damage but were going straight to their real health, which never increased, and was low (I can’t remember if it started as being equal to CON, or if it was equal to 1st level HP, or what).

        So, yeah, HP is bad, but everybody who’s tried to fix it has introduced new problems. It’s bad because it doesn’t make sense, for the reasons you describe.

        The same stuff is true about XP. It doesn’t make sense that you can stab owl bears to get better at picking locks, and there are games where they increase your skills in more realistic ways. Unfortunately, these systems are limited to more “realistic” games: in Call of Cthulhu you can increase your skills, but not your HP, not your magic points, etc.

        In both cases, these things are kludges that seem to be necessary evils. Attempts to do them in other ways have failed repeatedly (fantasy heartbreakers are usually an attempt to fix one of these, or Vancian magic, or whatever). If these things are annoying and kinda gamey, why introduce more game elements that don’t make sense in the same fashion, when they were working in a way that did make sense? Yeah, sure, HP and XP and levels and all that stuff strains credulity a bit, but no better way for D&D has been found. Whereas, nobody’s complaint about 3rd edition was that it didn’t give rogues magic pocket sand.

        • Randy M says:

          Whereas, nobody’s complaint about 3rd edition was that it didn’t give rogues magic pocket sand.

          What people complained about 3rd edition was that everything a rogue did well, a caster did better, and a fighter did nothing well. Giving the rogue magic pocket sand and the Fighter Come and Get It! did, in fact, fix this complaint, albeit not in a way universally appreciated.

          • dndnrsn says:

            I disagree with this. The wizard replaces the rogue, if the wizard gets good intel on what she should prep, gets a nap, etc. In practice, you only use knock for magic doors. Wizards don’t get enough spell slots to pack both combat and utility load outs.

            If fighters did nothing well, why did parties without combat-types tend to get smooshed? Half the fun of being the fighter or whatever is to remind the rest of the party that it’s your ass in between them and the orcs. Being a team player is cool.

          • quanta413 says:

            Maybe tanking works in 1e or 2e, but my feeling about 3e is that if you’re trying to tank in core, you’re playing risky. Too many chances to get instagibbed by spells or crits. Tanky classes tend to have more hp but not necessarily better saves.

            Melee types don’t really have much of a way to prevent enemies from beating down on squishies if you’re not fighting in tight corridors. Even if you are, there are monsters that climb ceilings or tunnel or teleport or what have you. If the DM plays the monsters smart, tanks should be ignored.

          • Randy M says:

            I disagree with this.

            You disagree that the complaint was made? Or you disagree that it was valid? You said nobody ever complained about Rogues not having pocket sand. Not sure what level of literalness you want to make a stand on, but the designers letting casters walk all over other classes shtick and not giving non-casters the same range of interesting and effective options was indeed complained about by more than no one.

          • dndnrsn says:

            @Randy M

            I disagree that the wizard replaces the rogue, or at least, the wizard can’t simultaneously replace the rogue and the fighter. The rogue can pick locks forever, the fighter can hit stuff forever; a wizard with no relevant spells left is worse than useless. This gets compounded by wandering monsters and random encounters and all the other things that prevent players from taking strategic control.

            People complained about it, sure. Going back to 2nd ed, I remember people complaining the wizard replaced the rogue, the fighter was useless. Yet one thing I definitely don’t remember is people playing all-wizard parties. Weird thing is, I also remember people complaining that clerics are no fun to play; it’s not as though clerics don’t have magic powers, some of which are pretty OP, right?

            Then, with third, the character optimization people could mathematically prove that wizards were the best, or whatever. Their proofs tended to revolve around situations where the wizards could adjust their spell load out perfectly to the task, and where wizards could fire off everything they had without the risk of some goblin stabbing them with a stick on the way out of the dungeon.

            I don’t think it’s about actual class balance, but about the fact that classes are different to play, and in every group of 4 people you aren’t guaranteed one person who wants to play one sort of character. This is a problem with an overly-restrictive class system like D&D has, and has increased with time as things become “class abilities” with the corollary that the other classes either can’t do those things, or figure they can’t – if the rogue is the only one with “move silently” then everyone else is gonna figure they can’t sneak, etc (the solution I’ve found is to specify thief abilities in the game I’m running as being above and beyond the norm – anyone can try to climb something, but thieves can climb things ordinary people can’t; anyone can try to be sneaky, but thieves are really good at it).

          • Randy M says:

            Weird thing is, I also remember people complaining that clerics are no fun to play; it’s not as though clerics don’t have magic powers, some of which are pretty OP, right?

            And, strangely enough, in 4E clerics no longer have to use a standard action to heal people and, tackling the problem from a different angle, no one needs to play a cleric in order to have a healer.

            I’m not trying to argue that you should like 4E, just that you are wrong that its changes were not attempts to address issues that some people raised. Maybe only a relative few, disproportionately amplified by internet message boards versus the silent majority, but nonetheless, it isn’t disassociating things for no reason. Arguments that it’s bad to let Fighters have spell-like powers because of disassociated mechanisms but HP is a necessary evil are special pleading, which is fine, since you don’t have to justify your preferences, but they aren’t really convincing otherwise.

          • dndnrsn says:

            I’m not trying to argue that you should like 4E, just that you are wrong that its changes were not attempts to address issues that some people raised.

            It’s not that I’m saying they weren’t attempts to fix things. It’s that I think the attempts to fix problems created problems worse than the original problems – there was an old lady who swallowed a fly, etc – and those original problems could have been addressed by other means. Or, they tried to fix problems with 3rd, and didn’t fix them – does combat run faster in 4th than 3rd? Not so as I’ve noticed, any time I’ve played 4th. I’m not fond of either because having time to go to the washroom and grab a soda between every turn you get in combat kinda sucks, and that’s been my experience in every game of 3rd and 4th I’ve played.

            Arguments that it’s bad to let Fighters have spell-like powers because of disassociated mechanisms but HP is a necessary evil are special pleading, which is fine, since you don’t have to justify your preferences, but they aren’t really convincing otherwise.

            HP kinda sucks, and it might just be path dependency, but lots of other games have tried alternative means of keeping track of damage. If 6th ed just uses a Star Wars-like “wounds/vitality” system or whatever, I’d celebrate, because that would take a bit of the weirdness out. It would change the game less than giving everyone special powers changed the game. I don’t see how “we may be stuck with this one thing that lacks verisimilitude, but let’s not add more stuff that breaks verisimilitude even worse” is special pleading.

            EDIT: I gave up D&D for a decade, with the exception of one stab at 4th and one at Pathfinder, and HP was part of the reason I did this, as was the XP/level system. With “old school” flavoured play I’ve found that those things can be overlooked because the thing as a whole is worth it – the juice is worth the squeeze. 4th added a ton of squeeze and, as far as I’m concerned, released less juice.

        • quanta413 says:

          Yeah, sure, HP and XP and levels and all that stuff strains credulity a bit, but no better way for D&D has been found.

          But the “for D&D” thing is largely circular. D&D plays weirdly for weird historical reasons, and people expect it to keep playing the same way across editions. So we’re stuck with dissociated mechanics for historical reasons, not because there aren’t really better options. I don’t buy that the mechanical space is very explored.

          Wounds are not more fiddly than the modifiers I have to add up in D&D. You could have wounds create minor penalties that heal quickly rather than large ones.

          Similarly, leveling up skills instead of a generalized level doesn’t have to lead to a more complicated system in total. You just have to be willing to get rid of some complications somewhere else. I recommend the fiddly modifiers go out the door first. All these do is punish people who don’t like arithmetic and legalism.

          If these things are annoying and kinda gamey, why introduce more game elements that don’t make sense in the same fashion, when they were working in a way that did make sense?

          I’m not annoyed by these things. I’m bummed that people seem so blind to how fundamentally gamey the entire system is that they’ll refuse mechanical changes because those changes make some new part feel more dissociated to them, even if they significantly fix other problems that I find far more significant.

          The fighter will forever suck next to spellcasters if we need D&D’s world to look like 3e and earlier.

          D&D is a poor choice of RPG if you want to play a not-dissociated game. It’s also a poor choice for a storytelling style. It’s good as its own murder-hobo tactical madness, but it could be so much greater if it either embraced that goal more fully or realized that you need to take building systems for noncombat scenarios more seriously if you want that sort of story to arise more naturally.

          • dndnrsn says:

            But the “for D&D” thing is largely circular. D&D plays weirdly for weird historical reasons, and people expect it to keep playing the same way across editions. So we’re stuck with dissociated mechanics for historical reasons, not because there aren’t really better options. I don’t buy that the mechanical space is very explored.

            There are a ton of games that aren’t D&D. They’re not as popular, not by far, but the people who are really into D&D know about them. I’ve seen and used a lot of different systems to determine what happens when people get hit. I don’t think any of them would work with D&D. Maybe the one in the Star Wars game.

            Wounds are not more fiddly than the modifiers I have to add up in D&D. You could have wounds create minor penalties that heal quickly rather than large ones.

            It’s not that they’re more fiddly, but that it’s harder to keep it in your awareness than just a number. The problem with modifiers is more in the time it takes to do the math.

            Similarly, leveling up skills instead of a generalized level doesn’t have to lead to a more complicated system in total. You just have to be willing to get rid of some complications somewhere else. I recommend the fiddly modifiers go out the door first. All these do is punish people who don’t like arithmetic and legalism.

            It doesn’t have to make a more complicated system, but how does it handle getting harder to kill? CoC has a simple way to raise skills: you check all skills you use in an adventure or session, and at the end you roll against that skill. If you fail, you gain 1d6 or 1d10 points in the skill. So, the higher your skill, the more likely you are to earn an improvement roll, but the less likely each roll is to succeed. But in CoC, you never gain HP, which is based on your stats: the average person probably has 12 HP. It’s hard to gain POW, which gives you magic points. XP and level are also really motivating for players, and that’s maybe worth the mild dissociation.
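
            A sketch of that improvement mechanic – the d10 die and the 99-point cap are assumptions on my part, since editions differ:

```python
import random

def improvement_roll(skill):
    """CoC-style end-of-session improvement check, per the description
    above: roll d100 against the skill; rolling ABOVE it means you
    learned something, gaining 1d10 points. Die size and the 99 cap
    are assumptions; editions vary."""
    if random.randint(1, 100) > skill:
        return min(skill + random.randint(1, 10), 99)
    return skill
```

            The self-limiting curve is the nice part: a novice improves almost every session, while a master almost never does.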

            EDIT: Who’s to say murderhoboing doesn’t produce stories? It produces stories, but only in retrospect. It’s a limited palette, granted, but I think you can have a story worth telling in a game with a solid degree of murderhoboing.

          • quanta413 says:

            There are a ton of games that aren’t D&D. They’re not as popular, not by far, but the people who are really into D&D know about them. I’ve seen and used a lot of different systems to determine what happens when people get hit. I don’t think any of them would work with D&D. Maybe the one in the Star Wars game.

            Sure, I’ve played some of them. But most of them weren’t trying to shoot for D&D tropes. Although maybe one of them can serve as the mechanical basis if you then adjust the numbers and scaling to fit D&D.

            It’s not that they’re more fiddly, but that it’s harder to keep it in your awareness than just a number. The problem with modifiers is more in the time it takes to do the math.

            You could make a fixed HP-like system where you can directly translate the number into penalties, or you could have a system with the possibility of wounds in specific places that you’d definitely need to track on paper. I don’t think the second makes sense for D&D but the first could work.
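
            The first option might look something like this – the thresholds and penalty sizes are purely illustrative:

```python
def wound_penalty(hp, max_hp):
    """Translate remaining HP directly into a roll penalty, per the
    idea above. The quarter-based thresholds and penalty values are
    illustrative assumptions, not any published system."""
    frac = hp / max_hp
    if frac > 0.75:
        return 0   # barely scratched
    if frac > 0.5:
        return -1  # hurting
    if frac > 0.25:
        return -2  # badly wounded
    return -4      # barely standing
```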

            It doesn’t have to make a more complicated system, but how does it handle getting harder to kill?

            This is a fair complaint. People have mentioned GURPS above, where you pack on armor to up your effective tankiness, but yeah, scaling toughness would work very differently.

            Who’s to say murderhoboing doesn’t produce stories? It produces stories, but only in retrospect. It’s a limited palette, granted, but I think you can have a story worth telling in a game with a solid degree of murderhoboing.

            Sure. But murderhoboing doesn’t describe a lot of the original source material or match up with many settings. That’s what I mean by D&D basically being self referential.

          • Plumber says:

            “…murderhoboing doesn’t describe a lot of the original source material or match up with many settings. That’s what I mean by D&D basically being self referential….”

            quanta413,

            Only if you’re using LAME original source material and settings, but if you dig deep into the AWESOME literary roots of Dungeons & Dragons that won’t be a problem. 

            Forget those ponces Aragorn and Frodo (Sam’s cool though).

            I suggest 1933’s Tower of the Elephant by Robert E Howard, 1934’s The Seven Geases by Clark Ashton Smith, and 1939’s The Jewels in the Forest/Two Sought Adventure by Fritz Leiber because AWESOME!!!

            If you find Swords Against Death by Fritz Leiber get it, because it’s THE MOST D&D BOOK EVER!!!

            Then go forth and hobo and murder with pride (don’t forget to carouse at the Silver Eel tavern)!

          • dndnrsn says:

            @quanta413

            You could make a fixed HP-like system where you can directly translate the number into penalties, or you could have a system with the possibility of wounds in specific places that you’d definitely need to track on paper. I don’t think the second makes sense for D&D but the first could work.

            Something I saw an OSR blogger propose was a rule that you can’t do more damage than your current HP count. This may have been in a version of the game where all weapons did 1d6, though. In any case, it meant that taking hits affected your ability to deal out hits.
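
            That house rule is nearly a one-liner:

```python
def damage_dealt(raw_roll, current_hp):
    """The OSR house rule described above: a character can't deal
    more damage than their current HP, so being worn down blunts
    your blows as well as your staying power."""
    return min(raw_roll, current_hp)
```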

            Sure. But murderhoboing doesn’t describe a lot of the original source material or match up with many settings. That’s what I mean by D&D basically being self referential.

            Besides the argument that some of the original source material does describe murderhoboing – murderhoboing is a natural response to an accurate presentation of a lot of the worlds of the source material. A character in a book will do what the author wants them to do – PCs will follow incentives. I bet that heroics are more common in war movies than actual war, even if the war movie is ostensibly as realistic as a movie can get. For a D&D game to have PCs who do a lot of heroic, daring stuff, you have to reward that (and insulate them from the consequences of taking greater risks). In contrast, in what I’m running right now, the PCs are miserable cowards who fight from ambush, cast sleep and cut throats, always send a hireling first, and in the battles that have gone against them, there’s been a serious risk that one or two would book it and abandon the party.

          • quanta413 says:

            @Plumber and dndnrsn

            That’s fair. I haven’t read the murder hobo-y source material. The most pulpy stuff I’ve read is Edgar Rice Burroughs. Which is pretty heroic.

            I’ll read one of your recommendations, Plumber. I read a book about Appendix N recently; your suggestions are enough of a nudge for me to get a book by Howard or Leiber.

            @dndnrsn

            That rule is reasonably simple although the consequences feel pretty wide.

            In contrast, in what I’m running right now, the PCs are miserable cowards who fight from ambush, cast sleep and cut throats, always send a hireling first, and in the battles that have gone against them, there’s been a serious risk that one or two would book it and abandon the party.

            I find this sort of style fun sometimes, but I think most of the people I’ve DM’d for prefer something a little bit less lethal.

            It feels like D&D is pulled in too many incompatible directions though for one game to cover all the styles people want out of it.

          • dndnrsn says:

            @quanta413

            You’re correct that D&D is being asked to do too many different things.

            I think it’s pulled in different directions less by the system itself than by the extra-system tendencies. All the rules to do a random-table sandbox can be in there, but it means nothing if the GM ignores that stuff and runs a game that’s based around a predetermined story (this doesn’t have to be a railroad, but it often is). Combat can be very lethal, but if the GM is fudging die rolls to keep PCs alive, combat won’t be lethal.

            I think there’s a change in the mid-80s that’s solidified by the time TSR stops calling them “modules” and starts calling them “adventures.” The DragonLance adventures – praised for bringing “story” in – include GM advice that boils down to “here’s how you keep your PCs on the track so they can see this predetermined adventure; don’t let them take it off the rails but also don’t let them die, because their characters are Important; of key importance is deceiving them so they think they’re driving the game” and one sees this explicitly or implicitly in many adventures and campaigns, for many games, including some that are highly praised. There are, for example, adventures and campaigns for CoC (a game that is supposed to be very lethal and sanity-blasting, and by the book is very lethal) which are written in a fashion that presupposes no need to introduce new PCs very often or at all – which carries the implicit assumption that PCs aren’t going to be dying or going crazy very often, or at all.

            So, I don’t think the lethality is really linked to the rules, necessarily. The change in the mid 80s that’s solidified by the 90s, where the GM’s job is seen as telling a story to the players, with or without the pretence that the players have much input into what’s going on, has very little to do with the rules. Rules would change to better enable it, but it happened due to a cultural change rather than a rules change.

          • quanta413 says:

            @dndnrsn

            That makes sense to me. 3e is still pretty lethal even though, like you said, the adventures I have seem to assume PCs won’t be dying. 4e definitely feels like an attempt to make the rules fit the nonlethality idea better. Much more HP relative to hits, and getting saves every round for most status effects, really increases your chances of being able to at least retreat.

            I’m surprised CoC adventures aren’t supposed to be crazy lethal though. How does that keep the horror up? The source material is mostly short stories. A lot of deaths (possibly everyone) in a 3 or 4 hour session seems faithful to the base material.

            here’s how you keep your PCs on the track so they can see this predetermined adventure; don’t let them take it off the rails but also don’t let them die, because their characters are Important; of key importance is deceiving them so they think they’re driving the game

            This sounds really hard. The players I’ve had will often do whatever. Telegraphing what they’ll do is hard. Sometimes I’d just draw vague “maps” after asking them where they want to go.

            I remember running one adventure book with some complicated scheme as a solo adventure, and the player accidentally tripped over the end boss about an hour in.

            I guess at least with rails, you can guarantee something will happen in game.

          • dndnrsn says:

            @quanta413

            I’m surprised CoC adventures aren’t supposed to be crazy lethal though. How does that keep the horror up? The source material is mostly short stories. A lot of deaths (possibly everyone) in a 3 or 4 hour session seems faithful to the base material.

            It’s more a problem with campaigns than with written adventures. I’ve read a lot of really bad written adventures. Campaigns often have the problem that they have to keep the PCs alive a bit more to provide continuity, and depending on the campaign, it’s more or less believable to introduce replacement PCs. So, you might want to limit character death/insanity.

            However, they are still written lethal, if played fair – not trying to protect the PCs, but not trying as hard as you could to kill them or make them go crazy. I have seen many campaigns that include high-lethality encounters, but minimal advice on introducing new characters. The biggest example I can think of is a very low-key, atmospheric campaign – very different from the style of gameplay that CoC often produces (see below) – which introduces non-climactic fights with monsters that could easily kill a PC or two, at points in the campaign where introducing new PCs would be very hard, without even tipping the cap to the possibility that this might happen, or how to handle it. My textual analysis skills suggest that the unspoken assumption here is that the combat is going to be played with a lot of pulled punches.

            This sounds really hard. The players I’ve had will often do whatever. Telegraphing what they’ll do is hard. Sometimes I’d just draw vague “maps” after asking them where they want to go.

            It is hard. It’s a lot more effort and emotional energy to keep something that can come apart running. Lot more prep time, too.

      • Lambert says:

        The thing about HP is that realistic fighting is boring.

        Ignoring any magical elements, combat in small groups is short, random and incredibly dangerous. Every time you entered combat, there would be a not insignificant risk of taking an arrow to a major artery and dying. Even victories would prove Pyrrhic an awful lot of the time (see: inexperienced rapier duelists running each other through at the same time). I suspect any attempt at accurate simulation of that kind of combat would look more like gang warfare than anything heroic.

        The consequence of this is that throughout History, the credible threat of violence has been a much more useful tool than violence itself. But outside of Open Thread Diplomacy, people don’t want to play games based on credible threats of violence and judicious retreats.

        I’m not saying there’s no third option other than HP and random deaths, but that any fun system will lack verisimilitude.

        • quanta413 says:

          I definitely agree any fun superheroish system will lack verisimilitude. But I think we can probably come up with mechanics that map better to the world itself, so to speak, if we really want that.

          Combat in 3.5e can feel like the gang warfare you speak of, because a significant chunk of save or dies are a lot like getting an artery cut. They pretty much bypass HP and almost kill you. Color spray is pretty brutal. You are either immune or have a significant chance of kicking the bucket.

      • Eric Rall says:

        I think I remember reading that the HP system had been borrowed from a naval combat wargaming system. A hit point system is a completely reasonable way to model warship damage: barring a luckily-placed hit cooking off a magazine or disabling an engine (which can be simulated with a critical hits table), it’s a reasonable simplification to model a battleship or a cruiser as being able to take a certain amount of pounding before it’s effectively battered into scrap.

        There have been various attempts to replace hit points with a more realistic damage system in RPGs, but they’ve generally run into the problem of producing a worse gameplay experience than hit points. dndnrsn’s d20 Star Wars example is one such. Another is the GURPS system of a low, relatively fixed number of hit points (based on your Constitution attribute), combined with there being a ton of ways (skills, advantages, equipment, etc) for a leveled-up character to avoid getting hit in the first place. It’s a pretty good model from a simulation perspective, but it makes for tedious combats which mostly consist of attacks missing, getting dodged or blocked, or bouncing off armor or force shields. And that’s a pretty fair encapsulation of the flaws of GURPS as a system (at least as of the early-2000s incarnation I’m most familiar with): it makes a great simulation but a tedious gaming experience.

        The best all-around damage system I’ve come across in an RPG is from Shadowrun 2nd or 3rd edition. There’s a small, fixed number of hit points (10), skinned with a concept of damage categories (minor is one HP, moderate is 3, serious is 6, and deadly is 10). But when you get hit, you get to make a roll to stage the damage down (from a base value determined by the weapon used to hit you, and staged up by extra successes on the attack roll). You can make a character tougher by adding armor bonuses (wearing armor, casting bullet-resisting spells, or getting cybernetic enhancements that give you built-in armor) which lower your target number to resist damage, or by raising/getting bonuses to your Body stat, which allows you to roll more dice on your damage resistance check. The success-counting system means that a character designed to tank damage tends to be pretty reliably good at the job, but the damage category descriptors give a better sense of what the hit points actually mean from an in-game perspective.
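Not an official implementation, but the staging mechanic described above can be sketched in a few lines – the function names and sample numbers here are mine, and I’ve left out the rule-of-six rerolls:

```python
import random

# Damage track from the description above: Light=1 box, Moderate=3,
# Serious=6, Deadly=10 (out of 10 boxes total).
LEVELS = [("Light", 1), ("Moderate", 3), ("Serious", 6), ("Deadly", 10)]

def successes(dice, target, rng):
    """Count d6s that meet the target number (rule-of-six omitted)."""
    return sum(1 for _ in range(dice) if rng.randint(1, 6) >= target)

def stage(idx, steps):
    """Move along the damage track; clamped at Light here, though in
    the real game enough successes can stage a hit away entirely."""
    return max(0, min(len(LEVELS) - 1, idx + steps))

def resolve_hit(base_idx, net_attack_successes, body_dice, resist_target, rng):
    """Stage up for the attacker's extra successes, then stage back
    down for the defender's Body roll; 2 successes = one category."""
    idx = stage(base_idx, net_attack_successes // 2)
    idx = stage(idx, -(successes(body_dice, resist_target, rng) // 2))
    return LEVELS[idx]

# A Moderate base hit with 2 extra attack successes, resisted by a
# character rolling 8 Body dice at target 4 (armor lowers the target).
print(resolve_hit(1, 2, 8, 4, random.Random(1)))
```

The point of the design shows up in the last two lines: armor moves the target number, Body adds dice, and the category labels give the table an immediate sense of how hurt somebody is.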

        • bean says:

          I think I remember reading that the HP system had been borrowed from a naval combat wargaming system. A hit point system is a completely reasonable way to model warship damage: barring a luckily-placed hit cooking off a magazine or disabling an engine (which can be simulated with a critical hits table), it’s a reasonable simplification to model a battleship or a cruiser as being able to take a certain amount of pounding before it’s effectively battered into scrap.

          I have to disagree with this. It’s not a particularly better simplification for ships than it is for people. OK, so the ship is somewhat more likely to have redundancies than a person is, but there’s still a huge difference between a shell that sets off a path of cordite leading to the magazine, one that blows away a single secondary gun, and one that just messes up a berthing area above the armored deck. Just man up and use an effects table.

          There have been various attempts to replace hit points with a more realistic damage system in RPGs, but they’ve generally run into the problem of producing a worse gameplay experience than hit points. dndnrsn’s d20 Star Wars example is one such.

          I really like that system. The critical thing is a problem, but even with Star Wars lethality (which was a lot higher than typical low-level D&D) it usually meant a trip to the bacta tank, not a character death. And that required confirming criticals, too.

          • Eric Rall says:

            Just man up and use an effects table.

            I was thinking of a combination of both. HP tracks the basic structural integrity of the ship, while an effects or criticals table handles magazine explosions, blowing away individual guns, jamming the rudder, etc.

            I looked it up just now, and it seems the immediate naval wargame inspiration for D&D’s HP system was “Ironclads” and “Don’t Give Up the Ship”, both of which lifted the system from the Fletcher-Pratt Naval Wargame. I can’t find the full rules online, but it looks like I was mistaken: F-P uses hit points only, with speed and firepower being degraded proportionately with HP loss. The dual system I was thinking of came quite a bit later, apparently.

          • dndnrsn says:

            You: “use an effects table”
            Me, an intellectual: [reading the vehicle-hit rules in Twilight 2000, crying softly]

      • John Schilling says:

        What is your character reacting to in-world if you decide he should rest because HP is low? Various editions have described HP as capacity to take injuries, or skill, or gumption, or luck (adding more and more flavortext to the abstraction with each edition). But none of those make any sense as in-world explanations unless you admit that D&D’s reality is very strange. HP doesn’t affect your character’s functioning in the world (rarely – until they drop to 0).

        So long as everybody has reasonably similar HP totals (at least proportionate to stature), it does a fair job of modeling accumulated bruising, bleeding, muscle tears, sprains, cracked ribs, and the like, all signaling “too much more of this and you’re just going to collapse”. For perfect realism, you’d want to model the degradation in performance as someone e.g. bleeds out, but for a combatant running on adrenaline and the perfect will that is the conceit of almost all PCs in all RPGs, modeling this as a step function is not a horrible abstraction. Without hit location, you lose the ability to e.g. break someone’s leg and leave them absolutely incapable of running while remaining otherwise conscious and mostly-functional, but that’s a defensible trade of realism vs. complexity.

        I’d have done it differently, but HP per se aren’t that bad. And, yes, after you’ve been battered, bruised, and bled almost to the point of collapse, your choices are going to be magic healing or many days of bedrest. Which models as recovering HP.

        When you have e.g. a 10th-level fighter vs. a horde of old-school orcs, then no, there’s no non-dissociative way to model it. A max-damage critical hit and, “Hey, that battleaxe hit me full force square in the face! Two or three more like that, and the bruising might be too much for me!”, just doesn’t work. That’s when they start handwaving skill/gumption/luck, as expendable resources that are fungible against blood loss, and it becomes irredeemably silly.

        Hit Points that increase linearly with level are a grossly dissociative mechanic that were baked into D&D from the start, probably can’t be pulled out at this point, and shouldn’t be defended by anyone who (legitimately) complains about all the dissociative crap that was thrown into the 2nd and especially 4th editions.

        The other grossly dissociative mechanic that was baked in at the start is rigid class limits, where Fafhrd can’t possibly accompany the Grey Mouser because he has zero chance of moving silently or climbing a wall, where Gandalf simply cannot swing a sword, etc. One strength of 3.5e/Pathfinder is that they broke down the walls between classes as best they could within the constraint of having a recognizable D&D class system at all, but others have noted that this means increased complexity in character design/growth that is offputting to some.

        • quanta413 says:

          Hit Points that increase linearly with level are a grossly dissociative mechanic that were baked into D&D from the start, probably can’t be pulled out at this point, and shouldn’t be defended by anyone who (legitimately) complains about all the dissociative crap that was thrown into the 2nd and especially 4th editions.

          This (and everything you said) is a good point that I didn’t think of. If HP didn’t scale so quickly with level, it wouldn’t be as egregiously bad. Everyone would have much more similar toughnesses if they get hit, so it would be easier to explain how a character in world would know they needed to rest. Getting hit by a weapon once (or maybe twice if you’re really tough) would be the signal to rest.

          • dndnrsn says:

            You don’t have the “what are HP, anyway” problem then, but the play style gets very different when even the most experienced fighter can only take a few hits. Let’s say you’re playing something with BRP rules, where the maximum HP for any character is usually around 18 – the toughest guy in the world would then be able to take the most dangerous hit from an ordinary enemy (assuming ordinary enemy is rolling 1d8, gets a crit, and rolls well for that) but be done for days or weeks or months, depending on how fast healing is.

            I predict that in this system, PCs would murderhobo even harder. After all, letting the enemy know you’re there so it’s not straight-up murder, great way to get killed, right?

          • John Schilling says:

            Picking fights with people you could have just talked to (or tiptoed past) is also a great way to get killed. But if the DM is going to treat all encounters as combat encounters until proven otherwise, and the game system is going to encourage that, then yeah, you’re stuck with the murderhobos.

            The solution to fragility for even epic PCs is to have, in D&D terms, a bonus to armor class that increases with level. A classic orc needs to roll a critical just to connect with a 10th-level fighter at all, probably doing only normal damage in the process. Same average result in terms of p(TPK:[N]orcs), but grittier, more chaotic, and less dissociative.
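To illustrate that “same average result” claim with invented numbers (the hit chances and damage die here are mine, not from any edition):

```python
def expected_attacks_to_drop(hit_chance, avg_damage, hp):
    """Expected number of swings before accumulated damage reaches hp
    (simple expectation; ignores overkill on the final hit)."""
    return hp / (hit_chance * avg_damage)

AVG_D8 = 4.5  # average roll on an orc's d8 battleaxe

# HP-scaling model: the orc hits the fighter on 11+ (50%), but the
# 10th-level fighter has 60 HP to chew through.
hp_model = expected_attacks_to_drop(0.50, AVG_D8, 60)

# AC-scaling model: the orc only connects on a natural 20 (5%),
# against a flat 6 HP.
ac_model = expected_attacks_to_drop(0.05, AVG_D8, 6)

print(round(hp_model, 1), round(ac_model, 1))  # both ≈ 26.7 swings
```

Same expected number of swings either way, but in the AC-scaling version each connecting hit matters far more – which is where the grit and chaos come from.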

            And for Crom’s sake, put some thought into the morale rules and put them front and center. If every monster/NPC is a ruthless Terminator and only PCs can exercise sensible discretion, that also points to murderhoboing.

          • quanta413 says:

            I agree. They’d definitely murder hobo even harder if you don’t drastically adjust armor rules or something.

            On the other hand, I’m not sure it’s as different in practice as it sounds. It might just make clearer how you “should” play. One unlucky color spray at early level in 3e could be a TPK. Save-or-dies are pretty ubiquitous in 3e. But it’s not obvious to a noob how quickly you can all end up dead. Having HP obviously reflect that you don’t ever want to get hit might inspire caution that a quick skimming of the spells list might not. On the other hand, 4e is padded sumo. I dunno about 1e or 2e. My impression from reading people talking about playing them is that they’re about as lethal as 3e, maybe even more.

            I predict that in this system, PCs would murderhobo even harder. After all, letting the enemy know you’re there so it’s not straight-up murder, great way to get killed, right?

            I fully endorse this point of view in 3e if your DM isn’t going to pull punches. I pull some punches and try to avoid using any monsters or abilities that could instagib a party even if they’re technically level appropriate. That or I’m going to try to telegraph exactly what’s coming up so the party can prepare easily. Even then I’d say in a dungeon not doing your best to sneak up and kill everyone before they know what’s going on is likely to make you an ex-adventuring party.

          • dndnrsn says:

            @John Schilling

            A true murderhobo knows that the best time to attack someone is when they’re not hostile, or even aware of your presence.

            @quanta413

            In my experience as a player and a GM, pulling punches happens less in what monsters show up, etc, and happens more in monsters doing suboptimal tactics and in die rolls getting fudged, and the like.

          • Nornagest says:

            My impressions from reading people talking about playing [1E and 2E] is that they’re about as lethal as 3e, maybe even more.

            Save-or-lose spells and abilities aren’t as common as they are in 3E, mostly because it’s much rarer for enemies to have class levels, but melee combat is swingier and less forgiving. You’re rolling for HP even at first level, you probably don’t have as many stat bonuses to work with, it’s harder to achieve very high (well, low) AC, and your enemies don’t need to confirm crits — although critical hits only became official rules in 2E. They were a common houserule much earlier, though.

          • Nabil ad Dajjal says:

            This chain has gotten a bit long but…

            @John Schilling:

            Picking fights with people you could have just talked to (or tiptoed past) is also a great way to get killed. But if the DM is going to treat all encounters as combat encounters until proven otherwise, and the game system is going to encourage that, then yeah, you’re stuck with the murderhobos.

            I blame the experience point system more than DMs for this. Not that DMs couldn’t be better but that the system is misleading.

            In my 5e game last week I ran an encounter with two pirates and their pet crocodile. The party had been lying in wait but instead of ambushing them a few of the party members ended up using fear, logic, and southern charm to talk their way past and get some useful information in the process.

            I gave them full experience points for overcoming the challenge. When you factor in that they didn’t have to use any resources fighting, that means they’re strictly ahead compared to a party which leapt out and attacked. Even in the worst case if they hadn’t been able to talk their way past they’d only have lost one surprise round trying.

            But a lot of DMs in a similar situation would look at the same rules and say “You get 100 XP for defeating two pirates and a crocodile. The party didn’t defeat any pirates or crocodiles. Therefore the party gets no experience points.” They might give some role-playing experience out of pity or throw each of the players an Inspiration point but it’s not obvious that the players earned their experience points. Especially since, as I pointed out, running it my way actually disincentivizes PCs to engage in combat and combat is one of the “three pillars” of D&D called out in the DMG.

            Rather than morale rules, I’d like it to be more clear that you don’t have to literally beat someone up to get experience points.

          • John Schilling says:

            @Nabil:

            Rather than morale rules, I’d like it to be more clear that you don’t have to literally beat someone up to get experience points.

            Oh, quite agreed. And I think this one is in the DMG for most editions, but from the observed behavior of far too many DMs, it needs to be explicitly front and center that overcoming a challenge or neutralizing a threat gets XPs.

            @dndnrsn:

            A true murderhobo knows that the best time to attack someone is when they’re not hostile, or even aware of your presence.

            “The best time to attack someone”, presupposes that you are going to attack someone. If the DM is going to treat all encounters as all-out mortal combat encounters, your players are going to attack everyone who doesn’t attack them first. If the DM is only going to reward players with precious, precious XP for victory in combat, your players are going to attack everyone who doesn’t attack them first. If you don’t want straight-up murderhobos, that’s what you have to address.

          • dndnrsn says:

            It’s definitely a problem that enemies always attack, etc, instead of a reaction roll, and it’s definitely a problem that “you get XP for overcoming a challenge” becomes “those orcs that surrendered, and the others that ran? No XP.”

            On the other hand, my players spent a while as highwaymen, so having non-hostile encounters doesn’t necessarily cut down on the murderhoboing.

          • Randy M says:

            I’ve just finished watching the entire run of Gargoyles with my kids (both seasons) and I was struck how, though there was almost always fighting at some point in the episode, rarely was the resolution of the problem at hand beating somebody up. Perhaps because it was a Disney show, but the end result was more, not less maturity, as the adventures focused on persuasion, trickery, or some other form of non-murder problem solving.

            For D&D, the XP incentives are one tool for placing other options on the same level as combat. Just as important is to keep in mind the question “What is in conflict in this scene/encounter?” A DM who populates the world with creatures who are only satisfied with dead PCs should not be surprised when the PCs see killing as their first and last resort as well.

          • quanta413 says:

            @dndnrsn

            In my experience as a player and a GM, pulling punches happens less in what monsters show up, etc, and happens more in monsters doing suboptimal tactics and in die rolls getting fudged, and the like.

            I believe it. I’ve done a lot of DM’ing, and it wouldn’t be surprising if my style was different. I prefer not to engage in intentionally suboptimal tactics for the purpose of avoiding killing PCs (I would for story purposes; not every villain should be a chessmaster).

            I don’t think I’ve ever fudged a roll though. I suppose I might if I told the players? The whole idea just feels so wrong to me. When I don’t want death to be an omnipresent possibility in combat, I run a game that doesn’t allow for it. I really like Lady Blackbird. I’ve only run it… two or three times I think? But it was great fun every time.

            I really prefer to have things set up the way I’d like ahead of time, though.

            @Nornagest

            More lethal melee less save-or-lose sounds like it would help melee types in a relative sense at least a little. The fighter’s poor saves in 3e are a drag.

          • dndnrsn says:

            I don’t think I’ve ever fudged a roll though. I suppose I might if I told the players? The whole idea just feels so wrong to me.

            Shameful secret: I used to be a fudger and do stuff like adjust enemy stats after the roll and so on. I didn’t think of it as railroading, but it was a mild form – it was playing a role in determining outcomes. I always felt bad doing it, but my justification was that it was making a better story – can’t have PCs die to mooks, big bads gotta kill one PC to show the party it’s real, can’t have cool enemies die embarrassingly, can’t have the party fail something that needs to be a success, can’t have the party have an unexpected success that creates problems down the line, etc. I stayed away from harder forms of railroading. Now I’m clean, was blind but see, etc, and am enjoying running games without such terrible vices.

            There was a lot of GM advice around when I was in my innocent formative years – which I got out of everything from random websites to “this is how to run a game” sections in RPG books – that explicitly or implicitly told the GM to fudge. This was worse in the 90s, but you still find echoes of it now: the 5th ed DMG says the GM “runs adventures that drive the story” while mentioning a few times that the GM has to be a storyteller and saying the rules aren’t in charge, and neither are the dice. Fudging is a mild form of railroading and it can lead to worse railroading; once you’ve decided that the dice shouldn’t get in the way of the good stuff you know to be good, you might decide that the players don’t know what’s good for the game either.

            Improvisation turned out to be a lot easier than I thought, and it’s a lot more fun for me. I get to be surprised by what happens in the game too.

          • quanta413 says:

            It’s interesting to me that books would recommend fudging.

            The temptation is clear though. After all, if players jump off your beautiful railroad, what was all that work making it for?

            I was a little nervous at first when I let players just do whatever, but I found that games actually flowed really well. And then I just kept giving them more and more latitude. I started playing during 4e though, and there was definitely a push to give players latitude. If the books had told me to manipulate and railroad the players, I probably would have done that.

          • dndnrsn says:

            For some reason, GM advice is often pretty bad, especially that found in the GM guide.

  20. Le Maistre Chat says:

    Hey guys, I just wanted to let you know that my Late Bronze Age effortposts are not over. Later today I’ll start the Hittite series.

  21. Well... says:

    I’ve had terrible posture since junior high and now that I’m approaching my mid-30s I’d like to get serious about fixing it. Anyone had success with improving their posture and can share advice?

    I’m able to remember to straighten up when I walk into a building (e.g. work, the grocery store, my kid’s school, etc.) or while I’m sitting and eating, but then after a minute or two I consistently forget to remain conscious of how I’m standing and I go back to my bad habits. Is there a fix for this?

    • Luke the CIA Stooge says:

      I strongly recommend the army method of just standing at attention for x amount of time a day/ walking about ramrod straight (pushups and long hikes also help).
      As someone who’s bounced back and forth between having very good posture and pretty mediocre posture (I’m building up again from a nasty slouch right now), I can tell you that a big part of it is just the actual strength of the muscles. If you don’t have good posture, it will strain your muscles to actually stand up straight and confidently – try it for 10 minutes (just stand at attention). The inverse is also true: if those muscles are well toned and strong, slouching will actually be pretty uncomfortable.
      I’m pretty sure this is why posture is such a proxy for class and confidence – wealth and confidence are strongly correlated with athleticism.

      Don’t get me wrong, habit is a big part of it – I see a lot of young fit people slouch and it drives me crazy – but if your neck gets sore and strained after ten minutes of maintaining good posture, then it’s going to be nigh impossible for you to maintain that position for every second you’re awake.

    • dndnrsn says:

      Bro science: stretch and maybe roll out your chest, front shoulders, and back. Do resistance exercises that build up your upper back and rear shoulders.

    • Nancy Lebovitz says:

      I recommend Ageless Spine, Lasting Health.

      It portrays people from cultures where it’s common to be able to carry heavy loads on one’s head without getting damaged as the ideal, though things weren’t so bad in the US before 1920, either.

      One of the things to do is sway one’s pelvis back and forth like a bell until one’s breathing gets easier. Get the book for the details, though.

      This strikes me as incredibly sensible – instead of just doing something and hoping it works, or using a visual standard, it’s about exploring what way of standing makes one’s body work better.

      There’s a second edition– Natural Posture for Pain-Free Living— it’s considerably expanded and rearranged, but I haven’t worked with it, so I’m less sure about recommending it.

    • Levantine says:

      Recently, I looked into this more closely. I went to YouTube and searched the relevant terms. From the results, I first clicked here and followed some of the exercises. Then I clicked here, where the guy had a different approach that seemed to somewhat contradict the first one’s. And he slouches in his other videos, so probably not the best person to instruct… I decided to follow the advice of both. I did only those exercises that didn’t demand extra effort & material resources. The results were quick and quite impressive. Then I got somewhat ill and let the whole thing slip for several days. Now I’m back to getting in shape.

    • Orpheus says:

      Doing yoga really helps with this.

    • Randy M says:

      Try walking with your palms facing forward. It rotates your shoulders back, straightening your back.

  22. Le Maistre Chat says:

    Another Dungeons & Dragons thread, this time for Monstrous Manual stuff.

    By 2nd Edition, the monster book for D&D had a whole bunch of world-building stuff in addition to the monsters’ stats. I’d like to mention some in alphabetical order, but first… let’s address the Humanoid Problem.
    The Humanoid Problem is that D&D worlds are populated with Men, Elves, Dwarves, Legally Not Hobbits, and then a bewildering number of increasingly-powerful evil humanoids. There are Kobolds with 1/2 Hit Die, Goblins with 1 hit point less than Orcs, Orcs, Hobgoblins with 1 hit point more, Gnolls with 2 HD, Bugbears with 3+1 HD, Ogres with 4+1… come on, these can’t all be different intelligent species sharing a habitat, right?
    And they really do share a habitat. Let’s start with the Habitat/society section for Kobolds (see here).
    “Kobolds live in dark, damp places underground and in overgrown forests. They can be found in almost any climate. As kobolds are good miners, any area with potential for mining is fair game for settlement.
    The average kobold tribe has 40-400 (4d10×10) adult males. For every 40 kobolds in a band there will be one leader and two bodyguards (AC 6; HD 1-1; hp 4 each; damage 1-6). In a lair there will be 5-20 (5d4) bodyguards, females equal to 50% of the males, young equal to 10% of the males and 30-300 (3d10x10) eggs. There will also be a chief and 2-8 guards (AC 5; HD 1+1; hp 7 each; damage 1-8).”

    That’s bizarre, guys. Kobolds live in underground lairs with 5-20 Goblins + 3 for every 40 males, 3-9 Hobgoblins, a 2-1 sex ratio in favor of males, 4d10 young and 30-300 eggs with no relationship to settlement size? It doesn’t help that “kobold” and “goblin” are German and French cognates of the same root (which Grimm traced as Greek kobalos, meaning “rogue”), and hobgoblins are obviously a subset of goblins.
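For what it’s worth, the quoted table rolls up like this – a sketch using my own reading of the text, with the names mine and the ambiguous bits flagged in comments:

```python
import random

def d(n, sides, rng):
    """Roll n dice with the given number of sides and sum them."""
    return sum(rng.randint(1, sides) for _ in range(n))

def kobold_lair(rng):
    """Roll up a kobold lair from the quoted Monstrous Manual text.
    (The leader/bodyguard stat blocks are the part that looks
    suspiciously like goblins and hobgoblins.)"""
    males = d(4, 10, rng) * 10  # 40-400 adult males (4d10 x 10)
    return {
        "males": males,
        "leaders": males // 40,           # one leader per 40 kobolds
        "lair_bodyguards": d(5, 4, rng),  # 5-20 (5d4) in the lair
        "females": males // 2,            # 50% of the males
        "young": males // 10,             # 10% of the males
        "eggs": d(3, 10, rng) * 10,       # 30-300 (3d10 x 10) eggs
        "chiefs_guards": d(2, 4, rng),    # 2-8 guards (2d4, presumably)
    }

print(kobold_lair(random.Random(0)))
```

Note that the egg roll is completely independent of tribe size, which is exactly the oddity being complained about: a 40-male lair can sit on 300 eggs.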

    So what does it say about Goblin and Hobgoblin society?
    “Humans would consider the caves and underground dwellings of goblins to be dank and dismal. Those few tribes that live above ground are found in ruins, and are only active at night or on very dark, cloudy days. … A typical goblin tribe has 40-400 (4d10×10) adult male warriors. For every 40 goblins there will be a leader and his 4 assistants, each having 1 Hit Die (7 hit points). For every 200 goblins there will be a sub-chief and 2-8 (2d4) bodyguards, each of which has 1+1 Hit Dice (8 hit points), is Armor Class 5, and armed with a battle axe. The tribe has a single goblin chief and 2-8 (2d4) bodyguards each of 2 Hit Dice, Armor Class 4, and armed with two weapons.
    There is a 25% chance that 10% of their force will be mounted upon huge worgs, and have another 10-40 (1d4x10) unmounted worgs with them. There is a 60% chance that the lair is guarded by 5-30 (5d6) such wolves, and a 20% chance of 2-12 (2d6) bugbears. Goblin shamans are rare, but have been known to reach 7th level. Their spheres include: Divination, Healing (reversed), Protection, and Sun (reversed). In addition to the males, there will be adult females equal to 60% of their number and children equal to the total number of adults in the lair. Neither will fight in battles.”

    Same population size, similar skewed sex ratio (now with a normal number of children), possibly 5 orcs in every group of 40 (ambiguous: it could just mean you roll 1d8 HP and the eighth with the highest result outrank the rest rather than having the to-hit and other stats of orcs), 3-9 hobgoblins in a community of 200 or more, 3-9 gnolls, and a 20% chance of 2-12 bugbears.
    (I have to say I love that wolf part, though.)

    Hobgoblins: “Hobgoblins are nightmarish mockeries of the humanoid races who have a military society organized in tribal bands. Each tribe is intensely jealous of its status. Chance meetings with other tribes will result in verbal abuse (85%) or open fighting (15%). Hobgoblin tribes are found in almost any climate or subterranean realm.
    A typical tribe of hobgoblins will have between 20 and 200 (2d10×10) adult male warriors. In addition, for every 20 male hobgoblins there will be a leader (known as a sergeant) and two assistants. These have 9 hit points each but still fight as 1+1 Hit Die monsters. Groups numbering over 100 are led by a sub-chief who has 16 hit points and an Armor Class of 3. The great strength of a sub-chief gives it a +2 on its damage rolls and allows it to fight as a 3 Hit Die monster. If the hobgoblins are encountered in their lair, they will be led by a chief with AC 2, 22 hit points, and +3 points of damage per attack, who fights as a 4 Hit Die monster. The chief has 5-20 (5d4) sub-chiefs acting as bodyguards. Leaders and chiefs always carry two weapons.
    Each tribe has a distinctive battle standard which is carried into combat to inspire the troops. If the tribal chief is leading the battle, he will carry the standard with him, otherwise it will be held by one of his sub-chiefs.
    In addition to the warriors present in a hobgoblin tribe, there will be half that many females and three times as many children as adult males.
    Fully 80% of all known hobgoblin lairs are subterranean complexes. The remaining 20% are surface villages which are fortified with a ditch, fence, 2 gates, and 3-6 guard towers. … They are highly adept at mining and can detect new construction, sloping passages, and shifting walls 40% of the time.”

    Yeah, these are waaay too similar except for being more dangerous combatants.
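
    Just to put numbers on how similar they are, here's a back-of-the-envelope tally of an "average" tribe of each (my own toy code, assuming average dice results and that leaders are counted on top of the rolled warriors, which the text leaves ambiguous):

    ```python
    # Rough head counts under the 2e rules quoted above.
    # Assumptions: average dice results; leaders/chiefs added on top of
    # the rolled warrior total (the entries are ambiguous on this).

    def goblin_tribe(warriors=220):  # average of 4d10 x 10
        leaders = (warriors // 40) * 5     # 1 leader + 4 assistants per 40
        subchiefs = (warriors // 200) * 6  # sub-chief + avg 2d4 (5) bodyguards
        chief = 1 + 5                      # chief + avg 2d4 bodyguards
        males = warriors + leaders + subchiefs + chief
        females = int(males * 0.6)         # "equal to 60% of their number"
        children = males + females         # "equal to the total number of adults"
        return males, females, children

    def hobgoblin_tribe(warriors=110):  # average of 2d10 x 10
        sergeants = (warriors // 20) * 3   # sergeant + 2 assistants per 20
        subchief = 1 if warriors > 100 else 0
        chief = 1 + 12                     # chief + avg 5d4 (~12) sub-chiefs
        males = warriors + sergeants + subchief + chief
        females = warriors // 2            # "half that many females"
        children = warriors * 3            # 3x children per adult male warrior
        return males, females, children

    print(goblin_tribe())     # (257, 154, 411)
    print(hobgoblin_tribe())  # (139, 55, 330)
    ```

    Same skeleton, same order of magnitude; really the hobgoblins only differ in being better fighters and breeding faster.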

    • Nornagest says:

      Was it you that proposed making all of those critters different names for the same species at different stages of growth? I thought that was a clever idea.

      Similarly, you could think of them as different castes in a complex caste-based greenskin society that humans, elves, legally not hobbits, etc. generally don’t know much about because ew, greenskins. Hobgoblins then would all be obsessed with military discipline because they’re the military caste, for example.

      • John Schilling says:

        I generally modeled kobolds, goblins, hobgoblins, and bugbears as the same race but increasingly well-fed. Goblins as the default, kobolds as underfed hunter-gatherer tribes in marginal conditions, hobgoblins being disciplined enough to hold farmland and slaves, and bugbears as either hobgoblin champions or tribal hunters with abundant game. I don’t think that was canon, but it worked for me.

        Orcs were completely different, with a meta-race (note interfertility) that went elf – human – orc in order of increasing strength, stature, dumbness, and ugliness. Also not canonical but it seemed right.

        • Nornagest says:

          Orcs were completely different, with a meta-race (note interfertility) that went elf – human – orc in order of increasing strength, stature, dumbness, and ugliness. Also not canonical but it seemed right.

          The Elder Scrolls games (Morrowind, Oblivion, Skyrim, etc.) take a similar approach, where orcs are a regional ethnicity of a species called “mer” that also includes three or four different kinds of elves, degenerate subterranean Morlock-like critters, and the local equivalent of dwarves, which are extinct but left their ruins everywhere. Humans are a separate species, though.

      • Le Maistre Chat says:

        Was it you that proposed making all of those critters different names for the same species at different stages of growth? I thought that was a clever idea.

        Yes, that was me. 🙂

      • Nancy Lebovitz says:

        I read this as saying that hobbits, humans, and elves were the same species at different ages. That sounds like a really promising idea for stories.

        Humans and elves are technically the same species.

        • Le Maistre Chat says:

          I read this as saying that hobbits, humans, and elves were the same species at different ages. That sounds like a really promising idea for stories.

          Oh, this so works. Hobbits are short with childlike proportions and usually asexual (Sam at the very end and elder Sackvilles excepted). Humans are humans, and elves are way older than humans. Elves probably think having a baby before you’re 100 is creepy/unethical and would look down on human parents for it.
          “Stop doing that! Listen to your Eldars!”

          • Nornagest says:

            Well, Tolkien based his hobbits on English country gentlemen, and I’ve always suspected that those guys reproduced by budding.

    • dndnrsn says:

      Did this get shuffled in writing and editing? Unless I’m missing something, the MM entries don’t say anything about non-kobolds living with kobolds, or about anything other than bugbears being found with goblins.

      EDIT: Or, wait, are you reading a bodyguard with 1 HD, etc, as goblin, etc?

      • Le Maistre Chat says:

        EDIT: Or, wait, are you reading a bodyguard with 1 HD, etc, as goblin, etc?

        Yes, I’m just reading “as a monster with 1 HD” as “orc”, 2 as “gnoll”, etc. It’s the same thing rules-wise: literally the only differences if each one is a separate species would be height and racial traits like facial prognathism or a different skin color.

        • dndnrsn says:

          Well, yeah, there’s not really an in-game difference between a goblin with an extra hit point (and maybe a +1 to hit; I can’t remember off the top of my head) and an orc, by the rules of 2nd ed. That’s not the same thing as “what are all these orcs doing hanging around in the goblin camp? Huh?”

          Although I think the bugbears are there for a reason. I can’t remember if the justification was that the bugbears were bullying the goblins and eating their food, or the goblins were suckering the bugbears into fighting for them, or what.

    • Nabil ad Dajjal says:

      Usually when I run homebrew games I rationalize the variety of evil humanoids as different form of beastmen. Kobolds are ratmen, goblinoids are wolfmen, orcs are pigmen, lizardmen are still lizardmen, etc.

      I feel like it makes them all fit in better with the “civilized heroes versus chaotic wilderness” theme that I like for campaigns. Kobolds, goblins, orcs, gnolls and the rest are feral savages cursed by the dark untamed lands they inhabit. It makes for a more fun experience IMO.

      • Le Maistre Chat says:

        Oh, I like that. Rat-men, wolf-men, pig-men, bear-men is a very pre-modern way for civilized heroes to see things.

    • Le Maistre Chat says:

      Orcs

      # appearing: 30-300

      Habitat/Society: For every three orcs encountered, there will be a leader and three assistants.* These orcs will have 8 hit points each, being the meanest and strongest in the group. If 150 orcs or more are encountered there will be the following additional figures with the band: a subchief and 3-18 guards, each with Armor Class 4, 11 hit points, and +1 damage due to Strength on all attacks. They fight as monsters of 2 Hit Dice (THAC0 19).** For every 100 orcs encountered, there will be either a shaman (maximum 5th level priest) or a witch doctor (maximum 4th-level mage).
      If the orcs are not in their lair, there is a 20% chance they will be escorting a train of 1-6 carts and 10-60 slave bearers bringing supplies, loot, or ransom and tribute to their orc chief or a stronger orc tribe. The total value of the goods carried by all of the carts will vary between 10 and 1,000 silver pieces, and each slave bearer will bear goods valued between 5 and 30 silver pieces. If the orcs are escorting a treasure train, double the number of leaders and assistants and add 10 orcs for each cart in the train; one subchief with 5-30 guards will always be in charge.
      Orc lairs are underground 75% of the time, in a wilderness village 25% of the time. Orc communities range from small forts with 100-400 orcs to mining communities with 500-2,000 orcs to huge cities (partially underground and partially above ground) with 2,000 to 20,000 orcs.*** There will always be additional orcs when the encounter is in a creature’s lair: a chief and 5-30 bodyguards (AC 4, 13-16 hit points, attack as monsters with 3 Hit Dice (THAC0 17) and inflict an extra +2 damage on all attacks due to Strength). If the lair is underground, there is a 50% chance that 2-5 ogres per 200 orcs will be living with them. Most lairs above ground are rude villages of wooden huts protected by a ditch, log rampart and log palisade, or more advanced constructions built by other races. The village will have 1-4 watch towers and a single gate. There will be one ballista and one catapult for every 100 adult male orcs.
      Orcs are aggressive. They believe other species are inferior to them and that bullying and slavery is part of the natural order. They will cooperate with other species but are not dependable: as slaves, they will rebel against all but the most powerful masters; as allies they are quick to take offense and break agreements.

      It goes on to say “Orcs are patriarchal; women are fit only to bear children and nurse them,” but no mention of sex or age ratios this time. Huh.

      *So for every three orcs encountered, there will be four orcs, and they all get max HP?
      **All orcs have THAC0 19, so this must be copypasta from the 1st Edition Monster Manual.
      ***This is the first time humanoid communities of more than 400 have been mentioned. So orcs are civilized – they could be the shudras of Nornagest’s proposed caste system, while Hobgoblins are kshatriya rural nobility.

      • Nornagest says:

        *So for every three orcs encountered, there will be four orcs, and they all get max HP?

        Four extra orcs by my reading. Proof by induction, countably infinite orcs.

        I’ve got a new idea for a D&D perpetual motion machine.
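
        A quick sketch of the runaway (my own toy code, assuming the rule applies to the leaders and assistants too, since they’re also orcs you’ve now encountered):

        ```python
        # "For every three orcs encountered, there will be a leader and
        # three assistants" - if those four are themselves orcs that count
        # as "encountered", each pass of the rule adds 4 orcs per 3, so
        # the band grows without bound.
        def apply_rule(orcs: int) -> int:
            return orcs + (orcs // 3) * 4  # +1 leader, +3 assistants per trio

        band = 3
        for generation in range(1, 6):
            band = apply_rule(band)
            print(generation, band)  # 7, 15, 35, 79, 183 - diverges
        ```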