Open Thread 102.75

This is the twice-weekly hidden open thread. Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server.

679 Responses to Open Thread 102.75

  1. Anon. says:

    Peter Watts has a new book out, The Freeze-Frame Revolution. Seems to be set in the same universe as the excellent short story The Island. Anyone read it yet?

    • johan_larson says:

      I did. It’s good. Maybe a bit short, but good. It has a really intriguing setting. An almost entirely automated spaceship is travelling the galaxy, building a network of wormholes. The crew members only get woken up once every few hundred years, when the AI needs some weird monkey-logic done, and even then only a few of them get woken up at a time. The crew comes to resent the AI and organizes an underground movement and eventually a revolt under these bizarre circumstances.

      I’m planning to include it as one of the choices in the SF Book Club next time.

      • cassander says:

        I love Peter Watts. Do you know if there’s going to be an audiobook? And would you suggest reading the short stories before the novel?

        • johan_larson says:

          I occasionally stop by Watts’s blog, and I don’t remember anything about an audiobook for FFR.

          I read the novel without reading any of the short stories, and don’t feel like I’ve missed anything. Watts certainly hasn’t mentioned the stories prominently when he has posted about FFR.

  2. Edward Scizorhands says:

    What’s the environmentally responsible way to get rid of coat hangers?

    My wife’s patience for my collection of them has run out. They don’t recycle (neither plastic nor wire) and my local Goodwill absolutely will not take them.

    • johan_larson says:

      You mean cheapo wire coat hangers like the ones you get from laundromats?

      • Edward Scizorhands says:

        I have a “collection” of all kinds, including the pure wire ones.

        • johan_larson says:

          I would try offering them for free on Craigslist or something like that. If someone wants them, great. If not, just throw them away.

    • quanta413 says:

      If you’ve got wire coat hangers, turn them into modern art? They’re a nice mix of flexible and stiff.

      The plastic ones might be tougher to get rid of.

    • SamChevre says:

      The laundry where I take shirts will take their wire hangers back if they are in good shape.

    • LewisT says:

      Perhaps see if Goodwill or another local charity clothing store will take them.

    • John Schilling says:

      Wire coat hangers seem to be made out of steel. That gives you:

      A: Throw them in the trash. If your local facility doesn’t run a magnet over the trash to collect the lowest-hanging fruit in the recycling world, they’re not serious about recycling, so why should you be?

      B: Throw them in the same containers that “tin” cans go in. Enough of those are steel that they have to be doing the separation, and if some damn fool on their side pulls out the wire because they don’t get that steel is steel, that’s on them and you’ve done your part.

      C: Go to any junkyard or recycling center and ask “Where do I put the steel? Don’t worry, I’m not expecting to get paid for this little bit”. Now you’ve gone above and beyond the call.

    • Anonymous says:

      They don’t recycle (neither plastic nor wire)

      Are they toxic?

    • Let them turn into bicycles and then sell them. Everyone knows bicycles are environmental.

  3. johan_larson says:

    Favorite words, anyone? I can’t decide between “antepenultimate” and “defenestration”. It’s a wonder anyone bothered to coin words for such rare concepts.

  4. Le Maistre Chat says:

    Do we have any lawyers here who understand RICO? There’s a huge new Harvey Weinstein indictment that names Miramax and therefore Disney (!) as organized crime conspirators because of his purported rapes. Unpack this for the laymen?

    • Well... says:

      IANAL, but I thought RICO tended to get invoked mainly for stuff like drug money laundering?

      • Le Maistre Chat says:

        Tends to, yes. Drug money RICO is the standard way of nuking organized crime.
        To see it being used for rape charges is… wow. Could be nothing big, could burn down a quarter of Hollywood. IANAL!

    • Eltargrim says:

      So I’m also not a lawyer, but in the words of Ken White of Popehat, “It’s never RICO!”

      Also, an important point of language: that’s not an indictment. An indictment would mean criminal charges brought by the government. These are new allegations in a civil RICO suit.

      • Protagoras says:

        Also not a lawyer, but I noticed that the definition of “racketeering activity” that Popehat linked to included “trafficking in persons” and “white slave traffic.” Given the enthusiasm these days for calling anything which in any way involves commerce and sex in any combination or relationship “sex trafficking,” I’m going to guess that’s how they’re trying to pass this off as racketeering.

    • SamChevre says:

      Not a lawyer, but somewhat aware of the topic.

      RICO is a conspiracy law, and the traditional law of conspiracy (this is old, it’s in Blackstone IIRC) is as follows: if a group agrees to work together to do something illegal, the whole group can be held liable for any actions of any member of the group. That makes it a bit of a legal super-weapon, because the illegal action can be trivial (for example, planning a protest that violates traffic law is technically sufficient), but the liability extends for any act of anyone in the group.

      Reading the link, it doesn’t look like this is an indictment, which is a criminal law issue. This is a class action, which is civil. This aspect of RICO is problematic, but has been used in several high-profile cases, including the MLB Expos case.

      [Edit] Also, the Popehat link is great.

  5. Well... says:

    Suppose you have a black box with a hole in one end, where you put marbles, and a hole at the other end, where marbles come out. If you input marbles at a somewhat random, sporadic tempo, the box spits the marbles out in a roughly wavy tempo. (Meaning, for a while it will seem to be spitting out a lot of marbles regularly and then quickly the rate will drop off to nearly zero for a long while, then pick up again.) The output pattern doesn’t seem to change no matter at what tempo you put the marbles in (barring extremes like inputting only one marble per year).

    Know also that…

    A) marbles spend varying amounts of time getting through the box (so marble #1 might spend 5 seconds but marble #2 might spend 9 seconds, etc.), and
    B) the output opening contains a bottleneck, such that marbles all have to go through a 1 or 2 second process to leave the box, and therefore might have to queue up if there are marbles in front of them going through that process.

    Which do you think explains the wavy output pattern more: A or B?

    (FYI I’m asking this question because I’m curious about a pattern I’ve observed at retail checkout lines, the office water cooler, restaurants, and other places where there is a trickle of people coming in but often either long lines or no lines waiting to pay/leave/etc.)

    • maintain says:

      insufficient data for a meaningful answer

      • Well... says:

        Let me put it another way: which could you eliminate — A or B — and still get the wavy output tempo? “Neither” is an acceptable answer, but in any case I want to know why.

        • maintain says:

          B

          We can assume that the designers of the box wanted there to be a wave output. This would mean that they intentionally added a delay. They would have to add the delay at the end to make sure it was accurate. If the delay was in the middle, one ball could take too long and interrupt the wave pattern.

    • WashedOut says:

      If you input marbles at a somewhat random, sporadic tempo, the box spits the marbles out in a roughly wavy tempo.

      By ‘wavy’ do you mean it has a known, deterministic sinusoidal-type output rate? Or is it random like the input?

      The output pattern doesn’t seem to change no matter at what tempo you put the marbles in

      So the output rate is independent of the input rate. We can also assume the box has functionally infinite volume (a ‘conceptual’ box).

      Input -> System -> Output

      The rate of input is a random variable; the system is a process that holds the information [M#1, M#2…M#n] for a (random?) time, until the system spits them out again such that the rate follows a sinusoidal function. So far is this the correct framing?

      The key thing for me is that by calling the information that passes through the system a ‘marble’ you imply that each individual bit of information is identical and does not ‘interact’ with the system apart from being beholden to its store-release function. In the case of literal marbles, (B) is a sufficient but not necessary condition for (A). (B) can cause (A). If (B) does not exist then the assumption of non-interactivity/non-individuality of ‘marbles’ is probably false.

      Restaurant patrons are not marbles in a flume. They have specific properties such as needs, demands, order size, ability/style of communication with staff, etc. etc. There is a big literature on queuing theory that may be of interest.

    • Nancy Lebovitz says:

      I *think* A and B are functionally equivalent. The bottleneck makes the marbles slow down.

    • beleester says:

      A doesn’t seem like a sufficient explanation – if the input is random, and delayed randomly, the output should still be random. A random output is going to fluctuate, so you’ll observe some periods of rapid marble output and some of no marble output, but it won’t be a regular pattern.

      That said, checkout lines and restaurants aren’t random in their inputs – people go shopping and eat lunch at predictable times during the day, so I’m not sure this is a good model.

    • The Nybbler says:

      The checkout line thing has an easily observable reason. You get customers who go through normally, then you get the _problem_ customer. The maximal problem customer argues over the price of every item, fishes through a disorganized bag for coupons (many of which are expired or inapplicable and result in more arguments), tries to pay with payment methods not accepted or requiring approval (checks, travelers checks, EBT cards for disallowed items) and generally makes trouble. You have a certain number of cashiers, and your output rate at any given time depends mostly on how many cashiers are blocked on problem customers (not all of whom are maximal). The input rate doesn’t matter much because the number of cashiers varies according to the (expected) rate.

      The same thing applies to banks, except the problems are different. At the office water cooler it’s the one filling up his 2L bottle.

    • bean says:

      The bottleneck is almost always B. This was an early operations research problem, and it turns out that there’s a fairly sharp divide between “enough to handle the traffic” and “not enough, long lines”. Think of it this way. If there’s not enough traffic to form a bottleneck, then your dwell time is going to be the random amount of time it takes to pass through the box, plus the checkout time, plus maybe 1-2 seconds if you happen to be right behind someone. If there is a bottleneck and lines, then you have to wait for the extra marbles ahead of you to burn down. If you can clear 1 marble/second, and there are 1.1 marbles/second coming in, then the average bottleneck time is going to be 0.1 × (seconds since start); randomness might make the line fluctuate, but it’s not going to burn it down to zero.
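
      A minimal simulation sketch of that divide (my own toy model, assuming Poisson arrivals and a fixed 1-second bottleneck rather than anything measured):

      ```python
      import random

      def simulate(arrival_rate, service_time=1.0, duration=10_000, seed=0):
          """Single-server queue: Poisson arrivals, fixed service time.
          Returns the mean time a marble spends waiting at the bottleneck."""
          rng = random.Random(seed)
          t = 0.0                # current time
          server_free_at = 0.0   # when the bottleneck finishes its current marble
          total_wait, n = 0.0, 0
          while t < duration:
              t += rng.expovariate(arrival_rate)  # next arrival
              start = max(t, server_free_at)      # queue up if the bottleneck is busy
              total_wait += start - t
              server_free_at = start + service_time
              n += 1
          return total_wait / n

      for rate in (0.5, 0.9, 0.95, 1.1):
          print(f"arrival rate {rate:.2f}/s -> mean wait {simulate(rate):8.1f} s")
      ```

      With arrival rates comfortably below the 1 marble/second service rate, the mean wait stays small; push the rate past it and the wait grows roughly in proportion to how long the box has been running, which is the “long lines” regime.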

  6. maintain says:

    I’ve heard that melatonin can slow aging. If that’s true, then not taking melatonin is really dumb.

    Should I be taking it?

    I read this article by Sarah Constantin, and it makes it seem less clear-cut: https://thelri.org/blog-and-news/melatonin-and-pineal-gland-extracts/

    • Interesting piece, but I’m wondering if there is a typo:

      Male, but not female, C3H mice given 2.5 mg/kg/day melatonin at night starting at 4 months of age lived 20% longer than control mice.

      That dosage, for an 80 kg human, comes to 200 mg/day. Melatonin pills are typically 1-3 mg. So if the quote is correct, the dosage humans commonly take is two orders of magnitude below the dosage that shows significant effects in male mice.
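
      To spell out that arithmetic (a deliberately naive linear mg/kg scaling; standard body-surface-area corrections for mouse-to-human conversions would give a lower figure, though still well above typical pill doses):

      ```python
      # Naive linear scaling of the quoted mouse dose to an 80 kg human.
      mouse_dose_mg_per_kg = 2.5
      human_mass_kg = 80
      print(mouse_dose_mg_per_kg * human_mass_kg)  # 200 mg/day vs. typical 1-3 mg pills
      ```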

  7. Thegnskald says:

    A thought:

    There is a 20-30% “commute gender gap” – that is, men’s commutes are on average 20-30% longer than women’s.

    I recall an article posted on Less Wrong some time ago suggesting longer commutes were associated with lower happiness.

    Has anybody come across anything about a correlation between commutes and suicide?

  8. johan_larson says:

    So, we are nearly 20 years into the 21st century. Has the new century produced any outright classics yet – books or films or games or whatever – that will still be getting attention 50 or 100 years from now?

    I’m thinking “Zero Dark Thirty” might have staying power. I refuse to believe something as big as the War on Terror can happen and not produce anything that sticks.

    • James C says:

      Hmm, well it’s pretty hard to assess media in its own time but here’s my two cents.

      Books: No idea. I’ve read a lot of great sci-fi and fantasy books, but I don’t think I’ve seen any breach the social zeitgeist other than Harry Potter and Twilight. The former I hope becomes a childhood classic; the latter I hope dies alone and forgotten with a stake through its heart.

      Films: I’ll go to the wall for Arrival, and if that doesn’t become a classic there’s something wrong with the world. Other than that, the Marvel cinematic universe will be talked about for decades. I’m sure history will winnow the whole thing down to a few greats, but the idea of a sprawling cinematic universe is going to have major ramifications for the future of cinema now that it’s proven to be wildly popular.

      Games: Minecraft. The first game that really put players’ creativity front and center is already a classic, and I can’t see that fading. Other modern games I can see their series continuing, but few really lasting the century. I’m sure people will be arguing whether Civ VI is better than Civ XX, but that’ll remain a fairly niche topic.

      • Thegnskald says:

        *Twitches*

        Minecraft? A game explicitly and admittedly taking a single mechanic from a much more fleshed out and complex game, and replacing the nuance and complexity of the original game with grinding and tedium, subverting the mechanics of the original game which were intended to deliberately keep the game from feeding into addictive impulses, and thereby commercializing it?

        Minecraft -popularized- player creativity in that sense, and it did so by attaching it to mechanics known in game development to be addictive, such as variable rewards. It is an imitation of what is probably the current artistic peak of game development as an enterprise, printed onto the front of a slot machine.

        • Civilis says:

          What game do you consider Minecraft to have taken its mechanic from?

          Just sticking to the question of video games, I don’t think there are many that will still be played in recognizable form in a decade, much less a century. With video games, I think we’re looking at which games can be said to introduce concepts that will be regarded as significantly revolutionary enough to be remembered. Even if falling block games have come a long way from Tetris, it’s going to be recognized that Tetris originated the genre (although I suspect Tetris is one game that may remain viable in essentially its original form for the foreseeable future).

          Likewise, Doom will probably be associated with the First Person Shooter genre for a very long time. I’d wager Wolfenstein 3D, Half-Life, and Counterstrike will be there for a good while as well, as will either Team Fortress 2 or Overwatch (both of which are from post-2000). You could also add Portal to the list (another post-2000 game), its innovation being the switch from shooter to physics puzzle. I’ve played Narbacular Drop, the predecessor (and I played it before Portal was released), and there’s a reason Portal gets remembered.

          There’s a reason I stopped playing Minecraft shortly after I started, and it ties in with your logic above, but there’s no denying Minecraft marked a significant change in the genres of games, across multiple categories. Again, these aren’t necessarily things Minecraft did that were original, but things Minecraft popularized enough that they stuck around as more than niche ideas. I think Minecraft is largely responsible for the following gaming concepts having anywhere near the share they do:
          1) Procedurally generated open world sandbox
          2) Artistic crafting (building recognizable things out of blocks in world)
          3) Mechanical Crafting (being able to build working mechanisms in the games engine)
          4) Open-world Survival
          5) Indie gaming on par with AAA games for public recognition
          6) Retro graphics (probably a poor choice of phrasing, as Minecraft is not truly retro, but it does show that games without up-to-date realistic graphics can compete, and as such probably opened the way for a lot of the true retro-graphics games that followed).
          7) Games Released in an Uncompleted State (not necessarily a good thing)

          For influential games of the twenty-first century, I’d list:
          Minecraft
          Portal
          Team Fortress 2 OR Overwatch (one of them will be remembered for starting the class-based team shooter genre)
          Wii Sports
          Whatever game gets associated with the MOBA genre, possibly DOTA
          Whatever game gets associated with the Battle Royale genre, if it sticks around
          Hearthstone (unless a better online card battle game comes along)
          Pokemon GO (if a better real-world-interactive game doesn’t show up)

          • Thegnskald says:

            Minecraft is a stripped-down, graphically-oriented Dwarf Fortress.

          • Iain says:

            Minecraft is Dwarf Fortress in the same sense that Doom is Rogue.

          • Thegnskald says:

            Iain –

            DoomRL is Rogue. (It is also really fun, if you haven’t played it.)

            Doom isn’t just a graphical version of Rogue; it substituted tactics for strategy. Given that Rogue is basically a strategic game, this is a major change.

            Minecraft, in all the ways that matter, didn’t do any substitutions. The parts where it differs – combat – aren’t the parts people care about. Minecraft is like Twilight; more popular, but popular because of its shallowness, rather than in spite of it.

          • Civilis says:

            I hadn’t realized that Dwarf Fortress only dates back to 2006. It certainly should be on my list as well, as it’s definitely been influential. It’s definitely contributed to numbers 1, 3, 4, and 6 on my list, to some degree.

            Minecraft, in all the ways that matter, didn’t do any substitutions. The parts where it differs – combat – aren’t the parts people care about. Minecraft is like Twilight; more popular, but popular because of its shallowness, rather than in spite of it.

            In talking with kids that play Minecraft, it’s the ability to build castles, statues, spaceships, and other 3D constructs that is Minecraft’s selling point, because that’s what they talk about.

            The direct first-person real-time single-character control is also important, because it gives the player agency in a way that they’re used to, especially for combat. It allows the transition from simple survival game, to building things which look pretty, to building things which take advantage of the game’s more complex mechanisms, especially for people that don’t think that “Losing is Fun”.

          • Thegnskald says:

            Bah. “Losing is Fun” is important.

            It isn’t that I am a fan of masochism, either – arcade games in which you had to replay the same level until you memorized the position and pattern of every enemy are just painful to play.

            It is, rather, about the way the game is designed. “Losing is Fun” is about the attitude game developers should take – which is that -playing- should be fun, rather than -winning-. “Losing is Fun” isn’t telling you the game is hard – it isn’t – it is telling you that losing is part of the game experience. And it is indeed fun to lose at Dwarf Fortress. It is hilarious when you accidentally flood your fortress and everyone drowns. Or when somebody’s cat dies and they get upset and punch a soldier, and before you know it you have three dwarf children left in a pile of corpses, one of the children having beaten the remaining insane dwarves to death with a sock. (That literally happened in one of my games. The child in question then tripped and fell into a moat and drowned.)

            Dwarf Fortress doesn’t tell you losing is fun because you are going to lose (I mean, you will, but that is beside the point) and you had better like it. It tells you losing is fun because it actually makes losing fun. That elevates the game to high art – it is a game capable of generating a compelling (or at least amusing) tragedy, not scripted for your consumption, but arising naturally out of the mechanics of the game.

          • toastengineer says:

            What game do you consider Minecraft to have taken its mechanic from?

            Zach Barth’s Infiniminer – as in, Notch literally has said multiple times that Minecraft was his attempt to clone Infiniminer.

            Don’t see the connection with Dwarf Fortress though. If your point is that they’re based around “player creativity” then they’re both ripoffs of Garry’s Mod.

          • Nornagest says:

            Rogue is a tactical game. Nethack is a strategic game.

          • Anonymous says:

            @toastengineer

            https://infogalactic.com/info/Minecraft#Development

            Markus “Notch” Persson began developing the game as a project.[50] He was inspired to create Minecraft by several other games such as Dwarf Fortress, Dungeon Keeper, and later Infiniminer. At the time, he had visualised an isometric 3D building game that would be a cross between his inspirations and had made some early prototypes.[50]

    • A Definite Beta Guy says:

      It’s actually surprising how quickly the GWOT faded from view. It’s almost like the old 90s movies about drug lords.
      The Hurt Locker seems like it’ll be the classic from the GWOT period, especially since it made Kathryn Bigelow the first woman to win Best Director.

      Some candidates, at least from the big names?
      Lord of the Rings
      Wolf of Wall Street
      Amelie
      No Country for Old Men
      Boyhood?

      The Second Disney Renaissance Movies are staying with us for a long time. If you live to 2100, you will still see little girls dressing up as Anna and Elsa.

      I am hoping some of the “adult animation” movies keep some legs. I really liked Inside Out, and a lot of other people were touched by Up.

      • meh says:

        It would be weird to put Lord of the Rings in the 21st century, especially since the source material had a much larger cultural impact (as opposed to movies like the Godfather or Jurassic Park that exceeded the books in terms of impact)

      • Nornagest says:

        The Hurt Locker was not very good. IMO the best GWOT media so far has been Generation Kill, but it’s nowhere near as popular as Patton or Apocalypse Now — or for that matter Zero Dark Thirty.

      • The Nybbler says:

        _The Hurt Locker_ got awards where _Zero Dark Thirty_ didn’t because of politics (ZDT being seen as too approving of torture), but _Zero Dark Thirty_ was by far the better film.

      • CatCube says:

        Yeah, I bought The Hurt Locker after it won an Oscar, because I hadn’t had a chance to watch it and I thought that it’d be a good movie. I ended up turning it off about three-quarters of the way through, and debated making a cake for the EOD guys whose headquarters was across the parking lot from my unit. “I’m sorry that the public thinks you’re such dirtbags” wouldn’t fit on a cake, though. Or probably be received in the spirit it was intended.

    • J Mann says:

      I’m thinking “Zero Dark Thirty” might have staying power. I refuse to believe something as big as the War on Terror can happen and not produce anything that sticks.

      I don’t think GWOT is going to be seen as a major historical moment (assuming that we don’t find ourselves locked in a global struggle between liberal secular capitalism and radical Islam). It’s too short and localized. Compare HIV awareness in the late 20th century – maybe Rent or one of the body horror movies of that era will be seen as a classic, but frankly I’m not even sure of that, and IMHO that was a much longer lasting and more significant cultural moment.

    • fion says:

      I agree with much of what James C said. Harry Potter will be remembered. Not sure how long it’ll remain popular, but it will always be “that thing that millennials grew up with and never stop talking about”.

      And the MCU as well… I’m not sure if it quite qualifies. I think people will regard the early 21st century as “the time we were all obsessed with superhero films”, but I don’t know if the actual films will be regarded as classics… Having said that, these are *very* popular films, so maybe.

      You didn’t mention music. I suspect that the age of classics is behind us in music. It’s so much easier and cheaper to record, and there’s more diversification of genres than ever before, so I don’t think there will ever be somebody who takes the world by storm in the way Bach or The Beatles did.

      In terms of games… does Pokemon count? Apparently it started in 1995, but I guess most of the games came out in the 21st century.

      I’d be really interested in what people a generation older than me think about this question. I’m in my early twenties, which puts me in the target audience of the examples I’ve mentioned. Maybe that biases me. Or maybe classics are decided by the youth at a given time?

      • Thegnskald says:

        Games I suspect will be regarded as classics, grouped loosely by era:

        Pac-Man, Galaga, Tetris, Pong.
        Doom (the first one), Final Fantasy VI or VII (maybe both), Tactics Ogre, Super Metroid, Zelda (either the first, or Link to the Past), Romance of the Three Kingdoms (probably VII or VIII), Super Mario, Half-Life, Dungeon Keeper, Sim City, Lords of Magic 2, Civilization, Harvest Moon, Age of Empires, Myst, King’s Quest.
        Portal, Team Fortress or Counterstrike, World of Warcraft, Fallout New Vegas, Elder Scrolls: Skyrim, Dwarf Fortress, Grand Theft Auto 3.

        Of the recent decade, Kenshi might end up there, but it is still too early to tell. Nothing else this decade has been particularly remarkable.

        There are quite a few games that might end up as classics for being particularly good examples of genres, as well, but I don’t include these; it is impossible to guess which will be supplanted by something better. My inclusions are largely based on whether games either carved out or defined genres, and whether I expect those genres to persist. (Star Ocean 2 defined a new genre of RPG, for example, one carved out by the first Star Ocean, but I don’t think that genre will persist.)

        • Randy M says:

          As much as I personally love Tactics Ogre, I think that’s too niche to be on a list of classics. You probably need a Diablo on there, though.
          Actually, that might be a good test of “classic”: does someone who never saw it know about it, even years later?

          • Thegnskald says:

            I debated Tactics Ogre. Ultimately I chose it because of the number of times I have been told I should play it (I still haven’t) by fans of the genre, who are better positioned to make a judgment. (Personally, I would give that one to Final Fantasy Tactics, but consider my judgment suspect.)

            ETA: But yeah, Diablo probably belongs on the list.

          • Randy M says:

            FFT is the better game in terms of presentation and accessibility, especially over the original TO (the version on the PSP is much improved in terms of balance, character advancement, and special features), but TO has a great, branching story that doesn’t devolve into a demon hunt (as quickly), more deployable units, and more difficulty.

            For a lot of these series, the best choice would, imo, be the last game made before going 3D: FFVI, Zelda: A Link to the Past, Super Mario World.

            I think you also need to add Street Fighter to your list. (Actually, a list of classic Nintendo games is going to look like the mini consoles that were released recently).

          • Thegnskald says:

            Mostly I liked FFT’s job and equipment system, particularly coming out of FFVII’s heavily limited set.

            I don’t actually like the combat in tactical RPGs very much; it is extremely difficult to make it more than grindingly slow-paced without making it annoyingly luck-based. So it isn’t a genre I spend much time in.

          • BBA says:

            The game that pioneered the genre was Fire Emblem, which debuted for the Famicom way back in 1990. But between how it plays so differently from other games in the genre and the fact that the series took over a decade to make it out of Japan, it isn’t nearly as recognized as FFT here.

            Likewise, the original JRPG series is Dragon Quest – formerly called Dragon Warrior in the West for trademark reasons. Phenomenally popular in Japan, a B- or C-tier series here (which is still better than in the ’90s when it was practically unknown). I’d call the original DQ a classic game even though to modern eyes it’s an unplayably dull slog, because you can see the roots of everything that came later.

          • quanta413 says:

            Fire Emblem is gloriously masochistic in ways most later tactical RPGs aren’t. Not in a lame level grinding sense (it’s pretty hard to grind in Fire Emblem although not always impossible), but in the sense that enemies are actually dangerous and a character dying matters (usually forcing you to restart a mission).

        • meh says:

          Of the previous eras, I would include Adventure as a classic,
          https://en.wikipedia.org/wiki/Adventure_(Atari_2600)

          and also Madden football, and Super Mario Bros.

          • Thegnskald says:

            Haven’t heard of Adventure; I’ll have to boot it up. It does look like it might belong, though.

            Not a fan of sports games, and have little to no knowledge, so I will take your word on Madden Football.

            The Mario games were hard to pick from, so I just put down “Super Mario”. (Particularly, choosing between Super Mario Bros and Super Mario World was difficult, as I think World might have defined the genre slightly more than its predecessors.)

        • Civilis says:

          As much as I disagree with you over Minecraft, I think your list is very good.

          In addition to Diablo or Diablo II and a fighting game such as Street Fighter II, I note the absence of any RTS games on the list. Even if it’s pretty far into the genre, it’s very hard to deny StarCraft’s influence, especially in eSports.

          • Randy M says:

            He has Age of Empires. Starcraft probably eclipses that and the genre starter Command and Conquer, but all make good candidates.

          • Skivverus says:

            I think Age of Empires counts as RTS, but wouldn’t know, spent the time playing StarCraft instead.
            I still think of the end of the ninth mission in the Terran campaign as the Great Betrayal.

          • Thegnskald says:

            I think StarCraft will be supplanted (it is a subtly different genre of RTS than Age of Empires).

            ETA: By supplanted, I mean I think StarCraft will be replaced as a classic by a successor. There is too much room for improvement – which isn’t to say it is bad, but that it has a lot of room to grow and expand into. Historical RTSs have to be constrained by, well, history, and Age of Empires pretty much nailed what can be done in that framework. Science Fiction RTSs have a huge amount of room to grow.

            I forgot about fighting games. Ha! That’s tough. Killer Instinct is the best in the genre (at least through the SNES days, stopped playing them after that) by far, but I doubt it will end up a classic. Hm. I don’t know, honestly.

          • Randy M says:

            Fighting games need to be represented by Street Fighter. You aren’t looking for the all time best, you are looking for the first great example. If you happen to find a pizza parlor with a couple arcade games even now, pretty good odds one of them will be Street Fighter II or a subtle variation of it. (Along with a couple shooters that aren’t good, so that’s hardly conclusive proof, I’ll concede).

            My favorite in the genre is Square Soft’s Bushido Blade, a fighting game from the early PlayStation days with 1-hit kills (blocking was important!). My college dorm had a long-running goal to make it through the “kill one hundred dudes before dying” mode.

          • Thegnskald says:

            Randy M –

            Bushido Blade 2 was great; never tried the first. (100 guys is easy, though, if you don’t mind spamming the same move over and over and over again, Musashi style. It is boredom that gets you killed.)

            I recommend Kengo, if you haven’t tried it. It is a fighting game with RPG elements, whose stat gains are tied to minigames, and with customizable (to some extent) movesets.

            ETA: I guess I have played later fighting games, albeit of what I would consider a different genre. Hm.

          • Civilis says:

            He has Age of Empires. Starcraft probably eclipses that and the genre starter Command and Conquer, but all make good candidates.

            I’ve generally seen Dune II cited as the first RTS, though some people cite earlier, more obscure games. I was a C&C player myself, and what turned me off from StarCraft is that it was heavily tuned for competitive multiplayer play, which might be its real influential innovation. The more the genre has turned to competitive multiplayer, the less I’ve enjoyed it. I have friends that still break out StarCraft for that sort of play, as no more recent game has the combination of balance and accessibility; it’s one reason the genre has tapered off in popularity.

            Other possibilities for influential games I thought of while I was at lunch: Zork (or another text adventure game; I picked Zork for its addition of grue to the gaming lexicon); Lemmings; Katamari Damacy; Wizardry. [Added:] X-COM.

            While I’ve never played the game, I’d say FFVII is more influential than FFVI, based on the role of a specific spoilered event and its place in gamer consciousness.

        • Nornagest says:

          Dungeon Keeper is a pretty niche choice.

          • Thegnskald says:

            True, but it is a game that has inspired both remakes and an entire literary genre. (Speaking of which, I highly recommend the Divine Dungeon series. It is all the fun of exploring game mechanics, in book form. The prose is only so-so, but it is a surprisingly good series in spite of that. There are other works in the genre, but I can only speak to that one.)

          • Nornagest says:

            What genre is that?

          • James C says:

            Loosely, they’re called Dungeon Core or Dungeon novels. It’s a sub-branch of LitRPGs where a dungeon is a core focus of the novel, if not a PoV character.

        • Nornagest says:

          As much as I liked Fallout: New Vegas, I don’t think it has the pop-cultural presence of Fallout 3, nor at this point the potential to develop it. It was a much better game, but there are lots of good games that’ve fallen into obscurity.

          • The Red Foliot says:

            I think F:NV will remain popular with small, niche communities of video game aficionados in the same way that Planescape: Torment, Vampire: The Masquerade: Bloodlines, and the original two Fallouts have, albeit to a lesser degree.

            Fallout 3 enjoyed a great deal of immediate popularity, but I don’t think its popularity will be at all enduring. It’s the sort of product that lends itself to an endless line of iterations, each absorbing the fanbase of the last, each basically the same as the last but with minor variations, updated graphics, and a new, terribly written plot. I don’t think it will be considered a classic in even a weak sense.

          • The Red Foliot says:

            One reason to expect modern popular cultural products NOT to endure is that much of their success is reliant on hype. Marketing departments today seem to pour far more cash into hyping products than in the past. So, much of what you’re seeing as being popular today didn’t build its popularity organically, as in the past, but instead was supported by millions of dollars spent on advertisements. The popularity persists so long as the marketing remains fresh and well-funded. But it is ephemeral.

            Another reason is that modern entertainment industries are so exploitative with their intellectual properties. Star Wars, the original trilogy, went a couple of decades without much attempt at a follow-up, and while there were lots of toys and video games, those were decidedly side-products. Modern industries seem to milk their products much more thoroughly, with new iterations of main products coming out on a yearly basis. So, whatever luster such products may originally be said to possess might steadily be diminished over time, as the ruthless processes of Molochian capitalism drain them as a vampire feeds on blood.

            Another possibility is creative exhaustion. If there is a limited supply of sufficiently distinct artistic concepts floating around in idea space, then one might expect said supply to become diminished after a while, even exhausted. The production of creative content has seemingly increased exponentially since the industrial revolution, so one might expect our civilization to eventually reach its capacity in terms of what concepts there are to exploit. After that, creative content would begin to seem increasingly derivative.

      • beleester says:

        Speaking as someone who didn’t play Pokémon when they were a kid, but discovered emulators in their 20s, I think Pokémon definitely holds up. It’s simple enough for kids to enjoy, but has enough depth for adults, especially if you get into PvP. It combines a lot of different emotional appeals – exploring, discovering new creatures, putting those creatures together into your own personal team of badasses, and building them up until they can challenge bosses. There’s a reason the formula has survived for 8 generations with no sign of stopping.

      • Brad says:

        And the MCU as well… I’m not sure if it quite qualifies. I think people will regard the early 21st century as “the time we were all obsessed with superhero films”, but I don’t know if the actual films will be regarded as classics… Having said that, these are *very* popular films, so maybe.

        How do you think of cowboy movies? Not the ironic or nostalgic descendants, but the ones at the height of their popularity.

        • fion says:

          Yeah, that’s a good point. So there was definitely a time when we were making shitloads of cowboy movies and presumably lots of people loved them. And now it’s a genre that we reference and make fun of. But how many of the old cowboy movies are classics?

          (That question sounds more rhetorical than it is. I don’t think I’ve ever actually seen a cowboy movie so I don’t really know what I’m talking about…)

          But yeah, so I think “superhero movies in the early 21st century” will be like “cowboy movies in…[whenever it was they were mostly made]” in that everybody will think of it as a genre of its time, but it won’t necessarily produce many classics?

          • The Nybbler says:

            There’s a lot of classic cowboy movies, some of which don’t even have Clint Eastwood in them. “High Noon” doesn’t have John Wayne either.

          • The Red Foliot says:

            In literature, often a work of genre fiction that becomes regarded as a ‘classic’ will no longer be regarded as a work of genre fiction. Frankenstein and Dracula, both Gothic horrors, are two examples. Most works fail to attain such status and are duly forgotten. But even among works that do attain some kind of literary (or, at least, semi-literary status), there is some doubt as to their longevity. And this goes beyond genre fiction. Who remembers Saul Bellow? Philip Jose Farmer? It seems like very few works are truly enduring. Of the Western genre, I can only think of one that has the potential to endure: Blood Meridian. A deconstruction of the genre published after the genre had faded out. Having just one creative work achieve ‘classic’ status might be par for the course for a genre, however.

            But it should further be noted, that even as ‘enduring works’, these works are only really enduring among small groups of aficionados. Dostoyevsky is extremely popular, for a 19th-century writer, but his popularity is limited to that small group of readers who are interested in perusing all the ‘classics’. Being a classic isn’t a sign of mass popularity, but only popularity among small groups. This is for literature.

            For graphic forms of narratives, I think cultural endurance has a lot to do merely with imagery. Iconic characters such as Mario, Pacman, and such, seem to have it in them to persist even though the works they derive from have almost no value. I don’t know in what sense this makes them ‘classic’, but if we’re judging by popularity and cultural legacy it seems they are highly noteworthy. Being called ‘classic’ for iconic reasons implies they, as characters, will be remembered, but the products they are represented in will not.

            Other graphic works, such as those by Kubrick, endure, in part, for the same reason that Dostoevsky and Mary Shelly endure: relatively small groups of people take an intellectual or aesthetic interest in them.

            I don’t think the superhero genre will produce any content that transcends the genre to become ‘art’. While it does contain many iconic elements, these are not intrinsically linked to any given product they are a part of. They will endure as Mickey Mouse does, but the films now being created will not benefit from their longevity. So, the genre on the whole will likely be forgotten, and there will likely be an absence of any transcendent works. Maybe some of the Batman films will be remembered: Dark Knight and the Jack Nicholson one. Both of those stand apart from the more CGI-focused iterations of recent years, however.

          • Le Maistre Chat says:

            Imma let you in on a secret: superhero films are stealth Christian fiction.
            It’s weird, because comic books are soft SF soap operas with fight scenes, but Superman? “I sent them you, my only son.” Marvel’s Thor? A god becoming worthy by offering to die for humans. The Dark Knight? Taking on the sins of Gotham, in a skewed way that requires more discussion (lying to everyone is not a very Christian message). If a writer and director nail that message as well as The Searchers or High Noon nailed “Civilization can only be preserved by men who kill, but any man who kills is uncivilized”, you’ll have your classic.

          • Nancy Lebovitz says:

            I’m inclined to think that the mere amount of new art is enough to erode the possibility of a canon, and short of a civilizational collapse, the amount of art is just going to keep increasing.

            This being said, what about art that keeps being remade in one form or another? Shakespeare has a lot of momentum that way, and I wouldn’t be surprised if Harry Potter and the MCU have some of that.

            Are there any children’s books since 2000– the sort that get passed on because parents read them to their children– that seem likely to have staying power?

          • Nancy Lebovitz says:

            There was some rather explicit Christian material in one of the Spiderman movies.

          • The Nybbler says:

            Of the Western genre, I can only think of one that has the potential to endure: Blood Meridian.

            Riders of the Purple Sage?

            And this goes beyond genre fiction. Who remembers Saul Bellow? Philip Jose Farmer?

            I remember Farmer from Dayworld and Riverworld, but he wrote genre fiction, and his work is unlikely to be on a list of unqualified classics. In SF, I suspect _Stranger in a Strange Land_, _The Left Hand of Darkness_, and some works by Ray Bradbury which aren’t _The Martian Chronicles_ would be most likely to make a non-genre list. Also Vonnegut’s _Slaughterhouse Five_ if you ignore his wish to not be classed as an SF writer.

            Of course, some of what makes a classic is the future taste of secondary school English curriculum writers, so (checks thread number) perhaps anything not already on the list by a white male is out.

          • John Schilling says:

            But how many of the old cowboy movies are classics? (That question sounds more rhetorical than it is. I don’t think I’ve ever actually seen a cowboy movie so I don’t really know what I’m talking about…)

            High Noon almost certainly, but that was using the setting to tell a different and more enduring kind of story.

            True Grit, maybe, particularly if people remember the ones starring John Wayne and Hailee Steinfeld.

            Butch Cassidy and the Sundance Kid, very likely

            The Good, the Bad, and the Ugly, or maybe the Dollars Trilogy as a whole.

            The Outlaw Josey Wales, particularly if the Dollars Trilogy doesn’t endure as examples of “Clint Eastwood Movies”

            Possibly The Searchers, Stagecoach, and/or Fort Apache, if only as go-to examples of “John Wayne movies”

            And of course, Blazing Saddles will endure.

          • LewisT says:

            Of course, some of what makes a classic is the future taste of secondary school English curriculum writers, so (checks thread number) perhaps anything not already on the list by a white male is out.

            Based on the number of high school English classes that have to read it, The Kite Runner might well survive a few decades.

    • powerfuller says:

      For movies, I think the strongest contender is probably There Will Be Blood.

      For novels, Marilynne Robinson’s Gilead and Home, as well as Jeffrey Eugenides’s Middlesex are all very good, but I’m doubtful as to their long-term value. I haven’t read much contemporary literature though. Harry Potter probably will have more staying power than anything else (more for popularity than quality).

      It’s hard to know what will stick out for video games, as their selling points become outdated and adapted so quickly (e.g. the same game, but with slightly better graphics and controls). For a baseline, what’s the best video game of the 20th century? How does one even compare Tetris vs. Super Metroid vs. Half Life? What’s the best non-video game of the 20th century, for that matter? Twister? It’s gotta be Twister.

      • Civilis says:

        What’s the best non-video game of the 20th century, for that matter? Twister? It’s gotta be Twister.

        If you mean ‘Most Influential’, as opposed to ‘best’, Settlers of Catan. In retrospect it’s a deeply flawed game, and I haven’t been interested in playing in years, but still I can separate board games into pre-Catan (Monopoly, Trivial Pursuit, Risk, etc.) and post-Catan. (Note that I’m an American, the analysis might be different for someone in Europe).

        I would also accept as answers Dungeons & Dragons or Magic: the Gathering, for introducing whole genres.

        I think that there’s a difference between good and “great”/influential, and most of the “great” works, the ones that influenced others the most and hence are the most remembered, are heavily flawed. It could be because the good made incremental refinements to something established, while the great made massive changes.

        • powerfuller says:

          You’re right about good vs. great. It would be weird to claim Half-Life Source is a greater game than Half-Life (or any other remastered editions), though technically it is a better one.

          Thinking about it seriously now, the best 20th century non-video game is probably D&D (referring to the general idea, not a particular edition), which is both potentially endlessly fun and broadly influential, more so than MTG I’d say. Most board games suck (Monopoly) or are too niche and involved to be appealing (Terrible Swift Sword anyone?).

          On the border of video games is pinball, of which I feel extremely confident saying Twilight Zone is the best table of the 20th century, and Dialed In the best of the 21st (it helps they’re both designed by the same guy).

          • Randy M says:

            What makes a pinball game great?

          • powerfuller says:

            @Randy M

            That’s an excellent question! I’m sure tastes differ, but I generally look for:

            Durability & Control: The table itself has to be well-constructed, both to last, and to assure that the same action produces the same effect. Every individual table will develop its peculiarities, but you don’t want to have to develop an entirely new strategy each time you play a round because the flippers are going wonky. A lot of this depends on maintenance, but some companies produce far better tables than others (e.g. Stern produces most new tables today, but they fall apart faster than Williams tables from decades ago). As a matter of taste, I tend to dislike tables with randomization elements (e.g. the spinning magic lamp in Arabian Nights), because I like a “fair,” predictable game.

            Complexity: I prefer tables with large rulesets, lots of objectives, and various possible strategies to reach said objectives. Additionally, a table ought to have a variety of possible shots beyond “shoot up at a 30, 45, 60, etc. degree angle,” as you get in Attack from Mars, which has basically only 7 different shots, all straight up the table. Twilight Zone, for comparison, has 2 additional flippers on its sides (1 is visible in the link) so you can shoot across the breadth of the table. This biases me a lot toward tables with more than 2 flippers and a wizard mode (the pinball equivalent of a last boss, accessible only after completing many smaller objectives). Simpler tables are also fun, but they have less replay value over time.

            Innovation: Tied in with complexity, a table ought to have some unique features. Twilight Zone, for example, featured “invisible” flippers (magnets) and a ceramic ball that was lighter and faster than the normal ones, a flipper that blocks objectives, etc.

            Mimesis: Obviously, a pinball table can only do a poor job of imitating whatever its theme is, but I want the objects on the playfield to evoke the theme, and I want the actions performed to imitate the objectives. For example, No Good Gofers has a great, difficult “hole in one” shot that complements its theme nicely. Twilight Zone is tops for me in part because playing it feels closest to its source material: trying to unlock “Lost in the Zone” mode will, in fact, distort your sense of time and space and make you go half-crazy. Also, some themes are just more fun than others.

        • dndnrsn says:

          I’d go with D&D also. It and M:TG both introduced genres/formats, but the genre/format D&D introduced is much, much wider. First, in the sense that there’s a ton of RPGs, including many which are vying for the #2 spot (I don’t think you can reasonably say D&D doesn’t have the #1 spot), while most people could probably not name more than three other CCGs, and compared to RPGs there’s no Vampire or Call of Cthulhu or whatever to obviously be in the running for #2. Second, I’m biased here, but you can do more with RPGs, by and large.

          • Randy M says:

            Probably L5R for second most influential TCG. Other contenders are Pokemon or Deadwood.
            The trouble is, to revisit an RPG you need a PDF and some dice that are common to all the games. To revisit a board game you need an intact copy, easier in some cases than others (you can play Catan with some missing pieces and not know it). To replay a TCG, you need some small fraction of the cards, but enough compatible ones to capture the feel of it, and a generous but hard-to-quantify amount to really capture the magic (pun not intended).

      • Paul Zrimsek says:

        What’s the best non-video game of the 20th century, for that matter?

        Contract bridge.

        • SamChevre says:

          Has to be one of contract bridge, or D&D, or MMORPGs – but my money would be on bridge.

        • powerfuller says:

          @Paul Zrimsek and @SamChevre

          What makes bridge so great compared to other card games? I’ve never played bridge (and probably never will). Also, I’m surprised it’s as young as it is.

          • SamChevre says:

            Bridge is a lot of fun, but the reason I think of it as important and lasting is that so many other games use its mechanics of bidding, following suit, and trick-taking. It is a game of skill, with enough elements of chance that it doesn’t get boring.

            I really recommend just playing bridge on bridgebase “Just Play Bridge” for a few dozen hands: it’s an easy way to see if you like it.

          • meh says:

            But if we’re just talking about trick taking games, I think that takes us out of the 20th century.

          • Paul Zrimsek says:

            That’s why I was careful to specify contract bridge, which is a refinement of an older game (auction bridge) and so could perhaps be disqualified as a 20th-century game on that basis. Anyhow, it gains a great deal of strategic depth partly because of the bidding process, which doubles as a way for partners to exchange information about their hands, and especially from the fact that in each hand there’s one player who gets to play their partner’s cards as well as their own– this makes the cardplay more amenable to planning than it is in other trick-taking games.

            ETA: The learning curve, unfortunately, is pretty steep, making it hard to recruit new players. It’s mostly an old people’s game these days.

          • SamChevre says:

            I’d agree on the importance of bidding, and the dummy, in contract bridge. I’d add that the scoring is one of the best-designed systems I have seen in any game to make play interesting, and to encourage cautious risk-taking.

          • SamChevre says:

            Random note on bridge: it’s still quite popular in India, apparently. When I was a young programmer, many of my colleagues were Indian, and I spent many a night sitting cross-legged on the floor, playing bridge until way too late.

            I was once dared/challenged to play for an evening at a nickel a point; I was sufficiently clueful to decline.

          • There are a number of successful games that can be thought of as training for a particular sort of human skill. What bridge is about is coordination with limited communication.

        • fion says:

          Misread that as “contact bridge” and got really intrigued…

          • Aapje says:

            Chess boxing.

            Full contact bridge.

            Mixed martial poker.

            Kickboxers of Catan.

          • Paul Zrimsek says:

            If you’re intrigued now, wait until you’re on the receiving end of a grand slam.

            “8 tricks to 5, that’ll be 2 points for us,” Tom said wistfully.

          • Nornagest says:

            Mixed martial poker.

            Cheating is allowed, but if you get called on it the hand is decided by a fistfight. It’d make great TV.

          • johan_larson says:

            Cheating is allowed, but if you get called on it the hand is decided by a fistfight. It’d make great TV.

            Do it in 19th-century attire, set in an old-time railroad car, and do the fighting by bare-knuckle boxing rules.

    • meh says:

      I am sure some (not me) would suggest Avatar; but I would not be surprised if it is forgotten.

      • DeWitt says:

        The franchise or the James Cameron movie? I think millennials are going to remember the former for much longer than the latter.

      • INH5 says:

        If you’re talking about the James Cameron movie, I have a friend who is in that movie’s fandom, and it is far, far smaller than, 10 years ago, you would have expected the fandom for the highest-grossing science fiction movie in history to be. Among the general public, it’s already close to forgotten. Go to a bar and ask around until you find someone who’s seen that movie (statistically it shouldn’t take you that long) and then ask them to name a single character or quote a single line of dialogue.

        If you’re talking about the franchise that started as a Nickelodeon cartoon, I think that will almost certainly be remembered as a classic by animation buffs. As for the general public, I have no idea. The people who watched the cartoon are probably greatly outnumbered by the people who only know “The Last Airbender” as a godawful M. Night Shyamalan movie. Looking back, the cartoons that stick around in cultural memory the most tend to be the ones that are widely syndicated to run on Saturday morning cartoon blocks and the like, and I have no idea how much success Avatar has had in syndication. However, the fact that it is not available to stream on either Netflix or Hulu does not give me confidence in its ability to attract new fans outside of its original TV audience.

        • meh says:

          I was talking about the movie; I haven’t seen the cartoon. I was not a fan, but there is no denying the movie probably had one of the highest peaks of popularity of any movie/book in the 21st century (so far).

          • J Mann says:

            I think the movie did an amazing job using 3-D as more than an add-on. I guess the question is whether critics and film students will hold on to it as a touchstone, or if it will be overshadowed.

            What about Crank and the Fast and Furious movies?

          • Iain says:

            I suspect that games and films that are famous for their technical wizardry are unlikely to have long-term staying power. Avatar was the first big 3D hit, and it looked great in comparison to its predecessors. Fifty years from now, it will no longer have that advantage.

          • meh says:

            They can if they are also good. 2001 and T2 come to mind. King Kong has nothing a modern audience would want to watch, yet remains culturally significant. Maybe also Jurassic Park.

          • INH5 says:

            What about Crank and the Fast and Furious movies?

            I would not be surprised at all if Crank ends up as a cult classic in the same way that many old B-movies have. But in the Turner Classic Movies sense? Probably not.

            As for Fast and the Furious, the first movie is 17 years old, so we should be able to tell if the early installments are moving into the classic category. I could be wrong, but I don’t think they are. I don’t even remember any retrospectives in the aftermath of Paul Walker’s death.

            Granted, the movies from Fast Five onward are pretty different from their predecessors, so it’s possible that they could end up as action movie classics even if the first four movies are forgotten. But I think it’s still too early to tell at this point.

            I suspect that games and films that are famous for their technical wizardry are unlikely to have long-term staying power. Avatar was the first big 3D hit, and it looked great in comparison to its predecessors. Fifty years from now, it will no longer have that advantage.

            Not to mention that viewing a movie in 3D requires specialized hardware, and 3D TVs have never caught on despite multiple attempts over several decades. Avatar was an awe-inspiring visual experience when I saw it in a 3D IMAX theater, but I can’t imagine it having nearly the same impact watching it at home, which is why I’ve never bothered to rent or buy it since.

          • Nornagest says:

            I bought a DVD on sale in 2010 or so. It’s much less impressive on the small screen.

      • Nornagest says:

        Avatar (smurfs, not ninjas) is a weird one. It’s very, very flawed, but I keep coming back to it as an example of all kinds of stuff — it seems to have struck some kind of chord.

        • johan_larson says:

          I suspect it appeals heavily to a certain lefty eco-hippy anti-military anti-capitalist sort of person. But I think the real reason it has mostly dropped out of sight is that it took a lot of flak for being very trope-y, reminding people too much of other works. And greatest of those tropes was What These Guys Need Is A Honkey.

          And I think that accusation was unfair. Yes, White Dude does manage to Unite The Tribes, but ultimately he fails. The attack he leads against the military falters. It’s not until Eywa herself intervenes, at the request of Colored Lady, that the battle is won. And the same thing plays out a second time on a smaller scale, with Colored Lady ultimately defeating Final Boss, a middle-aged white man.

    • DeWitt says:

      or games

      Modern gamer culture is very, very, very strongly defined by the games people born from 1985-1995 or so played growing up into adulthood, and I predict that this cultural baggage is going to last for a long while. Compare this, where the author notes that Christmas music conforms to whatever some people born in 1948 grew up with; whatever game you want to make now, whatever you want to do, people are going to complain about it not living up to stuff from when they were fifteen, whether that’s Half-Life or Morrowind or Ocarina of Time. The nostalgia in games like these is very strong, and they have gained enough cultural baggage and sanctity that I don’t see them losing their status any time soon.

      More to the point, I’m not certain about any given games, but franchises or even genres are absolutely going to be remembered for a very, very long time. World of Warcraft is nearly fourteen years old today, and the people who have played it for years are unlikely to ever quite forget about the game. Online first-person shooters like Counter-Strike, Battlefield, and Call of Duty are also ubiquitous and evocative enough that their legacy is going to last for a very long time as well. Someone else mentioned Pokémon, a franchise old enough not to qualify, but one that has definitely been popular enough among millennials that I’m sure there’ll be people going on about their childhoods playing those games even thirty years into the future.

      Outside of games, I see people mention Harry Potter, which I’m sure definitely qualifies. What about, again, the genre that has become Netflix series? Just as we recognised at some point that all Greek tragedies look alike if you take them apart, I wouldn’t be surprised if twenty to forty years from now people started making homages with common tropes from these.

      Finally, do memes and general internet culture count? A lot of it is so ubiquitous and widespread that its being unique to today’s age may not be immediately obvious. White letters with black outlining, advice animals, what have you, are all immediately recognisable to a large section of the population today, but the language and format used are unique enough that I’m very sure they’d look entirely bizarre to a viewer from some twenty-five years ago, say.

      • Conrad Honcho says:

        I’m curious how multiplayer online games will be remembered. Anyone can pick up The Legend of Zelda and play it, and have an experience pretty similar to the one I did when I played it when I was 10. My five year old played it and loved it (he’s beaten the first eight dungeons and just has to go kill Ganon).

        But you can’t experience World of Warcraft at launch like that. I played WoW from launch (quit with MoP because fuck Pandas) and I have a lot of memories of how engrossing it was to explore that world with other people exploring it themselves for the first time, and you can’t get that again, even on a private server. Or all the multiplayer games that did not become such enduring classics that people still play them. I played the hell out of Gears of War multiplayer when it came out, but online services for that game don’t work anymore. You can’t experience it like I did.

        What happens to the “classic” nature of multiplayer games once there’s no one left who played the game, and you can’t play it anymore because no one else is playing it?

        • dndnrsn says:

          Also, how do multiplayer games affect the singleplayer game? I think that multiplayer games kind of hurt some “games as art” potential. For example, I really liked CoD 4’s single-player mode. It was relatively short, it was quite tight, there was a bit of moral complexity, etc. I’m given to understand, however, that most CoD players think that the single-player is just a sideshow to multiplayer.

    • Anonymous Bosch says:

      It’s not books, films, or games, and it’s kind of an obvious one, but I think The Wire is going to stand the test of time as a marker for both the historic corrosion of law enforcement by the drug war, and the current artistic trend towards long-form serialization (not claiming it was the first to do either).

    • ohwhatisthis? says:

      Film: I’m going with two series. The Lord of the Rings movie adaptations, and the Avengers series of films and the associated movies. For new (as far as I am aware) creations, Avatar is there.

      Games: Should I just list the biggest blockbusters? I will go with the Halo series, for starting off the FPS genre with a bang, and Skyrim.

      TV: Game of Thrones, likely.

      Books: This one is harder. Influential books are more cleanly delineated amongst educational groups than other forms are. The most influential book amongst academics earlier this century was probably “The Blank Slate”. For general culture, I’m going with Harry Potter (even though it was started near the end of the last century).

    • dndnrsn says:

      What does it mean for a video game to be regarded as a “classic”? I agree with DeWitt that there are franchises that will be considered classic. However, I’d note that there are two kinds of movies called “classics” – those that people still watch for fun, and those they don’t. The division is probably by technology, and thus by time. Someone is far more likely to watch a talkie than a silent film for fun. There are lots of films that are considered classics for being innovative, for changing the history of the film industry – but they’re not films you sit down and watch to entertain yourself, by and large.

      I think this gap is going to be much bigger for games than with movies. Technology and game design are still advancing in a way that is not the case for movies. I remember GTA: Vice City as being a great game – memorable setting, without the narrative/ludonarrative gap that makes the fourth game in the series so jarring – but I would never try to play it now, because the lock-on based combat was terrible. In comparison, you don’t watch top-notch films from decades ago and think “yikes, that camera work.” Additionally, it’s harder to play old games than watch old movies: I can easily go and get a copy of Casablanca I can watch, but getting old games to work often requires a lot of hassle.

      Games that fall into “classics you can still play” tend to be more present in some genres than others, too.

      • DeWitt says:

        Accessibility is definitely a point about video games that shouldn’t be forgotten. Not only do games age out of playability, the existence of so many platforms limits the audience that can access them. Many people might fawn over Shadow of the Colossus, say, but someone who’s never owned a PlayStation is either never going to come into contact with it or will have to resort to illegal means of playing the game.

        • Civilis says:

          A remaster of Shadow of the Colossus was released for PS3 in 2011, and a full remake for PS4 this February.

          In some sense, the fact that a game is getting remakes testifies to the fact that there was something about the original that struck sparks with gamers. The same is true when a long-running franchise takes a step back. After a very successful and very good set of 3D Super Mario games, that they’ve gone back and done a couple of 2D platform games which have also sold well (New Super Mario Bros.) indicates that the original formula still has staying power.

          • DeWitt says:

            Remastered on the same platform’s successor, yes. I can see The Godfather as well as a guy in the ’70s might have, but a PC gamer isn’t going to be playing God of War without either serious investment or some illegal hassling about.

      • Randy M says:

        In comparison, you don’t watch top-notch films from decades ago and think “yikes, that camera work.”

        There are movies that are somehow well regarded despite my thinking “yikes, that camera work.” Bourne Identity, I’m (not) looking at you.

        Games that fall into “classics you can still play” tend to be more present in some genres than others, too.

        This is certainly true. Maybe some genres need more refinement than others. And it is hard to separate out the nostalgia. A couple years ago, we were at an acquaintance’s house and they had a Nintendo with Mario 3 that actually still worked; I played until halfway through the last world before getting bored of it. On the other hand, I have a friend who still talks about how great FF1 is, and I don’t think it’s aged very well – pretty repetitive and grindy combat with few real choices. Chrono Trigger fares better, though a lot of its appeal is the story, and since I know it well it’s hard to say how someone new to it but aware of, say, Elder Scrolls, would enjoy it.

        • dndnrsn says:

          The games that have aged the best seem, in my experience, to be mostly RPGs from a certain period (early 90s? I know I picked up Betrayal at Krondor some years back and played through the first 2 or 3 chapters), and some turn-based strategy games – you could still play Civ 2 just fine.

      • Nornagest says:

        In comparison, you don’t watch top-notch films from decades ago and think “yikes, that camera work.”

        I actually do think I find films from the Sixties and earlier harder to watch because cinematography wasn’t as well developed back then. Sergio Leone’s spaghetti westerns still hold up well in most respects, for example, but the camera is very static, and that creates a sense of distance that more modern movies don’t have. (I sometimes have the opposite problem with very modern movies — shakycam often hurts more than it helps.)

        It’s not as jarring as playing a shooter from before the mouselook + WASD control scheme was well established, but it’s a thing.

        • dndnrsn says:

          It feels a bit slow, but I think it distracts less from the experience than wonky controls do.

        • DeWitt says:

          I sometimes wonder how much refining video games are going to get in that regard. Even those movies from the Sixties had half a century of cinematography to build upon, whereas video games are a much, much younger medium, one that’s also much more tied to the development of technology. Do things plateau after a point, do standard conventions and principles continue to be adopted, or is a modern good game like Divinity: Original Sin 2 going to feel horribly dated after a decade, too?

          • dndnrsn says:

            It might be harder to build upon the past, to some extent, because of the technological changes being faster/more intense, at least now.

            EDIT: That made more sense in my head. What I mean is, it’s harder to go back and loot the past for good ideas in video games than in movies, novels, tabletop games, whatever. More cool innovations are going to get forgotten, so there’s less to build on.

          • DeWitt says:

            It’s not immediately clear to me that technological advances move quicker now. Moore’s law isn’t around any more, and I’m under the impression that things like new consoles are released less frequently now, not more.

            Still, definitely wonder just how many video game conventions will get further standardised and enshrined as time goes on.

          • dndnrsn says:

            Computer technology advances faster than film technology does, though. At least, analog film technology. More movies dating from, say, halfway into the history of film as anything beyond experimental (so, probably around 1950) are watchable than games from a comparable halfway point (around the late 90s or early 00s) are playable. Technology is probably a big part of this.

          • DeWitt says:

            Yeah, definitely. Additionally, I think technology offers more opportunities for video games to develop in a way movies do not. Graphics aside, making a game as heavy on processor power as Stellaris would once have been beyond any but the most niche audiences, whereas your capability to make a good Mad Max movie isn’t contingent on technology at all.

          • dndnrsn says:

            I think there’s a lot of opportunities for games to do a lot of cool things, but I don’t know if the incentives are there for those things to get done. Which is pretty sad.

          • The Nybbler says:

            Not much, much, younger. Space Invaders (which should probably be on the classics list) is 40 years old this year, for instance. Spacewar! is from 1962 but is a bit of an outlier.

    • SamChevre says:

      On the “not pop culture” topics, I expect N. T. Wright’s apologetic, Simply Christian, to be read and influential for a century. In many ways, it’s a successor to C. S. Lewis’s Mere Christianity, which is still very influential.

    • INH5 says:

      We’ve supposedly been living in the Golden Age of Television for a while now, so I expect the last 20 years to produce a disproportionate number of classic TV shows. With that in mind, I am pretty confident that Breaking Bad, The Wire, and The Sopranos will be remembered as classics for a long time to come. Lost and Game of Thrones will definitely be remembered, but they could easily end up as classics in the Gilligan’s Island sense, remembered more for their nostalgia value than their quality.

      On the question of the War on Terror, I expect that 24 will be remembered, but probably more as a relic of the zeitgeist a la Red Dawn.

      —-

      When it comes to movies, as other people have said, the Lord of the Rings trilogy will probably stick around for a long time. Other movies from the first decade of the 21st century are pretty much already classics. The Dark Knight is an obvious one, but The Departed, No Country For Old Men, There Will Be Blood, and Spirited Away also show up frequently enough on “Best Movies of the 2000s” lists for me to put them in the “already classic” category. Finally I expect that Finding Nemo, The Incredibles, and WALL-E will be fondly remembered for quite some time, with the legacy of other 21st century Pixar films being more open to question.

      The MCU will most definitely be remembered, but probably more for the achievement of creating a shared big budget movie universe than the quality of any individual film. I expect that it’ll end up in a similar position to the James Bond movie series.

      —-

      As for books, other people have mentioned Harry Potter and I can’t think of anything else either.

      —-

      Video games are a more difficult media type to assess, because while movies and TV shows can be easily updated to new formats, updating video games to run on new hardware requires a lot more time, money, and effort. After the very first releases of the PlayStation 3, no one has even tried to make new generations of consoles fully backwards compatible with older generations, and PC hardware changes so rapidly that many games need regular patches either from the developer or fans to run on newer systems. Then there are licensing issues – the No One Lives Forever series might be a good candidate for a classic that was underappreciated in its own time if it hadn’t fallen into a legal limbo from which it may never emerge.

      The track record of 20th century video games suggests that most “classic” games are installments of successful and long-running franchises, because that’s the only way to both guarantee the funding to continually update and re-release the older games and make sure that the rights don’t end up getting lost somehow. Good candidates for 21st century games in that category include Halo, Grand Theft Auto: San Andreas, Fallout: New Vegas, Call of Duty 4, and, if the sales of its recent remaster are any indication, Burnout Paradise. Half-Life 2 and Portal have also managed to stick around despite their franchises sputtering out a few years afterwards, probably because Steam gives Valve more than enough money to keep updating them.

      • dndnrsn says:

        I expect Game of Thrones will get very mixed retrospective reviews. It started off very strong, and really petered out in a lot of ways. The sad result being that by the time they had the budget to film big battles and such, the show was really changing the rules of the world to get the writers/showrunners out of corners they’d painted themselves into.

        On the games you list – didn’t San Andreas have lock-on aiming? I don’t think that will age well. CoD4 has aged well, however. It also probably had the best story of any of them – a little bit of moral ambiguity, not just needlessly edgy, and a relatively quick and tight story mode.

        • Randy M says:

          Ending well is a good thing to consider for choosing a classic. I understand that’s a big ding against Lost – the finale made it clear that the producers were making it up as they went along, hoping to be able to come up with something sensible to tie it all together.
          Breaking Bad, which I binged recently, stayed strong throughout and had a clear, satisfying ending. Arrested Development, a really good comedy, actually had a very good ending for the series after season 3… which they have since subverted. The two subsequent seasons have held some laughs, but possibly at the cost of the show losing its status as an unqualified great show.

          Firefly gets added to the list of classics, doesn’t it, even though it has to cross mediums to get its ending.

          • dndnrsn says:

            The first three seasons of Arrested Development were great; the ending of the third season paralleled the beginning of the first really well. Not sure what the point of anything after that was. The fourth season kind of wrecked a lot the first three had set up; haven’t seen the fifth.

          • mdet says:

            Thanks for the phrase “ended well”. With the recent Solo movie (which I haven’t seen, and probably won’t) I was thinking about how the Star Wars Original Trilogy “ended well”, then the prequels came and “ended unwell”, and I initially welcomed the New Trilogy as a chance for the franchise to End Well again, but have now realized that it simply Won’t End. But I didn’t have the words “ended well”, so I was having trouble articulating this.

      • dodrian says:

        You make a good point about the technical problems of keeping video games up to date. I think that means the “cultural classic” video games will fall into one of three categories:

        Easily reproducible:
        This is stuff like Tetris, Asteroids, etc., which are simple enough that they can even be assigned as projects in CompSci courses. I think most of these games will be older than the 21st century, though it’s possible that better languages, tools, and frameworks will someday bring early-21st-century games into this category as well.

        Studios with Staying Power, especially console first party games:
        It looks like Nintendo, Sony, and Microsoft won’t be going anywhere (things could change, but Nintendo has been around 130 years already, and though the last decade was pretty shaky, they’ve soared to new heights this year). The big studios have the licenses, dev power, and legal clout to keep their games alive, and are able to re-release classics again and again for updated hardware (making more money in the process). Halo (2001), Pokemon (1996), and Ocarina of Time (1998) are the games that come to mind for me, and stuff like Knights of the Old Republic (2003) might also count due to its ties with massive enduring IPs (KOTOR was just added to Xbox Game Pass).

        Open Source/Open Source Clones
        Not really “Easily Reproducible”, but some games have built up communities around them that dedicate themselves to making them Open Source. They usually start by creating an engine clone that requires the user to drop in graphics and other copyrighted files from a copy of the game, then slowly replace all parts of the game until it’s an entirely ‘original’ game. Open Transport Tycoon Deluxe is one of the best examples; it no longer needs any files from the original 1994 game. Other active projects to clone/replace games include OpenRCT2 (RCT2 – 2002), openage (Age of Empires II – 1999), and Freedoom (Doom – 1993).

        I realize that most of my examples are actually from the 90s, but maybe others can provide more examples from the 2000s.

        • mdet says:

          Easily Reproducible… it’s possible in the future better languages, tools, and frameworks early 21st will come under this category again.

          Certain game genres might get recreated, but I don’t think specific 21st century classics will be reproducible. Tetris, Asteroids, Snake, etc. don’t have specific assets that are someone’s intellectual property, but most 21st century games do have iconic assets. If I recreate Halo, minus the Halo assets, and with the actual mechanics imperfectly replicated, then I just have any old FPS. Probably not enough to keep Halo’s specific prestige alive.

          Something like Portal is an exception, as a game known more for its unique mechanics than for any particular characters / settings / aesthetics.

    • mdet says:

      As a spinoff question, how many “great” or “classic” works should we expect to put out in a decade?

      One thing I’m sort of anxious about is that the 21st century media landscape is becoming so niche and fragmented + we’re producing so much “content” that there will be very few things popular enough to reach truly “classic” status. (The upside is that niche content can be incredibly relevant and popular with a small audience).

      I think The Dark Knight is the 21st century movie that best hits the sweet spot between “Who hasn’t seen it?” blockbuster and “Film critics can dissect it endlessly” acclaim. Also I’m just remembering The Passion of the Christ, which no one’s mentioned but I feel like was a big deal at the time.

      Musically, College Dropout (even though 808s was probably more influential) and To Pimp A Butterfly are the first two albums that come to mind for “classic” status.

      (Since it’s probably relevant to what kind of media I’ve been exposed to, I’m mid-20s)

      • johan_larson says:

        As a spinoff question, how many “great” or “classic” works should we expect to put out in a decade?

        In literature, I would guess there are more than a hundred but less than a thousand works that wouldn’t look out of place on the syllabus of a course in English Literature (meaning works originally written in English, from anywhere.) Figure those works almost wholly come from the last 300 years. So, one new classic per year? Maybe one every decade if you take only the most famous works, the Moby Dicks and Portnoy’s Complaints.

      • mdet says:

        I’ll add to my take that, while I don’t know how the MCU as a whole will be regarded a generation or two from now, I think the circle shot from The Avengers could be on par with the boulder shot from Raiders of the Lost Ark in terms of iconic movie moments.

    • BBA says:

      I suspect, if things in the future stay as woke as they are now, that 2004’s Crash will be remembered like 1915’s Birth of a Nation – with astonishment and horror that something so incredibly racist won such acclaim from the “enlightened” people of the time.

      (If not, it’ll be a difficult trivia question as the movie that beat Brokeback Mountain at the Oscars.)

    • WashedOut says:

      Predicted future classics

      Books: Harry Potter, The God Delusion

      Games/game franchises
      (high confidence): Mario Kart, Minecraft, Starcraft, Final Fantasy (probably VII)
      (low confidence): Myst, Counter-Strike, Pokemon, Tekken/Street Fighter
      (wishful thinking): Dark Souls

      Musicians/Bands: Alex Cameron, Justice, Daft Punk, Massive Attack, Death Grips

      Films: No Country for Old Men, The Dark Knight

      Serials: True Detective, The Wire, Breaking Bad, either House of Cards or The West Wing

      ~Notes/justifications~
      -Starcraft was the first game (AFAIK) that drew a bright line between casual play and quantifiable, actions-per-second online competitive min-maxing, and probably the first game people could play and stream professionally
      -Game of Thrones went down the drain after approx. season 3, and will be (un)remembered as such
      -The Wire ended up being an omen of the future American politico-cultural zeitgeist
      -People will be listening to Death Grips in 50 years’ time and thinking it’s modern music
      -People will be listening to Alex Cameron in 50 years’ time in the same way we ironically listen to Bruce Springsteen now
      -The Dark Knight will have a cult following sort of like The Crow, but less weird
      -If you didn’t play Mario Kart as a child born between 1985 and 1992, you didn’t have a childhood

      • The Nybbler says:

        _No Country For Old Men_ might be a classic, but if so it will be remembered as the last echo of the Western genre in film, not the start of something new.

      • John Schilling says:

        -Game of Thrones went down the drain after approx season 3, and will be (un)remembered as such

        A great many people don’t seem to have realized that, and hold GoT to have been quite good through season 6. Possibly that’s just because of the extra social justice points they got for all the strong female roles, but if so they did it in a way that was much less annoying than the usual grade of social justice pandering and I don’t think they’ll be stained by it when social justice goes out of fashion. And there was some really good, memorable work in those seasons as well.

        Season 7, even fans and fannish critics have been calling weak, but if they stick the dismount next year I think it does go on the classics list.

        • dndnrsn says:

          Personally, I think it was the end of season 6 where things started to go wrong. It was near the end of that season (penultimate episode, really) and throughout season 7 that worldbuilding, character, and in some cases physics were all subordinated to keeping the plot moving. Narrativium became a much more common element.

          I doubt they’re going to pull it back together. It would be like a GM who’s drifted into hardcore railroading suddenly turning back and restoring player agency. Once it’s been decided that the driving force is going to be the plot, that’s not happening.

          • cassander says:

            I strongly suspect, but of course have no evidence, that GRRM has a pretty good idea how the series ends. The books are delayed because he doesn’t know how to get there from where he is, but I think he has a good idea of the ending. If that’s the case, then there’s some hope that the final season will actually have some story structure and characters making decisions in character. It’s a faint hope, I grant you, but I have a feeling that the last season will have some excellent landings for at least a few of the characters that are left, even if everything else is a mess.

          • dndnrsn says:

            I really do think it’s kind of like railroading in a tabletop game. In railroading, the GM has decided that he knows which way The Plot is going, and he can’t let the players or random chance change that. So he keeps them from doing things, fudges die rolls, etc.

            You’re right that the issue is probably not knowing how to get from point A to point Z. The railroad-like way that manifests that is they know The Plot needs to get to Z. So they just do whatever gets them closer to Z, even if it makes no damn sense.

          • meh says:

            It reminds me of King’s Dark Tower series. The first 3 were great books, and set up an interesting premise, but then King had no idea how to resolve it. He often told fans that he was uncertain he would finish the series in his lifetime. He eventually did, but the last 4 books were not very good. I think ASOIAF is running into the same problem… it’s a bit of a universal problem, it is easy to set up grand interesting premises, but hard to resolve them in a way that feels meaningful.

          • cassander says:

            @meh

            I’d agree, though I’d also point out that the Dark Tower books did have an amazing ending; there was just a lot of blah meandering to get there.

      • b_jonas says:

        Starcraft and Pokémon are definitely classics, but the first games in those franchises were already popular in the 20th century. I believe that many of the other games you list are also from the 20th century, including Final Fantasy, Mario Kart, and Myst.

    • LewisT says:

      I second the Second Disney Renaissance suggestion. I also imagine the first Pirates of the Caribbean will have some staying power. Possibly Despicable Me as well.

      In the classical music scene, I suspect the film scores from Pirates of the Caribbean, Lord of the Rings, Harry Potter, and the like will end up just as popular a century from now as the opera overtures from the late 19th/early 20th century are today. John Williams will probably be about as popular as Wagner (perhaps more so) in 2100.

    • meh says:

      I loved The Prestige, thought it was an almost perfect movie. But Roger Ebert made a fair point:

      “The Prestige” has just about everything I require in a movie about magicians, except … the Prestige.

      • John Schilling says:

        David Bowie playing Nikola Tesla, and doing it well enough that I never once questioned it and never even noticed it was Bowie until the closing credits, doesn’t count for both definitions of Prestige?

        I do agree that, if it were going to be recognized as a classic, it would have happened by now.

    • Night Watch, maybe. On the more literary side: Life of Pi, Wolf Hall, White Teeth.

    • mdet says:

      “Film is heading towards death as a genre”

      What do you define as “film”?

      Whenever conversations come up about the state of Hollywood, people inevitably make the point that “Hollywood is so terrible, they make nothing but derivative franchise films. Clearly they’re dying and can’t come up with any new, better ideas”. But whenever conversations come up about the state of tv, everyone raves that there are so many new shows of such great quality covering such a wide range of different genres and premises, that there’s several times more great stuff than any individual is capable of watching. Does the success of tv mean that film is dying, or that it’s better than ever before?

      Would it change people’s opinions of filmmaking if we started referring to Game of Thrones as a long running film franchise that’s about to put out its final movie, and the MCU as a tv show that’s currently in the middle of its two-part Season 3 finale?

      ————

      On Blade Runner 2049: I thought the movie was gorgeous, and just like the original I enjoyed it for its aesthetics and atmosphere. But I also thought the story was a little half-baked. I don’t remember the specifics but I recall walking out of the theater thinking there were several loose ends left over, and I’m not sure if anything Jared Leto’s character did actually impacted the story in any way. I did enjoy it overall, but the original had much tighter storytelling, so the sequel’s legacy might not fare quite as well.

      I agree that Christopher Nolan might end up as the most notable filmmaker of this decade.

  9. fion says:

    I’d be interested to hear whether anybody else has watched this conversation between Jordan Peterson and Matt Dillahunty, and what your thoughts on it are. I thought JP didn’t do very well. As usual, he failed to be straight about whether he believes in God, and he repeatedly straw-manned MD’s views.

    In particular, I think his opinion that “a secular morality inevitably leads to something horrible like Raskolnikov, and any atheists who aren’t like Raskolnikov are evidence that they’re not really atheists and not evidence against my original opinion” is pretty poor, and not the best interpretation of the available information.

    • J Mann says:

      I don’t generally watch videos, so I have to admit to commenting without watching the Peterson-Dillahunty discussion, but these seem like a pretty good steelman of Peterson’s position if you want to test your reaction. (The second one is a little more focused, but it helps to have read the first one.)

      https://reasonrevolution.org/my-disappointment-with-the-matt-dillahunty-and-jordan-peterson-discussion/

      https://reasonrevolution.org/revisiting-the-matt-dillahunty-and-jordan-peterson-discussion/

      and more generally:

      https://reasonrevolution.org/introduction-to-jordan-peterson/

      Peterson doesn’t help things by assigning idiosyncratic meanings to existing words and by generally being so gnomic, but if we really want to engage with why many people find his ideas interesting, I’d argue that we need to make the effort to figure out what he means, or what his admirers understand him to mean.*

      * Note: It’s definitely possible that he doesn’t mean anything, and his followers are using his opacity to attach their own meanings, but we probably can’t conclude that without doing the work. This is why I find studying most 20th century European philosophers so frustrating, BTW – there probably is a pony in most of those piles.

    • professorgerm says:

      This podcast from The Bridgehead does a better job of getting at what I think JP is getting at with his ‘secular morality of Raskolnikov’ bit.

      The short version is that, without some form of transcendent being, human rights aren’t actually based on anything except the whims of the culture, and that this is dangerous for various reasons.

      This is an area where I find the left to be especially weak. People like to claim things about rights all the time, but what the hell is a ‘right’ in the first place? Where do they come from? (Note that I do not think the right in the US has a great track record on ‘rights’ either, but they don’t try to claim one)

      The Founders put ‘endowed by their Creator’ in the Declaration for that very reason.

      • meh says:

        This makes a lot of sense in theory, but in practice interpretations of the transcendent being seem to still be subject to whims of the culture.

        Can the rights be based on our biology, optimized for the well-being of our species and civilization?

        • professorgerm says:

          Fair!

          The podcast leans heavily on Christianity, given the beliefs of the host and the guest, and that it’s the whole ‘created in God’s image’ bit that allows for universal human rights. This does still require people to live up to that standard – historically, people claimed to be Christian and yet committed atrocities, so it’s not a perfect solution.

          That’s a good question, and one that I’ve been puzzling over for a while and will keep puzzling over for the foreseeable future. But what part of our biology is it based on? Some common aspect of our genetics? Or should we actually strive for improving our species as well?

          In the podcast they reference ethicists that say a fetus is human based on its genetics, but it’s not a person based on brain activity and thus it has no rights. They go on to mention the implications this could have for people with dementia or severe brain damage; can this loss of ‘personhood’ result in euthanasia?

          Or take the current thinkpieces about Musk and his dreams of establishing a Mars colony, saying we shouldn’t spend money on spaceflight while poor people still exist. If we’re optimizing for the species, I think it has to be said that we need to put far more funding into space rather than welfare.

          Yes, I think rights could be based in biology. But to do that, and to do it consistently, is complex and requires potentially-unpleasant or at least potentially-politically-impossible realizations.

          Have Peter Singer or Robin Hanson written much about rights? They seem like the types willing to go to logical endpoints but I’m not too familiar with them.

          Edit: http://str.typepad.com/weblog/2009/03/peter-singer-rejects-inalienable-human-rights.html

          Singer rejects unalienable rights and appears relatively fine with the concept of personhood, despite implications and slipperiness of definition. At least he’s consistent, right?

          • meh says:

            Or should we actually strive for improving our species as well?

            Always! But what do we consider an improvement? Could preferences for what is considered an improvement not also be innate? (Though there will be some variation among individual preferences) Would humans tolerate a set of rights that were opposite their biological preferences?

          • professorgerm says:

            Right! An improvement in one condition isn’t necessarily an improvement in all conditions, either (sickle cell comes to mind although it’s not a perfect metaphor).

            No, I don’t think humans at large would tolerate rights opposite of their instincts. Some will; Tesla argued for eugenics and believed he shouldn’t reproduce, but some people say that’s just post-hoc rationalization. And there’s some weird corners of the net that espouse the same thing. I think people like that are the exception rather than the rule, however.

            It depends how far off it is. Is the right adjacent to their instinct? Sure, they might tolerate that. But if it’s truly opposite, most likely not.

        • Nornagest says:

          The rights the Founders liked to talk about are rooted in natural law, which isn’t revelatory but rather deduced from how the world we live in works. You need a telos for humanity for its logic to work, but that telos doesn’t have to be divinely ordained (although they would have thought it was). I don’t find inclusive genetic fitness very satisfying as one, but it’s an option.

          • professorgerm says:

            How might one go about deriving that telos, or what options do you find better than inclusive genetic fitness?

    • I disagree with Dillahunty’s politics to a much greater degree than Peterson’s, but it’s nearly the complete opposite when it comes to their respective philosophies. I’m trying (barely) to be charitable to Peterson, but the question of what charity means is damn hard, man. How do you even begin with that? Charity could be when you give food to a beggar, but then the beggar gets enraged because he wanted hard cash to exchange for hard drugs. Are you responsible for that? Fyodor Dostoevsky once said that…

      (trails off)

      Peterson’s entire argument here seems to be about the semantics of faith in relation to things like self-awareness and innate impulses… except that’s a totally different sense of faith from the religious one, and you can’t use it to smuggle in a particular idea of religion and then argue for that religion on the basis of its consequences. There’s a certain strawman of post-modernism that reduces it to goalpost-moving language games, and Peterson’s arguments sound very much like this pop-cultural idea of post-modernism.

      The game of “everybody is religious” only works when you redefine religion so broadly that it encompasses any higher ideals, and is not instead defined by the presence of an actual deity with prescribed properties as a creator, or some deity/set of deities that exist in a world beyond the human realm. We all know (from his other work) that his goal is then to take the chain even further and smuggle in Christianity, but this is self-contradictory, because if everything is religious – including atheists (he also acts like a smug dickhead and tries to strawman and mindread Dillahunty as “religious” at about 40+ minutes in) – he can’t really then go on about “losing” religion and its consequences. He says we’d lose art and culture, but then his response to atheist artists is to declare them implicitly religious anyway – in which case: what do we lose? We’d have to be losing specifically Christian religion, and not just “religion” in this overbroad sense he wishes to invoke.

      The articles J Mann has posted don’t improve my opinion. Peterson wants skeptics to have a totally different conversation about faith, and conduct it beyond the terms of whether or not God is a literal being. The problem with this is that the atheist community (as Matt Dillahunty himself demonstrates late in their debate), is already aware of these alternative interpretations. If we need to read deeper to understand Peterson, he should read deeper and look over the 2000s New Atheism and the debates that went on. For Dillahunty, if God is simply some abstract faith, love, feelings of warmth, then he’s already willing to accept that and has said as much. Dawkins has expressed a similar sentiment. The problem of atheists is only with literalistic interpretations of God.

      “The secular humanist movement would be better off, especially in its relation to religious people and its understanding of religion and religious belief, if it sidestepped the question of the existence of God and asked what it means to say that God exists and what it means to believe or have faith in God.”

      The problem with this is that either God is some sort of entity or he isn’t. There’s no deep inquiry to be had here that can’t be had by using the same lexicon. Maybe saying “God exists” means some sort of feeling of love for all humanity instead of belief in a creative being, but that’s not the interpretation atheists have an objection to, so it’s already entirely settled. The only response at that juncture is “Oh, that’s interesting. I use a different word for that concept.”

      I also don’t think that literalistic belief is as uncommon as claimed. One of the well-established tactics Christian apologists would use back in the day was to invoke vaguer semantic notions of God to get atheists to agree, and then smuggle in the literalistic persona of God later. Peterson may not be doing that, but it’s interesting that he refuses to address the object-level question. If he wants skeptics to have a “different” debate, then that requires some give and take.

      He could have said “Yeah, I don’t really believe in God as a literal being, but I think the mythology is really valuable”, but instead he gets into this pot huffing “What does meaning even mean, man?” stuff. He’s leading us down the garden path with a map of unmeaning, because he doesn’t want us to find the exit.

      It’s right there!

      “Ah, but you see, you’re not interrogating different interpretations of what an exit might mean in the memeplex of different cultures…”

      • WashedOut says:

        I’m trying (barely) to be charitable to Peterson…

        Sounds like you should try harder. Start with listening to his Biblical Lectures series. Same recommendation goes for the rest of SSC, who apparently decided a while ago that JP was resonating too much with the ‘young-atheist’ section of the market and proceeded to construe him as a hand-waving motivational speaker with curious insights.

        I know JP won’t come out with the rationalist slam-dunks people want, and his points of view on God don’t lend themselves to the ‘object-level’ discussion that SSC craves. However, I think we can admit it’s difficult terrain to explore, especially if you are trying to reconcile deep psychological meaning with scientific materialism. Given that, I’ve found the best way to understand and be charitable to his position is to listen to his lectures, where he admittedly stumbles his way through complex ideas, and doesn’t necessarily have all the answers.

      • J Mann says:

        Thanks, Forward Synthesis. Can I ask a follow up question?

        It seems to me there are two questions:

        1) Is there a God, and

        2) Are some of the cultural archetypes (including some aspects of God) useful?

        If I understand Lovins’ take on the conversation, Dillahunty wants to answer #1 and Peterson is interested in talking about #2.

        Does that sound about right, and if so, am I reading you correctly that you’re frustrated that Peterson won’t just come out and frame things that way?

        • That’s more or less right.

          What annoys me the most is that Dillahunty is willing to discuss 2 even though he’d rather discuss 1, but Peterson doesn’t seem to want to address 1 outside of framing it as intractable without knowing what God even means. This leads to Peterson diluting his own argument on 2. He wants to show that we’d lose something but when challenged he brings up art and music, and then when Dillahunty points out that there are great atheist artists and musicians, Peterson proclaims them religious anyway. In which case, how do we lose anything?

          He seems to want to have his cake and eat it. You can’t simultaneously think that God and religion are undefined enough to apply to broadly any creative field and believe that we’re in danger of losing the specific aspects of religion that make it culturally vital. In his desire to avoid question 1 he destroys his ability to credibly argue his case on question 2. Peterson falls into his own trap, which is probably why he starts getting so ratty at 40 minutes and weakly accuses Dillahunty of being religious.

          • Conrad Honcho says:

            He wants to show that we’d lose something but when challenged he brings up art and music, and then when Dillahunty points out that there are great atheist artists and musicians, Peterson proclaims them religious anyway. In which case, how do we lose anything?

            Perhaps it’s not so much a criticism of “atheism” as “nihilism.” I think Christianity is so baked into our culture that an awful lot of people who don’t believe in God and don’t go to church agree with most of the things Jesus had to say and generally act like it. Nihilists not so much. Are there great nihilist artists?

            What I think one does lose with the loss of religion is clarity. You get a generation or so removed from the church and people are still basically acting Christian. They start to believe this is just the natural way of things. “Do unto others” is the baseline. When it’s not. Not at all. Jesus did not show up and say things everybody already knew. That everyone was equally human, that you should care for the lowest of the low, and even love your enemies was non-obvious. You take that away and then, well, what’s the argument for not liquidating the enemies of the proletariat or the Aryan race? “I get everything and you get nothing” is perfectly rational. It’s way more rational than loving your enemies.

            That’s what started Peterson on this whole thing: trying to understand the structure of belief, of what it means to “believe” something, so you can try to understand how people could come to believe the things they believed and commit the acts they did in Stalin’s Russia or Hitler’s Germany. I think these are very important questions and I’m glad Peterson’s got people talking about them. And I’m perfectly fine with him dodging the object-level question of God because once he goes down that route he’s just another street preacher telling you what to believe instead of a psychologist discussing the architecture of belief itself.

          • Nihilists not so much. Are there great nihilist artists?

            There aren’t a lot of people who self-identify as nihilists, so we come back to having to say “Oh, this person is a nihilist because…”

            You take that away and then, well, what’s the argument for not liquidating the enemies of the proletariat or the Aryan race? “I get everything and you get nothing” is perfectly rational. It’s way more rational than loving your enemies.

            I’m not convinced by this, because Christians throughout history have managed to love their enemies abstractly while slaughtering them physically. I’m not making the point that Christians are horrible and brutal here, but that they are nothing special in the humanism stakes and have committed tremendous atrocities like their later rationalist counterparts. The ideal of “loving your enemies” is certainly more promising than “hate your enemies”, but there are more rationalist philosophies than merely winner-takes-all proletarianism or Aryanism.

            The philosophies of liberalism and libertarianism are often premised on secular natural rights, and the idea that people simply deserve rights by dint of existence as individuals.
            Now, you can say that these are just a holdover from Christianity, but the problem with Christianity is that it has been so influential that it has affected everything in one way or another. There are elements of Christianity in proletarianism and Aryanism as well, and from this we gain the “cultural Christianity” concept, and the seeds of the idea that everything is ultimately religious.

            The idea that without Christianity we would thoughtlessly pick and choose which desires to maximize doesn’t really hold up, when historical Christianity is a total mess of competing factions, and when millennialist groups were able to bring into being horrific maximizing regimes that eerily mirror their later secular counterparts.

            That’s what started Peterson on this whole thing: trying to understand the structure of belief, of what it means to “believe” something, so you can try to understand how people could come to believe the things they believed and commit the acts they did in Stalin’s Russia or Hitler’s Germany. I think these are very important questions and I’m glad Peterson’s got people talking about them. And I’m perfectly fine with him dodging the object-level question of God because once he goes down that route he’s just another street preacher telling you what to believe instead of a psychologist discussing the architecture of belief itself.

            I’m always glad to see people interrogating the origins of belief. I just feel like Peterson is trying to have it both ways. He acts as an advocate for religion, but then dodges away from defining it when convenient. It feels like a slippery defensive maneuver. He’s not a literal preacher, but he’s not approaching his analysis from a neutral standpoint with respect to religion, and he is making arguments for a particular worldview, which can of course be challenged. When he turns up to a debate which is explicitly about that very topic, I think it’s a bit rich for him to try and avoid it by saying it’s hard (man), and then try to act like his opponent already agrees with religion implicitly absent any argument for it.

            For a much better interrogator of belief structures, I would go to Jonathan Haidt.

  10. If politics are highly heritable, why do pretty much all democratic countries (non-democratic countries can cheat by taking voice away from opponents) have fairly evenly contested splits between their left-wing and right-wing factions?

    • Protagoras says:

      Because there are incentives for the parties to line up that way; if a country happened to be full of people further right than average, there would still be a left (well, more left) and a right party of about equal size to one another, the split between the two parties would just be further to the right than average. And conversely, obviously.

    • Iain says:

      Because left and right are relative, not absolute.

      You can line everybody up on a political spectrum and have the left half fight the right half regardless of how they’re distributed.

      • It’s mostly relative in terms of how far a principle is taken, not in terms of what those principles are. There are relative and absolute elements to ideology. The right is still the economic darwinist socially conservative side the world over. In theory, a country could be 90% people who focus on minority rights and think we should have an egalitarian economic model, or 90% people who favor a highly competitive system with traditional morals; but outside of dictatorships, I can’t think of any examples where a particular set of principles (or emotional tendencies) is this totalizing. Where are the countries where pretty much everyone is a conservative, or pretty much everyone is a progressive?

        A country shifted over that far wouldn’t have political debate revolving around principles like “how much government” or “are immigrants bad or good”, but around how best to achieve the things everyone agrees on. Even if the UK is skewed left compared to the US in some sense, this hasn’t resulted in everyone holding left-wing values and merely measuring left vs. right on the basis of implementation. Instead, the values difference persists and the split remains relatively even.

        • Wrong Species says:

          Some European countries might be considered 90% leftist by American standards, but that doesn’t mean they all agree. If 90% of people agree on something, they’ll split up into different parties that focus on the remaining differences.

        • JulieK says:

          The right is still the economic darwinist socially conservative side the world over.

          I thought that “far right” parties in Europe generally support welfare spending and economic protectionism.

        • James C says:

          The right is still the economic darwinist socially conservative side the world over.

          This is really simplifying matters. The right is far more often defined as reactionary and conservative, rather than having any particular policies. The focus is usually either keeping policies in such a position as to benefit those currently in power, or to restore power that has recently been taken away from the old order.

          Historically, the right wing of the French government, where the term originated, was both protectionist and monarchist: positions most modern right-wing parties would despise on principle.

    • quanta413 says:

      Frequency dependent selection. When one side is dominant, their opposite reproduces more rapidly.

      Ok, not really. I’ll offer some more serious speculation.

      “Left” and “right” are fuzzy relative terms. The dividing line tends to be placed roughly at the median of a country’s political opinions. So we can safely expect countries to have roughly 50/50 right/left splits more as a matter of definition than reality.

      I don’t think it actually makes a whole lot of sense to say politics is highly heritable without being specific about what that means over more than one generation given how rapidly politics shifts. But if you took heritability estimates of political affiliation very literally, you could imagine that “culture” mostly determines the location of the center while genetics just influences the odds of falling in various directions from the center.

      I think a more accurate model would be that the heritability of politics is probably just a side effect of the heritability of more fundamental traits like IQ and personality, which mediate political affiliation in rather complicated ways, assuming that whatever estimates you’re using have successfully corrected for obvious non-genetic routes of transmitting political belief between generations.

    • cassander says:

      because political parties aren’t stupid and divide into median voterish factions, adjusted for the peculiarities of the local political system and inherited politics of the populace.

      • Anonymous says:

        Yeah. See how certain East Euro countries have a right-wing vs right-wing split.

        • Conrad Honcho says:

          I frequently hear Europeans say the same thing about America, that there is no “left” in the US.

          • Anonymous says:

            From my perspective, it’s more like USA doesn’t have a “right”. You have the Status Quo party, and the Progressive party. The Progressive party pushes insanity, the Status Quo party solidifies any insanity that the Progressive party manages to push past it.

            (Not that I think that, say, Poland’s supposed “right-wing” parties are especially right wing. The current government is semi-patriotic, quasi-Catholic socialists. Their opposition is Euro kleptocrats who are also substantially socialists.)

    • Nornagest says:

      Well, first of all, they don’t; many democratic societies are dominated by one party for long periods of time. The Japanese Liberal Democratic Party is a prominent modern example, and Canada’s Liberal Party pretty much owned it for most of the 20th century. Almost every established democracy has had periods as a dominant-party state.

      But insofar as they do, that might be more a consequence of the mechanics of democracy. It’s not really politics that are substantially heritable so much as policy preferences, and party lines can be drawn wherever there’s a split.

      • Douglas Knight says:

        Right, it’s policy preference that is genetic (from here). The last column shows that party is not genetic (an even stronger repudiation of the claim than the one made by your last sentence, and by all the other commenters).

        • INH5 says:

          If foreign policy preferences have zero shared environment influence, how do you explain the wide swings in that area over just the last 15 years? Actually, the increasing political polarization in general makes me very skeptical of a result that political party identification has a large shared environment component but policy preferences do not, because political party identification, or at least things that very strongly correlate with political party identification, is clearly having some downstream influence on policy preferences.

          • Douglas Knight says:

            Those are good points, but (1) The paper is from 2012 and your graphs seem to show that half of the polarization is since then; (2) Asking whether Islam is a religion of peace may be more of a partisan dog-whistle than other foreign policy questions; (3) Chaining together family→party→policy results in small effects, plausibly as small as the observed family→policy numbers.

          • INH5 says:

            The # part of the url was copied by mistake. I was actually referring to the data on the page in general, which shows wide swings on foreign policy questions as non-specific as “should the US take an active role in world affairs.”

      • dndnrsn says:

        Probably relevant is that the 3 parties in Canadian politics result in less of a split down the middle than in the US. The Liberals are pretty centrist; they can’t go too far in one direction because the other wing of their party will jump ship for one of the other parties, and there’s another party in that direction already. I don’t know that the conclusion is that Canadians are somehow genetically predisposed to lukewarm centrism, though. It breaks differently depending where in the country you are, the provincial scene is different, etc.

    • If politics are highly heritable, why do pretty much all democratic countries (non-democratic countries can cheat by taking voice away from opponents) have fairly contestable splits between their left wing and right wing factions?

      I very much agree with the other commenters that (at least in a polity with winner-take-all features) political factions tend toward 50%.

      Heredity is surely a factor, but issues and coalitions change. It would be easy to prove that geographically-stable descendants of the Alabama Democrats and the Massachusetts Republicans of a century ago are now mostly Alabama Republicans and Massachusetts Democrats.

      Even on a shorter time scale, there have been plenty of political figures whose views diverged sharply from their parents. In fact, when I was younger, this was assumed to be a typical occurrence.

      A New Yorker cartoon from around 1970 (I can’t find it online) showed two young, liberal parents with their toddler son. One of them says, approximately, “When he grows up, he’ll reject everything we believe in, so for his sake, we should become stinking reactionaries.”

  11. grendelkhan says:

    Seen on Boing Boing: “Big Data, Platform Economy and Market Competition: A Preliminary Construction of Plan-Oriented Market Economy System in the Information Era”. It’s outside of my bailiwick, but it looks like the Chinese are considering trying to do the whole computerized-planned-economy thing that Scott imagined in his review of Red Plenty. The abstract doesn’t mention Glushkov or Kantorovich, and the English is… difficult to understand. But I really do wonder if this is what they’re doing.

  12. JulieK says:

    What’s with the thread URL?
    (…100-75-2-2 instead of …102-75)

    • Douglas Knight says:

      That’s pretty weird.
      The URL is generated from the post name. If a new post is created with a duplicate name, it gets a -2 suffix. Even if the name is changed, the URL stays the same. This happened with a lot of recent open threads: they used the wrong number and were fixed. But what if the title is used a third time? I think I’ve seen WordPress do -3, so I’m surprised by -2-2. Or maybe Scott is doing something weirder, like duplicating an existing post; maybe since that already has a URL, it doesn’t generate it from the title, but just adds a -2?

      • Nick says:

        He’s very likely duplicating existing posts. And yeah, I would not be surprised if WordPress just adds the -2 again, or maybe it matters whether you’re duplicating a duplicate instead of reduplicating the original. Compare the behavior of Windows: if I copy the Excel file “stuff” and paste it into the same folder, I get “stuff – Copy”. If I copy that and paste it, I get “stuff – Copy – Copy”. If I copy and paste the original again, I get “stuff – Copy (2)”. If I open it from Excel as a copy, I get “Copy (1)stuff”, and repeating that gets “Copy (1)Copy (1)stuff”. Finally, if I open it as read-only and try to save it to the same folder, I get “Copy of stuff”. I also got a “stuff Copy1” in the midst of this and don’t know how.

  13. rlms says:

    Inspired by a facebook comment: which historical person killed the most Americans*? My guess is FDR for the WWII casualties. Other causes I considered were the Civil War and Spanish flu, but those are harder to find a single individual with as large a share of the responsibility IMO. Henry Ford is a possible contender, but from looking at a table of motor vehicle fatalities I don’t think he could’ve hastened the adoption of the car by enough to beat the more concentrated causes above. Probably there are some obvious people who stopped medicines being adopted or something that I have missed.

    *as in, whose actions were the cause-in-fact for the greatest number of untimely deaths, but feel free to come up with alternative definitions

    • quanta413 says:

      I’m going to go for pre-U.S. Americans.

      Hernando de Soto. Brought European diseases to previously uncontacted North American tribes. I think estimates of how many this killed are extremely uncertain, but the toll may be higher than U.S. WWII casualties. It may also be lower; I’m not sure there’s any way to get a good count.

      Regardless, the diseases and animals he brought greatly affected the collapse/transformation of an entire culture (Mississippian).

    • SamChevre says:

      Limiting to “USians”, so America as USA, the country.

      Definitely Abraham Lincoln: the South wanted to secede, and he could certainly have let it do so.

      If you add in everyone living in the Americas including pre-US, definitely whoever introduced smallpox.

      • j1000000 says:

        Wow, my mental estimate of the Native American population at the time Columbus landed was an order of magnitude off.

    • Wrong Species says:

      How is Lincoln less responsible for Civil War deaths than FDR for WW2 deaths?

      • rlms says:

        My impression is that a civil war was more inevitable than American involvement in WWII, but I may well be wrong. Also, a counterfactual later civil war would presumably have had higher casualties, which favours Lincoln (or FDR, depending on how you view the competition): if without his actions there would have been a 50% chance of no civil war but a 50% chance of one with double the casualties, he would be responsible for zero deaths, since the expected counterfactual toll (50% × 0 + 50% × double) equals the actual one (like how Henry Ford is not responsible for all deaths due to the popularisation of cars, just the ones that wouldn’t have occurred had that popularisation happened later).

        • SamChevre says:

          I would disagree that a Civil War was inevitable: I would argue that either a Civil War, OR separation between exporting and importing states, was inevitable.

          • cassander says:

            Other slave-owning countries managed to eventually end slavery without civil wars. The US, for a variety of reasons, had a narrower needle to thread, but it seems at least possible that had the civil war been postponed a couple of decades it might not have happened.

            There is also the strong possibility that there might have been a quicker, much less costly civil war had things gone differently.

          • BBA says:

            There was already low-level violence between the North and South – Bleeding Kansas, Harper’s Ferry. I don’t think a “velvet divorce” would have done much to reduce the underlying tensions. Border incursions by radical abolitionists from the North and slave-catchers from the South, both with the tacit acceptance of their respective governments, could easily spark a hot war between the two.

          • Nancy Lebovitz says:

            Very tentative: there’s a difference between people who own slaves and people who view owning slaves as part of their identity, and it’s harder to abolish slavery among the latter.

          • Aapje says:

            @Nancy Lebovitz

            You see something similar in Japan, where whale hunting is defended by conservatives as part of their culture, even though the meat is in very low demand.

        • Evan Þ says:

          Let’s also not forget all the suffering and deaths from longer-lasting slavery.

          • Tarpitz says:

            I think the suffering is ruled out of consideration by the question, and the number of deaths, while non-zero, probably wouldn’t have been enough to move the needle very far compared to the civil war. Once the transatlantic slave trade had more-or-less stopped, US slavery strikes me as having had an unusually high suffering:death ratio by the standards of major evils.

          • Evan Þ says:

            @Tarpitz, good point; I agree.

        • My impression is that a civil war was more inevitable than American involvement in WWII

          I think both were inevitable. Also, weaker leadership than FDR’s (e.g., the John Nance Garner presidency envisioned by Philip K. Dick) could have led to greater American casualties, not fewer.

        • johan_larson says:

          My impression is that a civil war was more inevitable than American involvement in WWII, but I may well be wrong.

          Sounds like a good thesis topic in history: was the American Civil War inevitable?

          I have to wonder. You’d think there would have been a slow-motion Fabian sort of path whereby slavery would have become increasingly regulated and eventually abandoned as uneconomic.

          • Nancy Lebovitz says:

            Confederate Reckoning claims that the Civil War was a result of manipulation by southern elites. The majority would rather have let things drift instead of seceding. If there’d been an honest vote limited to white men, there wouldn’t have been a Civil War, at least not at that time.

            One thing about the book is, I think, of value to rationalists: “The South seceded” is a common sentence, but if you look at the history, it was a very complex process.

            While I’m on the subject, I’ve been told that a quarter of the food for the South (important: food shortages became an issue) was grown in the Susquehanna valley, and it kept changing hands in the war. It’s tempting to think that the South was both immoral and stupid (the Union being more industrialized) but I can’t help wondering whether things would have worked out differently if all of the South’s major agriculture had been farther from the border.

          • Nancy Lebovitz says:

            Shenandoah Valley, not Susquehanna.

          • John Schilling says:

            Sounds like a good thesis topic in history: was the American Civil War inevitable?

            If the Democratic party hadn’t splintered in 1860, it is possible but unlikely that they could have put someone like Stephen Douglas in the White House. The resulting political compromises would likely have made the slave states comfortable enough to forestall secession for a decade, maybe two, by which point the Industrial Revolution and the rise of Egyptian cotton would have made secession economically and diplomatically implausible. Though it’s not 100% certain that Southern political leaders would have understood that.

            Alternately, while leaders on both sides clearly wanted to follow up secession with the classic politician trick of a Short, Victorious War, they both also wanted the other side to be the one that “obviously” started it so that their side would be the Good Guys. In practice, we got Fort Sumter, but there are probably a few timelines where “no, you go first” led to nobody going to war at all.

          • mtl1882 says:

            Many theses on that topic have been written. It was originally considered inevitable in the aftermath, then the revisionist school insisted it was the result of huge mistakes (mainly by abolitionists), and now it has swung back to inevitability.

            If you really study it, IMO the inevitability angle looks pretty good. Had the South been allowed to secede, I think tensions would have remained quite high, as they had been. I think violence would have become a major issue, and probably sparked another war, though it may have been a lot less deadly; I’m not sure. Things were fine as long as people believed slavery would die out, which was a generally accepted belief. But then it got too profitable and too entrenched in Southern culture. The Confederacy’s leaders made statements to the effect that they wanted to found their society on slavery and white supremacy, and to find new areas to spread slavery to (Cuba, etc.). It wasn’t going to die out, and it was going to piss off the North and other countries. Perhaps the North and other countries eventually would have shut it down, via war, sanctions, or inducements. Once railroads and telegraphs became popular, the North and South could not ignore their radically different cultures, and they could not coexist.

            Deaths would have been a lot lower had the Union strategy made more sense in the first 2 years – unfortunately it was a mess. But it was not an easy thing to fix, especially for Lincoln, who had no military experience. Had he understood the situation more quickly, he might have ended it a lot sooner with fewer deaths.

          • by which point the Industrial Revolution and the rise of Egyptian cotton would have made secession economically and diplomatically implausible.

            According to the analysis of slave labor I’ve seen, it was much more successful in gang labor industries: activities where a bunch of people did things together, so it was easy to monitor them, make sure they were doing their job, and punish them if they were not. That applied to cotton and sugar but not to wheat.

            Arguably the development of the assembly line provided a new gang labor industry.

          • Nancy Lebovitz says:

            In Michener’s Hawaii, there’s a mention that slave labor made sense for sugar cane, but not for pineapple. Knocking one flower off a prospective pineapple fruit was enough to make it unsaleable. I don’t know whether this meant slaves weren’t used for pineapple (or only after the fruit was formed) or whether slaves were better treated.

            Economic arguments for the substantial end of slavery don’t explain why domestic slavery was ended at the same time.

    • Chalid says:

      Thomas Midgley, the inventor of leaded gasoline, if you hold him responsible for decades of increased crime and hard-to-quantify health problems?

      • j1000000 says:

        I say we lump those in with Henry Ford — 4 million dead in automobile accidents, without him cars may never have become such a central part of American society.

        (I don’t actually think this is the answer, but I think the answer is clearly “Lincoln” so we might as well do interesting butterfly effect-y things as you’re suggesting.)

        • Chalid says:

          The car was inevitable. Leaded gasoline wasn’t – it was controversial when introduced, and I have seen it argued that Midgley made claims about its safety that he knew were false.

        • Mark V Anderson says:

          As far as cars causing deaths, is this offset by lives saved? I believe that the automobile greatly increased prosperity, and prosperity in turn saved many lives through better medicine, better public health, healthier lifestyles, and ultimately less war.

      • rlms says:

        That does seem plausible, if it wouldn’t have been invented without him. If leaded petrol was responsible for a significant proportion of the increase in murders from ~1960-2000 it’s the right order of magnitude (hundreds of thousands).

        • Tarpitz says:

          It seems incredibly unlikely that it wouldn’t have been invented without him; I think the question is about adoption. Perhaps if someone else had invented it a few years later, some other solution to knocking would already have been widely adopted and leaded fuel would have gone the way of Betamax.

          He also apparently did crucial early work on CFCs…

          • Protagoras says:

            They knew at the time that blending in ethanol also reduces knock. If TEL hadn’t been falsely advertised as safe, perhaps they would have just gone that way (though the oil companies really didn’t like the ethanol approach; apparently they were concerned, pretty unrealistically in hindsight, that it could be a step toward widespread replacement of oil by ethanol as a fuel).

      • James C says:

        He also went on to develop CFCs, which had an outsized impact on depleting the ozone layer and on global warming. It’s deeply unfortunate, as, as far as history can tell, he really had no intention to cause any of the damage he did. He was just looking for better chemicals for existing processes, but he had “an instinct for the regrettable that was almost uncanny”. Which is a nice euphemism for saying that everything he made invariably turned out to be a slow-acting poison.

    • meh says:

      As an alternate definition, I think we should consider a DAR (deaths above replacement), similar to the WAR (https://en.wikipedia.org/wiki/Wins_Above_Replacement) used in baseball.

      This should make explicit what I think many commenters are pointing out already, that many historical events and inventions had some inevitability/unavoidability to them.

    • James C says:

      There’s a fair argument to be made for Gavrilo Princip, the assassin of Archduke Franz Ferdinand, for his role in sparking WWI. While WWI wasn’t particularly bad in terms of American deaths, certainly not in comparison to European deaths, one can argue that WWI set up pretty much every other 20th century conflict. Which did claim a lot of American lives.

      Now, whether you can really blame Gavrilo for WWI is a subject of much debate. Old-school WWI scholarship preferred to put the blame on Germany’s and Austria’s warmongering and treat the assassination as a pretext rather than a cause. Modern scholarship, and my own opinion, points more towards the idea that it was a significant event in its own right. The target, the timing, the location and the wider political situation all played off each other in the worst way imaginable. Indeed, there were far worse crises in the same period that didn’t lead to a continent-devouring war, thanks to good diplomacy or simple luck.

      A great analogy I’ve heard is that pre-WWI Europe had built itself a doomsday device out of mobilization plans and grand strategies. (Do check out Blueprint for Armageddon by Dan Carlin; he explains this so much better than I do.) The pin had been pulled and hurriedly replaced many times before, but when Gavrilo started the countdown no one was able to stop it in time. Maybe that bomb would always have gone off at some point; maybe there needed to be a war of WWI’s scale to convince everyone that fighting like that wasn’t an option any more. But there’s a chance that we could have made it through the 20th century with the pin never being pulled.

      If WWI and everything that came after it could have been avoided (which is a big if), well then, Gavrilo Princip would hold the dubious honor of causing the deaths of more humans and Americans than anyone by an order of magnitude.

  14. johan_larson says:

    Huh. Yahoo is still among the top 10 sites in the US. Ebay, too.

    https://www.alexa.com/topsites/countries/US

    I thought they faded a lot faster than that.

    • tmk says:

      eBay is useful for:
      * New things that are a bit too obscure for Amazon Marketplace. Slightly cheaper too.
      * Used things that are small and valuable enough to ship.
      * I guess collectibles? I don’t buy any.

      • Tarpitz says:

        I use eBay quite a bit to both buy and sell Magic cards, which I guess fall into your second and third categories.

      • toastengineer says:

        eBay is great for things that wouldn’t be appropriate for a store to sell, like recovered coolant and broken scientific equipment. Or things that are legal but not quite supposed to be sold to just anyone under normal circumstances, or are usually sold with a contract attached. Or stuff like integrated circuit engineering samples that are blatantly stolen property, though no one seems to care. It’s a bit of a middle ground between Amazon and AliExpress.

    • I still use Yahoo email.

  15. phi says:

    Okay, here’s a question for someone who knows a lot about voting theory:

    I’ve come up with a voting system that’s fairly simple, so it seems someone should have thought of it already. I’m wondering what the official name is? It’s used in situations where there is a choice between several alternatives, one of which is a ‘status quo.’ It’s designed to be biased towards this status quo. Voters rank their choices of the alternatives. If none of the alternatives are ranked above the status quo by more than half of the voters, then the status quo is chosen. Otherwise, the alternative that is ranked above the status quo by the greatest number of voters is chosen.

    I think an example will be helpful here. Let’s suppose that a soccer team is deciding whether or not to change their team color from blue to some other color. Let’s say that red, green and brown are all suggested. Suppose that 40% of voters rank red above blue, 30% of voters rank green above blue, and 20% of voters rank brown above blue. Since none of the alternatives was preferred to blue by more than half of the voters, the status quo prevails, and the soccer team remains blue. In an alternate universe, let’s suppose that 60% of voters prefer red to blue and 70% of voters prefer green to blue, while 40% prefer brown to blue. In this case, green wins the election, since the change from blue to green is approved of by the greatest number of people.

    This system has the nice property that any changes to the status quo will be supported by at least half of the voters. So does anyone know what it’s called?
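
    In case anyone wants to poke at it, here’s a minimal Python sketch of the rule (my own throwaway code, not a known implementation; it assumes each ballot is a complete ranking, best first, that includes the status quo, and it breaks ties arbitrarily rather than by coin toss):

      def status_quo_vote(ballots, status_quo):
          # An alternative replaces the status quo only if more than half of
          # all voters rank it above the status quo; among such alternatives,
          # the one ranked above the status quo by the most voters wins.
          n = len(ballots)
          above = {}  # alternative -> number of voters ranking it above the status quo
          for ballot in ballots:
              for alt in ballot:
                  if alt == status_quo:
                      break  # everything after this point is ranked below the status quo
                  above[alt] = above.get(alt, 0) + 1
          if not above:
              return status_quo
          winner = max(above, key=above.get)  # ties broken arbitrarily here
          return winner if above[winner] > n / 2 else status_quo

      ballots = [
          ["green", "red", "blue", "brown"],  # green and red above the status quo
          ["green", "blue", "red", "brown"],  # only green above the status quo
          ["blue", "red", "green", "brown"],  # happy with the status quo
      ]
      print(status_quo_vote(ballots, "blue"))  # "green": above blue on 2 of 3 ballots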

    • Aapje says:

      @phi

      Voting theory generally seems to be agnostic about the choices and treats them all equally. I’m not aware of intentionally biased systems.

    • silver_swift says:

      I think your system reduces to a two round election where the first round is a simple majority vote with two candidates (keep status quo/reject status quo) and the second round is approval voting among all the non-status quo candidates.

      I don’t know if this system has a special name, but it does sound interesting.

      • Evan Þ says:

        Not exactly. Suppose a family currently lives in Aurora; 1/3 of them are perfectly happy there but are fine with Bentonville as a second choice; another 1/3 want to move to Bentonville but their second choice is to stay in Aurora; the last 1/3 want to move to Clyde but their second choice is to stay in Aurora. Under your two-round system, they’d first vote to reject the status quo and then vote to move to Bentonville. Under phi’s original system, though, they’d stay in Aurora as no individual choice is ranked above Aurora by a majority.

        • AnarchyDice says:

          Also, in the scenario posited, there is no stable state. If the vote is taken again once they are in Bentonville, they vote against the status quo with a 2/3 majority in the first round, then in the second round the preferences lead them to choose to move back to Aurora.
          Also also, if you modify it such that the preferences are Ab Bc Ca, then you get a continual vote process that leads the family to move from A->B->C->etc.

          • Evan Þ says:

            That’s a good point, which given real-world costs of moving would argue for phi’s system: why incur the costs if there’s no stable state?

    • drunkfish says:

      I don’t know a name for that, but it seems like it deserves a bit of modification. The biased-toward-the-status-quo part is definitely interesting, and I really like the line of thinking you’re going down, but what you do once you choose to reject the status quo seems incomplete. A hopefully illustrative 3 ballot example, assume A is the status quo:

      B>C>A
      B>C>A
      C>A>B

      In your description, C wins because it gets 100% vs A, whereas B gets 66% vs A. However, B has 66% vs C too, so between B and C it seems like B is obviously the better choice (add more B>C>A ballots to make it more compelling). I suggest that, instead of “most wins vs status quo” after rejecting the status quo, you use a standard voting system once the status quo is rejected (I personally like Ranked Pairs, but I think even instant runoff and first past the post are better than “most wins vs status quo”).
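
      To make that concrete, a quick pairwise tally of those three ballots (a throwaway Python sketch, not any standard library):

        ballots = [["B", "C", "A"], ["B", "C", "A"], ["C", "A", "B"]]

        prefers = {}  # (x, y) -> number of voters ranking x above y
        for ballot in ballots:
            for i, x in enumerate(ballot):
                for y in ballot[i + 1:]:
                    prefers[(x, y)] = prefers.get((x, y), 0) + 1

        for x, y in [("B", "A"), ("C", "A"), ("B", "C")]:
            print(f"{x} over {y}: {prefers.get((x, y), 0)}-{prefers.get((y, x), 0)}")
        # B over A: 2-1  (B clears the status quo, but only 2 of 3)
        # C over A: 3-0  (C wins under "most wins vs status quo")
        # B over C: 2-1  (yet B beats C head to head: the Condorcet winner)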

      • phi says:

        I agree that it isn’t the best.

        Whether B or C is the best candidate in your example depends on the utility functions of the three voters. Those ballots are consistent both with utility functions where B is the highest overall, and utility functions where C is the highest overall. I think it would probably be preferable to have a system that would choose candidate B, though, since B is the Condorcet winner.

        For another thing, “most wins vs status quo” seems likely to be vulnerable to strategic voting. Take a look at the following ballots, where A is the status quo:

        1) B > C > A
        2) C > B > A
        3) …

        If 3’s true ranking is A > B > C, and she can predict how 1 and 2 will vote, she might be tempted to vote B > A > C. This will allow B, her second choice, to win. If she votes according to her conscience however, B and C will tie and the election will be decided by a coin toss.
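
        To check this with the sketch from my earlier comment (A as the status quo):

          truthful = [["B", "C", "A"], ["C", "B", "A"], ["A", "B", "C"]]
          strategic = [["B", "C", "A"], ["C", "B", "A"], ["B", "A", "C"]]

          # Truthful: B and C are each ranked above A by 2 of 3 voters, a tie
          # the sketch breaks arbitrarily where the rule says coin toss.
          # Strategic: B is ranked above A by all 3 voters and wins outright.
          print(status_quo_vote(strategic, "A"))  # "B"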

        EDIT: I took a quick look at Ranked Pairs and I like it quite a bit. It seems to have a lot of nice properties. And it’s simple enough that I could understand it on the first read-through.

  16. We are having another South Bay meetup on Saturday, June 9th.

  17. Brad says:

    If you live in a city and have never tried sending your laundry out, I recommend giving it a shot. It isn’t very expensive as compared to other luxuries that even relatively poor people routinely pay for. I recently started doing it and now I can’t understand why I didn’t start doing so a long time ago.

    • maintain says:

      How do you think it would compare to getting a portable washing machine?

      • christianschwalbach says:

        Never heard of this being a thing. Is this due to lack of washing machine? Some other reason?

      • Brad says:

        Eh. IIRC those things don’t dry very well so you end up needing to string clothes all over your apartment. Plus you still need to fold.

    • Evan Þ says:

      I assume you’re talking to people who don’t have an in-home washing machine, or a washing machine right down the hall in their apartment/condo/dorm building, and would otherwise need to sit at the laundromat while it’s running?

      • Brad says:

        Even if you have a shared machine on your floor of an apartment building. I guess if you have a W/D in your unit you’re already committed.

        • Evan Þ says:

          Why? I don’t see the advantages of sending your laundry out in that setup.

          • Brad says:

            There’s still a lot of time spent going back and forth to the machine. And on a shared machine you have to be pretty prompt. Plus there’s folding which, at least to me, is extremely tedious and so ends up being done poorly.

          • JulieK says:

            If you would iron the clothes, getting someone else to do it is a big time-saver.

    • A Definite Beta Guy says:

      If you don’t mind sharing, how much is the cost?

      • Brad says:

        I pay $0.79/lb for wash and fold. In a typical week I spend $20-$25. If I wanted to save money I’d wash my own sheets and towels, which are fairly heavy.

    • outis says:

      Send out how? Do they come to pick it up at your place?

      • Brad says:

        The one I go to offers pickup and delivery, albeit in a fairly small radius. Customer drop-off and pick-up is more common, I believe.

  18. Le Maistre Chat says:

    What pop culture SF has transhuman technology but won’t admit it’s transhumanist?
    I think the most blatant one is Star Trek. There was even a Next Gen episode where a con woman was able to pose as the supreme god of a planet with the tech on a cloaked jumbo shuttlecraft (runabout?). Combine that with the fact that runabouts have a transporter, a device seen to resurrect the dead minus only memories formed since their last transporter pattern, and anyone wealthy enough to afford one would be a Zeus-tier immortal (and arguably Zeus lacked the power to travel to other worlds).

    • dodrian says:

        IIRC, patterns stored in a transporter buffer only last a few minutes before they begin to “degrade” and the person would be lost. Something to do with the Heisenberg compensators, probably. There are a few hackish workarounds, but only when the plot requires.

      • Anonymous says:

        OTOH, I haven’t seen a good technobabble argument against cloning with transporters. There are some transporter clones running around, after all.

    • Mark Atwood says:

      What pop culture SF has transhuman technology but won’t admit it’s transhumanist?

      Superhero Comic books. However, that has been changing for a while.

      Harry Potter-esque and Harry Dresden-esque magic fantasy, magical realism, and urban fantasy all qualify as well. When you start systematically thinking through the implications, transhumanism falls right out front and center.

      Manga and Anime magical combat storylines, too.

      • Le Maistre Chat says:

        “Been changing for awhile” how?

        I don’t know much about DC technology, but Marvel humans have invented almost all space opera technology plus shrinking/growing… personal immortality is much weaker than the Federation should be capable of, but there’s an anti-aging formula and a blood donation from Bruce Banner will make you as strong and durable as She-Hulk with no known mental or cardiovascular side effects.

        Harry Potter makes it very clear that there’s no way to keep yourself alive when you’d otherwise die, short of ritually murdering humans for horcruxes or drinking unicorn blood.
        Now Dungeons & Dragons casters… or even better, the sitcom Bewitched, where “mortal” is practically a racial slur and witches can cast Wish without having to study.

    • James C says:

      Hmm, well Star Wars has all those AIs running around that they never seem to do anything with. Although, Star Wars is pretty implicitly post-apocalyptic when it comes to its degrading techbase so it might no longer count as a transhumanist setting.

      Space Marines are, despite their own insistence to the contrary, explicitly a transhuman off-branch of humanity. The Dark Age of Technology also featured a number of transhuman technologies.

      The Fallout universe has brain uploading, replacement robot bodies and rampant genetic engineering. Not sure that one counts as not admitting to transhumanism as it was pretty implicit that’s what some factions were gunning for.

      MLP had an infamous episode where they cloned Pinkie and they canonically didn’t catch all the clones. While they weren’t perfect copies, the concept was sound.

      The Honorverse goes out of its way to avoid its transhuman elements, complete with a big genetic boogieman, despite having designer humans as a mature and well developed technology. Spoiler: Gur ivyynvaf bs gur yngre obbxf ner vzcyvpvgyl rivy genafuhznavfgf

      • albatross11 says:

        In the Vorkosigan books, the Cetagandans are explicitly shooting for a transhuman civilization, split into the haut and ghem castes. (The ghem do the dirty work, the haut rule and manage the really scary bioweapons and genetic engineering of the next generation.)

    • albatross11 says:

      Star Trek is the land of tragically forgotten bits of cool technology. I’ve always wanted to see some kind of men-in-black explanation for that: the Federation guys with black uniforms who zap people with neuralizers so they forget the Picard Planetbuster they used on the Borg, or Scotty’s trick of using a transporter to keep himself in stasis for a century, or any number of other technological wonders that were used as plot devices or to solve some otherwise-untenable plot problem, and then never used again.

      • Mark Atwood says:

        In the “Star Trek: Temporal Investigations” series, there are a couple of cases where neargroup and fargroup outsiders point out that Federation culture has in practice become very hidebound, conservative, and reactionary about technology affecting culture, despite loudly stating otherwise.

    • Le Maistre Chat says:

      Shoot, I forgot: Marvel comics does have immortality via mind uploading. That’s how there can be five Hitlers.

  19. hash872 says:

    Is there any reason why someone should buy real estate as an investment property? I.e. not your personal house but a multifamily residential property. The clear and obvious alternative being placing the same funds in a series of Vanguard index funds that cover most of the equities world (S&P 500, Europe, emerging markets etc.)/some % of bonds, depending on how aggressive & how old the investor is.

    The more I read up on and research real estate as an investment class, the more fascinated I am that people are so…. taken with it. Returns are basically always lower than equities over any time span, and that doesn’t include all of the many expenses of real estate: financing, repairs, lost rent, possible court costs dealing with tenants, and so on. You’ll never see this in a pro forma from a seller or broker, but over a long enough period of time that property will need a new kitchen, new bathroom, new roof, new heating & cooling system….. These are massive costs eating into your returns over a 20-30 year span! (Yes, I understand depreciation & taxes.) So real estate returns are lower than equities *before* a true accounting of expenses. Not to mention the different tax rates for capital gains vs. being taxed at the marginal rate for real estate income.

    The other answer everyone always gives is ‘diversification’. If I deposit money every month in index funds that cover all of the major markets plus emerging ones, plus some bonds including Treasuries….. how much more diversified can I get?? Not to mention the huge capital requirements of getting in to real estate investing.

    I’m at the age where friends are buying properties, some are buying investment properties, and friends & family are becoming like the ‘you’ve gotta have a baby Elaine!’ episode of Seinfeld. I kinda think it’s….. fundamentally irrational? Why would any rational investor buy a residential multifamily vs. a basket of index funds or ETFs? One number is larger than the other- what am I missing?

    • Douglas Knight says:

      Where do you get the claim that returns on housing are lower than returns on equities?

      This claims that returns are equal since WWII (and higher before), but with lower volatility and lower beta, so, really, higher. Of course, “Diversification with real estate is admittedly harder than with equities.”

      One theory is that people look around at their parents’ peers (or peers’ parents) and latch on to the strategy of the most successful person of the previous generation. That is likely to be someone who was poorly diversified but lucked out.

      • hash872 says:

        Seems to be widely accepted and known. I heard the same from a financial advisor (fee-only, non-commission, a fiduciary, and doesn’t manage a portfolio for me, so no ulterior motive):

        “For the majority of U.S. history – or at least as far back as reliable information goes – housing prices have increased only slightly more than the level of inflation in the economy…. Take a different time period: the 38 years between 1975 and 2013. A $100 investment in the average home in 1975 – as tracked by the House Price Index from the Federal Housing Finance Agency (FHFA) – would have grown to about $500 by 2013. A similar $100 investment in the S&P 500 over that time frame would have grown to approximately $1,600.”

        https://www.investopedia.com/ask/answers/052015/which-has-performed-better-historically-stock-market-or-real-estate.asp

        Also, as mentioned, the numbers used in the above quote probably don’t account for repairs, a new kitchen every 10-20 years, a new roof, etc. My Vanguard S&P 500 Index fund does not require any maintenance.

        • Douglas Knight says:

          Investopedia doesn’t take into account maintenance, but neither does it take into account rent!

      • mrjeremyfade says:

        That dataset does take maintenance costs into account, or tries to. Pages 11 and 12 discuss their method.

        Table A2 on page A56 might be the most relevant for hash872. While the time period isn’t specified there, it shows that for the US, equities outperformed housing by over 2% per year, which is huge.

        Although, housing has been the better investment for many non-US countries. I wonder why.

        • hash872 says:

          The US having the widest and deepest financial markets on Earth, having the largest number of high-quality companies everyone regardless of location would want to invest in, being the reserve currency of the world, etc.

          I actually used to sell commercial real estate (hence my familiarity with the nuts-and-bolts), and a huge huge % of landlords & investors in Large Famous Blue City that I live in are foreign. Seems like they all prefer real estate to equities. Wealthy Chinese were the ‘dumb money’ (sorry if harsh) wildly overpaying for assets just as I was getting out of brokerage

    • Chalid says:

      One thing I suspect is that, unlike in the stock market, it really is possible for an individual to “beat the market” in real estate by doing their own research. Most people who are buying or selling homes are doing so for non-economic reasons, so lots of information isn’t fully priced in. I think this explains a good deal of the appeal.

      • hash872 says:

        See, now this is a pretty good argument for real estate (and I just finished a self-study course on currency trading, on the idea that maybe I could beat the market on my own; answer: extremely unlikely lol). From my experience as a commercial real estate broker, I think the people who are making the higher returns are buying distressed properties, completely managing the construction & rehab (that’s $300k+ in my Large Blue City), then selling the finished product and flipping the proceeds into the next project. Still takes a huge huge amount of cash though.

        Also, in my experience, that becomes a full-time job, so you’re probably replacing your current job with “real estate investor”. So I would still argue for index funds, because they take 0 hours of my time, so the per-hour value of my returns is theoretically infinitely higher than that of a guy working 50+ hours a week. But, good argument for the real estate side for sure.

        • Garrett says:

          To add another item, my limited experience is that the cost of getting professional work done is roughly 50% materials, 50% labor. If you are reasonably handy at some things and are willing to pay the opportunity costs, you can shave a lot of the costs of maintaining a rental unit off of the “expected” amount by doing much of the work yourself. This means that it can become a much better investment for someone with a blue-collar background than standard index funds, or than it would be for someone who has to pay professionals to do all of the work.

      • melolontha says:

        One thing I suspect is that, unlike in the stock market, it really is possible for an individual to “beat the market” in real estate by doing their own research. Most people who are buying or selling homes are doing so for non-economic reasons, so lots of information isn’t fully priced in. I think this explains a good deal of the appeal.

        I don’t think this is plausible unless you can explain why the smart rich people are leaving money on the table. If the real estate market is difficult to short, I can see the possibility that relatively obvious bubbles might go uncorrected until they burst. But why would knowably underpriced real estate remain underpriced, any more than knowably underpriced stocks stay underpriced? (This is only a semi-rhetorical question — I’d like to learn about any features of the market I’m overlooking.)

        • Thegnskald says:

          Because houses aren’t commodities in many meaningful respects. Knowing the value of House A only gives you limited insight into House B. Knowing the value of stock in Company A gives you the value of all other stock in Company A; the same isn’t true of housing.

          More, the information is expensive to acquire for each individual house, either in time or money. And knowing what information you need is itself valuable and specific information; height relative to sea level means a lot on coastal property, not so much inland, for a trivial example.

          And more, and more. The end result is that identifying underpriced houses is a full-time occupation, with decent hourly pay, but not particularly exceptional. There are corporations who do this. They don’t seem to be very competent; they can fail to realize an area has termite issues and have properties destroyed. The really substantive issue is that anybody competent enough to do this as a full time job has better, less risky options.

          The market can be perfectly rational while leaving large amounts of money on the table, if it isn’t economic to get that money, or if there are better sources of money elsewhere.

        • SamChevre says:

          Basically, because the factors that affect house value are not transparent, and are very location-dependent. Someone who lives in a specific place, and stays in touch with the housing market there, has a LOT of information that isn’t easily accessible. (Apartment buildings, and standard commercial property, are significantly more transparent, and significantly more likely to be owned by large-scale entities.)

          So think of investing in housing as an OK investment market, IF you have to pay someone to do all the work. If you do the work yourself, in many cases you make good wages for your work, much of which doesn’t feel like work, and have a decent investment return in addition.

          • Chalid says:

            I agree that local knowledge is a lot of it. If you actually live in a neighborhood and pay attention, you have a huge advantage over a corporation based 1000 miles away. You hear what people are talking about; you know how an empty lot will get turned into a park in a few years (with predictable effects on local prices), or that the neighborhood grocery store is closing, or what have you.

            A couple other speculations. Housing often takes a long time to pay off, and most big funds have a hard time looking more than a year ahead. It’s also hard to “mark to market” your portfolio, which poses problems for a big fund’s investors. And investors typically want to be able to get their money back out in a reasonable amount of time, which isn’t practical when your typical house takes many months to sell.

    • tmk says:

      Don’t forget leverage. You can borrow 90% of the house value from a bank, which amplifies your gains (or losses) ~10x. You can’t borrow money to put in index funds. Or maybe you can, and it’s just not socially accepted.

      • Ryan Holbrook says:

        You certainly can borrow money to put in index funds. Interactive Brokers makes it easy and seems to have the best rates. There’s a book, Lifecycle Investing, by a couple of guys at Yale who advocated this for young people’s retirement savings.

        • A Definite Beta Guy says:

          With what collateral? I find it hard to believe I can just go to this website and borrow $200 million at 2%. If I can, everyone on SSC is about to become a billionaire.

          • Ryan Holbrook says:

            You invest your own money as collateral and then can (usually) borrow as much again as you invested. So, if you bought $10,000 of SPY with your own money, you could buy another $10,000 with borrowed money. I think the rates are about 3% up to $100k.

            Margin Accounts

          • Brad says:

            In addition to other options, you can take out a signature loan. That’s based on your earning capacity and creditworthiness instead of what you are buying with it, and it isn’t subject to a margin call.

            If you were to borrow $50,000 at 6% on a two-year term and then use that to buy $100,000 worth of stock, with the second $50,000 at a margin rate of 3% as above, you’d have a monthly payment of $2,216.03, comparable to what you might pay on a mortgage for an investment property.

            At the end of two years you’d own the first $50k in stock outright plus any dividends and price appreciation on the whole $100k, less around $1600 for margin costs. You could then borrow another $55,000 …

            I’m not saying this is actually a good idea, but I’m trying to make the point that the leverage ratio is not necessarily the best number to look at when a loan is predicated on a stream of monthly payments from the borrower’s personal income.
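
            For what it’s worth, the $2,216.03 checks out as the payment on the $50,000 signature loan alone, using the standard amortization formula (a quick Python sanity check; the margin interest accrues separately):

              def monthly_payment(principal, annual_rate, months):
                  # Payment on a fully amortizing fixed-rate loan.
                  r = annual_rate / 12
                  return principal * r / (1 - (1 + r) ** -months)

              print(round(monthly_payment(50_000, 0.06, 24), 2))  # 2216.03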

      • A Definite Beta Guy says:

        Right, this seems like the best answer. A typical multi-family investment in real estate involves something like 10% down, IIRC. You will borrow the remaining 90% from the bank at a rate somewhat higher than risk-free but lower than the actual return on the property.

        So if your property is making 8% return on assets, you are making MORE money as long as you are borrowing at less than 8%.

        You can then use the proceeds from one property to buy ANOTHER property at 10% down and build yourself a mini-empire.

        This isn’t something you’ll do if you are a loser corporate peon like me, but it’s what a lot of high-earning professionals will do. My only apartment was owned by doctors, and my finance professor uncle owns something like 5-6 rental properties.
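
        The arithmetic behind that, as a rough sketch (the 8% return on assets and 10% down are from above; the 5% borrowing rate is a made-up illustrative number, and this ignores vacancies, maintenance and the rest):

          def leveraged_roe(roa, loan_rate, down_fraction):
              # Return on equity for a property earning `roa` on its full
              # value, bought with `down_fraction` equity and the rest borrowed.
              leverage = 1 / down_fraction  # dollars of assets per dollar of equity
              return roa * leverage - loan_rate * (leverage - 1)

          print(round(leveraged_roe(0.08, 0.05, 0.10), 2))  # 0.35, i.e. 35% on equity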

        • Creutzer says:

          So if your property is making 8% return on assets, you are making MORE money as long as you are borrowing at less than 8%.

          Where on earth (or rather, in the western world) do you find a property that makes 8% return on assets?

    • ana53294 says:

      I think the main reason people may prefer to invest in real estate rather than index funds is the same reason people bury gold in the backyard: lack of trust in the government and the economy. Let’s say you own the house you live in, or have gold buried in your back yard. The government is overthrown in a coup, and the new regime comes after you to nationalize your gold or your house. You can then take your gun, unite with other people, and defend your property; you will probably die, but you will die for a righteous cause.
      But if you have money in a bank account, or you have a piece of paper that says you own a piece of paper that says you own something called a “company” whose main asset may actually be a piece of paper that says that you have an exclusive right to produce something that brings money, what can you do to defend it, other than trust the government? (the justice system, the police, etc.). It’s not like you can heroically stand in front of it, taking the last stand to defend it.
      This view may be irrational in a country with a long history of respecting pieces of paper like the US, but it is actually rational in a lot of countries. I’ll give the example of ex-Soviet countries, based on what people who lived there told me.
      You could not buy that many appreciating assets there. The assets you can buy are: antiques, artwork, precious metals, precious stones. Antiques and artwork are very hit-and-miss, and require knowledge. But let’s say you were savvy enough to buy a painting by Valentin Serov, and this has increased a lot in value since you bought it.
      And then the government declares it unexportable and boom!, it loses most of its value. Or they just nationalize it, because, you know, something so valuable should not be in private hands. By the way, you could not export precious metals or stones, either.
      You couldn’t buy a house, but you could acquire one by exchanging favours, bribing, etc., for a house.
      When stuff was privatized, the only thing people were able to own were the houses they lived in, because factories, etc. were privatized in very strange ways that meant only the right people could acquire them.
      Housing is still the safest investment in Russia, even if it brings lower returns, at least for poor, unconnected people. Sure, you could invest in the EU or the US, but then the government says that you cannot leave the country because you are not a trustworthy person (this can actually happen; the Russian government can take away your passport at any time), and then you cannot access your money. But at least houses are more difficult to take away from people, because doing so would probably spark a revolution.

      Also, some people are just bad with numbers.

    • John Schilling says:

      The more I read up on and research real estate as an investment class, the more fascinated I am that people are so…. taken with it. Returns are basically always lower than equities over any time span,

      The average price-to-rent ratio in US residential real estate has fairly consistently been in the range of 20-25. That makes it the equivalent of a blue-chip stock with a 4-5% dividend yield. Actual blue-chip stocks pay dividends in the 2-4% range. So I am skeptical of this “basically always lower than equities” claim.
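
      To spell out the arithmetic, the implied gross rental yield is just the reciprocal of the price-to-rent ratio:

        for ratio in (20, 25):
            print(f"price-to-rent {ratio}: implied gross rental yield {1 / ratio:.1%}")
        # price-to-rent 20: implied gross rental yield 5.0%
        # price-to-rent 25: implied gross rental yield 4.0%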

      And if you’re looking for more dynamic performance with comparably greater risk and volatility, note that you can leverage real estate at least 5:1, whereas equities are limited to 2:1.
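
      Back-of-the-envelope, those two observations combine as follows (a rough sketch; the price-to-rent ratio of 22 and the flat 5% underlying return are illustrative assumptions, and borrowing costs are ignored):

          # Price-to-rent ratio -> gross yield, and what allowed leverage does to it.
          # The ratio of 22 and the 5% underlying return are illustrative assumptions.
          price_to_rent = 22
          print(1 / price_to_rent)    # ~0.045: roughly a 4.5% "dividend yield"

          underlying_return = 0.05
          for leverage in (2, 5):     # 2:1 margined equities vs 5:1 real estate
              # Ignoring borrowing costs, return on equity scales with leverage.
              print(leverage, underlying_return * leverage)

      The same underlying return looks very different at 5:1 than at 2:1, in both directions.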

      This does, as you note, come with a fair degree of overhead, particularly if you want to maintain anything like a diversified portfolio. So if you really don’t like the sort of hassle that comes with being a landlord, this probably isn’t for you. That hassle is why it pays better than buy-and-hold equities.

    • pontifex says:

      Why buy real estate?

      There are tax advantages for both your first home and for rental properties. You don’t get this for stocks.

      If you’re a foreign oligarch, buying property in all-cash deals is an effective form of money laundering.

      You can get a lot more leverage, more easily, with real estate.

      You can’t live inside the stock market. Moving all the time sucks.

  20. johan_larson says:

    (I was asked to repost this link in a culture-war-positive thread. Done.)

    The Atlantic has an interesting article about the American gentry, the “9.9 percent” as the author calls them:

    The meritocratic class has mastered the old trick of consolidating wealth and passing privilege along at the expense of other people’s children. We are not innocent bystanders to the growing concentration of wealth in our time. We are the principal accomplices in a process that is slowly strangling the economy, destabilizing American politics, and eroding democracy. Our delusions of merit now prevent us from recognizing the nature of the problem that our emergence as a class represents. We tend to think that the victims of our success are just the people excluded from the club. But history shows quite clearly that, in the kind of game we’re playing, everybody loses badly in the end.

    Anyway, what this article got me wondering is whether there is a right level of inequality in a society. The US has a high level of inequality compared to the nations it tends to place itself among, but it’s not at all obvious this is per se a bad thing. Intuitively, I would expect a moderate level of inequality to result in greater productivity by spurring the capable and ambitious on to greater accomplishment, and that’s good, but really dramatic inequality would lower productivity because the few at the top would be able to twist the rules and institutions in their favor to stay on top. Is that how things actually work out?

    • The Nybbler says:

      really dramatic inequality would lower productivity because the few at the top would be able to twist the rules and institutions in their favor to stay on top. Is that how things actually work out?

      I think historically this worked quite well (see: hereditary aristocracy). Now… not so much. Certainly those at the top can keep themselves (personally) on top, but keeping it going for generations isn’t working out so well. Rags to rags in three generations is overstating it, but there’s considerable mobility. I think the biggest factors working against mobility aren’t twisted rules and institutions but actual heritable (oops) differences in the population. The Atlantic article makes a nod at “assortative mating” but not a serious one; I’d take it seriously.

      • christianschwalbach says:

        There is more competition for those top spaces, I will concur. And inequality, per se, is relatively neutral, but context matters. In many fast growing economies, inequality grows as some well positioned people profit far more than others, whether they be innovators, holders of capital, investors, etc… As long as the society as a whole also gains wealth and income, inequality is a permissible side effect.

        The issues come in when economic growth stagnates for all but the top… whatever percent (in the article, 9.9%). This makes economic mobility more challenging, and while you may have a lot of turnover at the top, someone going from the 20% to the 9.9% isn’t quite the same as the “Rags to Riches” American Dream experience so seemingly lionized in US narrative.

        Regarding more equal countries, there is also a good deal of variance here. Norway is a largely upper middle class nation, with less inequality than the US, and there are some “3rd world” nations where most of the population lives on little, usually involving some form of kleptocratic government and a very small elite: more unequal as a whole, but with more citizens within a band of poverty income than, say, the US. All in all, the measure that I pay attention to the most is economic mobility, rather than inequality. When this stagnates, it’s a sign of major issues.

        • The Nybbler says:

          The issues come in when economic growth stagnates for all but the top…whatever percent.

          The catch here is that the “top whatever percent” is not a static group. People move into and out of it. So it really is inequality we’re talking about, not mobility. Most of us, I imagine, start independent lives either near zero wealth or net negative with student loans. That’s somewhere in the bottom quartile.

          • christianschwalbach says:

            I addressed that when I mentioned that you can have movement in and out of the very top echelon, but have it stagnate for the vast majority. This scenario, while technically one of economic mobility, is not very societally healthy.

          • The Nybbler says:

            The article fails to demonstrate that that is what is happening.

          • MrApophenia says:

            There has been a bunch of research over the past decade or so showing that economic mobility has declined as inequality has grown.

            A high-level overview from Brookings is here, but there is obviously lots more available:

            https://www.brookings.edu/on-the-record/economic-mobility/

            “What our research is showing is that there is less intergenerational mobility or opportunity to achieve the American dream than many people believe in the U.S. That doesn’t mean we don’t have a fair amount of opportunity. People who are born, for example, into a middle income family have about an equal shot at either moving up on the economic ladder or moving down. We have a lot less mobility at the top and the bottom of the ladder. If you’re born into a poor family, it’s very hard to escape and similarly, if you’re born into a rich family, you’re likely to stay there, we call that stickiness at the ends of the distribution. The notion of rags-to-riches in a generation is mostly a myth. So, the glass is half empty and half full, we have a fair amount of mobility but the American dream is a little frayed right now.”

      • mtl1882 says:

        Looking at history, I’d say that while there are some highly notable cases of success, a hereditary aristocracy does not work well at all. The children eventually can’t keep themselves on top, and the inheritance can only be divided so many times. Then there’s panic and bad decisions. The number of truly extraordinary people who come from an allegedly “inferior” genetic background always amazes me. Of course, it’s not the norm, but people who act like if we just got the “right” people together in stable, two-parent married families, we’d have a utopia, baffle me. If human history had depended on the offspring of only those unions, we’d have missed out on a ridiculous amount of human ingenuity.

        I think inequality becomes a big problem when it becomes strongly associated with worth. As we become more meritocratic, there’s a lot more emphasis on what people “deserve,” and it’s easy to argue the “deserving” should get more and more reward, and that the poor should be blamed. Whatever you think of these arguments, if you push them far enough, resentment builds to a breaking point, and that’s the real danger. If you can be poor and find meaning through being a working class politician or something, it’s better. If the extraordinary poor people are pulled out and sent to a top private school and then Harvard, there’s no one left to give pride and meaning to being poor. It’s transformed into an identity of failure.

        And if being rich means being the best instead of inheriting it, the children of rich people suddenly can’t keep up with expectations. Sometimes they start radicalizing and looking for meaning as well. (This is arguably part of what happened in the run-up to the Civil War: New England young people from aristocratic families suddenly had nothing to do. More innovative lower class people and immigrants had taken over the business world and other vocations. They were out of money and respect. But they were super educated and had been raised to be useful. So they joined abolitionism and other activist movements that gave them meaning and a community, even at the cost of ostracism and violence.)

        The only people who end up happy with the situation are the small number of extraordinary people who are deemed successful and deserving. In the Civil War, the nobility on both sides were very happy up until the point the country fell apart, because most other people were not so happy. They felt they were “deserving” of respect, and since they declared that, it was true. They were “obviously” superior, although they had different versions of what that meant (well-educated gentleman versus plantation master gentleman). They had once been truly extraordinary families, but the latter generations were less so. And a lot of westerners with no family background to speak of, as well as religious and ethnic minorities, suddenly showed their talent in the leadership vacuum.

        ETA: Some inequality is obviously desirable and unavoidable, but there’s definitely a point where it becomes provoking to people.

      • maybe_slytherin says:

        When there is large inequality, there tends to be little mobility. Very true in hereditary aristocracies, fairly true in modern America.

        In practice, this inequality – and the rewards for being in the top segment – dictate that a lot of resources are spent trying to stay there. Historically, this could have consisted of armor and soldiers and elaborate feasts and clothing. (There are some pretty strong gender dynamics, in terms of “women’s work” supporting class stratification.)

        In modern terms, education is a major mechanism by which this stratification is enforced. (The article lists a few others, but education may be the most interesting.) For the libertarians out there: this is basically the argument of Caplan’s case against education. So much investment just to signal competence/class, not productive learning. Schools are also where a lot of assortative mating happens. (I think the genetic component of this is somewhat weak, but the wealth/social capital component is very strong.)

        In a more equal society, education would not need to serve as large a signalling function. So you could have less of it, and more productive work.

    • John Schilling says:

      Inequality is the only thing that can concentrate resources for purposes not favored by bureaucrats, like e.g. Mars colonies or iPhones or even interesting movies. We can disguise that fact if someone really insists, but that won’t help the underlying reality any. Bring on the inequality and stop being ashamed of it unless people are literally or figuratively starving at the bottom.

      • Does that mean the more inequality, the better, endlessly?

      • Aapje says:

        @John Schilling

        Lots of people who are not super-rich pool their resources by buying stock* directly or indirectly (by saving for their pensions). Nowadays you even have things like Kickstarter.

        You can have a society with relatively little inequality and yet strong pooling of resources.

        * Note that the first big stock-issuing company was founded in The Netherlands, which is probably because the society was relatively egalitarian.

        • John Schilling says:

          The modern incarnation of the joint stock company either has a single stockholder or core group of stockholders who are very much more equal than all the others, or it is run by bureaucrats who do the sort of things bureaucrats find safe and appropriate.

          And Kickstarter is just a way of concealing or realigning the inherent and necessary inequality, but so far only at the relatively low level of maybe single-digit megabucks per project.

          • albatross11 says:

            Also, if I can buy stock in companies, and some companies perform better than others, and more investment in stock leads to more wealth for my descendants after I die, then there will, in fact, be inequality.

      • Randy M says:

        At the first level, I don’t think that’s inequality, but surplus. Inequality doesn’t stop bureaucrats from taxing the rich more if they feel they need to for some reason.

        If you’re making the broader point that the existence and display of higher levels of wealth is motivating people to work harder, thus creating more goods that can be pooled, or that inequality spurs trade, perhaps that’s so.

        • John Schilling says:

          Surplus, equally divided, leaves absolutely everybody equally incapable of financing a prototype spaceship or smartphone. It takes a decidedly unequal concentration of the surplus to do that.

      • maybe_slytherin says:

        Inequality is the only thing that can concentrate resources for purposes not favored by bureaucrats

        Consider this quote about large projects in an 18th-century utopian religious community:

        “Labor “bees” were common at every utopian colony of the era, but the Perfectionists raised them to an art. They held bees for every large task—brick making, planting, broom corn harvesting, bag stitching, vegetable picking, and fruit preserving. “Working in storm,” as they called it, made tedious jobs go fast and gave the communists an economic edge over their neighbors. Neither “isolated” householders nor wage-paying bosses could quadruple or halve their workforce from day to day. The Circular regularly trumpeted the efficiencies of the system. Four thousand quarts of strawberries were picked in a single day. A barn was raised in a weekend. A large trap order was filled in one night. One “storming company” was tasked with stitching the bindings of nine hundred religious pamphlets. They ran out of printed matter so quickly that they went looking for other things to sew, turning their needles upon a large heap of flour sacks in need of darning.”

        Were these “purposes favoured by bureaucrats”?

        Also, if you have a sufficiently good system of government such that people’s ideas are actually represented and you can kick out bad bureaucrats, does it really matter?

        • Skivverus says:

          For certain definitions of “large”, at least. From the same site:

          At the community’s peak, three hundred Oneida “Perfectionists” lived an intensely intimate, intellectual existence in a rambling, Italianate mansion.

          This is still within spitting distance of Dunbar’s Number, and even so:

          Ultimately* the community disbanded after failing to find a suitable leader to replace the aging Noyes (he tried to install his son, who was a flop).

          *In 1881, 33 years after its founding in 1848.

        • 19th century, actually.

    • gbdub says:

      Is the problem really inequality per se, or the combination of inequality and immobility? To me the latter seems like the bigger issue (and will over time exacerbate the former).

    • Sniffnoy says:

      I think your comment is confusing inequality with the causes of inequality.

      You ask if there’s a right level of inequality. As best I can tell, your implicit “as opposed to” is that the right level is none. I’d say your question is ill-phrased, since “none” is itself a level of inequality. The better question is whether there is a right level of inequality at all — as opposed to inequality being a neutral thing, not something to be optimized for, where there’s only a “right” level to the extent that doing the right thing generally will result in some level (but optimizing for that level will not yield the right thing).

      • johan_larson says:

        No, I’m thinking of inequality as the input, the thing that can be changed, and wondering what level of it produces the best society (ignoring, for the moment, how one determines whether one society is better than another.)

        • Sniffnoy says:

          Right, and I’m saying that’s almost certainly a mistake. Inequality is, for the most part, not an input. And that, moreover, if you imagine the ideal society (whatever that might be), consider its level of inequality (as measured by, say, Gini coefficient), and then somehow magically set inequality to that level, you will not get anything like the ideal society. Don’t optimize for such things! Things that are alike in one statistic need not be anything alike under the hood!

          Let me expand on my remark that you’re confusing inequality with the causes of such. You suggest that inequality could be a motivator. But it’s not inequality that’s a motivator — it’s how much one’s effort is rewarded, a thing that can cause inequality, that is a motivator. Don’t conflate the thing with its causes. E.g.: You can set the level of inequality to anything in a perfectly propagating hereditary aristocracy, where one’s efforts are rewarded barely at all. You can have arbitrary levels of inequality here — as large or as small as you like — but it won’t affect motivation much, it’ll be low regardless. Conversely you could imagine a perfectly meritocratic society which despite this fact has barely any inequality at all because it’s populated entirely by duplicates of one person. Looking at level of inequality, rather than mechanisms, is the wrong thing.

          Level of inequality might sometimes be a useful indicator that something’s likely wrong, but you don’t fix the indicator, you fix the problem it’s indicating.

          • doubleunplussed says:

            To be fair, with wealth redistribution you really can adjust the ‘inequality’ knob directly. It absolutely can be an input to how your economic system operates. Welfare is an example of this. You can try removing welfare (and the associated taxes to fund it) and see how much more or less productive the economy becomes. That’s input affecting output.

            I think roughly we want whatever level of inequality maximises something similar to median disposable income in the medium term.

            Decreasing inequality a lot by heaps of wealth redistribution has the power to increase median income rapidly, but wouldn’t be sustainable, because it would disincentivize private industry from doing anything, knowing they wouldn’t be able to keep the profits.

            No redistribution at all also seems bad if inequality keeps going the way it is, not just for poor people but potentially for economic growth as poor people are no longer customers worth innovating to sell products to, decreasing the incentives for companies to innovate to solve problems ordinary people have. This one I’m not sure about since I’m not an economist so I’m just speculating. But we can agree at least that lots of inequality is bad if it means lots of people are poor (in absolute terms and not just relative terms – I believe it’s contentious whether inequality is making people absolutely poor or just relatively. Median wage growth being flat makes me think it’s the former).

          • quanta413 says:

            @doubleunplussed

            I think it’s harder in practice than in theory to adjust inequality directly by adjusting tax rates and welfare.

            I forget which OT it was, but somebody mentioned the Soviet Gini coefficient, and it was higher than that of some of modern Europe. And the Soviets were about as close as you get to a command-and-control economy. And that figure probably underestimated true inequality, because a lot of benefits weren’t fungible with money in that system.

            I think Sniffnoy is basically right; we can adjust rules (which strongly affect inequality), but we really can’t turn the knob directly. Turn the knob too hard (tax all the things and redistribute them all!), and all you get is a different unofficial black market economy, followed by economic collapse, and finally by a new form of inequality in who has the guns and military power.

            We probably shouldn’t just look at listed dollars of wealth as inequality either. Value can reside in things not easily exchangeable for money or easily detected.

          • Sniffnoy says:

            (Whoo, finally getting back to this…)

            I mean, you can do what doubleunplussed said — you could have some sort of direct redistribution to achieve a given level of inequality. The problem is not that this is impossible. The problem is thinking “level of inequality” is a relevant thing to consider independent of the broader background — that there must be some correct level and you just have to find it.

            It’s like asking, “What’s the correct number of chairs to exist?” Sure, if we imagine an ideal society, it will, at any given point of time, have some number of chairs in existence — OK, it will be rapidly changing with time, but that’s not the point; we can consider the chair function, I guess. That doesn’t mean that if you had the ability to figure that function out that you should optimize for it! It’s not the number of chairs in existence that makes it ideal, and most societies with that chair function will not be remotely similar to it! What’s important are the processes that yielded that chair function. You can adjust the chair function directly through government appropriations or whatever, but then the chair function, and not the things that actually matter, will be the only similarity to the ideal you started with.

            And the same is true for so many other things, including level of inequality.

          • Sniffnoy says:

            Note: To be clear, everything I’m saying is just a re-expression of Goodhart’s law.

    • Another Throw says:

      I meant to read the article the last time you posted it and forgot. So before I do, here is my argument for something kind of similar, judging by the quote. Let’s see how well I do:

      All stable societies have a relatively high degree of social immobility, because all societies reward some traits/skills and punish others, and all such traits/skills are highly heritable (via both nature and nurture). Steady state mobility, if such a thing were to exist, is therefore a measure of that heritability mixed with a healthy dose of luck. Periods of high mobility are those where the traits/skills so rewarded and so punished have changed and society is resorting along the new axis.

      Consider the resorting of a society when the principal means of getting ahead switches from the battleax to the game of thrones, or vice versa. By any of the usual measures, the Norman Conquest would almost certainly show as a period of high “social mobility.” It is difficult not to notice that the skill most rewarded by the preconquest society was the accumulation of soft power in the game of thrones. A skill which, practiced over several generations, placed Harold II on the throne when the Witenagemot chose him as the successor to Edward the Confessor. (Edward, incidentally, was the last descendant of Alfred the Great, who won the kingdom with the battleax.) Likewise, it is difficult not to notice that the principal method of getting ahead in the immediately postconquest period was to find a manor you liked and battleax the incumbent when the king wasn’t looking, or to help the king battleax a bunch of incumbents all at the same time during one of the many revolts. The Domesday Book gives us a fascinating look into this process. IIRC, something like 97% of the names recorded therein for the postconquest state of affairs were newly installed Normans. This includes (again, IIRC) the yeoman witnesses called to testify! This complete upending of society, therefore, penetrated deeply into the populace at large. (Both Alfred’s and William’s dynasties having their power eroded via the game of thrones are examples of the vice versa, but we haven’t nearly the same insight into that process, because the Domesday Book was singularly illuminating.)

      This pattern is readily observable in basically every hard revolution for which we have even a modicum of insight. The newcomers seizing power invariably[*] abolish the old order and reorganize society along their preferred lines. Moreover, I would contend that every soft revolution follows the same lines. History provides a rich tapestry of the constant ebb and flow of religious, aristocratic, imperial, commercial, cultural, and military power periodically resorting societies without necessarily devolving into the Harrowing of the North, the Terror of the French Revolution, the Gulags of the communists, or what have you.

      Furthermore, the unusually high social mobility experienced during the post-WWII period[*2], now flagging, was one such soft revolution, now firmly entrenched. (For the time being.) The displacement of the petite-capitalists that excelled under the pre-WWII laissez faire meritocracy by the, well, bureaucrats that have excelled under the bureaucratically enforced meritocracy established since. Despite being frequently described in similar, meritocratic terms, entrepreneurship and navigating a bureaucracy are largely non-overlapping traits/skills. Consequently, by gating financial, social, political… well, every kind of success behind a bureaucracy (including, but not limited to, obtaining a college degree) bureaucrats have displaced the preceding petite-capitalists. And in the grandest twist of irony, by gating services for the poor behind another, even more impenetrable bureaucracy that avenue of advancement is foreclosed as well.

      The currently expanding inequality in the United States, and the higher degree of inequality in the United States than in its peers, are the product of a variety of factors:
      (a) This is the natural state of affairs after a soft revolution, as the new incumbents consolidate power and seek ways to increase the heritability of their successful traits/skills and prevent luck from pulling them from their perch.
      (b) The United States is (to a greater extent than its peers) the bureaucracy for the entire world. With the worldwide reach of American commercial, cultural, political, and military power the respective bureaucrats are able to exploit their larger reach for larger personal gain than their relatively more parochial peers. To say nothing of those excluded from it.
      (c) The political climate in the United States could be described as a sort of meritocracy+socialism; so long as incremental socialism is a foregone conclusion, it is damn well going to be a meritocracy (which in practice turns out to be bureaucratically enforced). Whereas the United States’ peers could be, after a fashion, described as protectionism+socialism; nobody really gives a damn whether there is incremental socialism so long as it is impossible to get fired from your now-seized-as-part-of-the-means-of-production job. The ever expanding portion of the economy controlled by government bureaucrats in both cases is (for the time being) monotonically increasing, but the competing political forces in the United States are pulling it askew in a direction more directly exclusionary to those out of power, by administering a double-dose of bureaucracy.
      (d) Related to the above, the petite-capitalists in the United States, thinking of themselves as the pinnacle of meritocracy, actively championed their own obsolescence, having assumed their success as petite-capitalists would stand them in good stead. It didn’t. Lacking in opposition from entrenched players, the soft revolution in the United States has been more complete.

      So, even though I’m not sure how sold I am on it myself, let’s see if my argument is better than the Atlantic’s!

      As to your question, I’ve never seen any good reason why inequality qua inequality is something to get wrapped around the axle about. It is largely inevitable, because all societies inherently reward some traits and punish others; and self correcting, because which traits those are will drift over time, disrupting power centers and redistributing excess gains. And in this particular instance, inequality in the US (and to a lesser extent the westernized world) is growing largely because of their global reach, not because of any insurmountable barriers within the polity. While such barriers certainly are not helping at the moment, they will transmute over time, almost certainly before any particular fissure ruptures in an orgy of violence. Not worth worrying about. (Although, as a dutiful bureaucrat, I’m not particularly keen on any of the current crop of trying-to-be-incipient counter-revolutions. They all seem to kind of suck in the same slightly-higher-but-still-incredibly-remote chance of orgies of violence sort of way.) The excess gains from our current period of global dominance will, in time, be redistributed throughout the society.

      [*] Barring, of course, the three counterexamples David Friedman has just thought of while reading that. I will therefore caveat by saying that they may not intend to do so, but it happens nonetheless, and maybe leave it at that, okay?

      [*2] WWII is a convenient anchor to hang off of; the various mechanisms both precede and follow it, but arguing about the details or finding a more accurate anchor to hang on are probably not productive to the discussion.

      • JulieK says:

        Very interesting comment!
        I think I would identify the biggest factor causing mobility in the post-WW2 era as broader access to education, in large part because greater overall prosperity meant that there was less pressure for teenagers (or kids even younger than that) to drop out of school in order to help put bread on the table. This meant that suddenly, a much larger fraction of the population had the opportunity to advance meritocratically.

        • albatross11 says:

          I’d say in the US, a major factor for post-WW2 mobility was that we were suddenly one of the only major industrial powers who hadn’t just had most of our infrastructure bombed into rubble, and who hadn’t gone deeply into debt to fight the war/maintain our empire. That set up a situation where the US economy was likely to do very well given even minimally competent policies and laws. But a couple decades after the war, a bunch of our major competitors had rebuilt and were competing for sales with American companies.

          • JulieK says:

            I’m suspicious of this mercantilist argument. Since free trade is supposed to make both sides better off, having prosperous trading partners should be a good thing.

          • Randy M says:

            Hmmm, I hadn’t noticed that contradiction before. Thanks.

          • Another Throw says:

            I don’t think it is really a contradiction. The free trade very much did benefit both sides. It catapulted the bombed-out places the US traded with after the war into becoming some of the most developed and wealthiest places in the world. And the US, of course, reaped enormous benefit from the trade as well.

          • albatross11 says:

            It may be that the post-WW2 boom was an especially nice time for heavy industry in the US, since that’s an area where we suddenly had some huge advantages. Comparative advantage tells us that we can still benefit from trading even with a Germany or Japan that’s mostly squatting in the smoldering rubble of the war, and I’m sure we did benefit. But for many years, most big things that needed a lot of infrastructure to make were being made in the US and sold to them. Later, both countries rebuilt their industry and killed off a big chunk of our car/construction equipment/steel/etc. industries by providing a better product at a lower price. But for a couple decades after WW2, we had a large fraction of the industrial capacity of the whole world, and that probably made things very nice for American workers. That’s when you see high-school-educated factory workers with powerful unions negotiating good salaries and generous benefits packages. As the big advantages for American industry faded, so did the bargaining power and well-being of American industrial workers.

            Along with that, it seems like two other things made post-WW2 an especially nice time in the US:

            a. We had just gotten out of a deep recession when WW2 started up, and we spent several years with wartime taxes and austerity. That meant when the war ended, we were starting from a fairly low baseline.

            b. I think WW2 led to a lot of social mobility–people moving away from home, going to college, etc.

          • The Red Foliot says:

            @JulieK
            The benefits of having prosperous trading partners are that the prices of goods go down and the quality of goods goes up. Today we can see the benefits of prosperity in our trading partners: consumer goods galore. This is a different sort of benefit from those enjoyed by post-WW2 American society. Those benefits had to do, not with cheaper goods, but with ‘cool’ jobs that allowed males to preserve their masculinity while also supporting a family. This benefit has since vanished.

            One of the reasons it vanished is that all the cool jobs were eaten by China and automation. Another is that women entered the workforce. A third is that zoning laws have made housing costs unfathomably expensive.

            So, in one sense, the lower classes have benefitted since the 1950’s, in part because of having prosperous trading partners. But in another sense they have suffered, in part because of having prosperous trading partners. As far as I know, this doesn’t contradict economic theory, as economic theory seems primarily to judge things on the basis of dollars rather than less quantifiable ‘social effects’.

        • Another Throw says:

          @JulieK & albatross11

          Can’t it be both? (Plus a whole bunch of other factors.)

          Post war trade created an economic surplus which, combined with government policy (e.g., the GI Bill), allowed surplus labor to be directed towards education, which created a feedback loop as the resulting technical and bureaucratic competence increased productivity and thus economic surplus further.

          Bureaucracy does, after all, actually serve a purpose. These values that the postwar society chose were not chosen at random.

          The problem comes when a value is rewarded to the exclusion of every other beneficial value a society might have, because it excludes from society everyone that does not excel along that axis. Thus, locally-persistent immobility sets in.

    • Thegnskald says:

      Used to have arguments about the GINI coefficient and income mobility back in my libertarian days. The basic argument I had still holds:

      The GINI coefficient of the world is worse than the GINI coefficient of any one nation. The GINI coefficient of any collection of nations is worse than the GINI coefficient of any one of those nations. (Not as a guarantee, but as a general rule, mind.) The GINI coefficient punishes the size of nations.

      The GINI coefficient groups people incorrectly, by income. Poor in New York City doesn’t resemble poor in Louisiana, in terms of income. What the GINI coefficient attempts to capture gets swamped in statistical noise, and the fact that the US has a relatively “poor” GINI coefficient compared to some European country doesn’t indicate anything. Comparing it to the EU as a whole might be a better comparison, but still deeply flawed.

      To illustrate the problem, imagine if the US annexed Mexico. Even if everybody in Mexico and the US ended up better off, and even if income mobility increased in both countries, our GINI coefficient and income mobility would both get worse, not better, because most of that income mobility would be relative to local incomes – moving from 5k to 20k is a huge change, but doesn’t move you across a critical boundary.
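
      A toy calculation shows the pooling effect (the income lists are made up; “richer” and “poorer” just stand in for the two countries):

          # Gini via mean absolute difference: G = sum|x_i - x_j| / (2 * n^2 * mean)
          def gini(incomes):
              n = len(incomes)
              mean = sum(incomes) / n
              diffs = sum(abs(a - b) for a in incomes for b in incomes)
              return diffs / (2 * n * n * mean)

          richer = [40, 50, 60, 80]    # hypothetical incomes, in thousands
          poorer = [5, 8, 10, 15]      # hypothetical incomes, in thousands
          print(gini(richer))          # ~0.14
          print(gini(poorer))          # ~0.21
          print(gini(richer + poorer)) # ~0.43: the pooled population looks far more unequal

      Each group is fairly equal internally, but the pooled coefficient balloons, because the between-group gap dwarfs the within-group spreads.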

      Somebody in NYC ending up in a lower income class than their parents might move from 200k to 120k – a huge change that might not end up crossing a critical boundary either, since from the perspective of most of the country, they’re both upper middle class.

      (Also, wealth != income, and also, both more or less follow a Zipf distribution, suggesting inequality and prosperity probably can’t be disentangled, at least not within capitalism.)

      • Conrad Honcho says:

        I don’t disagree with your points, but just for future reference, “Gini” is not in all caps. It’s not an acronym… it’s named after the Italian statistician and sociologist Corrado Gini, who invented and published the measure.

        • Douglas Knight says:

          Europeans write surnames in all caps so that people are not confused by the Hungarians: ELO Arpad and Corrado GINI. Thus the ELO score and the GINI coefficient.

        • Nornagest says:

          Ever since I heard the term, I’ve always kind of wanted to write an urban fantasy about the economics of summoning and binding spirits and title it “Genie Coefficient”.

    • arlie says:

      I think this is a research question, but not the kind of research that can be done, because of the political entanglements. So we get people reasoning from their preconceptions, biases, and personal life experiences – what sometimes gets called “just so stories” in a scientific context. Just because a story is plausible, doesn’t mean it’s true… even if there were no political/values entanglement.

      I really want to write my own just-so story here. But I don’t know any more than anyone else in this thread does. So I’ll just say that I think the question is a lot more complex than it looks like, and leave it at that.

      • maybe_slytherin says:

        It’s the kind of research that is currently being done. It’s even referenced in the article.

        Consider MrApophenia’s link to the Brookings Institution above. Or a Google Scholar search on intergenerational earnings mobility in the US.

        Now, you may think that all of this research is crap, because of political bias as you mentioned. And certainly some of it will be (though having bias doesn’t mean something is wrong – it’s just Bayesian evidence). But…to know truth, you have to be able to cut through crap, in any field.

    • Conrad Honcho says:

      I don’t have a comment on the acceptable level of inequality in society, but I am exasperated by the hubris of the author as he attempts to address the hubris of his class.

      The 2016 presidential election marked a decisive moment in the history of resentment in the United States. In the person of Donald Trump, resentment entered the White House. It rode in on the back of an alliance between a tiny subset of super-wealthy 0.1 percenters (not all of them necessarily American) and a large number of 90 percenters who stand for pretty much everything the 9.9 percent are not.

      Sure, I’ll give you that, although it would be worth noting that practically every tech billionaire and media mogul was in Hillary’s camp, not Trump’s. But he then goes on to say:

      Did I mention that the common man is white? That brings us to the other side of American-style resentment. You kick down, and then you close ranks around an imaginary tribe. The problem, you say, is the moochers, the snakes, the handout queens; the solution is the flag and the religion of your (white) ancestors.

      “Yeah, the commoners are totally resentful, but too stupid to focus their resentment where it belongs (on us) and instead they’re evil racists who blame blacks and immigrants!” No. He previously wrote:

      From my Brookline home, it’s a pleasant, 10-minute walk to get a haircut. Along the way, you pass immense elm trees and brochure-ready homes beaming in their reclaimed Victorian glory. Apart from a landscaper or two, you are unlikely to spot a human being in this wilderness of oversize closets, wood-paneled living rooms, and Sub-Zero refrigerators. If you do run into a neighbor, you might have a conversation like this: “Our kitchen remodel went way over budget. We had to fight just to get the tile guy to show up!” “I know! We ate Thai takeout for a month because the gas guy’s car kept breaking down!” You arrive at the Supercuts fresh from your stroll, but the nice lady who cuts your hair is looking stressed. You’ll discover that she commutes an hour through jammed highways to work. The gas guy does, too, and the tile guy comes in from another state. None of them can afford to live around here. The rent is too damn high.

      The tile guy doesn’t hate illegal immigrants because he’s a racist. He hates that he can barely keep his head above water because he’s trying to do the right thing but his competition is undercutting him by paying illegals under the table. And when he complains, “why isn’t the government enforcing the damn laws?” he gets “you’re a racist, diversity is our strength and we have these great Thai restaurants now tee hee!” He’s not mad at the immigrants. He’s mad at you.

      The harried hairdresser knows exactly why she can’t afford to live near where she works. She’s not stupid. She’s not blaming that on black people. She’s blaming that on you.

      I found this sort of talk in the article insufferable. Wringing his hands about how problematic all the wealth and privilege of his class are, but then consoling his audience with affirmations that the victims of their policy choices are stupid and evil. Wonderful. Just wonderful.

      • Aapje says:

        I found this sort of talk in the article insufferable.

        Ditto. It’s not just that they revel in the benefits that (illegal) migrants bring to their (upper) class while getting angry at lower class people who don’t benefit and who object; it’s also the way in which the former group tends to claim moral superiority for burdening others with the externalities of their lifestyle.

      • maybe_slytherin says:

        Yeah, the personal narrative is for sure the weakest part of the article. Since I agreed with the article overall, my objection to them at the time was “these are boring and you’re kind of full of yourself”, but once you point it out, I can agree that it’s pretty insufferable.

    • JulieK says:

      The article does a good job describing the current inequality in the US, but a bad job of convincing me that the situation is the fault of a particular group, or can be improved meaningfully by any simple measures.

      Is it the fault of the elites that 70% of children born to parents with a high-school education or less live in a single-parent household? Is it really true that “lengthy commutes cause obesity, neck pain, stress, insomnia, loneliness, and divorce?”

      • Tarpitz says:

        Is it really true that “lengthy commutes cause obesity, neck pain, stress, insomnia, loneliness, and divorce?”

        I have no idea what research exists on the subject (and would have doubts about the reliability of any such research anyway), but based on my experience of long commutes and my observation of other people with the same, I would strongly expect most of those to be true.

        Obesity: early starts and late finishes leading to lack of sleep, lack of willpower to cook healthy food, tendency to buy unhealthy food from service stations etc. on commute, consumption of sugar to alleviate aforesaid tiredness.

        Stress: assuming the commute involves either crowded public transport or heavy urban traffic, I find it hard to imagine how anyone who has experienced either could doubt this.

        Divorce: 4 hours a day less contact time is apt to strain a relationship, and the tiredness and stress presumably don’t help.

        Loneliness: presumably strongly mediated by divorce, but those commuting hours and fatigued post-commute evenings are also not conducive to socializing in general.

        Neck pain and insomnia: I don’t know much about the causal mechanisms for either, but I wouldn’t be surprised.

        • Nancy Lebovitz says:

          Tarpitz:

          One more for health hazards of commuting– more exposure to air pollution. Last I heard, those microparticles are bad for people’s hearts.

      • Another Throw says:

        Is it the fault of the elites that 70% of children born to parents with a high-school education or less live in a single-parent household?

        An argument could be made that the elites instituted policies that benefited themselves without regard to the impact on those who are not like them. By reorganizing society so that certain values are no longer rewarded, or are actively punished, they completely fuck whole swaths of society whose principal traits align with what was devalued.

        If, for the sake of argument, we assume that there is a substantial population of women who do not want to go to Harvard and become fabulously wealthy drones at a Big Law firm–they just want to have kids, for example–a society that devalues maternity and is constantly screaming at them that they are complete losers if they don’t want to go to Harvard and become fabulously wealthy drones at a Big Law firm will reliably cause them to become unwed teenage mothers. They are losers anyway, why wait? And if such a society is actively hostile to unwed teenage motherhood, for example by gating everything behind a college degree, which is practically impossible to obtain as an unwed teenage mother, they become marginalized. And this marginalization is heritable.

        This does not tell us the proper way to organize society, but it illuminates a mechanism by which the elites could have caused the lack of education among single parents.

        • albatross11 says:

          It’s a good idea not to personify society or the elites too much here.

          My impression from living a middle-class life with a stay-at-home wife and three kids, in an expensive part of the country, is that a lot of the drive against women staying home and against large families is the competition for housing. If 80% of middle-class families have two incomes, and we only have one, then we’re going to be outbid for nice houses in nice neighborhoods with nice schools. There are various ways we can try to get around this–buy more house than we can afford and hope prices keep going up so we can make money on the equity, move to a distant suburb where we can buy more house and then I have a two-hour commute every day, get money from our parents or someone to spend on a huge house, etc.

          It’s not that anyone here said “Let’s mandate that women shall work.”

          • Another Throw says:

            Not having lived through it, I have always thought of the sexual revolution as an upper/upper-middle class thing that rapidly broke the lower class. There were a lot of factors involved. There were a lot of people doing different things that were trying to solve problems. Nobody went around trying to mandate women work or anything. Nevertheless, in the end it resulted in a lot of unintended consequences.

            The financial squeeze on the middle class that strongly disincentives stay-at-home parenting is a step or two further down the consequence chain than the rapid dissolution of marriage among the poor. And the dissolution of marriage was what JulieK was remarking upon.

            But I’ve never studied the subject, so I am open to insight.

          • dndnrsn says:

            First, women working outside the home varied heavily by class, at least before a certain point. Poor women were far more likely to work outside the home, or bring work in (piece work, taking care of other people’s kids), than better-off women were.

            Second, I think you’re linking “women working” and “sexual revolution” rather too strongly. Better contraception – which made women having careers, as opposed to simply “working”, far more possible – was part of the sexual revolution, but it wasn’t the thing entirely.

          • SamChevre says:

            I would second this, but I blame the elites rather more than albatross11. I blame anti-discrimination law: by allowing the 9.9% to buy their way out of dysfunction, but forbidding the 20%-90% from discriminating their way out, they reliably and predictably made it so that the only way to require two parents as a norm was to require two incomes as a norm.

          • albatross11 says:

            SamChevre:

            It seems like there’s a bigger story there. There’s a certain fraction of the population that’s dysfunctional in ways that cause trouble for the neighbors. The fraction is larger for blacks than for whites, and for whites than for Asians.

            I think we went through several revolutions at the same time:

            a. A lot of the legal and social mechanisms of control that kept a lid on certain kinds of dysfunction were weakened or abolished. (Think: forcible institutionalization of crazy/seriously intellectually disabled people, vagrancy laws, strong social sanctions surrounding unwed parenthood, weakening cultural direction toward middle-class values, decline of religion, etc.)

            b. An ideology swept through the elites and even the top 10%-ers that opposed expressing any judgment in public about “alternative lifestyles.” Of course, they (we) still had those judgments–we still knew what trash behavior was, but we largely stopped being comfortable saying so in public. And this was especially true in elite media culture.

            c. Racial integration eliminated one easy way to exclude groups that were more often dysfunctional. (But remember that it also eliminated a common way of keeping out more-functional people–there were restrictive covenants keeping Jews out of neighborhoods, for example. Jews are probably much less likely to be dysfunctional than Gentiles, overall.) This was part of a bigger trend, though….

            d. The general trend toward increasing centralized control and centralized national culture made it increasingly harder to maintain distinct communities with their own values. Which is important if you don’t want the neighbors’ trash behavior or the TV’s trash values to influence your children.

            e. For a time, it seemed very difficult to keep any kind of a lid on crime in much of the US, particularly in large cities. That gave a big incentive to anyone with the means to get themselves out of places with a lot of crime–which in practice were places full of dysfunctional people and were often mostly black. I think something similar happened with schools.

            I think all those things together made it harder for a normal person to raise their kids in a functional environment. But there’s a market for places that let you do that. And while you can’t discriminate on race, or even on trash behavior, you *can* discriminate on income, and dysfunction is negatively correlated with income. So you end up with Le Samo. The well-off live in nice suburbs with great schools and pleasant parks, and the poor live down the road somewhere because they can’t afford rent in Le Samo.

            One side effect is that the elite are walled off from many of the practical consequences of their decisions. It’s not that they think “Now, we’ll screw over the working class by wrecking the local public schools and making the streets and public parks unsafe after dark!” It’s rather that they live in places where the public schools are excellent (because the students are all above average) and nobody needs to lock their doors, so they don’t notice that this is a side effect of policies they’re supporting.

            The other side of that is that the elites mostly don’t know anyone who, say, wishes he could get a construction job like his dad had, but those are mostly taken by Salvadorans who will work for less. Or anyone who found passing basic high school classes really challenging. Or who finds filling out a dense five-page form nearly impossible, instead of a time-consuming pain in the ass.

            As best I can tell, most elites’ expressed policy preferences and political causes have very little to do with policy, and a lot to do with intra-elite status games. My model for a huge amount of societal decision-making is that it’s like a wrestling match in the cockpit of an airplane. Sometimes, the controls get jostled by accident or by design, but the effect on the passengers isn’t really the point–the point is winning the wrestling match. A huge amount of elite/powerful virtue signaling with destructive effects in the world fits this model, IMO.

          • johan_larson says:

            It seems like there’s a bigger story there. ….

            Best post in weeks. Scott, this one deserves some sort of highlighting.

    • maybe_slytherin says:

      I know I’m further left than many commenters on this site, but I must admit I’m pretty surprised how strongly people seem to support inequality.

      For me, the basic utilitarian argument is:

      1. Poor people receive much greater marginal utility per dollar than rich people.
      2. Therefore, total utility is higher if a rich person’s wealth is split with even one poor person. Since in practice one rich person’s wealth can be split with many poor people, the utility gains are very large.
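
      To make 1 and 2 concrete, here is a toy version assuming both people share one log utility function (that shared-utility assumption is doing all the work, and it is exactly what several replies below contest):

          # Transfer from rich to poor under identical log utility: a toy sketch.
          # Identical log utility across people is an assumption, not a measured fact.
          from math import log

          rich, poor = 1_000_000, 10_000
          transfer = 100_000

          before = log(rich) + log(poor)
          after = log(rich - transfer) + log(poor + transfer)
          print(after - before)  # ~+2.3: total utility rises with the transfer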

      This isn’t to say that the correct level of inequality is always zero — that may be the case from arguments about utility per dollar, but there are a bunch of reasons to think this isn’t the best society overall. In particular, capitalism is good at getting people to work and invest and gives us nice things. But in general, I’m very skeptical of trading off shiny gizmos against the utility of entire swaths of the population.

      All of this is quite apart from arguments about mobility.

      • J Mann says:

        Generally, I think the main concern among utilitarians is that redistribution reduces total output (especially compounded growth) through incentives and deadweight effects, so that over time you have less total stuff to redistribute, leaving everyone poorer.

        The moral argument is that wealth is somewhat due to individual choices. My neighbors almost universally drive nicer cars than I do, take more vacations, update their wardrobes more frequently, etc. I suspect that I have more savings than they do. In some sense, it makes sense to take more money from me relative to them, charge my kids more for college, etc., because I have more money, but in another sense, it’s a little unfair. (And this feeds into the effect above – if I know that savers get taxed more than spenders, it increases my incentive to buy those cars and take those vacations, which is the second choice for my utility function without the redistribution, and also reduces the amount of capital invested in growth).

      • quanta413 says:

        Speaking as someone not on the left, here’s why I find your argument unconvincing:

        (A) I don’t view the purpose of money or government as being to maximize utility. Instead I view money and government as tools which people (who often disagree with each other) use to incentivize each other to do things. It often takes immense concentrations of resources under relatively centralized control to get some things done. Whether a concentration is largely a single person (Jeff Bezos owning Amazon) or the government is less relevant than whether a concentration of resources is very useful. Noticeably, Jeff Bezos’s giant stack of value in Amazon is directly caused by trading with people much, much poorer than him. Similarly for Wal-Mart. Basically, other people made Amazon and Wal-Mart very, very big presumably because they benefitted from trading with Amazon and Wal-Mart. This simultaneously led to immense concentrations of wealth. This looks roughly like a reasonable incentive system to me. It’s not hard to find examples of the opposite of course, where the concentration of wealth is influenced more by rent-seeking. For example: universities, homeowners in cities with strong building restrictions, government contracts placed in a particular district to get an important congressman on board. But I don’t view inequality as a problem in and of itself, but rather the means by which it is achieved may be wrong.

        (B) You can’t aggregate preferences (utilities) across people in a meaningful way without deciding on a cardinal scale. Worst-case, it’s mathematically impossible (see Arrow’s theorem). Best-case, deciding on a cardinal scale makes everything go straight to hell. You run into utility monsters, problems with deciding how to compare across people, and a basically infinite number of free parameters that could be adjusted to reach desired conclusions. Another issue is that there is a massive nonlocality problem in deciding between actions unless you make a conscious decision to choose a utilitarian framework which (rather unusually for utilitarianism) tries to keep decisions from depending on things far away in any causal chain related to your actions. Utilitarianism is a useful tool for some cases, especially if it can be linked to a direct monetary cost which is a socially agreed-upon cardinal scale (like externalities of pollution), but I’m suspicious of it as a general framework.
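        To make the ordinal worst case concrete, here’s a toy demonstration (my own sketch, with made-up ballots, not anything canonical) of the Condorcet cycle that motivates Arrow’s theorem: three voters supply perfectly ordinary rankings, yet pairwise majority preference comes out cyclic, so there is no coherent aggregate ranking to maximize.

            # Toy Condorcet cycle: pairwise majority preference over three
            # options can be cyclic, so no aggregate ranking exists.
            from itertools import combinations

            # Each voter ranks the options best-to-worst (made-up ballots).
            ballots = [
                ["A", "B", "C"],
                ["B", "C", "A"],
                ["C", "A", "B"],
            ]

            def majority_prefers(x, y):
                """True if a strict majority of ballots ranks x above y."""
                wins = sum(1 for b in ballots if b.index(x) < b.index(y))
                return wins > len(ballots) / 2

            for x, y in combinations("ABC", 2):
                if majority_prefers(x, y):
                    print(f"majority prefers {x} over {y}")
                elif majority_prefers(y, x):
                    print(f"majority prefers {y} over {x}")

            # Prints A > B, B > C, and C > A: a cycle.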

      • AnarchyDice says:

        People in general may receive diminishing returns of utility for every dollar earned (I would generally accept that), but that principle tells us nothing about how utility returns vary between people, nor how quickly the returns diminish.

        Person P1 may get U1 from the first 10K, but only X*U1 for the second 10K.
        Person P2 may get U2 from the first 10K, but only Y*U2 for the second 10K.
        Person P3 may get U3 from the first 10K, Z*U3 from the second 10K, and Q*U3 from the third.

        There is no way, using the principle of marginal utility, to determine the relationship among U1, U2, and U3. That means we have many variables to consider if we want to take money from one person to give to another: we have to figure out each person’s rate of diminishing returns and their relative utilities.
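        A quick sketch of the point in code (a toy model of mine; the log-shaped curves and the per-person scale factors are arbitrary assumptions, exactly the free parameters the principle leaves undetermined): the very same transfer looks utility-increasing or utility-decreasing depending on the relative scales we assume.

            import math

            def utility(income, scale):
                """Toy utility curve: diminishing returns, person-specific scale."""
                return scale * math.log(income)

            def total_utility_change(rich, poor, transfer, rich_scale, poor_scale):
                """Change in summed utility from moving `transfer` dollars rich -> poor."""
                before = utility(rich, rich_scale) + utility(poor, poor_scale)
                after = (utility(rich - transfer, rich_scale)
                         + utility(poor + transfer, poor_scale))
                return after - before

            # Same incomes, same transfer; only the assumed scale factors differ.
            print(total_utility_change(200_000, 20_000, 10_000, 1.0, 1.0))   # positive
            print(total_utility_change(200_000, 20_000, 10_000, 12.0, 1.0))  # negative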

        • To make the argument a little more precise:

          If you assume that everyone has the same utility function for income but different abilities to earn income, the rich have a lower marginal utility of income than the poor.

          If you reverse the assumptions and assume that everyone has the same ability to earn income but different people have different utility functions for income, the conclusion reverses.

          In the real world people differ in both, leaving the net effect theoretically indefinite, but I think most people believe the former case is closer to true than the latter.

  21. John Schilling says:

    Your weekly update on the Total Goat Rodeo that is the US/North Korea summit.

    It’s beginning to look like this might actually happen. Officially, Trump cancelled it in a huff and KJU said “fine, be that way”, but both finished their dismissive letters with “…so I just met you, and we’re both crazy, so here’s my number, and call me maybe”. OK, maybe not those exact words, but about the same level of drama. And ever since they’ve been pretending not to talk to one another.

    Meanwhile, South Korea’s President Moon Jae-In has been working overtime to put the summit back on track, because he’s still worried the alternative is his country gets caught in the crossfire of a Trump-Kim nuclear war. So one benefit of this whole thing, is more dialogue and better relations between North and South Korea than we’ve seen in at least twenty years. However, MJI does seem to be playing the same old game of shading each side’s negotiating position when talking to the other, which has the potential to backfire badly when they start talking directly.

    Which, a level or two down from the top, they clearly are. A team of fairly high-level US officials, including a former US ambassador to South Korea (there is unfortunately no current US ambassador to either Korea), has visited Pyongyang to talk to North Korea’s foreign minister. Meanwhile, top North Korean officials including Kim Yong Chol (vice-chairman of the Party’s central committee) are en route to New York to talk to Secretary of State Mike Pompeo. Trump and KJU may not be talking publicly, but these visits aren’t happening without their full support.

    There are also high-level advance teams from the US and North Korea in Singapore, hanging out at the hotel where the summit was originally scheduled and arranging for hotel management to impose a strict no-journalist policy. So there’s a distinct possibility of Trump and Kim meeting in Singapore on or about the 12th.

    There’s still the mismatch of expectations, the probably irreconcilable differences between the two sides. The Trump administration, like every other US administration, is insisting on “CVID”, Complete Verifiable Irreversible Disarmament, applied to North Korea alone. North Korea’s position, at least as assessed by the CIA’s elite “No Shit, Sherlock” department is that they might be willing to open a McDonald’s in Pyongyang but no way are they giving up their nuclear weapons while we keep ours. MJI, for his part, insists on using “denuclearize” while carefully not being pinned down on a definition, so both sides can hear what they want.

    The most plausible basis for an agreement is something front-loaded with a testing moratorium and lots of flowery language (both of which we’ve already got) and with the promise of actual verifiable disarmament measures pushed off until sometime after Trump’s tenure ends. Other alternatives are a summit that satisfies no one, such that they use lots of flowery language to conceal the lack of achievement and then try to forget the whole thing, or a summit that leaves one party so dissatisfied that they want to punish the other for leading them on and wasting their time with false promises.

    But Pompeo and KYC are serious people who know the score, and they aren’t going to let a summit happen if there’s a serious chance their boss will either negotiate away the homeland for a handful of magic beans, or start a nuclear war in a fit of pique. Almost certainly they will either agree to the boringly useless deal and arrange for their bosses to ratify it, or they won’t and the summit won’t happen.

    Long shot outcomes are still a possibility, of course. And one worrisome dark horse in all of this is national security advisor John Bolton. Who is either under orders to keep his mouth shut since the last time he derailed the summit, or is Up To Something. It’s an absolute certainty that he wants this summit to fail; the question is whether he still has the influence to make that happen and, if so, can he make it fail in a really big way in Singapore rather than fizzle out in DC and the Twittersphere.

    Provided Bolton doesn’t manage to start a nuclear war, best bet is we get greatly improved DPRK/ROK relations, greatly improved DPRK/PRC relations, mildly improved US/DPRK relations, a smugly satisfied Donald J. Trump awaiting his Nobel Prize, and no actual disarmament but de facto constraints on the growth of North Korea’s arsenal.

    • Randy M says:

      Well, okay. Not too bad.

      To think at one point I kind of liked John Bolton. I guess when you think the pendulum (of international appeasement, or something) is too far in one direction, someone hammering it the opposite way sounds good, but could also be evidence that they want to dismantle the clock, take the pendulum out back, and dynamite it, pour encourager les autres.

    • Mark V Anderson says:

      This is definitely a fascinating process, and almost for sure will come to nothing. But maybe we’ll get lucky. I really think that Kim is even more Trumpian than Trump, and if he spontaneously decides to give up his nukes, or takes some lesser step like stopping testing, he would lose face if he backed out, so we get something. Of course bad results are also possible for the same reason.

      I am curious about Bolton. It seems everyone in the world but him wants some sort of treaty so the Norks don’t have nukes. What is Bolton’s motivation? I can’t think of a rational end result that would be better than this.

      • James C says:

        Can’t say I know enough about the guy to make any calls, but short, victorious wars are always appealing to certain types of military planners.

        • John Schilling says:

          There’s also the Curtis LeMay argument that, if you’re stronger than your adversary now and war is inevitable, you either get it over with now or you impose strictly verified disarmament as a condition for not getting it over with now. But that depends on the level of confidence in “war is inevitable”, and LeMay didn’t have the benefit of sixty-five years of quasi-peaceful coexistence with his adversary.

      • sfoil says:

        The most straightforward motivation is what John Schilling said: if the DPRK’s capability is going to increase in the future, then it is better to attack them sooner rather than later, since they won’t actually give up the nukes peacefully.

        The Chessmaster explanation is that Bolton is (knowingly/willingly or not, but he’s not stupid) raising the cost of refusing to at least negotiate by increasing the apparent risk of failure to include open war. Which, even if the DPRK high command thinks they can win/survive, is going to hurt a lot.

    • J Mann says:

      What are people’s takes on alternatives for a positive deal in the medium term? What do the US and NK have that the other wants? IMHO:

      1) The best case would be to buy out the North Korean aristocracy somehow. They give their country a path to democracy and unification and they get a billion Euros, immunity from prosecution, and the right to party with Kim and Kanye.

      2) Second best would be we somehow guarantee no military option against North Korea ever, and they give up their nukes. (Probably). But I don’t see that happening, even if China makes an ironclad guarantee they will go to war to protect NK.

      3) Third best is probably what John Schilling suggests – NK substantially pauses its nuclear program and cuts most of its connections to world terror, and we continue lifting sanctions or even paying ransom to NK to stay good.

      • John Schilling says:

        Immunity from prosecution is specifically prohibited under international law, per the Pinochet precedent. And once the Kim dynasty is out of power, there’s no percentage in anyone letting any International Pariah Points stick to their shiny public image by e.g. defying a lawful ICC warrant for an Evil Dictator whose atrocities would by that point have been fully Youtubeized.

        Case 1 helped ease some rather nasty people away from the levers of power back in the 20th century, but it no longer works here in the 21st.

        • J Mann says:

          Well, that’s obviously dumb. There should be some mechanism for someone to grant plea deals. (Presumably a UN body of some kind).

          • Lambert says:

            It’s negotiation with terrorists writ large.

            People would just take over a country then start negotiating to hand it back.

            We must keep the sword of Damocles hanging as low as possible over their heads:
            Once you seize control, and especially if you commit war crimes or crimes against humanity, you are forever fair game. There are always people out there who want you dead, and might just happen to have the means to make that happen.

          • John Schilling says:

            There are always people out there who want you dead, and might just happen to have the means to make that happen.

            And thanks to A.Q. Khan, you can have the means to make them dead too. Then you can either all kill each other, or you can go about not killing each other one day at a time.

            People will not be dissuaded from becoming Evil Dictators, because nobody really plans to become an Evil Dictator in the first place. The first generation plans to become liberators defeating the Evil Dictator, and never notice all the lines they cross until far too late. Subsequent generations, if any, will be killed by their ambitious relatives if they don’t take up the family trade with a vengeance.

            Now, they’re all of them riding the tiger until the day they die, and if the tiger doesn’t do the job we’ll kill them if they dismount. But it’s safer for them if they can arrange for it to be a fire-breathing nuclear tiger, because that way at least maybe they can keep us at a safe distance.

            Have fun with that sword, Damocles.

      • sfoil says:

        The main thing that the DPRK has that the US “wants” is probably the ability to put a thumb in the eye of the Chinese in various ways.

        In the longer term, the country is also ridiculously underdeveloped. Getting a slice of a hypothetical DPRK economic modernization would make a lot of people rich. Some of those people would be North Koreans, but not all of them; depending on how it plays out, more or fewer of them could be Americans or Chinese.

        They aren’t going to give up their nukes. I guess it’s plausible they could be brought into the nonproliferation fold through a combination of de facto recognition and diplomatic ignorance of certain awkward facts; not like it hasn’t happened before.

        Kim Il Sung’s stock in trade was playing the USSR and China off each other. KJU probably intends to do the same thing. He’s also probably serious about economic improvements; aside from stuff already mentioned I think it’s conceivable that a reliable nuclear deterrent would allow the DPRK to cut its overall military budget.

    • Lambert says:

      I’m no expert, but my technical intuitions are thus:
      More nuke tests don’t translate into increased diplomatic clout. Existing hundred kt designs are enough to threaten other countries. OTOH, improved delivery systems might have a decent marginal utility.

      CVID sounds impossible to do meaningfully, even if NK pretends to fully co-operate.
      Even if all the stockpiles and facilities are burned to the ground and tilled into the earth, all you need is one hard drive with all the research data and a way to track down all the personnel again, and you’re halfway back to being a nuclear power.

    • Conrad Honcho says:

      What do you think about the tariffs on imports of Chinese electronics the Trump admin just announced?

    • Iain says:

      Provided Bolton doesn’t manage to start a nuclear war, best bet is we get greatly improved DPRK/ROK relations, greatly improved DPRK/PRC relations, mildly improved US/DPRK relations, a smugly satisfied Donald J. Trump awaiting his Nobel Prize, and no actual disarmament but de facto constraints on the growth of North Korea’s arsenal.

      Curious about that last bit. Do you just mean a moratorium on testing? Would a moratorium on testing even matter at this point?

      My understanding is that Kim values his nuclear program very highly, both to deter invasion and as a source of prestige. Is he really going to endanger that program? I don’t understand what leverage we think we can use to extract any real concessions.

      • John Schilling says:

        Kim values the existence of a potent North Korean arsenal very highly; the phrase “Treasured Sword” has been used frequently. As I’ve mentioned, their translators use old dictionaries so they haven’t yet figured out “cold, dead hands” or “Molṑn Labé”, but that’s the sentiment.

        It is not clear that they believe the continued production of nuclear missiles is necessary to this end. Kim’s strategy from the first day of his ascension to power has been explicitly: 1. secure power at home, 2. develop nuclear deterrent, 3. secure peace with US/ROK, 4. shift to economic development. It is possible that they believe or can be persuaded that step 2 is now complete.

        If so, I’d expect a moratorium on both testing and on visible production activities. What happens in deep underground caves, stays in deep underground caves. They may not feel it necessary to do much of anything in those caves, or they may slowly round out their production to some predetermined adequate force level. They may dismantle a few weapons for show, or they may drive them into a building labeled “dismantlement facility” that is actually a tunnel entrance. But there are limits to what they can do underground, particularly when it comes to large-scale manufacturing. Or operationally realistic training.

        If we insist on a large fraction of North Korea’s current nuclear arsenal being turned to scrap while we watch, that will be a dealbreaker.

        • Iain says:

          My point is that Kim’s decision to cease testing is almost certainly dependent on whether or not he has accomplished step 2. If he thinks that his current arsenal is sufficient, then ceasing production is hardly a sacrifice. (Indeed, he might have done it anyway; I’m pretty sure nukes aren’t cheap.) If he thinks he needs more, then there’s no way he’s going to make any real sacrifices.

          In either case, I don’t see how anything that the US can offer would affect Kim’s calculus here.

          I guess maybe that’s fine? A slightly more open North Korea with a small arsenal but no ambitions to expand it seems like the best case scenario from this point. I suppose it’s on Trump / Bolton not to screw it up.

          • John Schilling says:

            Whether or not Kim thinks he has accomplished Step 2, depends very much on where he stands with Step 3. To a first and very rough approximation, the strategy is to build more and better nuclear weapons until the US says “OK, sanctions and threats aren’t working, let’s try a peace treaty instead”, then stop and hold. If the US credibly offers to implement a treaty iff Kim stops building missiles, it is little sacrifice to stop building missiles for a year or two and see if the treaty follows.

            Nuclear missiles are expensive, North Korea is a poor country, and the Kim Dynasty has very much wanted a peace treaty for three generations. That’s a fair bit of leverage if we’re willing to settle for North Korea not building any more nuclear missiles.

          • Iain says:

            If the US credibly offers to implement a treaty iff Kim stops building missiles, it is little sacrifice to stop building missiles for a year or two and see if the treaty follows.

            You’ve talked before about how hard it is for the US to make a credible peace offer. I can’t imagine that Trump pulling out of the Iran deal has made it any easier. I guess we trade short-term peace for a short-term pause in building nukes, and then hope that the short-term continues indefinitely?

          • John Schilling says:

            Yes, that’s probably the best we can realistically hope for. Good night, Westley. Good work. Sleep well. I’ll most likely kill you in the morning.

    • Anonymous Bosch says:

      But Pompeo and KYC are serious people who know the score, and they aren’t going to let a summit happen if there’s a serious chance their boss will either negotiate away the homeland for a handful of magic beans, or start a nuclear war in a fit of pique.

      I have a little less confidence in the magic beans scenario being out of the realm of possibility. Trump wants “a deal” and we know he isn’t particularly moved by evidence or specifics. Watching B-roll footage of Dianne Feinstein (nobody’s idea of a smooth operator) manipulate him into agreeing with shit like a clean DACA renewal or an assault weapons ban doesn’t really inspire confidence in his negotiating skills. In those instances he of course got immediately put back on script once Congressional Republicans got him alone, but this is an executive agreement, not a law; he’s accountable to no one but his base here. And while his advisors will be in his ear, the recurring story of the Trump administration is “Trump receives advice, Trump ignores advice, Trump advisors furiously polish turd because they like having jobs.” Mike Pompeo selling a worse-than-Iran deal as a Westphalia-scale diplomatic triumph wouldn’t be any more jarring than Mick Mulvaney saying that deficit spending is actually a great way to stimulate growth, or Larry Kudlow insisting tariffs are just a 4D chess negotiating tactic.

      Provided Bolton doesn’t manage to start a nuclear war, best bet is we get greatly improved DPRK/ROK relations, greatly improved DPRK/PRC relations, mildly improved US/DPRK relations, a smugly satisfied Donald J. Trump awaiting his Nobel Prize, and no actual disarmament but de facto constraints on the growth of North Korea’s arsenal.

      The C, I, and D are obviously pipe dreams for now, but the V seems kind of important even for a preliminary agreement that imposes constraints short of denuclearization. North Korea isn’t exactly to be taken at their word for this sort of thing. Is there any indication they’re willing to allow any inspections, by anyone? I don’t see how you could have de facto constraints on their arsenal without them.

    • BBA says:

      My usual snark about this summit is that the President will give away the whole peninsula in exchange for rebranding the Ryugyong Hotel as the Trump International Hotel Pyongyang. I really hope this isn’t an accurate prediction.

      • Anonymous Bosch says:

        There’s probably a pretty realistic steelman for that involving China straight-up bribing him (through the Trump Organization) to not start a war while he’s in office.

      • Nornagest says:

        Nah, that doesn’t have nearly enough mirrors or fake gold leaf to be a Trump property.

    • Dan L says:

      In the eventuality that the world has to deal with a nuclear North Korea indefinitely, what’s your take on the proliferation risk? I’m concerned about any outcome that incentivizes other countries to obtain nuclear weaponry for diplomatic leverage, especially if North Korea is able (and can be convinced) to assist them in doing so. Similar logic applies to their ICBM program.

    • Garrett says:

      How does the US sentiment about the conditions in Korea apply to our ability to negotiate? For example, I can see the US public being sold on a nuclear weapons deal with Iran (whether the one we got is saleable is a different question) because the internal conditions in Iran aren’t terrible. That is, though most people wouldn’t want to live in Iran, people aren’t that worried about the lives of ordinary Iranians.

      This is in contrast to North Korea, which has been described by people across the political spectrum as a “concentration camp”. This is the kind of situation where Americans see themselves as bringing freedom by kicking over the can/hornets’ nest.

      So whereas the US can establish relatively normalized ties with previous geopolitical foes such as China, Japan, Vietnam, etc., I can’t see how the North Korean regime could remain in power without nuclear weapons and not fear being knocked over in the future. Any idea on how this might work?

    • outis says:

      Isn’t peace with North Korea bad for the US? They’d have to pull back a bunch of troops from South Korea, weakening their power projection in Asia.

      • John Schilling says:

        Who do you imagine we are projecting power against with those troops?

        Okinawa and Guam are much better bases for any purpose other than waging war against North Korea, South Korea, Russia, or China. And a 28,000-man invasion of Russia or China isn’t a credible strategy, distraction, threat, or even bluff.

  22. helloo says:

    What would happen if a country demanded that all sales be done by separate local companies from that country?
    There may be some exceptions, such as really small (online) companies or one-time events and such, but basically make it so that having an international company operate there is about the same as starting up a new company in that country.

    As in, McDonald’s would need not just a local branch, but a subsidiary company that is technically independent and needs to handle all the data and sales to the people/businesses of that country. That company does not have to do all the work in the country and could “contract” the parent company for, say, accounting/IT needs, but it’ll be as if it outsourced that job to any other 3rd-party company. All of its costs/profits/etc. would need to be applied to the local company by the standards of the country’s laws and taxes, even things like IP and licensing.

    Then expand that to everyone: what if all countries had this rule and “international” companies were for the most part outlawed? For the US, what if even national companies were outlawed and companies could only operate statewide?

    • The Nybbler says:

      I believe multinationals already operate like this. Banks used to work this way in the US, and it just made things less efficient for no real gain.

      • Mark Atwood says:

        Multinationals will often create a new temporary holding company as part of the process of acquiring another existing company. When they acquire an existing company in another country, they will create two holding companies, one locally, and one in the target’s home country. If the target itself is multinational, the acquiring company will create a new temporary holding company in each of the target’s countries. It’s a hell of a dance. It’s got something to do with tax calculations, international tax law, clarity of accounting, and so forth. Each of the holding companies then exists for Nx length of time and then is rolled up, where the value of each specific N is calculated by a small army of accountants and tax lawyers.

        Yes, there is an API, tho it usually spits out faxes onto the desks of registered agents, instead of directly interfacing to each local government’s registry of corporations.

      • helloo says:

        I know that it is sort of like what is already being done, but I was trying to convey a much tighter version of it, one that would make transferring funds and goods between countries work the same whether it happens inside a single company or between unrelated companies.

        I suspect that if it were seriously done anywhere, plenty of loopholes and tricks would be used to skirt around it, but it’s meant as a thought experiment about creating a more “closed system”, and how that would impact things like exports/imports/price discrimination/countering globalism and such.

    • Mark V Anderson says:

      I think a lot of countries have rules kind of like this. I have never been directly involved in setting up foreign subsidiaries, but I have often been on the fringes of discussions by companies trying to set up subsidiaries in new countries. Often countries have rules about local ownership of these subsidiaries, rules about currency conversion so funds are difficult to remove from the country, and various other rules about commerce, of which I cannot remember the details. The purported intent of these laws is to avoid exploitation by multi-nationals, I believe, but the real effect is to transfer profits to local law firms that specialize in getting around these rules in the easiest and cheapest way possible. So law firms in these countries do a bit better, and the rest of commerce in the country does a bit worse because of inefficiency.

      • helloo says:

        My thought regarding it was that it might get implemented not for the lawyers’ interest/benefit (they are always going to be the winners of any complex, restrictive law system), but rather as a sort of nationalism. Sort of the business version of eating locally.

        Also, by forcing companies to operate independently within countries, it does create extra work and reduce opportunities for leverage, but it could also make them much easier to control or regulate than multinational entities.

        It is not supposed to be a good thing either economically or “logically”, except possibly to curb the power of international companies.

    • Lambert says:

      A lot of defense sector stuff works like this.
      In order to sell to the military of country Foo, a multinational will set up a local subsidiary, staffed primarily by Foovian nationals, as an intermediary.

    • A Definite Beta Guy says:

      I’m not quite sure what you’re getting at. As other commenters have noted and you yourself have stated, this is already happening. It’s also happening within the US, for a variety of reasons: my last company was structured into technically 30 or 40 different companies, both for tax reasons and to make sure that if one of our commercial properties defaulted, a lender couldn’t go after OTHER properties.

      My current company also operates this way, except in an even MORE convoluted fashion: we are a production facility within a US company within an international holding group, but this particular production facility still operates within a different US company that also operates within an international holding group (because this production facility was divested from one international holding group to another international holding group).

      Also, even if you are organized as a single company, you still need to operate according to local laws. Like, McDonald’s can’t just set up shop in Oregon and only pay $7.25 just because that is the minimum wage they pay in their theoretical corporate HQ in Alabama.

  23. SolveIt says:

    What’s a convenient blogging platform these days? Only hard requirement is LaTeX support.

    • dick says:

      WordPress (which runs this site) seems to support it. I recently had to go hunting for a cheap-o managed wordpress host and found namecheap.com to be a reasonable option. If you meant “free and hosted by someone else” then I don’t know, maybe blogspot or tumblr?

      • David Speyer says:

        wordpress.com provides free (ad supported) hosting for wordpress blogs. Terry Tao runs his blog this way, as do at least half the mathblogs I know. I’m not sure whether to say that the Secret Blogging Seminar is still alive or not, but we ran it for years on wordpress hosting and found it very easy and convenient.

        If you get tired of handcoding LaTeX in the wordpress editor, you might like LaTeX2WP, which takes not-overly-complex LaTeX documents and converts them into material ready to copy-paste into WP. I found it made the writing and editing much nicer, but the LaTeX-inside-HTML it produced was painful to read or edit by hand, so if I wanted to make any edits I usually had to go back to my source LaTeX and then repaste the whole newly edited material.
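        For reference, on wordpress.com-hosted blogs the handcoding itself is just a shortcode wrapped around ordinary LaTeX (if I remember the syntax correctly; worth checking the current WordPress docs rather than taking my word for it):

            $latex \int_0^1 x^2 \, dx = \frac{1}{3}$

        which gets rendered as typeset math in the published post.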

  24. I am increasingly intrigued (and, I admit, irritated) by all the parenthetical gasping that has been going on here.

    As far as I can tell from searching SSC, Scott himself has never done it, but many commenters, including some of my favorites, do it frequently.

    A few examples from SSC comments:

    About the only thing it spoils about the earlier work is that, gasp, the two protagonists in what is obviously a love story do end up making a baby together.

    the very act of donating to charity (and the very premise of Utilitarianism in the first place) is rooted in, *gasp*, sentiment!

    I love it when Blue Tribe members get a tiny glimpse into what it is like to be Red Tribe…. or, gasp!, religious….

    I’ll do what I please, even if that includes (gasp!) thinking that equal protection, like freedom of speech, is a concept that’s larger than the details of how any one state might presume to enshrine it into the law

    Wasn’t there a huge outrage in Ireland about having to (gasp!) pay for water just recently?

    (and demonize them if there was *gasp* luck involved)

    were calling for his firing from a board position because he was a *gasp* Trump Supporter.

    But if – gasp! – you don’t read Twitter at all…

    the slightest possibilty that someone might, gasp!, present an argument

    computers or the internet (or *gasp* AI) could totally change society!

    why parents don’t seem to be unhappy even though – gasp! – having children is financially harder than not having children

    I want some outside opinion on a **GASP!** Reddit thread.

    Now that I think about it, it would be nice if you could easily link other posts (gasp) 4chan-style.

    My guess is that Foucault (gasp!) was basically right as to why this is the case…

    You can even *gasp* believe that you should follow the immigration laws

    Could it be– gasp!– that those in the population itself get their warm fuzzy feeling from other qualities in life?

    differences between men and women are caused by women and men being, gasp, different.

    Maybe – gasp! – there’s MORE to politics than liberal vs. conservative!

    Perhaps with a little hand-wringing about actually – *gasp* – talking to the Hated Enemy!

    Planning and organizing the labour is recognized by modern marxists and even (gasp!) anarchists.

    an English department academic moaning about the horrors of having to teach and – gasp! – write papers to be published!

    Reds have historically sponsored both adultery (gasp!) and spousal abuse …

    You can be guilty of the worst things, but as long as you’re not, gasp, a Red Triber, the Blues will find a way to defend you and support you.

    There’s a –gasp– social justice argument for support of the arts.

    Really for it to have a chance it seems like it would have to be mandated by *gasp* the government.

    Is it perhaps because… they get *gasp* tired when they try to run at that fast of a pace?

    the homophobes with such opinions who have wormed their way into jobs above their station – maybe even *gasp* working in the civil service itself!

    Else you’re being – gasp! – inconsistent.

    The implication is that some other person or group (with different views) would be so shocked at your beyond-the-pale contention that they would gasp in horror.

    But an actual, involuntary gasp of horror is, in my experience, a rare event, something we do when faced with a sudden disaster, such as a car crash or explosion, or perhaps a jump scare in a movie. I don’t think I’ve ever heard anyone, regardless of how intolerant, unironically gasp in horror at another’s heretical views.

    Sometimes the parenthetical gasp is used for good-natured comedic effect, satirizing the trope itself, and that’s a fine thing. No problem there.

    But in my admittedly priggish view, on serious topics, the parenthetical gasp comes across as a sneer toward the supposed close-mindedness of those who disagree with you. It is not a respectful way to argue. It is neither kind, nor true, nor necessary.

    • Randy M says:

      Is it perhaps because… they get *gasp* tired when they try to run at that fast of a pace?

      I like this one. Perhaps it’s not an example of a parenthetical gasp, but rather used to illustrate the point.

    • Mark V Anderson says:

      I certainly got tired of reading all the gasps in the quotes you had, but I hadn’t noticed it being so common. Interestingly, I did not recognize even one of the quotes.

      I never really thought about it before, but I think I agree with you in principle that using gasp is kind of smug and annoying. I will endeavor not to use it in the future. And I will be looking for them in future comments from those who didn’t see, or ignored, this post.

      • Interestingly, I did not recognize even one of the quotes.

        When I trimmed the list down to a manageable size, I deleted those that I thought were too identifiable.

        I think I agree with you in principle that using gasp is kind of smug and annoying. I will endeavor not to use it in the future. And I will be looking for them in future comments from those who didn’t see, or ignored, this post.

        Thank you — I appreciate that.

    • j1000000 says:

      Seconded! Here’s another stylistic tic I dislike among people debating culture war-y topics: start your sentence with “It’s almost as if…” in a sarcastic way and then state your belief. It’s gotten dull.

      Some of your comments use equivalents like “Could it be” or “Is it perhaps because.”

      • j1000000 says:

        (By “some of your comments” I mean “some of the comments you have selected as examples”)

      • achenx says:

        Yes, agreed. Though I have noticed myself getting way less tolerant of sarcasm in general. (Especially when people do that thing I see on reddit and similar places where they put “/s” at the end, which comes across as “hey, look at how clever I am for using sarcasm!”)

        To quote the esteemed doctor of philosophy Dr. Horrible: “Oh, sarcasm! That’s original!”

        • albatross11 says:

          The problem I see with sarcasm and other kinds of verbal cleverness in arguments/discussions is that they sometimes increase the difficulty of understanding what you’re saying, and often make it a bit slippery for someone arguing against you to know exactly what you’re saying.

        • Protagoras says:

          Since his degree in horribleness is apparently a PhD, I suppose you could technically say he was a doctor of philosophy in horribleness, but the usual way of shortening that is to drop the philosophy part, not drop the horribleness part. A “doctor of philosophy” usually refers to a doctor of philosophy in philosophy.

    • FXBDM says:

      For some reason this made me think of the moaning epidemic in Catch-22.

    • onyomi says:

      Somewhat tangential, but I think it’s interesting when seemingly involuntary physical reactions are actually at least partially culturally mediated.

      Example: in my entire life, I’ve never seen anyone faint or collapse from shock. I seem to recall reading somewhere that at one point (Victorian England?) it was pretty common, and thereafter became a convention in books, movies, etc. My suspicion is that it was always sort of a theatrical reaction, but possibly not consciously intentional. Something like: “you grow up knowing that an acceptable way for ladies to express strong shock is by collapsing, but not by e.g. shouting or stamping your feet violently or… therefore you almost involuntarily express shock that way, rather than in any of the many other possible ways human beings physically express shock.”

      Related, I think there are e.g. cries of pain and laughter that people produce more in the presence of an audience, seeming to imply that the brain knows that many seemingly involuntary reactions exist primarily for social signalling purposes and are therefore unnecessary when alone.

      • seemingly involuntary physical reactions are actually at least partially culturally mediated.

        Oh, certainly. People will often swear when startled by a sudden accident, but specifically what comes out of their mouths is based on hearing other people swear.

        Swear-words, I believe, are contagious: hearing an outraged or startled person yell out some particular curse makes it more likely that you will react with the same words later. You don’t have time to sit back and consider your word choice: it just comes out.

        When I was in grad school, I spent time in the presence of another student who swore only in G-rated ways, like “Oh-my-heavenly-stars!” Somewhat to my embarrassment, for a while I found that I had picked up this same habit.

      • Tarpitz says:

        Back when I ran a horror theatre company, we had quite a lot of fainters at some of our shows. It was definitely triggered by things happening on-stage, but definitely mediated by other factors, including heat and whether they were sitting or standing (our promenade show had far more than any other).

      • Nancy Lebovitz says:

        I fainted once as an emotional reaction. I’d seen someone kick a kitten and it was bleeding slightly. The only other factor at my end was that I was low on sleep.

        As far as I can tell, it wasn’t performative.

    • carvenvisage says:

      What’s *gasp* wrong with implying that something which one thinks should be obvious is beset with opposition?

      Are you familiar with the term offended?

      As in “I wasn’t sure whether to be offended”, as in (at least some) people consciously consider whether to take umbrage, and at least some are far from ashamed to ‘confess’ this oh-so-unhuman and unfamiliar practice in public.

      _

      I mean, it’s obviously a ‘bravery debate’ sort of move-

      (here, I’m implying bravery debates aren’t a big deal. -Without even explicitly arguing for it! Just suggesting it. *gasp*. Just like a moment ago I was implying you’re insufficiently sceptical about human nature, and just then that you were precious, all of which would have passed by like water before a fish if I wasn’t explicitly pointing it out…),

      But there are thousands of bravery-debate-sort-of-moves, and people are always inventing new ones.

      If you ban one very particular way of implying your opponents are [negative quality], or simply wrong, of course they’ll just switch to another. The idea of calling out specific memes for people to voluntarily refrain from using strikes me as absurd, like trying to empty the sea with a bucket of water.

      It is not a respectful way to argue. It is neither kind, nor true, nor necessary.

      It may very well be true in individual cases. And it may be useful, which is most of what “necessary” can realistically mean here, where we blog-comment rather than perform triple bypasses.

      (just for the record, I originally wrote ‘heart surgery’, then I decided triple bypasses would sound a little better, with the not too inconvenient side effect of sort-of-implying I know the difference between a triple and double bypass and a bypass highway.

      -Which seems to me fairly typical, and really not even objectionable human behaviour. Just because it happens to (incredibly mildly-) flatter me doesn’t mean it’s a bad thing, or that I made the change to flatter myself. In fact the actual reason is that it’s livelier/more energetic. And people (well, IRL, where I get more feedback) generally prefer it when I put things in lively/energetic ways rather than in ways strictly 100% stripped of anything that might look like vanity.

      Deiseach got banned for one of my favorite posts here ever, so maybe other people lack this fine appreciation for artful bravery debates. But it seems to me that this is a mistake, that you can never get rid of bravery debates, only apply rational constraints to favour the side that is actually right. A world where bravery debates ultimately come down to who is more of a psycho or a douche is certainly undesirable. It’s not at all clear to me that Chestertonian norms of honorable bravery debates wouldn’t be a vast improvement on the ones we have.

      )

      Brackets over, back to the regularly scheduled post:

      Why not respond to a “*gasp*” on its own level? Can’t it be answered directly? Isn’t the implication usually clear?

      -Does the gasp not in fact usually have some sort of propositional meaning that is clear from context? I mean, sometimes that propositional meaning is something that would get you a ban if you said it outright, in which case I’m not suggesting any special immunity. But if you imply that something is obvious, yet still subject to fierce dispute and taking of umbrage, that can be a true or false charge, fair or unfair, appropriate or inappropriate. And I don’t see why implying this rather than saying it outright has much bearing on any of these.

      _

      -Imo, and I hope none of my illustrative ribbing is taken personally.

      • outis says:

        I could only skim over this, but I assume it was an ironic response, loaded as it is with annoying affectations.

        • carvenvisage says:

          Guys I think we should ban it when people use their laziness to condemn things without having to engage with them.

          When we get rid of that tactic (..or, well, kindly ask that people refrain from it), that’s when we’ll be left with nothing but perfectly rational discussion.

    • pontifex says:

      The “parenthetical gasp” seems like a perfectly cromulent English construct. I think the real problem is that it’s sarcasm, and sarcasm tends not to work very well on a bulletin board. It just relies on a shared mental context that may or may not be there.

  25. Edward Scizorhands says:

    Vaguely Culture War-ish: what specifically got ClarkHat booted off of Twitter? How long ago was it?

    I’m not looking for a defense nor an attack on him, just wondering what broke the camel’s back and when.

    • Aapje says:

      It seems to have been February. Breitbart has a story about it.

      • Edward Scizorhands says:

        Wow I was very out of the loop. Thanks.

      • Iain says:

        I believe this screenshot shows the actual tweet. (Here is another sample of his oeuvre, found while searching.)

        • outis says:

          I am honestly concerned about the fact that Americans love dogs more than people. That’s not at all specific to this ClarkHat fellow, it’s a disease of your entire culture.

          • meh says:

            Best disease ever.

          • Brad says:

            I’m American and agree with you. It’s bizarre.

          • quanta413 says:

            Loving dogs more than (most) people in a feelings-type sense is totally reasonable. Or maybe it has little to do with reason at all.

            Confusing this for how you should morally weigh dogs vs humans is a terrible mistake.

            Not saying you’re making the second mistake. I mean if someone actually considered moral action towards dog > human. (Assuming the human isn’t Hitler or something.)

          • Brad says:

            It depends on how you are using the word ‘love’. As a simplification let’s say there were only two: 1) “I love that first sip of coffee in the morning” and 2) “I love my daughter.”

            If the argument is that spending time with your dog fits under #1 and most other people don’t fit under #1 or #2, therefore you love your dog more than you love most people, then fine. But if your argument is that your dog fits under #2 and that’s reasonable, then I disagree.

          • quanta413 says:

            I’m saying the first is typical.

            Sticking your dog somewhere between #1 and #2 is still fine, as long as you don’t do things like shove a person in front of a trolley to save your dog.

            Your love for your dog should not be like your love for your children in the sense that your love for your children should reasonably include prioritizing them sometimes to the exclusion or occasionally even detriment of nearby humans. If you could save your daughter or your neighbor from a burning building, most people would consider it wrong to save your neighbor. The opposite goes for a dog compared to your neighbor.

            I leave out far away humans because everyone outside of barefoot saints at least sometimes prioritizes material comfort over far away humans as can be seen by the fact they have things like computers and coffee when they could be donating money to bednets and deworming in Sub-Saharan Africa (insert “most effective charity here” if that doesn’t work for you).

            Human love for material goods over other humans strikes me as a far more significant problem than that Americans like their dogs. And honestly, Americans don’t like dogs that much; many Americans will euthanize a dog if the dog is inconvenient. Dogs are fungible in a way that people mostly aren’t.

          • meh says:

            Is dog love really unique to Americans?

          • Aapje says:

            Perhaps it is a replacement for slavery 😛

          • quanta413 says:

            Dunno how much more Americans like dogs than Europeans do.

            But outside of a few large urban cores, Americans tend to have bigger houses and yards, with a lot more room for appropriately sized dogs of >= 18 kg (40 lbs) or so.

          • Nornagest says:

            Don’t confuse love with moral worth. Dogs are a lot more loveable than most people, at least if they’re well socialized and not very yappy — and I really do mean a personal, emotional, #2-ish attachment. But I’d still pull the lever if the proverbial trolley was heading towards someone I didn’t like very much and there was a dog I liked very much indeed on the other track.

          • meh says:

            Finding it hard to find good data, but there is this:

            https://www.gfk.com/global-studies/global-studies-pet-ownership/

            Argentina, Mexico and Brazil have highest percentage of pet-owners, followed by Russia and USA

            However, that seems to contradict:
            http://www.guinnessworldrecords.com/world-records/highest-rate-of-pet-ownership-(country)

          • Edward Scizorhands says:

            The Internet will freak out if you abuse a cat versus abusing a human. It might have to do with treatment of creatures that can’t defend themselves, or maybe people are just weird.

  26. christianschwalbach says:

    Anybody here experimented with Intermittent Fasting (w/o weight loss) to aid mental/brain health?

    I have messed around with it on occasion to address some digestive problems in the past, but a couple years ago I had a migraine flare-up and cutting carbs/targeted fasting periods really helped. Now I am interested in experimenting with it more to address depressive symptoms. I have tried low-carbing, but can never stick with it as I have little weight to lose, and don’t like the low energy that it causes. Right now I am trying an IF version where I eat my normal complex-carb-based diet, but only consume food from around 12 pm to 7 pm, or 6 pm if I am extra tired. This is to allow some time for my digestion to settle, and also for my immune system to do its job cleaning up cellular gunk and whatnot (along with good sleep as well). I have noticed some difference in mental energy, though not so much physical, and I think this is a sustainable thing for me. It is proposed by some researchers that this time-limited form of eating is actually truer to historical practice…

    • knownastron says:

      I tried all types of diets to optimize for energy (Soylent, intermittent fasting, low-carb, etc.) and what I realized was that 100mg of modafinil in the morning solved all my problems AND I could eat whatever I wanted. It was just too much work trying to plan my life around eating.

      Have you thought about just taking modafinil?

    • Barely matters says:

      I’m doing almost exactly the same thing you are in eating within an 8 hour window but otherwise proceeding normally. Aside from leaning out a little, likely from just not eating as much as I otherwise would, I’m not noticing much difference. Normally I’d be cutting leading up to this time of year anyway, so it’s been essentially par for the course.

      I haven’t noticed any specific changes in mental or immune function (though I haven’t been sick since I started four months ago, which, while not out of the ordinary, is positive). Seems easy enough to stick to.

  27. drunkfish says:

    Scott, are there any contexts in which you don’t want your articles shared? I frequently link your stuff to friends who I think would appreciate it, which I assume you’re fine with. Recently, though, I was considering (I didn’t end up doing it) sharing an article in a facebook group I’m in that has a large contingent of extremely far-left folks, many of whom aren’t big fans of rational debate as a way to reach truth. I hesitated because those seem like the kind of people who make your life on the internet less pleasant. Was I right to hesitate?

    • ordogaud says:

      Obviously not gonna answer for Scott, but I’d say if you do end up sharing an article with a potentially disgruntled/militant group you should share using the ‘Link without Comments’ button next to the twitter/facebook button at the bottom of every article.

      Hopefully that way they’d be less likely to disrupt or start fights in the comments.

      • drunkfish says:

        Woah! I never noticed that. That’s definitely good to know about, thanks!

        I’m not totally sure it solves the problem, since I’m more concerned about Scott being harassed than I am about a few dumb comments, but still a really useful tool to have available.

        • Nornagest says:

          Scott is also the sole moderator. Fights that start in the comments are functionally equivalent to harassing Scott.

        • Randy M says:

          Also, you underestimate the ability of motivated people to bring pressure on Scott for idle musings in his comment section only tangentially related to whatever nuanced point he makes in the article above those comments.
          After all, if a blog owner is sufficiently against the commie-nazis, he wouldn’t be able to stand having musings on the eugenic effects of UBI on his blog, would he?

  28. C. Y. Hollander says:

    I’m not sure where to put this, but I was clicking through the blogroll recently, and I think that perhaps 1 Boring Old Man should be moved to the Embalmed Ones category. 🙁

  29. JohnofSalisbury says:

    Philosophy of religion question. Let’s assume that God chooses to become incarnate. We can give a nice broad characterisation of His motivation: say, putting His relationship with humankind on a firm footing. It’s a unique incarnation, by the way: God becomes incarnate only once. When and where would you expect the incarnation to occur? I will share my own thoughts later: the upshot may be predictable, but hopefully the details of my reasoning will be of interest. But I wanted to learn what other people’s thoughts were first, particularly those of non-Christians/theists. Let me know if you have clarificatory questions.

    • Evan Þ says:

      (Disclaimer: I’m a Christian.)

      To answer this from logic alone: Since by premise God is becoming incarnate to change His relationship with humanity, it stands to reason He would want to do it at a place and time where news of this incarnation would spread quickly: somewhere with well-traveled trade routes and a broadly-spoken language capable of speaking in detail about God’s nature.

      Also, since by premise God is the sort of God who intervenes in human history, He would want to become incarnate among people whom He’d prepared for this – perhaps by prophecies, and perhaps by more-or-less-quietly guiding their cultural development.

      Yes, you can probably guess what I have in mind, but I hope my reasoning’s valid and of interest.

      • Randy M says:

        It stands to reason He would want to do it at a place and time where news of this incarnation would spread quickly

        To play devil’s advocate, news spreads much quicker in an age of railroads, or radio, or television, or the internet, than it did via Roman roads.
        But perhaps you just need another corollary: sooner in history is better than later, for greater long-term impact.

        • Evan Þ says:

          I was thinking of that, but without Christianity, would we have gotten to an age of such fast communications?

          And your corollary’s also valid.

        • JohnofSalisbury says:

          Good, the first two replies have got to what I think is the core of the issue. We’re trading off promptness against preparedness. In the preparedness category, we’ve got ease of travel and communication, as well as relevant religious background. Any other dimensions of preparedness? What about moral and intellectual development? Also, there’s something that might be considered pretty important to the news spreading quickly which Evan hasn’t mentioned explicitly.

        • johan_larson says:

          You know, if you can’t shift the time of appearance from what is supposed to have happened in our timeline, it sure seems like there were better options for location. Why show up in Galilee, rather than, say, Rome itself? Wander into the Senate, brush aside the guards, and do whatever it takes to show the most powerful men in the world that there is no god but God. Then take the show on the road to the Colosseum, just down the way, and wow the masses. Same deal.

          If nothing else, theologians two thousand years later would have a lot more to work with.

          • Randy M says:

            That sounds a lot like the retcon John Wick (or whoever wrote the details after him) had for the first prophet in 7th Sea (a swashbuckling Europe-plus-magic RPG).
            Appeared before the Senate, displayed power there, escaped from prison and spoke to the crowds, etc.

            However, given the relative speed with which Christianity took over Rome, it doesn’t seem like it would necessarily have been more effective for Jesus to appear there in person.

          • JohnofSalisbury says:

            Thanks for tackling the location question. Right, there seems a lot to be said for populous/important cities. I guess you wouldn’t be very impressed by my left-field suggestion of Central Asia. On the other hand, I agree with Randy that, if you can get the news to disseminate well anyway, the added value of turning up in the biggest metropolis available isn’t that significant. What about location as in region of the world? And any thoughts on time?

          • johan_larson says:

            Any thoughts on time?

            You don’t get a chance to affect the lives of people who lived before you show up, but changing technology also affects how many people hear about you and your actions. That suggests you want to show up right when truly global travel and communication become possible. I’m thinking sometime during the 1800s would be about right. Let steamships, telegraphs, and newspapers amplify your message.

          • JohnofSalisbury says:

            you want to show up right when truly global travel and communication become possible

            Again, a plausible suggestion, though the added value is questionable. So long as your message makes it to the era of global communication, it will get amplified by the new technologies anyway, and you’ve reached at least some people who lived before that era. On the other hand, granting that we are looking for ease of travel and communication, you have identified the most significant threshold.

          • j1000000 says:

            @Randy M

            However, given the relative speed with which Christianity took over Rome, it doesn’t seem like it would necessarily have been more effective for Jesus to appear there in person.

            By waiting a few hundred years for his message to reach Rome, instead of appearing there outright, he doomed lots and lots of people in the interim, who were never exposed to his message (or forced to convert to Christianity), to... hell? Purgatory?

          • John Schilling says:

            So long as your message makes it to the era of global communication, it will get amplified by the new technologies anyway, and you’ve reached at least some people who lived before that era

            But you’ve lost some of the people who lived after that, through degradation of the message quality. I, and many others, do not consider the surviving evidence to be sufficient to support a belief in the divinity and resurrection of Jesus Christ. If he’d showed up in the 19th century and posed for a few photographs on the first Easter, the case would be much stronger. Also, much of what was written about Jesus in the first generation or two of Christianity, and likely much of what he actually said, seems to have been lost or corrupted.

            If you assume that God is constantly meddling to make sure that His message is not truly corrupted, that the Bible never contains an error of doctrine no matter how poor or malicious a translator is left to the task, that problem maybe goes away. But at that point, Jesus can show up as e.g. an Australian aborigine in 10,000 BC, and God just uses his mind-control whammy to make sure everyone who hears the good word believes and retransmits it faithfully, along with his weather control mojo to make sure any aboriginal missionary who throws himself into the ocean on a piece of driftwood winds up alive on a welcoming shore, and this becomes much less interesting as an intellectual exercise for us to play with.

            I don’t think we need to wait until the photographic era to avoid excessive signal degradation, but it probably would help if you show up after any inconveniently-scheduled dark ages and in some fairly cosmopolitan city. Early Medieval Byzantium, perhaps? And make sure to get out and around, and to be seen by the sort of people who keep good records.

          • JulieK says:

            Assuming that history would have been more-or-less the same in other respects, it should be early enough that Europe was converted before the Age of Exploration, so that when Europeans colonized the New World and other locales, they brought their religion with them.

        • Le Maistre Chat says:

          But perhaps you just need another corollary–sooner in history is better than later, for greater long term impact.

          Well, yes. Waiting for railroads only makes sense if you foresee the message dying out before railroads and fast communications are invented.
          You wait for a time when writing exists, and is so secure that it won’t be lost during a Dark Age. CS Lewis even alluded to it being logically possible that Horus or Krishna was an Incarnation, but how would we ever empirically know?
          Premodern sea travel was much better at spreading people than even decent roads, so you choose a huge empire with a central sea rather than one like China.

          • Nornagest says:

            The Dark Age bit is important. For all we know there could have been a Jesus Christ in Mycenaean Greece in… let’s say 1350 BC, for giggles. That’s a pretty cosmopolitan and well-integrated part of the world for its time, with writing, trade, and lots of intercultural contacts — but the Late Bronze Age collapse came along a hundred years later and wiped out everything, and now the only trace left to history is a few allusions to a brief departure from polytheism in Egypt.

          • Le Maistre Chat says:

            @Nornagest: Yep. Even with continuity of writing in Egypt, figuring out what was going on outside its borders before the Bronze Age collapse is nigh impossible. What caused Aten worship? How recognizable would the Indo-Aryan Mitanni be to Hindus? Whole lot of mysteries.

          • johan_larson says:

            You also need to worry about your message getting miscommunicated, either through error or deliberate tampering. And the better the communication and record-keeping technology available, the more accurate a record you can leave.

            As I recall, our earliest substantial records of even very major figures from the Roman Empire are often from decades after the fact. That leaves room for all sorts of misunderstandings and unanswered questions.

            I’m thinking it might be useful to wait for at least the printing press.

          • JohnofSalisbury says:

            I’m thinking it might be useful to wait for at least the printing press.

            Yes, I hadn’t considered this point. It’s also worth noting how soon the Age of Discovery follows on from printing in Europe. Still, while I think 2nd-century BCE north India is better than 5th-century BCE north India, it doesn’t seem a great deal better. The added value of printing over literacy is less than the added value of literacy over orality, and the one-and-a-half-millennia delay is a hefty price tag.

          • Douglas Knight says:

            The Age of Discovery did not follow the printing press in Europe. It preceded the printing press. Henry the Navigator was slightly older than Gutenberg. He commissioned the discovery and settlement of Madeira and Azores before the printing press. (Not to mention the Portuguese and other involvement in the Canaries a century earlier, before the Black Death.)

          • Jiro says:

            You wait for a time when writing exists, and is so secure that it won’t be lost during a Dark Age.

            Depends on what you mean by “lost”. You can have the religion itself survive and not be lost, yet the evidence for the religion, or the details of exactly what happened, can still be lost.

          • Le Maistre Chat says:

            Depends on what you mean by “lost”. You can have the religion itself survive and not be lost, yet the evidence for the religion, or the details of exactly what happened, can still be lost.

            I mean loss of evidence. Hinduism has survived since Harappan times, yet we have no evidence for when Krishna lived. Most Hindus say he died around 3102 BC, while there’s a whole class of scriptures (the Puranas) that say 1050 years plus two generations from Chandragupta Maurya, a contemporary of Alexander the Great. And it’s a matter of faith that before him lived Rama… but when, based on what evidence?
            (The consensus in South Asian Studies is that the Ramayana is pure fiction composed later than the Mahabharata, for which there’s little counter-evidence… but one can’t help wondering why universities would have a department dedicated to offending Hindus.)

      • shakeddown says:

        Inverse reasoning: wherever the ideal time/place is for God incarnate to appear, it is probably also the ideal time/place for a human to create a religion; so the time/place of creation of the largest modern religion is probably pretty close to the ideal time/place for God to have appeared (even if we assume Jesus was just some guy).

        • silver_swift says:

          God’s incarnation would be able to convince a much more scientifically informed audience than a random person inventing a religion could (just do the fish-replication thing in front of a dozen physicists and/or atheists and give incontrovertible proof that at least something supernatural is happening).

          I don’t know if that necessarily moves the ideal time much at all, but I don’t think the two situations are quite that isomorphic.

          • Jiro says:

            God could just appear a few thousand years ago, but provide a coded message that can’t be decoded until we invent the ability to brute-force messages encrypted with a 56-bit key (DES-sized, say).
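
            A minimal sketch of that time-lock idea (hypothetical toy parameters: a deliberately tiny 16-bit key so the search runs instantly, standing in for a 56-bit key that early computers couldn’t have searched):

              # Toy "decode me later" message: XOR-encrypt under a small key,
              # then recover it by exhaustive search over the keyspace.
              def xor_crypt(data: bytes, key: bytes) -> bytes:
                  # XOR each byte against the repeating key (symmetric: encrypts and decrypts).
                  return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

              message = b"I am that I am"
              ciphertext = xor_crypt(message, (0xBEEF).to_bytes(2, "big"))

              # Brute-force all 2**16 keys; a 56-bit keyspace works the same way,
              # just 2**40 times slower, which is the point of the delayed reveal.
              for candidate in range(2 ** 16):
                  guess = xor_crypt(ciphertext, candidate.to_bytes(2, "big"))
                  if guess == message:  # in practice you'd check a known format or hash
                      print(f"recovered with key {candidate:#06x}: {guess!r}")
                      break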

          • Evan Þ says:

            @Jiro, you might be interested in the Bible Code?

            (But then also the Moby Dick Code.)

        • JulieK says:

          so the time/place of creation of the largest modern religion is probably pretty close to the ideal time/place

          On the other hand, Islam is in second place, but no one here predicts Arabia as the ideal place to found a religion. So I wonder how much of the responses here are influenced by hindsight.

          p.s. I was amused to see that google’s first 2 autocomplete suggestions to “number of…” were “…christians in the world” and “…jews in the world.”

    • meh says:

      In private, observable to me only.

    • Douglas Knight says:

      19th century China.

      • JohnofSalisbury says:

        Interesting choice. What’s your reasoning?

      • Nornagest says:

        Hong Xiuquan would have agreed with you.

      • Douglas Knight says:

        Or, maybe…
        In the age of mass communication. In Palestine, because it’s a focal point. At the creation of a new state, because that is a focal point.

        • JohnofSalisbury says:

          Now that’s even more interesting. You’ve got impact down, but letting promptness fall by the wayside rather.

    • Protagoras says:

      Perhaps somewhere to the north of India, maybe Lumbini, a nice central location from which his influence could spread to both India and China, and hopefully eventually westward. A nice early date to give the influence lots of time to spread, but not so early that the stories will be forgotten and not written down; maybe 5th century BCE?

      • Douglas Knight says:

        In your first sentence you talk about influence, but in your second you talk about credit. It seems to me that the most influential person in history was a bit earlier and a bit farther west. But he doesn’t get a lot of credit these days.

        • Protagoras says:

          Are you suggesting someone from the Vedic period? If so, sure, that seems at least as good as my suggestion, perhaps better.

          • Douglas Knight says:

            I mean only a little bit earlier, the founder of the Axial Age, whom I locate in time and space by assuming him to be Zoroaster.
            I’m just saying that he was influential, and that you equivocated on whether you were optimizing for influence vs veneration.

          • Protagoras says:

            OK, sure. And I don’t think I did equivocate. The word “credit” is yours, not mine, and I think you’re over-interpreting something about the wording of my second sentence.

          • Douglas Knight says:

            Thanks for the clarification!

      • JohnofSalisbury says:

        Yes, you’ve got at what I take to be the main issue regarding location, cross-roadiness. Hence my left-field suggestion of Central Asia, above. I think that’s where you’d want to go if you really want to maximise diffusion. On the other hand, you compromise the other aspects of preparedness: little literacy, moral or intellectual development.
        Giving my answer to shakeddown here, I would say that that (looking at the historical transmission of religions) is cheating, and certainly makes the exercise a lot less interesting. Mid-1st millennium BCE Nepal is not a very obvious choice. The AfPak/Iran border regions are a much more natural cross-roads (which fits the Zoroaster suggestion pretty well), and if you wait a few centuries you get a significant increase in preparedness (fitting Zoroaster less well): large polities on every side, increased literacy, a thriving Silk Road. But using shakeddown’s method, your suggestion works decently.

    • James C says:

      Very interesting question.

      I’m going to guess somewhere around 7000 BCE in the Indus Valley region. You hit agricultural civilization just as it’s forming new ways of dealing with the world, but after it’s established enough to survive the odd calamity that might wipe out your message. While this would lead to a good deal of drift from your original faith by our era, the fundamentals of society would be divinely inspired, and this seems a more enduring and reliable way of establishing godly living than adding post-hoc corrections to whatever has grown up by the late Iron Age.

      • JohnofSalisbury says:

        I like the basic idea here. Another deep-past suggestion, dovetailing with my Central Asia geographical suggestion, would be among the PIE. Nonetheless, I think the deep past proposals are skewed too far in favour of promptness at the expense of preparedness. The issue isn’t how well established agricultural civilization is, but how well established godliness is within agricultural civilization. What happens when, say, Indo-Europeans sweep in from the Steppe and establish their own, ungodly order? What if the level of godliness declines, and there’s no source to return to?

        • James C says:

          It’s a definite risk, and while there are mitigation strategies a god could employ, like baking commands into good-sense survival strategies, the risk that the message will be diluted or lost is always going to be there.

          However, this isn’t necessarily a deal-breaker. While civilisations rise and fall, their ideas will often survive in some form. By getting the message out early, later civilisations may be less godly, but is 20% godliness from everyone on Earth better than 100% godliness from 1 in 10 people?

          • JohnofSalisbury says:

            To get everyone on earth, though, you’d want the last common ancestor, not the Indus valley, with a massive increase in the risk of failure.

    • a reader says:

      Does God have divine powers in His human form; can He do miracles? If yes, it doesn’t matter that much where He becomes incarnate: He can teleport anywhere. Nor do the time and the technological level of the contemporary civilizations matter that much, except that there have to be some big civilizations (kingdoms, empires, big city-states) that use writing.

      God has to wait until civilizations cover a substantial part of the world. Then He incarnates in a beautiful place with a good climate, into a family of wealthy and kind people. There He grows up in the best possible circumstances until age 20.

      Then He teleports himself to the capital of the biggest empire, during an important religious ceremony that gathers a large crowd, the emperor, and the highest priests. He makes the statues of the gods fall with a gesture and proclaims in their language that He is the true God. The emperor sends soldiers to kill or arrest Him for blasphemy; He makes their weapons melt like wax. The emperor and the crowd, seeing the miracles, accept Him as God (if not, He can strike the emperor with paralysis until he accepts Him). Then He gives them a Book written in their language, with their writing, on an indestructible material.

      Then He teleports to the next empire or kingdom, where He does the same – until all empires and kingdoms on Earth accept Him as God and the teachings of his Book.

      • JohnofSalisbury says:

        Yeah, I think I’m going to have to rule out heavy use of teleportation, on the grounds that it is functionally equivalent to multiple incarnations, and so contrary to the initial set-up.

        • a reader says:

          If you don’t let God teleport, can He at least materialize a plane (and fuel for it)? Or a flying chariot, like many traditional gods? Or at least a 4×4 car?

          If not, the God will have to travel on horseback and by ship – even so, I think He still may visit and convert most of the world (including the New World) in ~ 50 years of travel.

          • Conrad Honcho says:

            It would be a lot easier to get kids interested in church if all the stories were about how Jesus showed up and wowed the crowds gettin’ huge air in his bitchin’ monster truck.

        • Wrong Species says:

          What about all the other miracles? Is it cheating if he writes “I am Jesus Christ, the son of God” all across the sky but where people see it in their own language?

      • Conrad Honcho says:

        I don’t really think displays of supernatural power are enough to stand the test of time. There are plenty of ancient stories about awesome displays of power but those don’t convince modern people to worship Zeus.

        • Wrong Species says:

          Pick a given year, say 250 AD. Now imagine that you have sources from Rome, Persia, India and China all reporting of a figure who calls himself God and performs supernatural acts in that year. That is pretty damn suggestive.

        • Chalid says:

          You could leave behind tangible evidence though. Carve the Holy Word into a mountain in 40-foot letters. Instead of turning someone into a pillar of salt, turn them into a statue of diamond. etc.

          • Jiro says:

            Yes, but the message on the side of the mountain in 40 foot letters initially said “Holy Word Land”.

    • silver_swift says:

      Is God limited to a single human lifetime on earth? If not, appear on earth when the very first hunter-gatherer societies form, make yourself immortal, and just never leave.

      If you are limited to a single human lifetime, you want to leave at least some artifacts behind that are unquestionably divine in origin and attach some message to them that will remain understandable for the longest time possible. If you can make the artifacts grant visions or write messages in a universally legible script or something (you’re omnipotent, figure it out) then you want to appear as soon as agrarian civilizations begin to form so these new civilizations will form around the artifacts that you set up.

      If you are limited to writing a message in a script that is locally available at the time, you probably want to wait until the world is interconnected enough and enough record keeping is done that the most commonly available language of the time isn’t going to be lost forever. Ancient Rome, as others have suggested, seems like a good option here.

    • James C says:

      Oh, if you’re willing to stretch the definition of ‘humankind’ to include other sophonts, the best time would be just after the development of the first practical FTL drive or reliable interstellar STL drive. That way the Good Word could be spread across the entire observable universe.

      Slight wrinkle, though: it might not happen on Earth if some aliens beat us to the interstellar punch.

    • JohnofSalisbury says:

      Ok, I’ll expand on my own views now. A lot of this has been covered already.

      The basic issue is trading off promptness against preparedness. Maximize the former, and you get the last common ancestor of all humans, or maybe just the first hominid capable of receiving a message from God (perhaps such a message was received, but didn’t stick, and so an incarnation happened at a time of greater preparation, per some putative revelations). Maximize the latter, and you get, for instance, the founding of modern Israel, per Douglas Knight’s suggestion. I think we should try to give both similar weight.

      The next issue is what preparedness involves. Here are some plausible features:

      1. Literacy. You want a lasting record of what happened, and what happened next, that other peoples and generations can return to for guidance. You also want it at a pretty high level: minimally, people read and write more than documents of state administration.

      2. Ease of travel and communication. It’s hard to pin down exactly what this means, but I think the basic idea is that of reaching multiple (> 3?) peoples within a generation so you build momentum and secure fail-safes. Essentially, you want at least multi-ethnic empires, with their infrastructure, trade-routes, and common tongues.

      Now for matters we’ve discussed less.

      3. Morally, I’d say we want critical concern for the good. What I mean is this: (some somewhat influential) people distinguish clearly between prevailing norms and what is actually good. On a social level, they criticise prevailing norms in the name of the genuinely good. On an individual level, they insist on the importance of doing what is good rather than what is socially expected. Note that I am not writing in any content here: this is a formal characterisation. It maps pretty well on to the concept of the axial age, though as critics of the traditional axial age concept point out, maybe Egypt and Babylon had nascent versions of this earlier on.

      4. The intellectual resources to articulate theism (as an aside, worth reminding you that theism is part of the initial set-up). This means people can distinguish ‘There is a unique source of all that exists’ from ‘The number of the gods is one’. Hard to pin this one down, but having a developed philosophical tradition to draw on is a big plus.

      5. Finally, intellectual ferment. We want people to be actively looking for new and improved ideologies. If everyone’s ideologically settled, we won’t make much of an impact. Think the hundred schools of thought period in China.

      Much of the above applies to location as well as time. As I’ve said, you want a cross-roads for the sake of point 2, and you’d want a fairly high level of civilization for the rest. A civilized cross-roads of civilizations is what we’re after.

      So where does that leave us? It seems plausible that some of our criteria were fulfilled as early as the Bronze Age, but, as has been noted, survival outside of Egypt was abysmal and Egypt itself was ideologically conservative, as Akhenaten learned. We really start ticking the boxes off in mid to late classical antiquity. The Achaemenid empire is probably a little early still in terms of intellectual ferment and non-administrative literacy, but between the Hellenistic and the Roman period I think it’s a matter of whether you give a slight edge to promptness or preparedness. Both are excellent bets.

      Anywhere within the main corridor of classical civilization (Italy to India) is fine. We might like urban, and I quite like access to the steppe. The Levant, we might note, enjoys a nice central location with Asia on one side and the Med on the other (though admittedly not so good for the steppe).

      • JohnofSalisbury says:

        Some closing thoughts. A lot of people seem to be reasoning from the following assumption: God would do whatever it takes to maximize conversion. Fair enough. The earliest simple way to do that is probably to turn up in Victorian London and go from there. But in principle, God could always work enough miracles wherever and whenever He arrived.
        I had something subtler in mind, though: something like shaping human history by initiating a religious movement based on personal trust rather than public demonstration.
        Why would God bother with such subtleties?
        Well, for a start, look at the historical record. Nothing fits the maximising-conversion profile. (Also, for something more substantive, see beleester’s opening paragraph.)
        ‘So much the worse for theism!’
        Maybe so. Now we’ve worked our way round to the problem of divine hiddenness. But theism is part of my initial set-up. As is some attempt on the part of God to reach out to us, which we should expect insofar as we’re bothered by divine hiddenness at all. In retrospect, maybe it would have been better to go for a more neutral ‘main messenger of God’ rather than full incarnation. Never mind.
        So what do we have? Someone within the Hellenistic or Roman world. Evan raised a further point, which gives us even more to go on: relevant religious background. There was a people around in that world with a suitable theistic background, including prophecies about a coming divine messenger, helpfully based in the Levant.

      • onyomi says:

        I think you should consider the possibility of getting the causation reversed.

        You seem to be arguing, roughly:

        “If I were God and I had one chance to incarnate as a human and spread the news of universal salvation through faith, I would do it at an early-ish, but not too early, thriving historical crossroads–perhaps an early major empire, because that would be a good way to spread the message.”

        What I am saying is, if you instead look at it from the perspective of “behind a veil of ignorance, and assuming, for the sake of argument, that religions are entirely the creation of men, where would we expect a religion preaching universal salvation to all peoples (that is, a religion emphasizing proselytization to all cultures and not tightly bound to an ethnic identity, like Judaism) to arise and catch on?”

        I would say the answer here is probably also “at an early-ish historical crossroads with decent communication and infrastructure, perhaps an early empire.”

    • beleester says:

      (Conservative Jew, but I’m trying to step outside my box as much as I can)

      Most of the comments here are focused on God’s incarnation gathering worshippers, convincing others to believe in him, and having God’s message endure long enough to spread without getting lost in translation. But that’s not much of a relationship, is it? That’s just asserting his authority over us, and he doesn’t need to take human form to do that. You could, for instance, carve the text of the Torah into a mountain with a few well-placed lightning bolts. Or create something like the Koran Angelfish, except in a language that won’t be mistaken for a pattern of random squiggles. So the question is, what does God hope to teach us (or what does he hope to learn) that can only be done by walking among us as an equal?

      If God seeks to learn something from his experience (yes, he’s omniscient, but maybe there’s a difference between knowing something and experiencing it), he should probably incarnate as early in human history as possible. Presumably, he’s after some universal human experience – love, friendship, desire – rather than something that can only be felt by Jewish carpenters, and he would want to have this experience as early as possible. He could be a nameless caveman, but he might wait for civilization to arise and incarnate in Mesopotamia.

      God might want to touch the lives of as many people as possible on a personal level. In that case, he’d want to incarnate somewhere with easy travel and communication. That points to the modern era, but maybe mass media is too impersonal for him? Or if he’s trying to alleviate as much suffering as he can in one incarnation, perhaps he’d incarnate in some miserable war-torn place where he can bring peace?

      EDIT: Or maybe he’s waiting for the singularity, where he can use uploads and forking to get around the “only one incarnation” rule?

      One option that’s compatible with Christian views: God might want to demonstrate that power flows not from divine thunder or military might, but from the will of ordinary people to speak up against the powerful. In which case, a Jewish preacher in occupied Palestine isn’t a bad choice. But there are many empires in history – why not speak out against the Babylonians or the Persians? Why not against the slavery in Egypt? Or why not move further forwards in history, and incarnate into the US Civil Rights movement, or Gandhi’s fight for Indian independence?

      • JohnofSalisbury says:

        Good to have a non-Christian theist’s thoughts, so thanks beleester.
          I don’t follow your arguments in favour of incarnating early: if universality is what’s sought, then presumably the date doesn’t matter. If touching many lives is the priority, then perhaps a good candidate might be Elizabeth II. But we recapitulate some of the earlier discussion about globalisation: you can still have low-quality, high-quantity impact after you’re dead, having touched a bunch more people in the intervening generations. I think the war-zone idea is an excellent suggestion.

        Regarding the last paragraph, as I say, Rome is particularly good because it’s relatively early and in a state of intellectual ferment, though the Seleucid empire would work similarly well. But I like the idea, which might give us a reason for favouring the hinterland over the metropolis, as Johan Larson originally suggested.

        • beleester says:

          You can obtain the experience at any time, but it’s still better to obtain it sooner rather than later, because you can’t make use of the knowledge until you have it. As an analogy, you could go to college at any time in your life, but if you wait until you’re 40 to go, you’ll spend your 20’s and 30’s without the benefit of an education.

          • Jaskologist says:

            God exists outside our timeline, so He can make use of the experience at any point, regardless of when in our timeline it happened to take place.

      • Jaskologist says:

        One option that’s compatible with Christian views: God might want to demonstrate that power flows not from divine thunder or military might, but from the will of ordinary people to speak up against the powerful.

        That’s not the Christian view, that’s the view of the recent Social Justice heresy. God generally works through the marginal people precisely to show that it was His power at work and not the specific person. See for examples: Abraham, the childless father of nations, a bunch of Egyptian slaves, Gideon and his 300 men, King David, the youngest child nobody even thought to invite to the coronation ceremony, etc.

        • JohnofSalisbury says:

          I take your point, but worry that anathematising the Social Justice Heretics is, as discourse, a mite unhealthy. Perhaps a more perspicuous articulation of beleester’s underlying insight would be to say that God is more like the weak man crying out for justice (social or otherwise) than the strong man crying out for obedience.

    • rlms says:

      Another question that was touched on in Ada Palmer’s excellent Terra Ignota trilogy! (The other being the idea of voluntary laws upthread). I’m inclined to agree with her characters that the answer to this one is probably “some time in the future, at a pivotal moment (for instance the establishment of the One World Government)”.

    • JulieK says:

      A lot depends on whether you want as many believers as possible, or if you want people to be able to freely choose to believe (meaning you don’t want the proof to be so overwhelming that no sane person could deny it).

      • Wrong Species says:

        I don’t get the latter point. I know that I’m conscious. Is my belief not freely held?

      • Protagoras says:

        Saying that belief in the obvious doesn’t count as believing freely contradicts a decent amount of the tradition about what freedom is (Descartes being the clearest example I can think of). And there are good reasons for that tradition; notably, it explains why God can be perfectly free and perfectly good together. God’s omniscience makes what’s right obvious to God, and so God freely believes all and only what’s right (because it’s just obvious to Him). Of course, there are still some mysteries as to why God doesn’t make it more obvious to the rest of us (as Descartes notes), but I assure you that denying that belief in the obvious is free will lead to far worse problems for your story of freedom.

    • JulieK says:

      Suppose that God has previously revealed Himself to some humans (without incarnating) and succeeded in getting them to adopt a divine code of law. Are there reasons why the ideal time and place for the first type of revelation would differ from the time and place for the second?

      • meh says:

        One reason I can think of is that part of getting them to adopt a divine code of law was a promise of a second revelation.

        • JulieK says:

          Why would the promise of a future revelation at some unspecified time and date be more important than the miracles of the original revelation?

          • meh says:

            incentives, if there are rewards attached to the future revelation (e.g. the dead coming back to life)

          • JulieK says:

            @meh:
            Well, that might be the case in a hypothetical alternate religion, but at any rate it’s not the case for Judaism. If you look at, e.g., the book of Deuteronomy (ch. 4-5, 11, 29), you see Moses repeatedly reminding the nation that they personally witnessed the Exodus and the revelation at Sinai, but no reference to a future revelation or revival of the dead.

          • meh says:

            @JulieK
            I think JohnofSalisbury was just asking about a hypothetical god, not a specific religion.

            However, after a quick search I found the following concerning the Messiah in Judaism (https://en.wikipedia.org/wiki/Messiah_in_Judaism):

            Maimonides has written: “Anyone who does not believe in him, or who does not wait for his arrival, has not merely denied the other prophets, but has also denied the Torah and Moses, our Rabbi.”[8]

            All of the dead will rise again (Isaiah 26:19)

            The “spirit of the Lord” will be upon him, and he will have a “fear of God” (Isaiah 11:2)

            He will give you all the worthy desires of your heart (Psalms 37:4)

          • JulieK says:

            Yes, belief in a Messiah and future revival of the dead is certainly part of Judaism (#s 12 & 13 of Maimonides’ 13 principles of faith), but it doesn’t seem to have been an incentive in the Jews’ adoption of the Torah.

          • meh says:

            Ah, I misunderstood. Let me see if I understand now; are these accurate?

            1. There is a belief in a second revelation, but it played no part in getting believers to adopt the faith.
            2. It makes no sense for the divine being to have a second revelation since the divine code of law was already adopted.

            I don’t know if (1) is true or not, I would be interested to know how that is even determined.

            I suppose (2) could make sense for a non-proselytizing god, but the assumptions of the original question would tend to indicate that this god does want converts.

          • JulieK says:

            Re #1, it would be hard to prove that it was *not* a factor, but I don’t see any evidence in the 5 books of Moses that it *was* a factor.

            Re #2, there were prophets (like Isaiah and Jeremiah), but their job was to remind people to follow the laws, not to change the laws.

      • Nornagest says:

        Read a certain way, the Bible is the story of a frustrated Dungeon Master trying several different ways to keep his players on track until he finds one that works — although seeing it this way would be heretical to most Abrahamic religions, of course, for several reasons.

        • dndnrsn says:

          I don’t know, they still kept splitting the party.

        • JulieK says:

          This doesn’t really fit the pattern, though; in the earlier cases, God reacts by narrowing the chosen population, and increasing the number of rules, while this would be the opposite.

        • JulieK says:

          I don’t see any reason to think that things were getting off-track in the first century; certainly not more off-track than back in the era that led up to the Babylonian exile (when there was no change theologically, only materially).

    • meh says:

      Was this god the creator of the universe, with the incarnation the only means they have of interacting with it? Most responses seem to assume a naturalistic evolution of humanity up until the incarnation, though this is not the message of most major religions.

      Also, how would answers change if the assumptions are changed? Assume there is a god, but we have no idea of their motivations, and they are able to become incarnate as many or as few times as they choose. What would you expect then?

      • Randy M says:

        If you have no idea of the motivations, how do you have any expectations of the behavior? Especially from an entity unconstrained by being in a society of peers.

        • meh says:

          What I meant was we are not assigning a motivation as a premise. We can conjecture motivation based on the powers it has, the physics the universe was created with, and, if you like, natural history up to some point in time.

          So, right, I did not mean ‘no idea’ of the motivation; I meant no explicit statement of the motivation.

    • Wrong Species says:

      What about Han China? It’s the earliest imperial era, writing is firmly established, Confucianism hasn’t firmly taken hold yet, it has historically had one of the largest populations, and China has probably the best continuity of any civilization over the last 2000 years.

    • carvenvisage says:

      At the start of the caveman era, probably while there are still other hominids around, to set things in a good order from the start. Why wait hundreds of thousands of years before you set things in order?

      Also, while incarnate it would be logical to give an explicit reason why this can only happen once, so that people in the future won’t quite reasonably wonder “why doesn’t it show us again, if we’ve really forgotten”?

      And certainly one would want to avoid exaggerations and rumours of omnipotence.

      _

      But then maybe this is sort of what Jehovah did: incarnated when other dangerous spirits were ascendant, to put the world in order, using his one incarnation to free us from the likes of Moloch.

      If we just drop the omniscience assumption, then look empirically and deduce that he was no philosopher-king but a warrior (of sorts), come to tear down evil spirits, this seems like pretty plausible behaviour to me.

      (Which would also explain why he couldn’t just remain incarnate permanently: the best time to incarnate was the one where it was necessary to spend his divine energy without counting the cost, in contests with other spirits who needed to be defeated/destroyed/cast out.)

    • Zeno of Citium says:

      Every time I look at you, I don’t understand
      Why you let the things you did get so out of your hand
      You’d have managed better if you’d had it planned
      Why’d you choose such a backward time and such a strange land?
      If you’d come today, you would have reached a whole nation
      Israel in 4 BC had no mass communication
      Don’t you get me wrong – I only wanna know

      So, as other commenters have pointed out, it’s the tension between showing up late enough to convince people in the relative future, versus showing up as early as possible so that as few people as possible miss the chance to know God and his commandments. Given the current state of the world, I think you want to err on the side of showing up later. About 100 billion humans have lived so far, and humans evolved about 2 million years ago. The Earth could easily remain about as hospitable to humans for another billion years or more. The number of people in the future overwhelms the number of people in the past by thousands if not millions of times.
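
      A rough back-of-envelope version of that comparison (all the inputs are assumptions, not data: today’s birth rate held constant, and a human-friendly Earth for an assumed billion more years):

        # Back-of-envelope: future humans vs. the ~100 billion who have lived.
        past_humans = 100e9        # common estimate of everyone ever born
        births_per_year = 130e6    # roughly today's global births (assumed constant)
        habitable_years = 1e9      # assumed remaining span of a human-friendly Earth

        future_humans = births_per_year * habitable_years
        print(f"future/past ratio: {future_humans / past_humans:,.0f}x")
        # ~1,300,000x, i.e. "thousands if not millions of times" the past.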

      I think that, if God showed up today and started doing miracles, we’d pretty much have exhaustive proof of that forever. Scientific studies under controlled conditions, 3D video, a press release by God simulcast across all 7 continents: it’s got to be enough, considering that religious charlatans (pick your own here) have founded religions that have millions of modern members. You could probably go back to the 1950s, which is enough to have real science and television in a lot of places.

      (Since you asked for demographics: I am not a Christian or a theist.)

      • John Schilling says:

        The number of people in the future overwhelms the number of people in the past by thousands if not millions of times.

        Unless He is planning to shut the whole thing down next Thursday, having vetted an adequate Heavenly population for His eternal needs. For you know neither the day nor the hour, and so cannot solve this as a mathematical optimization problem.

        • Zeno of Citium says:

          True! I have to do the best with the data I have, though, and the original prompt says that the purpose of this incarnation is “putting His relationship with humankind on a firm footing.” After all, if we posit that God can destroy the universe whenever he wants, there’s nothing stopping us from positing that God can extend the lifetime of the universe, or of our human race, making the math I laid out more compelling, rather than less.

  30. ana53294 says:

    To animal rights activists, vegans etc. How would you rank the following activities that use animals, from least to most acceptable to you?

    Hunting; fishing; aquaculture; farming for meat; farming for eggs, dairy, etc.; medical/cosmetic experimentation on animals; having pets; using animals for entertainment (circuses, horseraces, bullfighting).

    • christianschwalbach says:

      Least acceptable -> animal entertainment (esp. circus and bloodsport), experimentation on animals, having pets, hunting, farming for meat, fishing, aquaculture, farming for eggs and dairy <- most acceptable.

      *I do want to note that almost all of these categories have sub-categories that could be rated as more or less humane as well. There are huge differences in farming methods, for example, and there is a wide variety of ethics among pet owners, some of it absolutely atrocious.

      • Enkidum says:

        I find your ranking very surprising. Farming for meat is more ethical than hunting, and both are more ethical than animal experimentation? From my perspective, meat is basically completely unnecessary, while animal experimentation produces a great deal of good and is critical for the advancement of neuroscience and medicine.

        I’m a former vegan who is now omnivorous simply because I’m a hypocrite, but I work in a lab that does invasive animal experiments.

        • christianschwalbach says:

          @Enkidum: read my asterisk.

        • Aapje says:

          @Enkidum

          I think that it matters whether it is (literally) putting lipstick on a pig or developing medicine. It also matters whether there are alternatives (and people should develop them, which they are doing).

      • ana53294 says:

        Why are pets worse than farming for meat? Unless we’re talking about exotic/wild animals, I think that having dogs or cats as pets, and treating them as members of the family, is better than killing animals for meat.

    • meh says:

      do we assume the best version of each category or the worst? I think almost every category has a bad example that is worse than every other category’s best example.

      also, are zoos/aquariums considered entertainment?

    • ana53294 says:

      Assuming average type of use. For example, in fishing, you have fishing with a fishing pole, on one extreme, and bottom trawling being on the other extreme. I assume we talk about the average, legal commercial fishing. Same for the rest.

    • ana53294 says:

      My order would be, from least to most acceptable:
      hunting (I place a much higher value on a wild animal’s life than on a domestic one) <- entertainment <- fishing <- farming for meat <- farming for eggs, etc. <- pets <- medical experimentation

      • Aapje says:

        I place a much higher value on a wild animal’s life than on a domestic one

        Why?

        • ana53294 says:

            Because a wild animal needs a lot more resources to grow. They need more land, and, because they do not have vaccines or antibiotics, more of them die.
            So if it takes land area X to raise a domestic cow, it takes a wild cow land area (X+Y)*Z. This means that when you kill a wild cow, you are using a lot more natural resources than when farming. Of course, if we currently eat amount A of meat from farmed animals, and reduce that to B = (A*X)/((X+Y)*Z) (taking into account the inefficiencies of growing a wild animal) while keeping the land area constant, then hunting would be better than farming. However, this isn’t going to happen, so what we actually do is consume more resources while still eating amount A of meat.
            Y = the extra land implied by eating grass instead of grain and being unable to store hay; wild animals also get more exercise, so they need more calories to put on the same amount of meat.
            Z = the number of calves needed to get one adult (for simplicity, let’s assume this is 1 for a commercial cow, although I know it’s not). In the wild this will be more than 1, because of diseases and predation.

            Also, I just think that a life in the wild is better than a domestic animal’s, so I also value the quality of life of a wild animal more highly than that of a domestic one.
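
            A quick sketch of that formula with made-up placeholder numbers, just to show how B scales (none of these values are real estimates):

              # ana53294's model: holding land area constant, farmed consumption A
              # must shrink to B = (A * X) / ((X + Y) * Z) if the meat is hunted.
              A = 100.0  # farmed meat currently eaten (placeholder units)
              X = 1.0    # land to raise one domestic cow (placeholder)
              Y = 2.0    # extra land a wild cow needs: grass diet, exercise (placeholder)
              Z = 1.5    # calves per surviving wild adult, from disease/predation (placeholder)

              B = (A * X) / ((X + Y) * Z)
              print(f"meat consumption sustainable by hunting: B = {B:.1f}")
              # With these placeholders B is about 22, i.e. under a quarter of A.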

          • Aapje says:

            This means that when you kill a wild cow, you are using a lot more natural resources than when farming.

            Your argument only works if we eradicate wild animals by destroying the habitats they can use and/or hunting them to extinction first. Otherwise your calculation is incorrect and the one below is better.

            Assume that:
            – a wild cow uses 2 resource units (ru) a year.
            – a domesticated cow uses 3 ru a year.
            – a hunter will kill a wild cow at age 5 on average
            – a farmer will kill a domestic (meat) cow at age 2
            – wild cows will live to age 10 on average if not hunted

            Then in the hunting scenario, you will have the wild cow living to age 5 on average, so the cost is 5 * 2 = 10 ru’s.

            In the no-hunting scenario, where a cow is farmed instead, you will have the wild cow living to age 10, taking 10 * 2 = 20 ru’s. You will also then have a domestic cow living for 2 years, at a cost of 2 * 3 = 6 ru’s. Combined that is 26 ru’s.

            So the no-hunting scenario then actually costs 16 ru’s more.
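
            The same comparison as a tiny script, using only the figures assumed above:

              # Aapje's resource-unit (ru) comparison under the stated assumptions.
              WILD_RU, FARM_RU = 2, 3                     # ru consumed per year of life
              HUNT_AGE, WILD_LIFESPAN, FARM_AGE = 5, 10, 2

              hunting = HUNT_AGE * WILD_RU                                # 10 ru
              no_hunting = WILD_LIFESPAN * WILD_RU + FARM_AGE * FARM_RU   # 20 + 6 = 26 ru
              print(f"hunting: {hunting} ru; no hunting: {no_hunting} ru; "
                    f"extra cost of not hunting: {no_hunting - hunting} ru")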

            Obviously, more farming tends to take away room for wild animals, so the actual gap is smaller, but the optimal use of resources is probably to have some hunting.

            Also, I just think that a life in the wild is better than a domestic animal’s

            I don’t understand how you reconcile this with your earlier argument. Do you want to increase the number of wild animals then? Because the most logical way to achieve that is to make wild animals more valuable by having them hunted, so humans preserve more habitats for wild animals.

            Nature is extremely cruel, especially to older & weaker animals, so more ethical hunting practices may actually maximize the well-being of wild animals, by killing them before they become decrepit.

          • JulieK says:

            This means that when you kill a wild cow, you are using a lot more natural resources than when farming.

            That assumes that the hunting takes place in special preserves that would have been used for something else if the land wasn’t set aside. What if no one was planning to use the wild land for anything else?

          • ana53294 says:

            The way I see it, the objective is to produce the meat needed to provide the necessary nutrition to humans, while maximizing the number of untouched ecosystems.
            The wild cow will get eaten by wolves, or bears, or other predators. Or it will die and get eaten by carrion eaters.
            If we allow hunting in natural parks, that would mean predators would need an even larger area to obtain enough food. If we hunt the wild herbivores, we destroy the natural equilibrium, and either starve the predators or force them to eat domestic cattle, which again brings conflict between humans and animals.
            @ Aapje

            Assume that:
            – a wild cow uses 2 resource units (ru) a year.
            – a domesticated cow uses 3 ru a year.

            Why would that be the case? Commercially grown cows eat corn and hay. Corn is an amazingly productive C4 plant that requires a lot less land area to produce the calories needed. Leaving aside the ethics of feeding corn to a ruminant animal, a cow on a farm will need less land to grow than a wild one.

          • J Mann says:

            Because a wild animal needs a lot more resources to grow. They need more land, and, because they do not have vaccines or antibiotics, more of them die.
            So if it takes land area X to raise a domestic cow, it takes a wild cow land area (X+Y)*Z. This means that when you kill a wild cow, you are using a lot more natural resources than when farming. Of course, if we currently eat amount A of meat from farmed animals, and reduce that to B = (A*X)/((X+Y)*Z) (taking into account the inefficiencies of growing a wild animal) while keeping the land area constant, then hunting would be better than farming. However, this isn’t going to happen, so what we actually do is consume more resources while still eating amount A of meat.

            I must be missing something. Leaving the moral issues aside, aren’t I saving a lot of resources when I take the wild cow, because it will stop consuming resources at that point?

            I could see your point if we were discussing setting aside land specifically for hunting as a strategy of feeding people, but if the land is already set aside as a preserve and if the hunting doesn’t strain the population, then culling the population seems to be neutral or even positive as a marginal impact on resources.

            ETA: Just saw your reply. I guess that assumes an impact on predators – certainly the goose and deer population in my area seems to exceed the number that predators need.

          • Aapje says:

            @ana53294

            Why would that be the case? Commercially grown cows eat corn and hay. Corn is an amazingly productive C4 plant that requires a lot less land area to produce the calories needed. Leaving aside the ethics of feeding corn to a ruminant animal, a cow on a farm will need less land to grow than a wild one.

            Land is not all the same. Wild animals and extensive cattle farming can use low-quality land. Intensively farmed animals also tend to be bred for fast growth and thus need to eat quite a bit per unit of time (but they can be slaughtered relatively early).

            Farmers also use fuel to bring food to the animals, while the wild animals move themselves to the food. Fuel is also a resource.

            Leaving aside the ethics of feeding corn to a ruminant animal, a cow on a farm will need less land to grow than a wild one.

            That encourages minimizing the number of wild animals, which for some animals requires hunting.

            If we allow hunting in natural parks, that would mean predators would need an even larger area to obtain enough food.

            The typical solution that hunters choose for this is to hunt predators as well…

            PS. Note that smaller wild game can often survive outside of natural parks.

        • ana53294 says:

          Also, modern hunting is done not for meat but for entertainment. If we exclude hunter-gatherer tribes that have access to a huge, protected land area with wild animals, most hunting is done in the few leftover protected wild areas. They do it with guns, or even semi-automatic guns. Quite frequently, they will not even eat the animal: wild animals carry a lot of diseases, so after testing, the meat may turn out to be unsafe anyway.
          And because I see hunting as a form of entertainment*, I think that if you need to kill one or several animals to entertain just one hunter, a form of entertainment where you get more people entertained per animal killed is more ethical. So I see bullfighting as more ethical than hunting for fun, because at least it is more efficient in delivering the entertainment value. And besides, in a bullfight the bull actually has a chance to kill the matador, which somehow makes it seem more fair than hunting with a gun.

          *I also admit that I am severely biased against hunters. I grew up in rural Spain, where the only people with guns are hunters. It would not be rare to encounter hunters on a farm, picking apples, leeks, etc. that I had to plant, weed and harvest as a child. And then you are at a standstill where you, the legal owner of the land, are facing armed trespassers. They say “We were just following a boar”, and you say “of course, you followed a boar into private, fenced property while carrying a hoe to pick leeks”; they shrug and leave. Pressing charges is way more trouble than it is worth (they probably did not steal more than $50 worth of produce, so it is a misdemeanor, not a felony). But I still see hunters as entitled people with no understanding of private property and the effort it takes to farm, who also take a sadistic pleasure in killing animals.

          • Michael Handy says:

            An interesting position. Would the moral calculus change if you were somewhere where domestic animals take up almost as much space as, or more space than, the local wildlife, and where unsettled space is fairly common (e.g. Australia, where I live)?

          • ana53294 says:

            Would the moral calculus change if you were somewhere where domestic animals take up almost as much space as, or more space than, the local wildlife

            If they did so while producing less meat than, or the same amount of meat as, just leaving the land wild, yes. But then what would be the point of farming? I don’t think farmers would invest that much capital and those resources to do something that nature can do better (unless they get subsidies for farming, which I think are evil anyway).

          • Aapje says:

            @ana53294

            Also, modern hunting is done not for meat but for entertainment.

            This seems factually wrong, because I have eaten and enjoyed wild boar, so it did end up on my plate. I don’t disagree that entertainment is also a major consideration, but that means that there are multiple reasons.

            So I see bullfighting as more ethical than hunting for fun, because at least it is more efficient in delivering the entertainment value.

            Bullfighting is very cruel, because it is prolonged torture where the animal is intentionally hurt to make him enraged, but also intentionally not hurt badly enough to die.

            The ideal in hunting is to achieve a quick death.

            I personally believe that the manner of death is generally more significant than the death itself, because the dead animal doesn’t feel pain.

            I think that if you need to kill one or several animals to entertain just one hunter, a form of entertainment where you get more people entertained per animal killed is more ethical.

            You forget that hunters often spend a lot of time before getting a kill. Bullfighting may entertain more people, but the entertainment is also very brief.

          • veeloxtrox says:

            Just going to chime in: I was able to have meat at most meals growing up because my parents would hunt the maximum number of allowed deer each year. If they had not been allowed to hunt, it would have negatively affected my diet while growing up.

          • ana53294 says:

            @Aapje

            I have eaten and enjoyed wild boar, so it did end up on my plate

            Sure, the meat of the animal does end up getting eaten in some cases. But that is not the main purpose of hunting.

            I personally believe that the manner of death is generally more significant than the death itself, because the dead animal doesn’t feel pain.

            Well, the way I see it, the animal in a bullfight at least has a chance to harm its opponent. I probably personify the bull too much, but I would prefer to die while taking the enemy with me, even if that means more suffering, rather than have a quick death. This does mean that the total amount of suffering increases, so this is not a utilitarian argument; it just seems more fair.

          • Aapje says:

            @ana53294

            Sure, the meat of the animal does end up getting eaten in some cases. But that is not the main purpose of hunting.

            For some (poorer) people it is. I also fundamentally disagree with declaring one reason to be the true reason, when people have multiple reasons. It is a very uncharitable way to look at people’s motivations.

            Well, the way I see it, the animal in a bullfight at least has a chance to harm its opponent. I probably personify the bull too much, but I would prefer to die while taking the enemy with me, even if that means more suffering, rather than have a quick death. This does mean that the total amount of suffering increases, so this is not a utilitarian argument; it just seems more fair.

            This seems like more of an illusion than reality. Is it more kind if a Nazi tells a Jew to run and then shoots him in the back than if he just executes the Jew right away? In theory the Jew that gets to run has a slightly better chance to escape.

            The former seems more sadistic than fair. I think that bullfighting is also more sadistic than fair.

          • John Schilling says:

            They do it with guns, or even semi-automatic guns.

            Do you know why they do this, and if so why do you think it matters?

            Quite frequently, they will not even eat the animal, as wild animals have a lot of diseases, so after testing, the meat would turn out to be unsafe anyway.

            How frequent do you imagine this is, compared to e.g. farm animals being destroyed as unfit for human consumption?

            I think you are working from a grossly flawed, and grossly insulting, model of hunting – at least where contemporary North America and Europe are concerned. The purpose may be primarily recreational, but two of the most broadly enduring ethical principles of the sport are that the animal is to be killed as quickly and painlessly as possible, and that you eat what you kill (or distribute it to others for that purpose).

            To a first approximation, every animal killed by an American or European hunter displaces a roughly equivalent animal that would have been raised and slaughtered on a farm. And the principal exceptions are hunting for varmint or predator control, which is often done by people who make a sport of it but trades e.g. one coyote killed by a hunter for many lambs killed by a coyote (and replaced by still more lambs raised for slaughter).

          • ana53294 says:

            To a first approximation, every animal killed by an American or European hunter displaces a roughly equivalent animal that would have been raised and slaughtered on a farm.

            Hunting in Europe is usually done with a hunting license in a certain season, and every year there is a different quota of animals that can be killed. The whole reason we need this type of culling is because we destroyed the natural ecosystem by removing the natural predators. In Spain, we have almost no big predators; the last bear in the Pyrenees had to be crossed with central European bears. It came to this mostly due to hunting, especially by farmers angry that their sheep were getting eaten. There are not that many wild herbivores that can be eaten, and those should be left for wild predators. The way I see it, a wild animal does not displace a farmed animal, because they are not equivalent. When a predator catches and eats a wild animal, nobody will lose income, claim insurance, and try to illegally avenge their losses by hunting the predator. In fact, this is one of the main reasons why there are no big predators in the area where I live; in the absence of enough wild prey, they try to eat farmed animals, which are somebody’s property.
            I favour the re-introduction of wild predators in protected areas that are large enough to sustain them, and part of making more areas able to sustain them is reducing the loss of prey to hunting.

            Also, if the objective is to maximize the number of poor families that have access to meat, giving out hunting licenses is not the most efficient solution. Besides, if you do not want steak, but would settle for an old animal with tough meat, you can get one almost for free where I live. We have a shepherd friend who makes cheese from sheep’s milk, and he cannot sell mutton from old sheep, because nobody wants it. He gives it to us for free. If you go and talk to shepherds, or animal farmers, you can get a lot of meat that is almost free, which has nothing wrong with it other than not suiting modern tastes. This is true at least for the area where I live, so I think that hunting in my area cannot be justified by poverty.

            This may not be true in other places, but it still is true that modern meat eating is very wasteful. A poor family can buy offal, or skin, or tripe, which are frequently cheaper than some vegetables or even grains. So I think we should try other ways of giving poor families access to protein before resorting to hunting.

          • quanta413 says:

            @ana53294

            Europe lacks large predators, yet the ecology there is good for humans and has been for thousands of years. Lions have been gone for a long time. Bears aren’t totally gone yet, but they’re gone from a lot of places. Why would we want to cut into human consumption and risk human lives to animal attack by reintroducing competing predators (competing with us) to an area without them? Why is it preferable for a bear to kill a deer instead of a human?

          • rlms says:

            Bears are cool.

          • carvenvisage says:

            I favour the re-introduction of wild predators in protected areas that are large enough to sustain them, and part of making more areas able to sustain them is reducing the loss of prey to hunting.

            Why? It seems pretty awesome to me if we’ve replaced piecemeal, entrails-dragged-out, eaten-alive deaths with sudden shock from bullets.

          • Nornagest says:

            It’s uncommon for even the full-power rounds used in hunting rifles to kill instantly — about the only way they can is by destroying the brainstem, and hunters do not generally attempt that on large game (it’s a small target, and if you miss, the animal will be mutilated but still alive and fully capable of running away. Plus, it ruins a trophy if you want one). And pretty often you end up needing to track prey a ways after you shoot it. But I’d rather take a hollow point through a lung and keel over from blood loss and asphyxiation in a minute or two, than spend an hour running from wolves, trip over something, and get to watch my liver being eaten while I was still alive.

          • quanta413 says:

            @rlms

            Bears are cool.

            Touche.

    • onyomi says:

      I’m not a vegan or animal rights activist, but I think many of these depend on the manner and/or purpose of the action: for example, factory farming with all the animals cruelly smashed together in an unnatural situation seems much more morally problematic than raising animals for meat in a more natural situation. Dairy and eggs are similar, except I think they have the potential to be wholly benign so long as the milking etc. is done in a way that doesn’t cause the animal suffering. Using animals for entertainment could be entirely benign or cruel depending on the animal, the manner of their training, the nature of the entertainment, etc. Using animals for medical experiments seems similar, though I think the purpose also matters somewhat: are you torturing monkeys to figure out the best shampoo or are you looking for a cure for AIDS?

      I find hunting or fishing for meat that you eat to be slightly morally superior to buying it, albeit in more of a virtue ethics sort of way than a utilitarian calculation of the animal’s potential suffering. Hunting for trophies seems slightly less so, though it also depends on the circumstances: for example, in places where mountain lion and wolf populations have been decimated, deer populations tend to get out of control in a way that is detrimental to the deer themselves. In such a situation trophy hunting is arguably even virtuous in a slightly Thanos-y way.

    • fion says:

      Least acceptable
      Entertainment
      Cosmetic experimentation
      Farming for meat
      Farming for eggs, dairy etc.
      Aquaculture
      Hunting
      Fishing
      Having pets
      Medical/scientific experimentation
      Most acceptable

      Note that I’ve separated cosmetic and medical experimentation (and added in scientific experimentation, meaning experimentation for the purposes of learning about biology). I do think experimentation on animals is pretty horrible, but with medical/scientific experimentation I think the ends mostly justify the means.

      EDIT: From reading the other comments I would put a slight caveat on hunting and fishing. I was assuming that you were going to eat the animals you’re killing, in which case, hunting and (personal) fishing are better than farming. If you’re not eating them, then these are basically entertainment and come on the other side of farming.

  31. bean says:

    At Naval Gazing, I’ve been doing a series on how to build a modern navy with the frame that a friendly AI gave us (SSC, mostly) an island, and we’re setting up a country on it. It’s mostly intended as a look at the meta-level drivers of naval policy without getting too deep into the object issues. But I’m now curious as to what sort of country the commenters here would build in the first place. Let’s say that we have a medium-sized island (Britain-sized, more or less) somewhere on the planet (I’ve been vague about where to avoid getting distracted with object-level issues). We’re the only people there, and we’ve been magically granted the ability to control it for, say, the next decade. How do we structure the government? Dictatorship of Scott? Democracy? Hereditary aristocracy? What sort of policies do you favor? Who do we let in? Open borders? Points system? Do we have a welfare state? Where does the government get money from? (I’m strongly in favor of getting money from somewhere, because otherwise I won’t be able to build lots of pretty grey ships, er, I want to be able to defend it when the magical protection runs out.) Or do we just let David Friedman (not) run things?

    • dodrian says:

      I say we randomly split the island into two separate populations and do some A/B testing to find out which forms of governance are most efficient.

      The articles of confederation/union/whatever provide the mechanisms for deciding what should be A/B tested. For the initial set-up, people will be randomly allocated to constituencies (say, an even 100). Constituencies elect a single representative from among their number to a parliament by simple majority. Scott is appointed the first Speaker and Supreme Judge. The Speaker is in charge of order, though does not vote, and the Supreme Judge has the power to strike down unconstitutional laws. The parliament has the power to change the constitution by 80% consensus. All other laws passed must be A/B tested: if a ‘federal’ law is voted in by X% of parliament, it is enacted randomly in X% of constituencies (see the sketch below).

      Note carefully that the constituencies are not geographically bound. Setting up local governments (or not) will be an interesting first task for the Parliament.
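
      A minimal sketch in Python of the allocation rule described above. The 100-constituency count comes from this comment; the function name, the seeding scheme, and the example law are invented for illustration:

          import random

          NUM_CONSTITUENCIES = 100  # the initial even 100 from the proposal above

          def enact_ab_law(law_id, yes_votes, total_votes, seed=0):
              """Enact a passed 'federal' law in a share of constituencies equal to its vote share."""
              share = yes_votes / total_votes
              n_treated = round(share * NUM_CONSTITUENCIES)
              rng = random.Random(f"{law_id}-{seed}")  # reproducible, auditable draw per law
              return sorted(rng.sample(range(NUM_CONSTITUENCIES), n_treated))

          # Example: a law passes 75-25, so it is in force in a random 75 of the
          # 100 constituencies, and the remaining 25 serve as the control group.
          print(enact_ab_law("drive-on-right", yes_votes=75, total_votes=100))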

      • Randy M says:

        I don’t think a population size of a few thousand tells us much that can generalize to larger nations, unless bean intends us all to immigrate with extended families in tow (which would be more stable long run).

        • bean says:

          That’s probably the biggest question early on, actually. The few thousand readers of this blog aren’t enough to support even a small warship, so we’re going to need more people. How do we get them and who do we take? Extended families wouldn’t be a bad place to start, but we’re going to need more than that before too long.

          • Nornagest says:

            Given the setup, I’m tempted to suggest human-level AIs in cute plastic chassis.

          • bean says:

            Oh, sorry. The AI has departed for parts unknown, and we’re not quite sure what we did to make it in the first place. And it trashed that research facility, saying it was too dangerous for us to continue.

          • Randy M says:

            We’re the only people there, and we’ve been magically granted the ability to control it

            Curious what you are getting at here; are we magically summoning buildings and machines from the bare rock? Or just magically protecting ourselves from invasion? Obviously if we’re the only people there we don’t need to magically subdue the populace, right? (Let’s hope not; the debates on occultic neo-colonialism would be interminable, especially once Scott summons undead Abraham Lincoln.)

          • bean says:

            Curious what you are getting at here; are we magically summoning buildings and machines from the bare rock? Or just magically protecting ourselves from invasion?

            More of the latter. Basically, our first problem isn’t “How do we stop bad actors from invading?”, because that’s much less fun. Exact details are left to your imagination.

            Obviously if we’re the only people there we don’t need to magically subdue the populace, right?

            Correct.

          • Evan Þ says:

            Depends on just how extended the families are getting. Does my second cousin’s wife’s aunt count?

          • bean says:

            @Evan
            My problem with that is that it doesn’t seem likely to get us the skills we need or the kind of people we want. It might be good for getting an initial core of reasonably trustworthy people, but I’d expect we’re going to want more people than that long-term.

          • Evan Þ says:

            @bean, oh, absolutely. For many social reasons, I recommend more of a points-based system than kinship-based. I thought you were talking about needing more numbers than extended families could give us, though – a kinship-based system has a lot of problems, but that isn’t necessarily one of them!

            More specifically, I urge points based on a weighted average of current occupation, skills, abstract reasoning ability, and willingness to consider opposing points of view. I’d also put in diligence, if we could test for it.

            Note that I am not considering education or IQ, except as they affect the elements I mentioned.

          • bean says:

            @Evan
            Ok, we’re pretty much on the same page. The one thing I would say is that diligence is easy to test. We just make sure to “lose” everyone’s paperwork once or twice before we let them in. Or set up an inscrutable bureaucracy to regulate immigration. Although that might well happen on its own.

          • Nancy Lebovitz says:

            Supposing you have to recruit, about how many people in the whole world do you think really like warships? Or would sign on because they like this sort of big project?

          • bean says:

            @Nancy

            This isn’t a warship thing. The new country was designed to divorce the theoretical discussion from object-level issues, and I wondered what SSC would make of it in general. The Navy stuff is my specialty, but we have people who have strong opinions about economics, immigration policy, and so on. It’s not that much less controversial than the naval stuff we’ve been discussing, either.
            As for numbers, no real idea. But you get people moving into boomtowns, and I expect you’d get something similar here. Particularly if we put together an interesting government package.

          • Evan Þ says:

            @bean, I think that’s a joke, but I’d prefer to test diligence without heaping burning coals on people’s heads, thank you!

      • bean says:

        The idea of an A/B testing government is one that I should have expected, but I don’t think this is a good implementation. The basic problem is that you’re going to end up with an insane hodge-podge of laws, which will be impossible to enforce. Let’s say we pass a national law that we drive on the right by only 75% because of an unusual concentration of British, Japanese, and Australians among the representatives. Do the other 25% get ticketed for driving on the right? Are they just allowed to drive on the left if they want? Do they get tickets for endangering others instead? This seems like the extreme case of everyone signing onto certain laws and the enforcers having to sort them out.

        That said, I think there’s an interesting seed here. It might be interesting to set up a couple of different states with explicitly different policies (libertarian, social democracy, etc), although you then have the issue that it’s not really a country any more so much as something like the EU.

        • dodrian says:

          You would have to figure out how to establish regional governments somehow, which would be responsible for the more mundane things like traffic laws.

          The federal powers would have to be a bit more fleshed out, I guess. The A/B idea was mainly what I was thinking about in relation to taxes & social programs.

          • bean says:

            I’m not sure that would work very well, though. The problem with doing A/B testing person-by-person is that it opens up arbitrage opportunities that wouldn’t exist in a world where we had one country doing A and one country doing B.
            Let’s take the minimum wage. Even if it’s a good idea when applied to the whole population, it’s not hard to see the problems with a minimum wage applied to an arbitrary 60% of the population. The result is almost certainly worse than either applying it to everyone or not having one at all. Having this kind of diversity between states/provinces/cities isn’t such a big deal, because the playing field between me and the guy next door is at least somewhat level.

      • Skivverus says:

        Don’t know about the particulars of legislative structure, but I would like to see IT infrastructure supporting the judicial end of things: a searchable database for laws, ideally allowing any citizen to check (a) whether an action is legal, (b) which laws apply to a given situation, and (c) expiration or review dates for a given law. Also, for that matter, it would (d) notify citizens when/if new laws (or repealed old ones) took effect.
        Finally, for grins and Archipelagic support, (e) publicly labeled, opt-in law packages which make more things illegal for oneself.

        On the other hand, that database would be a prime hacking target (after the friendly AI itself, I suppose). And (e) would probably result in signalling arms races, and I’m not presently sure how to mitigate that.
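
        For what it’s worth, here is a toy Python sketch of the “easy” parts, (c) and (d), plus a crude tag-based stand-in for (a)/(b). All names are invented, and this makes no claim that (a) and (b) are actually this simple; the replies below explain why they aren’t:

            from dataclasses import dataclass
            from datetime import date

            @dataclass
            class Law:
                law_id: str
                text: str
                tags: set             # crude keyword index for (a)/(b)-style lookup
                review_date: date     # (c): when the law expires or comes up for review
                opt_in: bool = False  # (e): stricter rules a citizen may voluntarily adopt

            class LawRegistry:
                def __init__(self):
                    self.laws = {}
                    self.subscribers = []  # callbacks for (d) notifications

                def enact(self, law):
                    self.laws[law.law_id] = law
                    for notify in self.subscribers:  # (d): push changes to citizens
                        notify(f"New law in force: {law.law_id}")

                def repeal(self, law_id):
                    if self.laws.pop(law_id, None) is not None:
                        for notify in self.subscribers:
                            notify(f"Law repealed: {law_id}")

                def search(self, keyword):
                    # (b), very crudely: laws whose tags mention the situation
                    return [law for law in self.laws.values() if keyword in law.tags]

                def due_for_review(self, today):
                    # (c): everything at or past its review or expiration date
                    return [law for law in self.laws.values() if law.review_date <= today]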

        • Nornagest says:

          C and D are easy, but B is hard, and that makes A hard too. Law by its nature has a lot of slop and discretion built in, and there’s very often ambiguity about where its boundaries lie, so in a lot of cases the answer will be “underspecified; you need to read and understand 500 pages of case law to find out” or “depends what the judge ate that morning” or “technically no, but you’ll only get in trouble for it if you piss off a cop”.

          E is an interesting idea and I don’t think I’ve worked out the full implications yet.

          • dodrian says:

            But starting from scratch could we come up with a system that make A & B more possible? What would it look like?

          • Nornagest says:

            Probably, but I’m not sure it’s a good idea. I’ve never been involved at the sharp end of proper law enforcement, but if it’s anything like being a petty Internet tyrant there’s kind of a sweet spot in terms of ambiguity. Too ambiguous and you’ll eventually be tempted to act based on arbitrary personal grudges; not ambiguous enough and other people will eventually be tempted to go hunting for loopholes. Either one creates an adversarial relationship and ultimately makes your job harder.

            I don’t think it’s practical to come up with a set of rules that’s both totally unambiguous and loophole-free, especially if you also need it to be compact and understandable enough for regular people to easily follow with the help of a reference guide. Even as a petty Internet tyrant. And real life’s way more complicated than that.

            There might be something to be said for a Confucian-ish approach that focuses first on instilling virtue in the people responsible for enforcement.

          • Nick says:

            a searchable database for laws, ideally allowing any citizen to check (a) whether an action is legal, (b) which laws apply to a given situation, and (c) expiration or review dates for a given law. Also, for that matter, it would (d) notify citizens when/if new laws (or repealed old ones) took effect.

            C and D are easy, but B is hard, and that makes A hard too. Law by its nature has a lot of slop and discretion built in, and there’s very often ambiguity about where its boundaries lie, so in a lot of cases the answer will be “underspecified; you need to read and understand 500 pages of case law to find out” or “depends what the judge ate that morning” or “technically no, but you’ll only get in trouble for it if you piss off a cop”.

            We could write our laws in a domain specific language; people have suggested this sort of thing (I don’t know how seriously) before. It would enable or even require more conceptual precision and eliminate ambiguities surrounding e.g. “or” and commas, but I’m sure the approach has its own issues, like comprehensibility. And issues of intent vs implementation would probably not go away either.

            Poking around, it looks like there’s been some neat discussion of this.
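
            To make the DSL idea concrete, here is a hedged Python sketch (embedded combinators rather than a purpose-built grammar; the offence and its elements are invented for illustration, and real legal drafting would be vastly harder):

                from dataclasses import dataclass

                # Explicit AND/OR nodes remove the natural-language ambiguity
                # around "or" and comma placement mentioned above.
                @dataclass
                class Fact:
                    name: str
                    def holds(self, case):
                        return bool(case.get(self.name, False))

                @dataclass
                class And:
                    left: object
                    right: object
                    def holds(self, case):
                        return self.left.holds(case) and self.right.holds(case)

                @dataclass
                class Or:
                    left: object
                    right: object
                    def holds(self, case):
                        return self.left.holds(case) or self.right.holds(case)

                # Invented toy statute: trespass while armed, or trespass with theft.
                offence = Or(And(Fact("trespassing"), Fact("armed")),
                             And(Fact("trespassing"), Fact("theft")))

                print(offence.holds({"trespassing": True, "armed": True}))  # True
                print(offence.holds({"trespassing": True}))                 # False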

          • Iain says:

            Another problem with a rigid, static set of laws is that the world is not static. Consider the Fourth Amendment’s guarantee of security in “persons, houses, papers, and effects”. How does that apply to email? The contents of your laptop? Cell phone records? Snapchat messages? The contents of your car? The contents of your Uber?

            The modern world asks questions that were unforeseen (and arguably unforeseeable) at the time that laws were passed. Whether you assign the duty of keeping laws up-to-date to a legislative body or delegate it to a bureaucracy, the burden of pre-emptively answering every possible question would be crushing. Something in the ballpark of the status quo seems like the only feasible solution: you pass laws and establish precedents, and empower your judicial system to reason by analogy when faced with new circumstances.

            Predictable law is good, but it’s not the only goal. Pushed too far, you end up with a legal system that predictably makes bad decisions.

          • Skivverus says:

            Iain, that’s precisely why I mentioned (d). Though I think what you’re getting at is that “who can make changes to the database is important, and there may need to be room for a wider circle of people who don’t have direct authority to change the wording, but do have the authority to annotate it.” Judges, in other words. Or rabbis*.

            Also, Nornagest,

            E is an interesting idea and I don’t think I’ve worked out the full implications yet.

            This made my day.

            *Or [religious text interpreters for religion of your choice].

          • Iain says:

            Iain, that’s precisely why I mentioned (d). Though I think what you’re getting at is that “who can make changes to the database is important, and there may need to be room for a wider circle of people who don’t have direct authority to change the wording, but do have the authority to annotate it.” Judges, in other words. Or rabbis.

            No, more than that.

            I’m saying that thinking of the law as a database is a bad metaphor. It implies that the answers are all written down and you simply have to look them up. My claim is that it is prohibitively expensive (in competent-legal-mind-hours) to preemptively populate a database of answers, and even less feasible to keep that database up to date as society changes around it.

            Google doesn’t precompute the results for every possible search. Instead, it stores raw data in a convenient format, and then queries that data on-demand. Similarly, under the status quo, the law is stored in legislation and precedent, and if you want to know whether something is legal, you query the law by seeking legal advice. Sometimes there is an easy answer; on rare occasions, the question is hard enough that America’s legal algorithm falls back on asking nine smart lawyers what they think.

            You can, if you like, optimize the system to be more predictable. I’m sure there’s some low-hanging fruit in the status quo. But eventually you have to make trade-offs. Both “never believe claims of rape without video evidence” and “always believe claims of rape no matter what” are extremely clear legal standards. They’re also both terrible. To provide just outcomes, the legal system has to wade into shades of grey.

            (This critique is similar in spirit to parts of Seeing Like A State.)

          • Skivverus says:

            Instead, it stores raw data in a convenient format, and then queries that data on-demand.

            Barring semantics-recognizing programs or other computational handwavium, I’m sort of expecting the ‘raw data’ of this hypothetical database to be laws, not verdicts, for pretty much the reasons you state. As both you and Nornagest point out, an answer to a query “is X legal?” might be easily distillable to a yes-or-no, or it might not.
            I think it plausibly could in enough cases to partially automate it, mostly on the grounds of the 80-20 heuristic. There’s a StackExchange for law, after all; would it be so much work to have the actual laws of our hypothetical country posted on an equivalent of it?

          • Iain says:

            Taking a step back: is there currently a crisis where people have difficulty determining whether they’re following the law, but it’s not important enough to bother seeking legal advice?

            I don’t understand what problem this is trying to solve.

          • Evan Þ says:

            @Iain, yes. Lawyers are expensive. It’s out of the question for me to consult them, because I don’t want to spend the money. For a lot more people, it’s even farther out of the question because they don’t have it to spend.

            As for what queries people have, I recommend looking at /r/LegalAdvice. Every day or two, people come there with their problems, get told “go get a lawyer,” and reply “I can’t afford it.”

          • Skivverus says:

            (Ninja’ed by Evan Þ, but still may have some insight so leaving it up)

            “Can’t afford a lawyer” is not an unheard-of state of affairs, and there are as I understand it a substantial number of poor people in prison, but I think I’m approaching it more from the perspective of “beware of trivial inconveniences”: calling up a lawyer takes time, knowledge (of their phone number, and likely their area of expertise) and financial risk (the lawyer may want you to pay them).
            Going the other way, this is a bit of seeing like a state: it makes (or at least attempts to make) the effects of laws more quickly and easily legible, thus allowing better-informed decisions on whether to keep a law when it comes up for review.

          • The Nybbler says:

            @Skivverus

            General rule is that if you can’t figure out if something’s legal without consulting a lawyer, and you’re not wealthy enough to already have one, the answer is “no”. It might be legal but there will be some fiddly set of conditions which will be costly to comply with and with high downside risk if the state or regulator decides you screwed up.

          • Iain says:

            To a first approximation, law is irreducibly complex and lawyers are irreducibly expensive.

            The law is sensitive to the details of a situation. This is good. It reduces the number of court cases that end with blatantly obvious injustice. But sensitivity to detail inevitably requires complexity. In the absence of strong AI, you need a person who can work through that complexity. People who are capable of doing that work are in high demand, and need to be paid somehow.

            I do not think it is possible to create a legal system simple enough to be interpreted by every last citizen without compromising it along another axis. It might be possible to do so in the 80:20 simple case, but then you’ve just pushed the problem up one step: how can you reliably tell when you’re in the unlucky 20? Dunning-Kruger is a real problem.

            That’s not to say that improvements can’t be made on the margins. (I did say “first approximation”.) Overly complicated laws can be simplified. Information can be better organized. Funding for public defenders can be increased. At the end of the day, though, much of the inconvenience here is not trivial. The legal system is complicated because humans are complicated, and there is no easy way out.

          • Skivverus says:

            And now I’m just picturing a machine learning algorithm being given cases with verdicts as training data, aiming to maximize (M*cases assigned – N*cases disputed).

            At any rate, though, I do agree that there’s some irreducible complexity to law (or that it reduces to the Halting Problem, which is arguably worse); I still think it’s possible to make things simpler, more transparent, and more responsive this way.
            As it happens, computer virus detection is, as I understand it, also subject to the Halting Problem; we nonetheless have pretty good antivirus software along with the highly-paid consultants that show up when the software proves insufficient.
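
            As a toy illustration of that objective (the weights M and N and all the case counts are invented), note how the reward structure pushes a model toward punting hard cases to human judges rather than guessing:

                # Reward M per case the model handles, penalize N per handled case
                # that is later disputed; punted cases score zero either way.
                M, N = 1.0, 5.0  # a dispute should cost far more than a routine case earns

                def objective(outcomes):
                    """outcomes: list of (assigned, disputed) booleans, one pair per case."""
                    assigned = sum(1 for a, d in outcomes if a)
                    disputed = sum(1 for a, d in outcomes if a and d)
                    return M * assigned - N * disputed

                bold = [(True, False)] * 80 + [(True, True)] * 20        # handles all 100, 20 disputed
                cautious = [(True, False)] * 70 + [(False, False)] * 30  # punts 30 hard cases
                print(objective(bold), objective(cautious))  # 0.0 70.0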

          • albatross11 says:

            Iain:

            It seems like that makes an argument for reducing the scope of behaviors in which the law is likely to affect you. Ideally, you should have a simple rule for deciding “Is this something that might require a lawyer,” and only a tiny fraction of human actions would get a “yes” answer. My sense is that the range of human activities which can get you in trouble with the law has expanded a lot over the course of the last couple centuries, even though in some areas (speech, sex, Jim Crow) it’s been pared back in big and important ways.

          • JulieK says:

            I think I’m approaching it more from the perspective of “beware of trivial inconveniences”: calling up a lawyer takes time, knowledge (of their phone number, and likely their area of expertise) and financial risk (the lawyer may want you to pay them).

            For many people, even looking things up in a database falls into “beware of trivial inconveniences”; look how many people don’t RTFM.

          • Iain says:

            @Skivverus:

            As it happens, computer virus detection is, as I understand it, also subject to the Halting Problem; we nonetheless have pretty good antivirus software along with the highly-paid consultants that show up when the software proves insufficient.

            Ironically, the consensus among serious security researchers is that most anti-virus software is a net negative for security. To work, anti-virus programs have to insert hooks into your operating system, browser, and so on. These hooks are themselves often vulnerable to exploitation. (For example, Firefox ran into problems trying to do address-space layout randomization, because AV vendors were injecting ASLR-disabled code into the process.) Windows Defender is the one exception, because Microsoft controls both the anti-virus software and the OS it hooks into.

            More broadly: the halting problem is a red herring here. The halting problem only proves that a fully general solution is impossible. For both anti-virus software and automated legal reasoning, the real problem is that we can’t even reliably handle the average case.

            It seems like that makes an argument for reducing the scope of behaviors in which the law is likely to affect you.

            Be specific. Which areas do you think the law should butt out of?

            From my quick scan, it looks like the majority of the questions on r/legaladvice are about civil law, not criminal law. Civil cases aren’t really about getting in trouble with “the law”; they’re about getting in trouble with other people. The legal system just steps in to provide a consistent and impartial mediator.

            To be clear: my position in this discussion is not that meaningful quantitative improvements can’t be made in this area. It’s that you’re not going to get any significant qualitative improvements. If we implement the most feasible version of what Skivverus suggests, our legal system will probably be better than the status quo, but only incrementally so.

          • IrishDude says:

            This whole thread reminds me of a nice discussion in the article Myth of the Rule of Law. It argues Iain’s point, that there is tension between predictability and flexibility (though the final conclusion of the author is probably one Iain wouldn’t agree with). Some highlights:

            “I have suggested that because the law consists of contradictory rules and principles, sound legal arguments will be available for all legal conclusions, and hence, the normative predispositions of the decisionmakers, rather than the law itself, determine the outcome of cases. It should be noted, however, that this vastly understates the degree to which the law is indeterminate. For even if the law were consistent, the individual rules and principles are expressed in such vague and general language that the decisionmaker is able to interpret them as broadly or as narrowly as necessary to achieve any desired result.

            You believe that the law can be reformed; that to bring about an end to political strife and institute a true rule of law, we merely need to create a legal system comprised of consistent rules that are expressed in clear, definite language.

            It is my sad duty to inform you that this cannot be done. Even with all the good will in the world, we could not produce such a legal code because there is simply no such thing as uninterpretable language.

            Let us assume that I have failed to convince you of the impossibility of reforming the law into a body of definite, consistent rules that produces determinate results. Even if the law could be reformed in this way, it clearly should not be. There is nothing perverse in the fact that the law is indeterminate. Society is not the victim of some nefarious conspiracy to undermine legal certainty to further ulterior motives. As long as law remains a state monopoly, as long as it is created and enforced exclusively through governmental bodies, it must remain indeterminate if it is to serve its purpose. Its indeterminacy gives the law its flexibility. And since, as a monopoly product, the law must apply to all members of society in a one-size-fits-all manner, flexibility is its most essential feature.

            It is certainly true that one of the purposes of law is to ensure a stable social environment, to provide order. But not just any order will suffice. Another purpose of the law must be to do justice. The goal of the law is to provide a social environment which is both orderly and just. Unfortunately, these two purposes are always in tension. For the more definite and rigidly- determined the rules of law become, the less the legal system is able to do justice to the individual. Thus, if the law were fully determinate, it would have no ability to consider the equities of the particular case. This is why even if we could reform the law to make it wholly definite and consistent, we should not.”

          • Iain says:

            I have suggested that because the law consists of contradictory rules and principles, sound legal arguments will be available for all legal conclusions, and hence, the normative predispositions of the decisionmakers, rather than the law itself, determine the outcome of cases

            This goes too far. The law consists of rules and principles that are in tension with each other and must be balanced. The normative predispositions of the decision-makers affect the outcome of cases to the extent that they can prioritize one principle over another, but there are limits on how far that can go.

            Aside from that, I more or less agree with everything that you quoted. As you expected, I fervently disagree with the article’s eventual segue towards a “free market in law”. I’ve been arguing that it’s hard to produce a qualitative improvement in the predictability of law, but that doesn’t mean that we should take steps to make law less predictable.

          • IrishDude says:

            The normative predispositions of the decision-makers affect the outcome of cases to the extent that they can prioritize one principle over another, but there are limits on how far that can go.

            Sure, and this objection is also addressed in the article:

            “I have been arguing that the law is inherently indeterminate, and further, that this may not be such a bad thing. I realize, however, that you may still not be convinced. Even if you are now willing to admit that the law is somewhat indeterminate, you probably believe that I have vastly exaggerated the degree to which this is true. After all, it is obvious that the law cannot be radically indeterminate. If this were the case, the law would be completely unpredictable. Judges hearing similar cases would render wildly divergent decisions. There would be no stability or uniformity in the law. But, as imperfect as the current legal system may be, this is clearly not the case.

            The observation that the legal system is highly stable is, of course, correct, but it is a mistake to believe that this is because the law is determinate. The stability of the law derives not from any feature of the law itself, but from the overwhelming uniformity of ideological background among those empowered to make legal decisions. Consider who the judges are in this country. Typically, they are people from a solid middle- to upper-class background who performed well at an appropriately prestigious undergraduate institution; demonstrated the ability to engage in the type of analytical reasoning that is measured by the standardized Law School Admissions Test; passed through the crucible of law school, complete with its methodological and political indoctrination; and went on to high-profile careers as attorneys, probably with a prestigious Wall Street-style law firm. To have been appointed to the bench, it is virtually certain that they were both politically moderate and well-connected, and, until recently, white males of the correct ethnic and religious pedigree. It should be clear that, culturally speaking, such a group will tend to be quite homogeneous, sharing a great many moral, spiritual, and political beliefs and values. Given this, it can hardly be surprising that there will be a high degree of agreement among judges as to how cases ought to be decided. But this agreement is due to the common set of normative presuppositions the judges share, not some immanent, objective meaning that exists within the rules of law.

            In fact, however, the law is not truly stable, since it is continually, if slowly, evolving in response to changing social mores and conditions. This evolution occurs because each new generation of judges brings with it its own set of “progressive” normative assumptions. As the older generation passes from the scene, these assumptions come to be shared by an ever-increasing percentage of the judiciary. Eventually, they become the consensus of opinion among judicial decisionmakers, and the law changes to reflect them. Thus, a generation of judges that regarded “separate but equal” as a perfectly legitimate interpretation of the Equal Protection Clause of the Fourteenth Amendment (31) gave way to one which interpreted that clause as prohibiting virtually all governmental actions that classify individuals by race, which, in turn, gave way to one which interpreted the same language to permit “benign” racial classifications designed to advance the social status of minority groups. In this way, as the moral and political values conventionally accepted by society change over time, so too do those embedded in the law.”

        • Skivverus says:

          For (a) and (b), and incidentally for filters on (d), my first inclination is currently to gesture vaguely at keyword or hashtag-based systems, recognizing that this is unlikely to be an ideal solution but is at least in the vicinity of the status quo of industry-based (or jurisdiction-based) regulation.
          I suppose this would effectively end up being “500 pages of case law to sift through”, and the solution that comes to mind there is ranking (effectively, voting on individual laws), which I suppose would be a point (f): allowing citizens to securely* record their opinions on any given law.

          *Preliminary definition: “as close to coercion-proof, sockpuppet-proof, and hack-proof as feasible”

        • Drew says:

          Criminal codes tend to be pretty short. Common law principles are fairly intuitive, and a movement to streamline criminal codes should be politically feasible.

          The problem comes when you get to the regulatory sorts of law. There are good reasons why we don’t want people pouring motor oil into municipal drains, or burning tires on their lawns.

          You could handle them with a common-law-style catch-all. “Whoever disposes of their garbage in an excessively obnoxious way, shall be guilty of jerkishness in the 2nd degree.”

          The problem is that these sorts of laws are inherently ambiguous. That’s fine when we’re talking about traditional criminal offenses. The answer to “EXACTLY how hard can I hit a stranger before it’s battery?” is “don’t punch strangers, you jerk.”

          Ambiguity is bad when we’re talking about actions that will be done millions of times a year. Jiffy-Lube has a legitimate need to dispose of millions of quarts of motor oil. They need a clear standard.

          If you leave the laws ambiguous, you end up with the law-as-written (searchable) and a law-as-enforced (very hard to search). If you spell things out in detail, then lawbooks get absurdly long.

          The third option is to punt, and say that people need to comply with standards set by [environmental agency]. That keeps things kind of finite, but means that you need to search a bunch of other sources to know what’s actually allowed.

          I share the desire for easy-to-understand regulations, but don’t know that it’s feasible.

          • Evan Þ says:

            How about something like @Skivverus’s hashtag system? “Whoever disposes of their garbage in an excessively obnoxious way shall be guilty of an offense, as quantified by: For motor oil, see schedule 1, for fluorescent lightbulbs, see schedule 2, etc.” Then you can glance over the list and only look in detail at the appropriate schedules.

          • Skivverus says:

            Evan, that’s pretty much exactly what I was trying to gesture at, thanks.
            Though I’m not wholly sure anymore whether I should be gesturing more at hashtags or at wikis here. Either way, the main goal is to have the database searchable/browse-able in as close as possible a manner to associative memory. And maybe more memorable names than “schedule 1” – say, “oil-based chemicals”, or “hazardous if broken”. On the other hand, avoiding the lure of the forbidden may have to mean dry names.

        • J Mann says:

          I had thought somebody was doing a legal research wiki, but couldn’t find it on casual search.

          That’s a good idea, but it’s hard, for the same reason that Excel techniques are collected in StackExchange rather than organized systematically – there are just so many cases that it’s hard to write one outcome for all of that.

          As an example, let’s say you get a straightforward legal question: the window planter flew off my house in the wind and scratched my neighbor’s car in his driveway; do I have to pay for the damage?

          The answer is: what country and state do you live in, and do you live in a smaller unit (city, private subdivision), etc. that might have some relevant laws? Then for your US, state, the relevant general rule is that you have an obligation (a) to exercise ordinary care to prevent foreseeable damage, as well as (b) to obey laws that are created for the safety of others, assuming that your neighbor is in the class of others that the law was intended to protect.*

          As to (a), there aren’t any easily locatable house-maintenance cases from your state that provide guidance, but there are a lot of “tree falls onto the neighbor’s property” cases. Based on those, the questions we would expect the judge to ask would be whether your window box was secured in a reasonable manner, and whether you had reason to know that it might be blown off. To complicate matters, there are certain very specific kinds of evidence that are more likely to be convincing on these issues than others, although IMHO evidentiary law is probably easier to organize and codify than the application of law to facts.*

          As to (b), we’ll need to know whether your state, local, subdivision, etc. building codes apply any relevant standards to the maintenance of your window box, and then we’ll need to know whether those standards establish an enforceable duty and whether they are intended to protect people like your neighbor, and then we’ll need to know whether you arguably violated them.*

          After all this, my experience is that non-lawyers attempting to apply the law to facts very often convince themselves that they are completely in the right, because the skill of trying to guess how a neutral party like a judge or jury will apply law to facts doesn’t come very naturally.

          None of which is to say this isn’t a good project – I think it is! It’s just hard.
          * This is hypothetical, not legal advice!

          • Iain says:

            After all this, my experience is that non-lawyers attempting to apply the law to facts very often convince themselves that they are completely in the right, because the skill of trying to guess how a neutral party like a judge or jury will apply law to facts doesn’t come very naturally.

            This.

            Making it easy for people to find the relevant laws is an (admittedly large) implementation problem. Making it easy for people to correctly apply those laws to their own circumstances without bias is a more fundamental problem.

      • James C says:

        I say we randomly split the island into two separate populations and do some A/B testing to find out which forms of governance are most efficient.

          I wonder if the right-hand side of the island will skew right-wing and vice versa.

        • moonfirestorm says:

          After a few centuries, change the orientation of official maps and see if the politics shift accordingly.

    • Rowan says:

      Since we’re only a few thousand people and we don’t want to get Eternal Septembered (or be a mostly-empty island with one small town) when the 10 years are up, we should definitely do Hereditary Aristocracy. We could just be very strict about who gets to be a citizen, but a) “0.1% of the population are actually citizens!” has even worse optics than an aristocracy and b) on a personal scale, it’s way cooler to be a baron than a citizen.

      • bean says:

        I threw that in because I was trying to encourage unusual solutions, but I certainly didn’t expect to be taken seriously. Still, you make a good case for it, and I’m in support. I assume we’d do a Commons/Lords style setup, with actual power in the Lords.

        • arlie says:

          Fictional example: David Weber’s star kingdom of Manticore.

          Personally, I like the idea of different viewpoints/incentives needing to agree to get anything done. I can imagine a House of Experts or House of Long-Term Residents (families of original settlers, etc. – your nobility, sort of), plus a House elected by universal suffrage. But I’m less comfortable with giving one House all the actual power – or even most of it. At that point, there’s not much point to the other House(s).

          • bean says:

            When I said “actual power in the Lords”, I didn’t mean “all the power in the Lords”. I meant “as opposed to the current British system, where the Lords are not that important.” But I can definitely see how you could read it the other way.
            The Manticore parallel is a pretty good one, actually. Just so long as we do something to keep the blood fresh there. I really don’t want a High Ridge ministry.

          • arlie says:

            Alas, I fear that human nature being what it is, we’ll have both the High Ridges and the Youngs eventually 🙁

            But that’s why I find myself toying with some kind of House requiring some kind of testable qualification.

            Manticore had a competence test before royal offspring could be counted as Heirs. Perhaps something similar for inheriting noble status? That wouldn’t exclude the lazy, malevolent etc., but at least there’d be no imbeciles or lunatics, even if these nobles became as inbred as the European royal families.

          • Evan Þ says:

            Maybe time-limit the Lords having power, or even make it non-hereditary? We don’t want to get Eternal September’d, but our future grandchildren shouldn’t have similar advantages over the grandchildren of people who move to our island next year.

            Alternatively, let the Senate be elected by current citizens or people who have been citizens for twenty years (or however long). We get to elect it for the first twenty years; after that, it’s elected by people who’ve had twenty years to soak up our culture.

          • bean says:

            Alas, I fear that human nature being what it is, we’ll have both the High Ridges and the Youngs eventually 🙁

            Obviously. What I’m hoping to avoid is a High Ridge government.

            But that’s why I find myself toying with some kind of House requiring some kind of testable qualification.

            It would be nice, but that kind of thing is difficult to set up so you can’t game it. “He’s incompetent and useless.” “Of course you’d say that, he’s on my side politically.” “You’re just defending him because of that.”

            Alternatively, let the Senate be elected by current citizens or people who have been citizens for twenty years (or however long). We get to elect it for the first twenty years; after that, it’s elected by people who’ve had twenty years to soak up our culture.

            That’s not a bad idea.

            On this front, I’m actually toying with the “all of the above” solution. We have one chamber with standard boring universal representation. The other one gets all sorts of different representatives. Maybe some are hereditary. Some are elected by people who have been citizens a long time. Some are elected by a 1 tax dollar = 1 vote system. Some are life peers. Maybe some come from a functional constituency system. It should at least make the debates more interesting to watch.

          • albatross11 says:

            Beowulf is supposed to have a voting system based on occupations: doctors get a certain number of seats on the board, so do lawyers, so do engineers, etc. I’m not sure how that would really work out, but it’s interesting as a thought-experiment.

          • Evan Þ says:

            @albatross11, that sounds sort of like the Irish Senate. On the other hand, it has very little power; I assume that its elections would work differently in practice if it had actual power.

    • marshwiggle says:

      Epistemic status: Wild idea that would probably fail.

      Government – citizens vote on priorities, SSC continues to exist and hashes out implementation (including delegating to officials or whole departments). SSC could set referendum schedules and even what was on each referendum as well.

      Getting population and an economy up and running – lots of people would invest a lot of capital to move to a place that was known to be safe for a decade and had a good education system. Evidence: lots of people already do that as best as they can. We could put together a good education system, and lots of people here could use that capital pretty well in startups – and lots of the people coming over with capital could assist with the startups. We could also put together a decent regulatory environment for startups. We’d need some traditional businesses as well of course, and specialists, and educators, so we’d need separate programs to let in immigrants in each of those categories. All together I think that would get a decent mix of capital, genetics, skills, and culture.

    • There’s already a blueprint.

      Shireroth was Scott’s adolescent hobby.

      • bean says:

        I know about Shireroth, but I’m not necessarily asking Scott what he’d do. He’d be an important voice in the discussion, certainly, but not the only one.

    • arlie says:

      The government thing is interesting, but the first thing I want to know is how we plan to make our living(s). Related to that is how rich and privileged our society will become, once we’ve actually built some infrastructure, not to mention homes etc.

      My instinct is that this feeds directly into what kind of government structures are sustainable, and for that matter what kind of immigration is reasonable.

      And it certainly feeds into the question of how many pretty grey ships we can afford to build, and what we’ll have to give up in order to build them.

    • cassander says:

      I’ll defend the basic Swiss model as the only democratic system worth working from. Control highly devolved to relatively small regional governments, with a lot of negative direct democracy (that is, the voters can strike down laws and other government actions, but not initiate them). In Switzerland this is more of a norm than a rule, and I’d make it explicit. Minimal interaction between federal and regional government. A bicameral legislature, one chamber for the voters and one for the states, though I would require that the council of states specifically be appointed by the executives of the states and serve at their pleasure. Parliamentary election of a very small cabinet (7 people) that serves as collective head of state.

      • Extreme devolution has some bad failure modes. Look at what cults like the FLDS are able to get away with once they take over local government.

        • cassander says:

          compared to the failure mode of some cult taking over the whole government, I like my odds. It’s a hell of a lot easier to move to the next county than the next country.

    • onyomi says:

      Thinking about how to have David Friedman (not) run things (that is, how to end up with a purely private property society), assuming we’re starting with an uninhabited place, the best idea I can come up with is to auction all the land off to the highest bidders, but with some sort of restrictive covenant or something attached to it to the effect of “this will always be private land; it will always be under the sole sovereignty of the owner(s) of that private land; it can never be sold or donated to a ‘government’ or ‘commons’ of any kind.”

      Of course, part of the problem is I also think culture, traditions, and conventions are more important than written law, but hopefully selling the property to the highest bidders with such restrictions attached would attract the sort of person likely to create such a (libertarian) culture. It would probably also be helpful for our purposes if our island were a nice place to live but relatively resource-barren, thereby attracting buyers who want to live there and establish a new community, rather than the highest bids from e.g. oil and mining companies.

      • bean says:

        That just seems likely to attract sophistry on what makes a “government” or “commons”. In the sense of “No, this isn’t a government, it’s just a private association attaching restrictive covenants to anyone who chooses to live in this city I’m building, and the association obviously has to have land and resources to enforce it.” Also, who enforces these rules? You aren’t going to have a government to do it, obviously.

      • That seems like an excellent scheme for hastening the transition from libertarianism to feudalism.

    • Kestrellius says:

      This is only tangentially related, but I am so strongly reminded of this that I couldn’t help but post it. Actually, maybe that’s where you got the idea; have I seen you on Twenty Sided? I know there are a few people who post both here and there.

      Context of the link: The blogger had been doing an extremely long and in-depth critique of the Mass Effect trilogy, particularly the latter two games, in which a covert terrorist organization named Cerberus demonstrated the ability to field large fleets of advanced warships and armies of well-equipped troops, capable of challenging major galactic powers. In order to prove the absurdity of this, the writer proposes a thought experiment in which you are given a magically hidden island and some resources, and are asked to build a modern warship in complete secrecy.

      • bean says:

        Not even remotely related, actually. I’ve only ever been there for DM of the Rings, and that was a long time ago.

        Skimming the article, he does a decent first-level job of explaining why building a 9000-ton ship in secret is impossible. But he misses the bigger challenge of building a warship in secret; the bits that separate a mere ship from a working warship are huge. I work on a 50-person team responsible for a moving map on a military aircraft. Making something like Aegis (which doesn’t even get mentioned) in secret, without lots of tests and a fair bit of operational experience, is just insane.

  32. bean says:

    Naval Gazing is celebrating the 102nd anniversary of the Battle of Jutland (May 31st) this week by republishing last year’s series with illustrations and maps.
    So far, I’ve covered the strategic background, the preliminaries and run to the south, the run to the north and deployment and the last clash of the main fleets.

    • bean says:

      Happy Jutland Day, all!
      Today, the narrative comes to the end of the battle proper, with the night action as the Germans slipped by the British.

      • gbdub says:

        What shocks me about the night action (and really the whole battle) is how unaggressive the British seem to be. One would not think that would be an issue for Navy men raised on legends of Nelson.

        Basically the entire British naval strategy in the North Sea revolved around forcing or tricking the numerically inferior German fleet into a decisive battle and more or less completely destroying it. This could only happen with one of:
        1) Colossal stupidity/incompetence on the part of the Germans
        2) Dumb luck
        3) A well-coordinated and well-executed plan that used superior British numbers to trap the High Seas Fleet and force a climactic battle.

        Only one of those was under British control, and they completely flubbed it, seemingly content to rely on the first two. Yeah, radios and coordinated intelligence were in their infancy – but how could they NOT know that excellent signalling and coordination of some type was going to be key to their strategy, and rehearse it religiously? Instead they have this weird situation where every captain apparently had a blind belief that the system would work perfectly, without taking any personal responsibility to make sure it did.

        Despite all that, the Brits found themselves in a position to force the battle they supposedly wanted, but threw it away. First came Jellicoe’s turn-away, which may have been the safe/smart move, but certainly was not the move of an aggressive commander confident in his ships and his numerical superiority.

        Then you’ve got the apparent reluctance of Room 40 to reveal all their cards – what were they saving them for? Who cares if they know you broke their codes after you’ve sent their whole fleet to the bottom? This was their one chance to do it!

        And finally the night action, where apparently every commander got it in their head that Jellicoe was a psychic, and clearly would have told them if he wanted to do anything other than wave politely at the HSF as it steamed past them. One would think that the obvious conclusion would be “hey, I see some Germans, and haven’t been told to attack them. Gee, I know the whole point of this little joyride was to blow up a bunch of Huns, so if ol’ Jelly hasn’t told me to go blow up these particular Huns he must not know about them. I jolly well better tell him (and maybe send some steel downrange while I’m at it)!” Or the completely ridiculous excuse of “well I didn’t want to give away the position of the fleet by blowing away this dreadnought right in front of my damn guns”. Again, that was the whole point of the exercise! Bring the two fleets together, and sink the German one.

        Basically, it seems like the British were unprepared or reluctant to make the aggressive moves they should have known they’d need in order to execute their supposed grand strategy. Why even have that strategy if you were going to wimp out at the key moment?

        • bean says:

          I think you sort of have it backwards. The one thing that the British absolutely positively could not let happen is the destruction of the Grand Fleet. Jellicoe was famously the only man who could lose the war in an afternoon, and the turn-away reflects that. It was unquestionably the right decision. I examine this more in Parts 6 and 7, but destroying the High Seas Fleet doesn’t gain them all that much in the long term. Maybe you can break into the Baltic and help the Russians put down the Revolution. Borkum becomes a base against the U-boats. But you’re looking at cutting maybe 10% off the war. If they lose their fleet, then they lose the war. End of story.

          Then you’ve got the apparent reluctance of Room 40 to reveal all their cards – what were they saving them for? Who cares if they know you broke their codes after you’ve sent their whole fleet to the bottom? This was their one chance to do it!

          Two points:
          1. Room 40 was not an ideal setup. Basically, interpreting that kind of data is a specialized skill, which nobody had figured out at the time, so Room 40 just passed raw data to the operations people. The OIC of WWII was the first time someone did this right. They saw the situation and the data, and got to meld it together before passing it off. Room 40 almost certainly wasn’t getting the data on what Jellicoe was reporting (assuming anyone ashore knew), and thus had other priorities.

          And finally the night action, where apparently every commander got it in their head that Jellicoe was a psychic, and clearly would have told them if he wanted to do anything other than wave politely at the HSF as it steamed past them.

          I believe I once characterized the night action as something I’d dismiss as a particularly stupid and implausible alternate history if it hadn’t actually happened. So I don’t have a full explanation. There might have been reasons they’d emphasized following orders over initiative, and I suspect that nobody particularly wanted to fight a night action. Those were incredibly messy things in the days before radar and good plotting. Also, fatigue almost certainly played a part.

          Basically, it seems like the British were unprepared or reluctant to make the aggressive moves they should have known they’d need in order to execute their supposed grand strategy. Why even have that strategy if you were going to wimp out at the key moment?

          Their grand strategy wasn’t to destroy the High Seas Fleet. Their grand strategy was to maintain control of the sea. By that standard, they won the battle. Risking the fleet at night (where the risks were higher) might well have been seen as a bad idea.

          • gbdub says:

            There might have been reasons they’d emphasized following orders over initiative

            That’s kind of what I was getting at, but it just feels like an attitude you’d get in a world with perfect communication, not in a world where most of the time anything beyond the horizon is unreachable and you expect to be on your own.

            The one thing that the British absolutely positively could not let happen is the destruction of the Grand Fleet. Jellicoe was famously the only man who could lose the war in an afternoon

            Then why offer battle and pursue the Germans at all? Jutland was risky – if they wanted to minimize risk, the British could have been even more defensive. But if they wanted to cripple the German fleet, they were unwilling to grab at the opportunities they needed to do that. They took a big risk by (mostly voluntarily and intentionally) getting into the big battle in the first place, but then mostly refused to take smaller risks to collect the payoffs that would have made the initial risk worth it.

          • bean says:

            That’s kind of what I was getting at, but it just feels like an attitude you’d get in a world with perfect communication, not in a world where most of the time anything beyond the horizon is unreachable and you expect to be on your own.

            But it might well be the attitude you get when you go from a world where the horizon is the limit to one where it isn’t. The right balance for this kind of stuff wasn’t properly worked out until WWII. Rules of the Game is on my shelf and I believe it addresses this, but I haven’t gotten around to reading it yet.

            Then why offer battle and pursue the Germans at all? Jutland was risky – if they wanted to minimize risk, the British could have been even more defensive. But if they wanted to cripple the German fleet, they were unwilling to grab at the opportunities they needed to do that. They took a big risk by (mostly voluntarily and intentionally) getting into the big battle in the first place, but then mostly refused to take smaller risks to collect the payoffs that would have made the initial risk worth it.

            What does not offering battle look like? When do they fight the Germans? When they try to leave the North Sea? When they’re camping off the British coast? At some point, if the other guy rides out, you have to offer battle. But you can choose how satisfied you are with simply seeing him off as opposed to crushing him. And the former is safer, which in this case means I think it was the better option during the turn-away. The night action was the result of stupidity and doctrinal issues, but that kind of stuff is hard to get right.

          • bean says:

            I’ve looked into this more, and found confirmation of a couple of things. First, the British were genuinely terrified of a night action. Those are chancy things, and Jellicoe believed it to be a case where the Germans might reverse the balance of forces. Also, the British were not particularly good at recognition, which is a lot harder than it sounds. In Fighting the Great War at Sea, Friedman points out that initiative is a dangerous thing, particularly when you’re dealing with a force the size of the Grand Fleet, and could easily trigger a fatal melee, with lots of Blue-on-Blue action.

        • John Schilling says:

          I think you sort of have it backwards. The one thing that the British absolutely positively could not let happen is the destruction of the Grand Fleet.

          Note that this has been true since approximately the days of the Spanish Armada, and has been hard-coded into British naval strategy and tactics for most of that period. Literally hard-coded; for several centuries the Royal Navy’s Permanent Fighting Instructions only included flag and semaphore codes for conservative, essentially defensive fleet tactics to make sure no damn fool admiral tried to actually win a naval battle.

          If the status quo is that you rule the seas already, you don’t have to win battles, you just have to not lose. And for several centuries, the British did manage to rule the seas with essentially an unending series of draws where the enemy never quite managed to land an army on English soil or strangle England’s trade or break England’s blockade, even if it looked for a while like they might come close.

          Nelson was an outlier; the Royal Navy favored aggressive cruiser captains but very cautious admirals. And if any of Nelson’s aggressive moves had not led to victory, his “legend” would have been very different.

          • Lillian says:

            It was not really Nelson who was the outlier so much as the entire period from early in the Seven Years’ War through to the end of the Napoleonic Wars. It started with the execution in 1757 of Admiral Byng after the Battle of Minorca for “failing to do his utmost to take or destroy the enemy’s ships”. He was explicitly punished for not being aggressive enough, a change of policy brought about by the Royal Navy’s frustration with its lack of success in the early years of the war. Voltaire sardonically commented that the Royal Navy had to execute an admiral from time to time “pour encourager les autres” (“to encourage the others”), but encourage them it did. British naval commanders were very aggressive in conducting combat operations for the rest of the war. Most dramatically and crucially at Quiberon Bay, where the British pressed battle into a literal fucking storm in close waters unfamiliar to them, and yet emerged victorious.

            You could say, then, that Nelson’s aggressiveness half a century later was old-fashioned rather than newfangled. It had, however, passed completely out of fashion by the First World War, as Britain got comfortable with her long-unchallenged rule of the seas.

    • Lambert says:

      Is it me, or do both sides at Jutland seem terribly incompetent?
      I get that WWI and WWII both involved a lot of mistakes on both sides in all theatres, but that account of the battle seems to involve a lot of random meandering around the North Sea for no reason, miscommunication, and other dumb stuff.

      • James C says:

        In an era before radar and even reliable radio communication there’s a limit to how coordinated a fleet at sea can be even at the best of times. Jutland has the unfortunate added complication of being the first battle of its kind, so many of the practices and procedures that were common sense by WWII were still being worked out.

      • bean says:

        By modern standards, they were, but it’s unfair to judge them by those standards. The big difference was a very different concept of how to use information in battle. Keep in mind that the senior officers had come of age in a world with no radio, where ships at sea were entirely on their own once beyond the horizon. They hadn’t managed to grasp how important the act of reporting in is. This is particularly apparent during the night action, when lots of captains assumed Jellicoe already knew what they had seen, and thus did nothing. The British were also a bit paranoid about using the radio for various reasons, and some of their people just forgot to use it. (This is leaving aside stuff like Beatty’s choice of a Signal Lieutenant who couldn’t tie his own shoes, which I can’t defend.)

        The British did have the advantage of a plot, which took all available information and integrated it to form a picture of the battle. They still had a lot to learn about how to do it well, but it gave Jellicoe a major advantage. The Germans were somewhat better about communications, but Scheer had to keep the whole battle in his head.

        This is an area of my interest, and one I intend to cover in some detail at some point in the future. Can’t say exactly when, sadly.

        (And as James points out, they had no radar.)

        • cassander says:

          I like to point out that Admiral Fisher’s first posting was to a ship of the line, and when he left the service for the last time they were building aircraft carriers. Fisher was about 20 years older than Jellicoe, but still, that’s a lot of change to see in one lifetime.

          • bean says:

            I’m actually writing a post on Fisher right now, and that’s pretty much my opening line. But yes, it’s rather amazing that anyone did as well as they actually did given the amount of change involved.

      • gbdub says:

        In what sense did the Germans seem incompetent? Apart from the (major) communication error that resulted in the submarine trap not being sprung, things seem to have gone about as well as possible for the Germans once the battle was actually joined. Ultimately they did more damage, fairly deftly escaped several bad tactical scenarios, and ended a battle that could have gone very badly for them with their fleet mostly intact.

        • Lambert says:

          True.

        • Protagoras says:

          Though it is of course not specific to this engagement (and for the various reasons bean mentions, in this engagement it didn’t hurt them as much as it would have if the British had been more competent), the Germans definitely lose points for being insufficiently paranoid about the possibility of communications being intercepted and codes broken.

          • bean says:

            That’s a pretty general problem. It affected them in both world wars, but it also affected the British, the Americans, and I believe the Russians, just off the top of my head.

        • cassander says:

          The German incompetence is reflected in the total lack of an overall strategic plan and their failure to make use of the fact that they could take the initiative whenever they wanted. They should have been continually forcing the Grand Fleet to sortie over pre-positioned submarine screens.

    • bean says:

      And for the day after Jutland, Aftermath and Analysis.

    • bean says:

      The Jutland series concludes with a look at other paths the battle could have taken.

      Thanks to everyone for reading and commenting. It’s been fun.

      • Andrew Hunter says:

        When you started saying “what if the British had won” my immediate thought, which I’m glad to see came up in your analysis, was a surprising “so what?” It’d have been great for English morale and the honor of the fleet if they’d routed and sunk the German DNs, but “a ship’s a fool to fight a fort” held true until large-scale carrier aviation. (In fact, at least as presented in Massie, it’s stunning to me how enthusiastic Fisher was about Baltic operations. Why was he in love with that plan when he had hated the Dardanelles, and been proved right???)

        Opening traffic to the Ruskies is one thing, I guess. (Were the northern convoys insufficient? Too limited by weather/ice?) But it is interesting – and it surprised me when the thought came into my head – that as far as I can see, giving the British fleet total and unquestioned supremacy at sea would not have meaningfully changed their ability to operate against Germany.

        Sad, really.

        • bean says:

          Fisher is genuinely hard for me to understand. On one hand, he was clearly a visionary, and right about more things than anyone else around. On the other, he’d latch on to weird and stupid ideas like Incomparable, and I’d probably classify the Baltic campaign under the latter. But it looks like he never explained the whole scheme to anyone (classic Fisher) so we don’t know exactly what he was thinking.

          Why was he in love with that plan where he had hated the Dardanelles, and been proved right???

          He hated the Dardanelles because it was an alternative to the Baltic. Also, it is fair to point out that Gallipoli was a terrible place to land troops, and the North German coast is somewhat better.

          I’m not sure the effect of a victory would have been nothing at all. With the HSF intact, the British were forced to husband the Grand Fleet, and not do risky things with it. More than that, they couldn’t do risky things in general near the German coast, because they’d run into the High Seas Fleet. Suddenly, sending expendable monitors and pre-dreadnoughts into the Baltic or into the approaches of Wilhelmshaven becomes a viable option. And don’t discount the ability of the Baltic Scheme to tie up German troops, even if never executed. The Germans couldn’t take the chance that the British were serious about it.

          All that said, my estimate of a British victory at Jutland is the war ending a couple of months sooner, not suddenly coming crashing to a halt. When the choice is between not losing for certain on one hand and, on the other, a 50/50 gamble between victory 10% sooner and outright defeat, you take the sure thing.

  33. J Mann says:

    Question: Is there some subcategory of paranoia where you specifically start suspecting that other people privately think you’re a jerk?

    I’ve had a few specific days where it felt intuitively like several people were all annoyed with me. Obviously, some other possibilities are:

    1) It’s just a coincidence that a number of people were annoyed with me or read that way on the same day;
    2) I was unusually annoying that day in a way I didn’t perceive;
    3) I’m a jerk.

    It’s not debilitating or even super inconvenient, but I’m curious. For reference, I probably feel an intuitive sense of impending disaster several times a year, but for shorter times. (And I haven’t been able to find a pattern or cause for that either.)

    • Question: Is there some subcategory of paranoia where you specifically start suspecting that other people privately think you’re a jerk?

      I also suffer from this, perhaps more often than you do.

    • Randy M says:

      If people have a low chance of just randomly waking up on the wrong side of the bed and you interact with several people each day, there are going to be a few days sprinkled in where most of them are just grumpy for no reason. Add in your innate pattern recognition (even when there is no pattern there) and that could explain it. (A rough back-of-the-envelope version of the arithmetic is sketched below.)
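
      A minimal sketch of that arithmetic in Python, with purely illustrative assumptions: the per-person grumpiness probability, the number of daily interactions, and the “several people” threshold are all made up for the example, not taken from anywhere.

        from math import comb

        p_grumpy = 0.05   # assumed chance any one person is randomly grumpy on a given day
        n_people = 8      # assumed number of people you interact with per day
        k_threshold = 3   # "several people seem annoyed" = 3 or more of them

        # P(at least k of n independent people are grumpy): a binomial tail sum
        p_day = sum(comb(n_people, k) * p_grumpy**k * (1 - p_grumpy)**(n_people - k)
                    for k in range(k_threshold, n_people + 1))

        print(f"Chance of a 3-plus-grumpy-people day: {p_day:.4f}")   # about 0.006
        print(f"Expected such days per year: {p_day * 365:.1f}")      # about 2

      Under those assumptions you get roughly two such days a year from chance alone, which lines up with the “few days sprinkled in” intuition.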

    • Nornagest says:

      Question: Is there some subcategory of paranoia where you specifically start suspecting that other people privately think you’re a jerk?

      I think that’s just regular social anxiety.

      • christianschwalbach says:

        I disagree. I have had days where I’ve been more in tune with people’s reactions and attitudes towards me, and often I over-estimate their annoyance, but this is rarely accurate. Other days my mind is off in la la land and I actually do quite well and am bubbly with people….

    • C. Y. Hollander says:

      4) They didn’t actually find you as annoying as their reactions led you to feel.

      (That may be the ‘possibility 0’ which you alluded to but didn’t spell out, but the word “just” in possibility 1 implies that the annoyance is not something you’ve questioned, even if the suspicion that they privately think you’re a jerk is.)

    • doubleunplussed says:

      Irritability makes you think people are more angry or impatient than you would otherwise perceive.

      One time I thought everyone was being really short with each other in a group chat. Later I read it and realised it was totally normal and it was just my perception.

      If you’re sleep-deprived or in caffeine withdrawal, you might be irritable. Once I realised this it was really eye-opening; it’s something that affects how people interact surprisingly often.

    • outis says:

      I’ve had a group of people cut ties with me because they thought I was annoying. In that case, one of them was kind enough to explain to me that the other people thought so, but that is a rare occurrence. I’m sure it has happened many times before, with no explanation; and I am sure that there are people who never get that explanation, and just keep having people pull away from them.