OT125: Opentathlon Thread

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit or the SSC Discord server – and also check out the SSC Podcast. Also:

1. Those of you who don’t use an ad-blocker may notice some more traditional Google-style sidebar ads. I’m experimenting to see how much money they make vs. how much they annoy people. If you are annoyed by them, please let me know.

2. Someone is doing one of those tag your location on a map things for SSC users. If you sign up, you may want to include some identifying details or contact information, since right now most of the tags don’t seem very helpful unless people are regularly checking their accounts on the site.

3. I’m considering a “culture war ban” for users who make generally positive contributions to the community but don’t seem to be able to discuss politics responsibly. This would look like me emailing them saying “You’re banned from discussing culture war topics here for three months” and banning them outright if they break the restriction. Pros: I could stop users who break rules only in the context of culture war topics without removing them from the blog entirely. Cons: I would be tempted to use it much more than I use current bans, it might be infuriating for people to read other people’s bad politics but not be able to respond, I’m not sure how to do it without it being an administrative headache for me. Let me know what you think.


1,121 Responses to OT125: Opentathlon Thread

  1. thevoiceofthevoid says:

    @Scott: I’d like to register my annoyance with a Google-style sidebar ad. This ad was flashing at me in all the colors of a poorly-paletted rainbow as I tried to read a comment thread. If it were static I’d have probably ignored it and possibly not even noticed it consciously, but having something actively flashing on the side of the screen is acutely distracting and annoying.

  2. metamechanical says:

    I missed the classified thread, but here’s a link anyway.

    I just released a small puzzle game on itch.io.

    Please check it out, and feel free to give me feedback. Thanks!

    https://metamechanical.itch.io/text-mode-adventure

    • dick says:

      Neat! Considering that it’s free I don’t think you need to wait for a Classified thread, and it’d be fine to repost it in the new OT so more people see it.

    • On my Mac, the cursor vanishes when it passes over the black area, so I can’t click on the “start new game” marker.

    • helloo says:

      Stuck on last level.
      Spoilers ROT13ed.
      V’z cerggl fher V xabj jung vf fhccbfrq gb or qbar, ohg gur gbc cneg vf whfg fybjre guna gur obggbz cneg naq V pna bayl unyg gur zvqqyr naq pna’g fybj be fgbc gur obggbz cneg.

      Fairly difficult; I got stuck for a while on, I think, the 7th and 12th.
      Could use a reset-level key and a return-to-start option on the menu.

    • fion says:

      Very cool. Stuck on level 9. I’ll probably take a break and come back to it later.

      • fion says:

        I have found what appears to be a bug on level 12. I get to this position and am unable to move down.

        • metamechanical says:

          I can’t reproduce it. I’m not sure what would cause that.

          • fion says:

            Weird. I did it two or three times. But I guess what I was doing was far enough from the right answer that it’s unlikely many people will even get to the same point.

            I’m stuck on the last level now. Really great game! 🙂

  3. AG says:

    So I noticed that most of the residential houses in the area have a double step to get to the front door.

    That is, there is a raised porch platform, and then one or more additional steps from the porch level to the raised front door.

    Isn’t this violating the ADA? Like, I guess the logic is that a wheelchair user could use the door inside the garage, but there are no doorbells by the garage door. Assuming that they have a cell phone to call a friend to open the garage also seems to violate the spirit of the ADA.

    And there are houses that have the door facing parallel to the street, which is even worse, because you either need two ramps at the required slope (one to get onto the porch, one perpendicular to it to get from the porch to the door), or a mess of a ramp at some wacky 45-degree angle or something.

    • Aapje says:

      Residential houses are not open to the general public, so the ADA doesn’t apply.

    • sorrento says:

      The ADA does not apply to private residences unless a business open to the public is located inside (which would usually be a zoning violation anyway…)

      Residential building codes often have nods toward accessibility, like requiring doors to be a minimum size. Hopefully science will figure out how to fix disabled people before politicians get the idea to mandate elevators in all residential buildings and so forth.

      • ana53294 says:

        Are apartment buildings included in the private residence definition?

        • Nancy Lebovitz says:

          It seems so, or at least walk-ups aren’t that rare.

        • CatCube says:

          ADA/ABA requirements are a huge ball of nightmare, so I don’t know the answer right off the top of my head. However, apartment buildings are a Group R-2 occupancy, and there doesn’t appear to be an exception to the accessibility requirements for those (R-1 [hotels] with less than 5 sleeping units or detached one- or two-family dwellings do have exceptions). Once you’ve determined that a building or part thereof needs to be accessible, you get pointed at another 400 page code to cover detailed requirements. I don’t know if those detailed requirements are written so you can have an apartment building with accessible units on the ground floor and not worry about upper floors, and it would take a fair bit of reading to figure out. I got as far as there being “dwelling units” (apartments) categorized as Accessible, Type A, Type B, or Type C, but I have no idea what differentiates “types” or how many you would need.

          Note that even if you need to have all floors accessible, this is for new apartment buildings. Old buildings are typically grandfathered in, so the existence of a lot of walk-ups doesn’t necessarily mean you’d be allowed to build one in new construction. You can see plenty of metal fire escapes on the exteriors of buildings in a lot of US cities, but those are completely illegal in new construction; they’re only permitted as retrofits because at least some way of escaping a fire is better than none.

  4. jgr314 says:

    One story about the 737max is that there were changes to the plane that weren’t deemed significant enough to trigger the type of regulatory review and pilot (re)training that would be required for a new plane design. Is there an ordered list of commercial plane types ranked by this type of risk? In other words, current models that are the most different from the reference model that was fully certified and trained?

    • John Schilling says:

      How do you propose to quantify “most different” for this purpose?

      • jgr314 says:

        I’m asking if there is a resource like this that already exists, not proposing new research to create one. I don’t have any expertise in this area, so my guesses about metrics are probably useless.

        Given that you are one of the most expert here, your reply implies this isn’t something that exists.

        • John Schilling says:

          It probably would exist if there were an unambiguous metric that could be used for the purpose. Since there isn’t, any attempt at a ranking would inspire the same sort of complaining and gamesmanship as e.g. college rankings, complete with all involved getting nastygrams from Boeing and/or Airbus lawyers, and it seems that nobody is willing to dive into that mess.

  5. Froolow says:

    With regards to the ads, they don’t annoy me at all and I think you should do whatever generates the most revenue for you. Even though this isn’t the most insightful comment ever left on this blog I thought it might be helpful to balance out any feedback from people who say they are annoying (looking at the responses here I’d say most people don’t care though)

    • Hyperfocus says:

      I don’t mind ads as long as they aren’t animated. A loading/startup animation is bearable, but a looping animation makes the page containing it all but unreadable for me, because it grabs my attention every 3-5 seconds. I realize this is due to my screwed up brain chemistry, but I’m going to ask anyway because I really like this site, and I would greatly prefer supporting it by viewing ads over feeling guilty about ad-blocking.

  6. Well... says:

    Conquest’s third rule of politics is “The simplest way to explain the behavior of any bureaucratic organization is to assume it is controlled by a cabal of its enemies.”

    In my experience, a similar rule could be made for the learning management systems used by many large organizations:

    “The simplest way to explain the behavior of any large company’s learning management system is to assume it is controlled by a cabal of people absolutely opposed to the use of a learning management system for learning.”

  7. dick says:

    Regarding input lag, discussed here and in the last OT: I saw someone mention 15ms being noticeable and doubted it, but rather than doing research (boring!) I wrote up a little webapp to test the idea. Find it here: https://ineptech.com/latency.html

    I’m curious to hear others’ results. I haven’t done a ton of tests yet but it seems like I’m better than chance at 60ms, which is better than I expected.
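
    For anyone curious about the mechanics, here’s a minimal sketch of one way to build this kind of test in the browser. This is a simplified illustration, not the actual source behind the link, and it assumes an absolutely-positioned element with id “box”:

      // Minimal sketch (not the actual code behind the link above). Each trial
      // randomly adds either 0 ms or TEST_MS of artificial delay between the
      // keypress and the on-screen response; the player guesses which it was.
      const TEST_MS = 60;                      // added latency under test
      let delayed = Math.random() < 0.5;       // does this trial add latency?
      let score = 0, trials = 0;

      // assumes an absolutely-positioned element with id "box" on the page
      const box = document.getElementById("box") as HTMLElement;

      document.addEventListener("keydown", (e) => {
        if (e.key === "ArrowLeft" || e.key === "ArrowRight") {
          const dx = e.key === "ArrowLeft" ? -20 : 20;
          const move = () => { box.style.left = box.offsetLeft + dx + "px"; };
          setTimeout(move, delayed ? TEST_MS : 0);  // apply the (possibly zero) delay
        } else if (e.key === "y" || e.key === "n") {
          // player guesses: "y" = felt added latency, "n" = didn't
          if ((e.key === "y") === delayed) score++;
          trials++;
          console.log(`score: ${score}/${trials}`);
          delayed = Math.random() < 0.5;            // set up the next trial
        }
      });

    One caveat with anything along these lines: setTimeout only guarantees a minimum delay, and the monitor’s refresh rate quantizes when the move actually appears, so the effective added latency can be a few milliseconds off the nominal value.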

    • ManyCookies says:

      Practice helped a lot here, on 50ms I started at 6-7 and got to 9-10 after two more rounds. If I’m performing an action in a game over and over, I’d probably develop a similar “feel” of how long it’s supposed to take.

      I’d be interested in how well I could feel differences between non-zero latencies, and whether that difference would be larger or smaller than my best 0/X. I can usually tell the difference between 30/60 in League of Legends and other Mobas, but did pretty poorly on 0/30 in your app.

      • dick says:

        FWIW, this app is measuring input lag only, which is not the same as what shows as Ping in most video games. In LoL, a 60ms ping will manifest very differently for some game events than others; it might be that what you’re detecting is actually client-side prediction failures, which can be caused by an opponent with a higher ping than you and hence appear to involve latency much higher than your ping.

    • drunkfish says:

      I worked my way through 150, 100, 75, and I had a pretty easy time with 60 (I had to get some intentionally wrong to make sure it even told me when I was wrong). 40 I got 9/10 right. I started having a lot of trouble with 30 ms, but then I realized if I just rapidly alternated left-right-left-right, I could pretty easily find the right frequency and just tell if it was in phase or out of phase. That got me through 30, and through 20 (with some mistakes, 7/10, but I’m pretty confident I was seeing a real signal). That seemed to break down with 10 ms and I ended up getting 6/10, but I’m pretty sure that wasn’t a real signal.

      • dick says:

        I started having a lot of trouble with 30 ms, but then I realized if I just rapidly alternated left-right-left-right, I could pretty easily find the right frequency and just tell if it was in phase or out of phase.

        Neat, but it sounds like this is measuring your ability to tap your fingers at a certain interval, not detect input lag of a certain interval. So it seems like this is a vote for 30-40.

    • Douglas Knight says:

      Here is a blog post on how latency is getting worse. (Although it may be easy to improve by throwing GPU resources at the problem.)
      I assume you’re measuring the effect of added latency on top of the existing latency. If the background latency were smaller, it would be easier to notice small increases. There is declining marginal cost to adding 15ms latency. Since we have high latency, it’s easy to say that your particular 15ms doesn’t matter, but that doesn’t mean that the first 15ms didn’t matter.

      Latency is much more noticeable and important in direct manipulation (finger on touch screen) than keyboard. His other blog posts have two links from one research group claiming that people dragging objects can discriminate 1ms from 2ms latency and that they spontaneously notice 10ms latency. (But drawing is maybe 4x less sensitive.)

      • dick says:

        Yes, I’m measuring whether you notice latency over-and-above the latency incurred by your system, but those are pretty worst-case numbers; I’d be surprised if anyone here sees triple-digit lag typing a character into a browser. (A console, maybe – part of the problem he was pointing out was how bad some platforms’ consoles are.)

        • Tarpitz says:

          I’d be surprised if anyone here sees triple digit lag typing a character into a browser

          There speaks someone who has never tried to use Facebook Messenger on my laptop…

    • woah77 says:

      I did it at 15ms and got 10/10, so either I’m really lucky, or I can objectively notice input latency. If it’s the latter, I lay the blame on 25 years of PC gaming. Although, one of the tests I did to see if it had latency was quickly pressing left then right and seeing if it followed properly. So maybe I just have a solid test for seeing if latency exists?

      • dick says:

        I’m surprised that the variance is so wide. Hey it’s like we’re doing science!

        It’d be interesting to see if there’s an obvious difference on this metric between, say, a really good LoL player and a mediocre player of the same age. I recall reading that there’s a noticeable difference in reaction speed, but not a big one.

        • woah77 says:

          I don’t play online or fast-paced games often. Not LoL or any of the other online ones. I think it’s largely that I’ve played enough games like Unreal Tournament on a keyboard to notice when it’s not responsive, and I tested it in a way where even very small latency would produce detectable results. That is to say: it takes some amount of time to press a key twice. It takes far less time to press two keys in very short succession. Noticing when it hung on the two key presses allowed me to identify whether latency existed or not.

          • dick says:

            It seems like that is similar to Douglas Knight’s strategy, and … well, I don’t want to say cheating, but measuring something other than what I was intending to measure. I could fix it by just removing the ability to go left…

          • Douglas Knight says:

            What was my strategy? Using a high frequency monitor? Maybe that’s what woah is doing…
            [surely you mean drunkfish, not me]

            I think woah misunderstands his strategy. I think he’s exploiting a weird bug. If you press a different button in the latency period, it gets discarded. But if you press the same button repeatedly, it doesn’t get discarded. Setting latency high makes this easy to verify and also easy to see that it’s not about human speed.

            (I tried to cheat by pressing right repeatedly, to see if latency accumulated, but it didn’t, so good job!)
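
            For what it’s worth, here’s a purely hypothetical sketch of how a bug with exactly that shape can arise – I haven’t seen the app’s source, so this illustrates the mechanism rather than the actual code:

              // Hypothetical input handler (not the app's real code). While a delayed
              // move is pending, a *different* key is silently dropped, but the *same*
              // key is let through and scheduled again, which matches the behaviour
              // described above.
              let pendingKey: string | null = null;

              function onKey(key: string, latencyMs: number, apply: (k: string) => void) {
                if (pendingKey !== null && key !== pendingKey) {
                  return;                  // different key during the wait: discarded
                }
                pendingKey = key;          // same key (or nothing pending): queued
                setTimeout(() => {
                  apply(key);
                  pendingKey = null;
                }, latencyMs);
              }

            Pressing the same key twice inside the window schedules two moves, so repeated presses all register; a different key inside the window never does.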

          • woah77 says:

            I mean, if we’re testing the ability of a user to detect latency… that’s one way you would notice latency. Latency is inherently about the difference between perceived response and desired response. I don’t try to figure out how much latency exists, just that what I see isn’t what I told it to do. If everything behaves how I expect it to, then as far as I am concerned, no latency exists (which might not be accurate from a technological perspective, but that doesn’t matter to users). The same thing occurs in games: players only care about what they expect, not what is going on behind the scenes.

          • dick says:

            What I intended to measure is your ability to do a thing and see the response to the thing. What you two appear to be doing is testing your ability to tap two keys at precise intervals.
            Suppose your system needs 10ms between two keypresses for the second one to register; you two are (it sounds like) testing your ability to distinguish the trials where you need to wait 10ms between keypresses from the ones where you need to wait longer. That’s kind of like UI lag, but it’s not visual – you could do it with your eyes closed.

            edit: it occurs to me that I can change this, see continuation in the next OT.

          • woah77 says:

            That doesn’t explain, to me at least, why I could get 100% accuracy at 15ms. Yes, it’s a type of bug, but that’s what users notice. I’m fairly certain that if I went down to 1ms latency, it wouldn’t work so well, but at 1ms latency we’re well below the threshold of human vision. When we talk about the ability to discern latency, the only thing the person detecting “lag” is going to care about is responsiveness, not their threshold or method of detection. Maybe I “cheated” but that’s what someone complaining about lag is going to do. “I pressed crouch then jump and all I did was crouch. AWFUL”

          • dick says:

            I think you’re still not understanding the distinction I’m making.

            Define “latency1” as the time between when you press the key and you see the thing happen as a result. Define “latency2” as the time that must be left between two keypresses for the second one to register. The tool I made can be used to find either, depending on how you “play” it, but they’re not the same thing. The first one is inherently visual, and the second one is tactile. More to the point, the first one is very close to what people generally mean when they discuss UI lag, and the second one isn’t, because almost all real apps will accept overlapping keypresses.

          • Eric Boesch says:

            Even if there are no keyboard queuing issues, there is generally speaking no well-defined limit below which latency has literally zero impact. (No, that isn’t supposed to be particularly surprising or a rebuttal to anyone else’s claim here.) For instance, imagine a simple reaction time tester for which the user’s response time is Gaussian with a 200 ms mean (and hence median) and a 30 ms standard deviation. If the player “wins” by reacting in less than their median time, then if I count right, their probability of winning drops with a slope of 1.3% per extra millisecond of latency, whether the user can detect the latency directly or not.
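
            (To spell out the arithmetic, taking the win threshold as fixed at the no-latency median: with σ = 30 ms and added latency ε, the win probability is Φ(−ε/σ), whose slope at ε = 0 is −φ(0)/σ = −1/(30·√(2π)) ≈ −0.0133, i.e. about 1.3 percentage points per millisecond.)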

    • jgr314 says:

      I could not tell at 60ms, but I’m older and don’t play many fast-paced video games.

    • fion says:

      I felt like I was guessing randomly at 50ms, but I did do a little better than chance: 7/10 and 8/10

  8. Plumber says:

    @Scott Alexander

    “…Someone is doing one of those tag your location on a map things for SSC users. If you sign up, you may want to include some identifying details or contact information, since right now most of the tags don’t seem very helpful unless people are regularly checking their accounts on the site”

    I signed up for the tag thing, but I’m unsure of how it is to be used.

    I’m considering a “culture war ban” for users who make generally positive contributions to the community but don’t seem to be able to discuss politics responsibly. This would look like me emailing them saying “You’re banned from discussing culture war topics here for three months” and banning them outright if they break the restriction. Pros: I could stop users who break rules only in the context of culture war topics without removing them from the blog entirely. Cons: I would be tempted to use it much more than I use current bans, it might be infuriating for people to read other people’s bad politics but not be able to respond, I’m not sure how to do it without it being an administrative headache for me. Let me know what you think.

    Scott Alexander,
    Whatever keeps you blogging is better than the alternative.

    • albatross11 says:

      Scott:

      Is the issue you’re trying to address more like:

      a. You don’t want radioactive subjects discussed too much here, lest you have to deal with radiation-attracted angry online mobs?

      b. You don’t want to be personally associated with some of the subjects/points of view discussed here because you find them offensive or dumb or deeply wrongheaded?

      c. You don’t want the community to erupt into a flamefest every open thread because some regular participants can’t restrain themselves from having a flamewar with the other side on their pet issue?

      It seems like those three lead to pretty different sorts of moderation policies.

      • Enkidum says:

        Can we go for all three?

        He’s at least heavily implied a. and b. in previous posts, and c. is much less of an issue here than on other sites, but it’s obviously something better avoided. All three (as well as other issues) are possible consequences of the kind of posts he’s discussing (at least that’s my parsing of the post).

        • 10240 says:

          c. has always been a potential issue, and it’s not that much of an issue in practice precisely because the moderation policy has always been geared primarily towards preventing c. The new moderation method may just be a different way of enforcing that policy.

          • There are at least two different ways of preventing c. One is by banning people for flaming other people. The other is by banning people for raising topics that might result in someone else flaming them.

            The second is the one I find problematic.

  9. dark orchid says:

    Not annoyed at all by the ads, even though as a European user some of them aren’t relevant for me.

    I get annoyed by: pop-ups and autoplaying videos, ads that take lots of time/bandwidth/processor power to load, and porn ads. I haven’t seen any of those things here yet – the sidebar here is a good model of how I’d like all web advertising to work.

  10. SkyBlu says:

    So I recently signed up for Keybase, and in the process actually got around to making a private key and stuff that I plan to keep long-term. I am thus considering trying to switch my immediate social circle over to Keybase chat, or perhaps XMPP. This fits a long pattern in my life where I convinced people to switch to Google Hangouts from MSN Messenger, then to Messenger from Hangouts, then to Discord from Messenger, and so on and so forth. Why am I perpetually dissatisfied with chat applications? Does anyone else suffer from this debilitating disease?

    • dick says:

      I really like Keybase and I hope someday their business model leads them to open up kbfs programmatically because you could build all sorts of cool shit on top of it, but I don’t really use it for chatting, I use it to store my private git repos.

    • Well... says:

      If you can specify what exactly it was about those chat applications you were dissatisfied with each time, that would provide clues into why you are perpetually dissatisfied.

      It might be that none of them offered the features and benefits you wanted at whatever stage of life you were in when you were using them. Or it might be that you were always noticing problems in each application, and once you noticed them you were eager to switch to whatever new application didn’t have those problems. Or it might be that you really wanted to use the new applications for other reasons but convinced yourself it was because you were dissatisfied with whichever one you were currently using.

  11. Le Maistre Chat says:

    So, Vincent D’Onofrio playing Robert E. Howard, in a screenplay adapted from the memoir by his one-time girlfriend Novalyne Price and co-starring Renee Zellweger as Novalyne, is a thing that totally happened.

    • Nick says:

      So is it any good?

      I’m positive I’ve seen this movie sitting in Blockbuster or whatever, but I had no idea it was about Robert E Howard.

  12. Nick says:

    RPG DMs: how do you handle splitting the party?

    I’ve long wondered how this is supposed to go, because it’s basically impossible not to leave a few characters hanging during this, and I’ve had sessions where a player or two split off and are running the show for an hour or more. But you can’t just ban it, can you? Or have some had success this way?

    Do you switch between the groups every ten minutes or so to keep them engaged? Do you contrive to be sure party splits end quickly? Do you just roll with it and let the others take a food break?

    • Le Maistre Chat says:

      Do you switch between the groups every ten minutes or so to keep them engaged? Do you contrive to be sure party splits end quickly?

      Yes. “Meanwhile, what are [other PC names] doing?” is the most important thing I can do, to keep everyone engaged.
      I can either contrive to remove anything the split party encounters that would slow down ending the split, or hit them with the same random combat encounters the full party would meet, as a disincentive to future party splits.

      • woah77 says:

        This, pretty much. Party size makes this more or less likely, as does the session setting. If we’re doing an in-town session, I expect each and every party member to more or less do their own thing, so I take a few minutes with each and rotate. In a dungeon, it’s far less likely and often very short in duration, meaning the party hasn’t really split in a long-term fashion. Doing a quick scout down the hall is not nearly as boring, because all the other players care.

    • dndnrsn says:

      I run a lot of Cthulhu games, which are primarily investigative, so splitting the party happens a bunch – when they’re investigating something, which may or may not have a time limit, they can’t structure things around keeping the party together.

      Mostly I just flip back and forth as quickly as I can, usually each time one part of the group completes a task. “OK, so, you two are done at the library – what are the rest of you doing down by the old reservoir?”

    • etheric42 says:

      Depends on if you are referring to dungeon-crawl type games or more storytelling games.

      Storytelling games usually expect the party to be split a lot of the time, and each player having their scenes/limelight is as important as each player having their combat turn. Usually you keep other people engaged by making the scenes short and rotating quickly and/or farming out NPC duties to other players.

      For dungeon-crawlers, I employ a co-/sub-DM if I can afford it, or just make things so clearly lethal that splitting the party is tantamount to suicide. If I’m feeling fancy I might prep something that requires splitting and coordination to solve simultaneously, but that’s high-prep and I’m generally a low-prep GM.

    • Randy M says:

      I had a good session where I intentionally split each member apart on minor quests; only one ended up with combat while the others possibly could have, but it wasn’t expected. I think I cut to each person once or twice and kept it moving pretty quick. Eventually the sorcerer who was fighting a ghost in an underground jail was blown into the sky, allowing the others to grab him from their airship and use the clues they had found to track the spirit to where it was headed and finish up together.

      I think three separate instances that each involved tracking turns or whatever would have been a headache, but as done it was a good tool to give each player a chance at the spotlight.

    • John Schilling says:

      If they’re all in the same town and not separated for more than a day or a skirmish, it usually suffices to just rotate between groups every 10-15 minutes. Minor combat encounters that run maybe an hour of playing time don’t get broken up unless e.g. the B team is racing to the assistance of the beleaguered A team, but an hour of RPG combat can be a decent spectator sport for people already invested in the campaign.

      For major splits, which as dndnrsn notes are more a CoC than a D&D thing, a split and/or separate gaming session is the only thing I’ve really seen work. Unfortunately, my RPG days almost entirely preceded the rise of Euro-style social gaming, because sending half the party off to Settle Catan in the other room would have been ideal.

      And splitting the party can be driven from a split playing session as well – if you know Alice and Bob aren’t going to make the next weekly session or two but still want to be a part of the campaign, arranging a major side quest that can be resolved whenever the two of them and the DM can get together is a useful approach.

      • dndnrsn says:

        Coincidentally, I’m pretty sure that the one or two times I’ve had a major split (with the PCs in different locations) it’s coincided with only the players for one of the groups being present. Another solution to something like this (or to a situation where one PC is going to be stuck in a lab or a library all session) is handing over NPCs who might be accompanying the party – of course, contingent on there being NPCs accompanying the party.

    • Lillian says:

      A lot depends on the GM and players. The last Vampire game i played the party was split much of the time, since the focus was plotting and political manoeuvring rather than adventuring. The level of split could vary from every single member being in different parts of the city doing different things, but still able to hit each other up for help if needed, to having four members of the party go on an expedition while the remaining two stayed behind, and spending multiple entire sessions in which the two groups could not communicate or cooperate.

      This worked for a few reasons. The first is that the GM had a great sense of pacing and we trusted him to give us a fair share of the spotlight. Players were fine with the others having multiple scenes in a row focused on them because we all knew later on we’d get our own chance to shine. The second is that all the players were deeply invested in each other’s characters. It was rare for anyone to check out or stop paying attention when their character was not in the scene because they still wanted to follow the other players’ stories. A third reason is that we were willing to tolerate a fair deal of peanut gallery commentary from whoever was not in the scene at present, which also helped them stay engaged. The unspoken rule was to keep the comments to the little pauses in actions and dialogue so as to avoid interrupting.

    • littleby says:

      I’ve definitely had times where I paused the action and said: “Guys, you can’t split the party because it’ll be boring for the people who aren’t there. How about you all go do Thing A, and then you all go do Thing B. I promise you have enough time to get everything done without splitting up.”

      I’ve also had times where the rogue announced that they wanted to scout ahead because they were stealthy and nobody else was stealthy, and I sort of just rolled with it and felt guilty later.

      • dndnrsn says:

        Isn’t that the point of the rogue? Everyone else should care what the rogue is doing – it is directly linked to their survival (or not) later.

  13. greenwoodjw says:

    For some reason, I thought I was banned until 4/10 and there’s been a few things I wanted to contribute to since my ban expired. Weird.

    Anyway, I was part of a community about a decade ago that imposed a sub-forum ban system for the political forum, and it was relentlessly gamed by the same type of people causing the problem here. Eventually that forum became a one-sided venue that set the political tone for the forum as a whole before being shuttered as a cesspit of sniping and backbiting.

    I don’t think it’s a good idea, I think compromising on your core values will encourage the same people to keep pushing for more and more compromise until either you’re one of them or you shut up. I’m confident that would represent a significant loss to America as a whole.

    I also believe that there is an effort to shut down all venues for discussion between political parties and ideological movements to encourage greater division and resentment among groups, in the expectation that the disruptor’s positions have stronger emotional appeal and/or can be enforced through mob action. This comments section is the best comments section and one of the best discussion forums on the internet. The willingness to tolerate abhorrent opinions allows them to be explored and understood, and that’s the first step to overcoming them and reaching the person instead. You don’t de-radicalize people by calling them monsters and you don’t prevent them from gaining adherents by saying “That man’s bad”.

    More, not fewer, spaces need to be like this.

    • woah77 says:

      People, I give you the comment of the thread.

      Not to be pretentious or anything, but this is how I feel: I believe that silencing a minority viewpoint that is being communicated in a respectful way has the potential to make this place much worse. I do my best, as a person, to see what axioms someone is using to establish their viewpoint, and having a place where multiple viewpoints from all across the political spectrum can meet and mix feels like an extremely valuable thing to me.

      This post represents how I feel about this, but delivered far more eloquently than I could have imagined delivering it.

    • Butlerian says:

      I also believe that there is an effort to shut down all venues for discussion between political parties and ideological movements to encourage greater division and resentment among groups

      Given that SSC has had many posts on the All Debates Are Bravery Debates topic of how it’s oh-so-very-seductive to see one’s self as the oppressed truth-teller surrounded by hegemonic conspiracy, I think beliefs like this demand extremely critical reflection rather than “Top comment of the thread, all posters may award themselves 100 victim points”.

      Unless you have screencaps of the disruptors’ Discord chats where they discuss their ebil plans to shut down freethinkers, I call Oppression Olympics on this line of thought. “There’s a concerted effort to shut us down!” pattern-matches so well to base efforts at claiming the moral high ground of a victim-culture society that, well, pics or it didn’t happen.

      • greenwoodjw says:

        @Butlerian …there’s literally an effort to shut Scott down explicitly because he’s allowing Bad People to have conversations. That’s what this whole discussion is about. Some folks like Dave Rubin and Joe Rogan get protested because they talk to Bad People.

        I’m calling Isolated Demand for Rigor on the requirements in your demand for proof, too. I haven’t posted much, but I’m fluent in SSC too 😉

    • littleby says:

      What’s an example of how to game a sub-forum ban system?

      • greenwoodjw says:

        Same ways you game a normal ban system: brigading, having friendly mods who aren’t that aware of their bias, etc. It’s just that the bans only applied to the political forum, which made the thresholds for banning people lower.

    • brianmcbee says:

      My sense is that discussion between political parties and ideological movements works much better in person than online. Online text leaves out too many social cues, and without that in-person feedback people have a tendency to go way beyond what they would do in person, causing a lot of unnecessary offense which then just escalates.

      It requires strong moderation, which if somebody like Scott is willing to do it, good for him! If people don’t want to do that work on their forums, I fully understand, because I wouldn’t want to either. It seems like a thankless task.

      Without that moderation you get the usual shitshow we can already see across the internet.

  14. vV_Vv says:

    Are nuts and seeds healthy foods?

    Mainstream nutritional advice, if I understand correctly, says that they are mostly healthy fats plus some proteins, vitamins and other good stuff, but mainstream nutritional advice is somewhat questionable, mainly due to its long history of failure to mitigate the obesity epidemic in the West. People on teh interwebz have recently been talking about the risks of excessive omega-6 fatty acids (mostly they refer to the cheap seed oils used in processed food production, such as sunflower or canola oil, but I wonder if the argument extends to things that I like such as walnuts, pecans, peanuts or chia seeds). But of course people on teh interwebz believe all sorts of crazy stuff, especially about nutrition, so I’m not going to trust them uncritically.

    Are there any studies on the matter?

    • Gobbobobble says:

      but mainstream nutritional advice is somewhat questionable, mainly due to its long history of failure to mitigate the obesity epidemic in the West

      Wouldn’t there need to be some history of the population actually following mainstream nutritional advice for it to be implicated in the obesity epidemic?

      • Randy M says:

        If the advice given is too difficult to follow, something is wrong somewhere, since obesity was so much less prevalent in the past.
        Either we are terrible at resisting temptation (in which case, maybe there ought to be a law) or the advice doesn’t account for the difficulty in following it.

        • 10240 says:

          Or many people don’t even try.

        • Evan Þ says:

          Many things were different in the past; modern processed foods are a new thing.

          • Randy M says:

            And either those conform to diet advice, in which case it is wrong, or people eat them in spite of it, in which case the temptation is too great. Or alternatively, people choose to trade off health for pleasure, but in that case they should bear all the costs of doing so.

          • 10240 says:

            Or alternatively, people choose to trade off health for pleasure, but in that case they should bear all the costs of doing so.

            Aren’t they paying it?

          • Randy M says:

            Aren’t they paying it?

            It’s a very complicated subject, as we discussed in a recent thread on how we would change health care.
            But, to give a non-medical cost example, the fat acceptance movement is all about mitigating social costs of being overweight.

          • 10240 says:

            It’s a very complicated subject, as we discussed in a recent thread on how we would change health care.

            Are you implying higher healthcare costs, some of which are paid by others through insurance (assuming that insurers are not allowed to discriminate by weight)? Then we should also take into account the lower cost to the pension system. People tend to forget about that when using healthcare costs to justify taxing unhealthy habits.

            the fat acceptance movement is all about mitigating social costs of being overweight.

            I don’t expect the fat acceptance movement to have any significant success. And in any case, what sort of social cost do fat people impose on others, even if others are to stop shaming them? An ugly sight?

          • Randy M says:

            I’m going to gently extricate myself from this topic, with the qualification that no, I hadn’t done the numbers to see if anyone opting for any particular trade-off was creating net negative externalities or not.

            Last post was worded poorly.

      • RalMirrorAd says:

        IDK if the ‘diet wars’ are considered CW, but my understanding is that mainstream dietary advice was to shift calorie intake from saturated fats to grains. IIRC saturated fat as a % of calories fell, but IDK if it fell in absolute terms.

        Perhaps the argument can be made that the weight gain was from calories eaten between meals, which were neither the result of the mainstream diet nor of a failure to follow it.

    • Radu Floricica says:

      In addition to what you said: they’re very calorie-dense, which is arguably the biggest problem with the modern diet. And they have a surprisingly large thermic effect, so you can cut about 15% of the calories on the label. Which still leaves you with quite a lot.

      I’m not current with mainstream nutritional advice. I’m aware that it fucked up big time sometime at the end of the last century with stuff like “cholesterol bad, trans fats good”, but other than that I’m not sure how up to date it is. Cutting-edge nutritional advice is very much evidence-based, AFAIK. When in doubt, Google Scholar – there should be something for almost any question.

      • Douglas Knight says:

        trans fats good

        No one ever said that.

        • Evan Þ says:

          I agree. My mother’s a dietician; I’ve read the newsletters she was getting in the 90’s promulgating the Mainstream View. The only disagreement on trans fats was whether they were Bad or Even Worse.

          • Douglas Knight says:

            Sure, neither doctors nor dieticians ever praised trans fats. But those are two very different statements! There’s often two different consensuses, one among dieticians and one among physicians. When I say “no one” I mean neither. But if you’re going to talk about the Mainstream View, you should ignore the dieticians and talk about the physicians because they have so much more contact with patients. That’s a pity because the dieticians are much closer to the research, but it’s important to acknowledge because it’s half the problem.

        • Radu Floricica says:

          There was a point some 20-30 years ago when margarine was considered an alternative with less cholesterol, at least in my corner of the world. I definitely don’t suggest it was the scientific consensus at any time, but it definitely was popular enough.

          • Douglas Knight says:

            That’s a very different statement. At most someone said “trans fats are bad, but the trace amounts in margarine don’t matter compared to the dangers of saturated fats,” but that’s very far from “trans fats good.”

        • The standard advice, as I remember it, was to replace butter with margarine. The margarine in question was hydrogenated vegetable oil, which is transfats, which, as I understand it, turned out to be much worse for you than the saturated fats in butter.

          The advice wasn’t put as “transfats good” but as “margarine is healthier than butter,” which was not true.

          I’ve read the newsletters she was getting in the 90’s promulgating the Mainstream View.

          The 90’s is pretty recent. What about the advice in the sixties?

          • Nick says:

            This. All through my childhood there was loud and fierce insistence among members of my family that margarine was better for you while butter was verboten. I’m sure my dad still uses margarine exclusively.

          • Douglas Knight says:

            was hydrogenated vegetable oil, which is transfats

            As I said in the other comment, that is not true. Margarine is mostly not trans fats. I think it is 10% trans, but sources vary. But regardless of the number, people promoting it were not thinking in terms of these categories. They were promoting it as unsaturated.

          • 10240 says:

            Also, as far as I understand, since it was discovered that trans fats are harmful, the amount of trans fats in margarine was reduced (eliminated?).

          • ana53294 says:

            I don’t think it’s possible to reduce the amount of trans fats to 0 without completely changing the technology.

            AFAIK, vegetable oils are saturated with the help of a chemical catalyst, and this catalyst doesn’t have the specificity enzymes have. So unless they start producing margarine with enzymes, I don’t think it’s possible.

            I have seen butter spreads which contain vegetable oils, and it is possible to make those trans-fat free.

          • Garrett says:

            Is there a practical way to separate the trans/cis fats after saturation? If so, they could be produced as normal and then the trans-fats separated and discarded.

        • dndnrsn says:

          In the 80s, the Center for Science in the Public Interest (Nutrition Action) promoted trans fats as a healthier alternative to saturated fats. Is this the same as saying they’re good?

          • Douglas Knight says:

            Thanks!

            I retract my sweeping statement. But I don’t think that this was representative. And while it sometimes claims that trans fats are better than saturated fat, it mainly claims that they’re no worse than saturated fat, to conclude that the bundle of fats in margarine is better than butter.

          • dndnrsn says:

            The CSPI seems pretty mainstream as far as I can tell. My parents get the newsletter, and it’s mostly reasonable stuff – but they have a way of sneaking low-carb in while pretending they’re only addressing fat, which I find disingenuous. Stuff like, “instead of this high-fat meal” (picture of steak with baked potato) “why not have this low-fat alternative” (picture of boneless skinless chicken breast with green salad).

          • Douglas Knight says:

            I don’t think that it was representative in that most sources weren’t so explicit as to say “trans fat bad” the way that this one did (which is the opposite of “trans fat good”).

            It was representative in that its conclusion was about the bundle of fats in margarine, not driven by trans fat alone.

          • As best I can tell from a little googling, the shift away from transfats and towards margarine made by a process that didn’t produce them happened in the nineties.

            It sounds as though the process for making margarine before that produced both saturated fats and transfats. It didn’t produce cholesterol, and the amount of saturated fat may have been lower than in butter.

        • ana53294 says:

          Here is advice from the Mayo Clinic to use margarine instead of butter. The writer does say to use softer margarine, because she says it has less trans fats.

          So mainstream dietary advice still says to use margarine.

      • Nick says:

        they’re very calorie-dense, which is arguably the biggest problem with the modern diet

        Where does this claim come from? Seriously, can someone help me out here?

        If I go to the store looking for processed crap, I can get something like canned soup or TV dinners. The canned soup will be about 250 calories at the upper end and the TV dinner will be about 350 at the upper end. Soups like chicken noodle can be as low as 120 calories per can. As far as I can tell, you could be stuffing your face with these things all day and you’d be vomiting it up before you reached 2000 calories.

        If I go to a restaurant I will be struggling to find a 1000 calorie dish. Take a look at Subway’s menu; their 6″ (meat) subs range from 730 calories to 260 (!!). A foot long will leave you full for most of the day; even if you ate two of their highest calorie subs every day, you’d be gaining, what, a pound or two a week? Which tails off after a while, too. I tried the nutrition calculator at Taco Bell just now, and apparently my last visit I ate 780 calories. I calorie count at every restaurant I go to, and I rarely find a big meal that’s more than a thousand or so calories. If you skip breakfast as many folks do and eat two big meals, I don’t see how it’s physically possible to do anything more than maintain an average weight.
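
        (Back-of-the-envelope on that: two footlong versions of the highest-calorie sub would be roughly 2 × 1,460 ≈ 2,900 calories a day. Against a maintenance intake of roughly 2,500 calories for an adult male, that’s a surplus of about 400 a day, or a bit under a pound a week at the usual ~3,500-calories-per-pound rule of thumb – and the surplus shrinks as your weight, and so your maintenance needs, go up.)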

        • Randy M says:

          If I go to a restaurant I will be struggling to find a 1000 calorie dish. Take a look at Subway’s menu; their 6″ (meat) subs range from 730 calories to 260 (!!).

          Throw in extra condiments, a large coke, bag of chips, and a cookie, and you’ve got yourself a meal. Also, the direction of the problem, I suspect.

          If you skip breakfast as many folks do and eat two big meals, I don’t see how it’s physically possible to do anything more than maintain an average weight.

          As apparently you’ve escaped their clutches, Starbucks would like a word with you.

          • Nick says:

            Yeah, I don’t drink coffee. Anyway, neither Subway nor Taco Bell even does fries (well, the latter only seasonally). You can buy Sun Chips at Subway, but those are only 140 calories.

            The only two places I’ve been to recently where I can get a guaranteed high calorie meal—the kind that will get me to 2000 even if I eat a low calorie dinner—are Five Guys and Mr Hero. At the former I can build a 1200 or so calorie burger plus a few hundred in fries, while the latter has insane 1800 calorie sandwiches that leave you hungry in a few hours, and that’s before fries and a drink. I can’t do the same with most restaurants, fast food or not; if I try, I’m at like 1800 calories at the end of the day and not particularly hungry.

          • acymetric says:

            @Nick

            For the restaurant meals, what exactly are you ordering? You can easily get a higher cal meal at a sit-down than 5-Guys. Plus factor in 5 soda refills or whatever. Plus maybe an appetizer or just some bread, and even worse if you do dessert. A typical person could easily exceed 2000 calories at a sit down restaurant and only feel “mildly stuffed”, and that probably wasn’t the first thing they ate that day and may not be the last.

          • Nick says:

            I actually haven’t been to many sit down restaurants lately, so I haven’t been doing the calorie counting there. (It’s also harder to find nutritional information for those places, so you’d have to estimate anyway.) I’ll just pick two local places and see what the numbers look like.

            Let’s go with the Melt, which I went to not long ago. The menu items are in the range of 500-1400, with a lot around 800, which is definitely better. Add fries and a drink and most meals will be 1500. But those meals are also enormous, and I don’t know anyone who walks away from those feeling “mildly stuffed.” @baconbits9 can contradict me if he’s had a different experience, he’s from the area I think.

            For the other, let’s go with the Macaroni Grill, which looks to me like a nice sit-down place in Fairlawn. There’s a much bigger spread here, from little 500 calorie pasta dinners to a 2,000 calorie meal (!!), and lots in the 700-900 and 1100-1300 range. I don’t think sides make sense here, but appetizers and drinks do, so these could easily be 1500-2000 calorie meals. I think you’d be pretty full, though.

            So it seems I was wrong, sit down restaurants go to 2000 calories easily. But if people have counterexamples in the fast to fast-casual range I’d be more interested in those.

          • Jake says:

            I think the drinks and unlimited refills are definitely a huge culprit here. Just to look at some numbers, if you get a large drink at a fast food place, let’s say a 40oz Coke, that’s an additional 480 calories. If you get a refill on your way out, that’s 960 calories that probably don’t even register as having eaten anything.

          • Randy M says:

            Yeah, I don’t drink coffee.

            If you are looking for the calorie culprit, I think focusing on meals is the wrong track. People get a lot of calories through snacks and drinks. Starbucks frappe-whatever on the way to work, two donuts from the break room, jamba juice on the way home, handful of chips before dinner, bit of ice cream before bed.
            “Oh, look, I skipped breakfast today, must be keeping the calories down!”

        • Radu Floricica says:

          See previous answer by Randy M.

          I’d also add a dose of healthy skepticism on low-cal TV dinners. I’m struggling to find anything under 400 cals. Most things with some fat have about 250-300 cals/100g, and they weigh 250-300 g, so that’s roughly 600-900 cals per meal. Add a soda/beer and a dessert/snack, and you’re into “Damn!” territory.

        • lvlln says:

          I think you’re just somehow avoiding all the sources of Calorie-dense foods that are common in America.

          I went to Bertucci’s for dinner yesterday, a mid-scale sit-down chain Italian restaurant. Their menu has calorie counts listed, and I had what was a fairly typical pasta dish, which was around 1,100 Calories. They also provide free rolls with olive oil, and I had a glass of beer, so just from those alone, I’m thinking I probably hit 1,400-1,500 Calories. As an adult male, that’s like 3/4 of my daily recommended Caloric intake, all in 1 meal – it’s a higher proportion for non-males.

          And Bertucci’s isn’t some outlier in this – I always see entrees in the 1,000-1,500 Calories range when I go to similar-scale chain restaurants like Cheesecake Factory, Friendly’s, or 99. Some of them offer free bread at the beginning, and if you tack on a beer or non-diet soda, that’s easily >=75% of one’s daily recommended Calorie intake in 1 meal.

          Also, I haven’t had TV dinners in a while, but I’ve definitely seen ones in the super market that have 800+ Calories per serving. I think there’s a brand called Hungry Man or Working Man or something like that which actually advertises the fact that it’s so high-calorie.

          But really, even as calorie-dense as these meals are, they’re not what I think of when I think of modern US diet being too dense in calories. I think of snacks and sweets. A single medium-sized chocolate chip cookie can easily have 200-300 Calories in it, and it’s often easy to snack on 2-3 of those at a time, which don’t really fill you up much but take up over a third of the amount of Calories you need in a day. A small bag of chips can be 200 Calories as well, and if you’re frugal and buy a big bag of chips, you’re looking at 1,000+ Calories which can be easy to just mindlessly eat through once you get started. A 20 oz non-diet soda can be 200-300 Calories as well.

          • Lillian says:

            You know, sometimes i wonder if there’s a certain moralistic impulse that sabotages people’s diets. Like, you’re supposed to eat dinner, right? Part of being a proper upright person is eating a healthy dinner. If you live with family, it’s also oftentimes a social activity that can be hard to duck out of. So my mental image of what the typical person does after mindlessly eating through a whole bag of chips is that they feel really bad about it, but still make themselves a healthy dinner, because that’s what they’re supposed to do.

            This is obviously counter-productive: they ate too many calories, and now they’re adding even more calories. That’s how you get fat. Whereas what i do when i eat a whole bag of chips, or an entire tray of cookies, or half a jar of peanut butter, is to simply call it dinner. Sure having my dinner be junk food is bad, not eating my veggies is bad, but in terms of weight control dinner being all the cookies is strictly superior to eating all the cookies before dinner.

            Basically i’m under the impression that a lot of people are operating on a “healthy” and “unhealthy” food dichotomy that supposes the one makes up for the other, as if weight gain was caused by an imbalance of the humours. This makes it difficult for them to actually take the proper corrective measures when they make mistakes.

          • Nornagest says:

            Reminds me of breakfast cereal ads. Calvin and Hobbes said it best:

            Hobbes: Give me a break, this is like eating a bowl of milk duds.
            Calvin: Look, it says right on the box, “part of this wholesome, nutritious, balanced breakfast”.
            Hobbes: And they show a guy eating five grapefruits, a dozen bran muffins…

            Of course, breakfast for me is usually three cups of black coffee and no food, so it’s not like I have a leg to stand on here.

          • Le Maistre Chat says:

            Nornagest says “I like my women like I like my coffee: black and more than one.”

          • Randy M says:

            Sure having my dinner be junk food is bad, not eating my veggies is bad, but in terms of weight control dinner being all the cookies is strictly superior to eating all the cookies before dinner.

            I am not convinced of this. Occasionally, sure, no loss, but habitually, it may be better to have a nutrient + calorie surplus than a nutrient deficit.

            Some “junk” food might not be so bad, though. Nachos with salsa could be a meal, or some homemade cookies with eggs and oats and so on. Make too many meals “bag of Funyuns” and that will catch up with you fast.

            In any event, though, I’m careful about the messages I send to my kids about your point. I’m the one who puts food on the plates; why should they be compelled to finish them? We aren’t about to start a fast or anything like that, so we don’t have a “clean your plates” rule, we have an “eat till you’re full, then stop” rule, accompanied by junk food only infrequently being available at all.

          • acymetric says:

            @Randy M

            Good for you, I understand why “clean your plate” was a thing in the past but it is long past time to retire it, at least for the majority of the population in the developed world (essentially anyone not living in extreme poverty).

            Is the practice of bribing kids to eat more dinner with the prospect of dessert/snacks after even worse? “Finish the rest of your dinner and we can have ice cream!” (Now the kid has eaten too much dinner and topped off with ice cream)

          • Randy M says:

            Is the practice of bribing kids to eat more dinner with the prospect of dessert/snacks after even worse?

            Probably, although I’m guilty of it on occasion. “You want ice cream? But there’s dinner left, eat that!”

            I think the better practice is to keep portions smaller when you know you will be serving dessert and limit availability of sweets generally.

            And please don’t bribe every little bit of good behavior with candy. I’m shocked at how many fillings some children I know have.

          • The Nybbler says:

            But operant conditioning works so well. Telling a kid to do something unpleasant because it will be good for him, or because it is his duty, and the only reward for this will be in the doing… well, there’s a reason there aren’t any children’s books by Marcus Aurelius.

          • jgr314 says:

            @Nybbler

            there’s a reason there aren’t any children’s books by Marcus Aurelius.

            There seem to be children’s books for everything these days. Your comment immediately made me think about Zen Shorts, but it turns out someone even wrote a series called Little Stoics.

          • Nancy Lebovitz says:

            This is just the result of reading a bunch of anecdotes, but it seems that no-sugar households result in children who binge on sugar. It’s better to have a moderate sugar household.

          • Aapje says:

            @The Nybbler

            But operant conditioning works so well.

            Also to create compliance when the reward/punishment structure no longer exists? I doubt it.

            The goal of getting kids to eat veggies is not just to have them eat well while under control of the parents, but also when grown up and in control of their own diet.

          • Aapje says:

            @Nancy Lebovitz

            An attempt was made to raise me without sugar, so I wouldn’t develop a taste for it. Then Halloween came around. Turns out that liking sweet things is innate.

          • The Nybbler says:

            @Aapje

            If conditioning and habituation (that is, getting used to the unpleasant taste of vegetables) doesn’t work, what does? I expect most kids, as adults, will end up continuing to eat more or less what they ate as kids, or perhaps what their co-habitants ate as kids.

          • Aapje says:

            @The Nybbler

            I was opposing the idea that operant conditioning is sufficient.

            People who are raised under strict rules that they don’t come to believe in often do not autonomously follow the rules when they believe that authority is not present. Of course, if the rules actually have a purpose, this can result in a (painful) learning experience, possibly even one they cannot recover from.

            It works a lot better to teach the child that their own goals require the behavior: “If you eat fast food too often, you will get fat and will get bullied.”

            Operant conditioning can then be used when the kid is too young to have self-control, sufficient causal reasoning or such, but once the kid gets older, the response to misbehavior should shift to appeals to their self-interest and/or goals.

            This is not only more reliable, but it fundamentally improves the parental/child relationship, as it’s not adversarial, but cooperative: “I am helping you achieve your goals” rather than “I am making you do what I want.”

            PS. I have no problem with habituation to some extent, but it can be taken too far.

          • Edward Scizorhands says:

            A kid who never has to eat vegetables is how you get Warren Buffett, who only[1] eats hot dogs at 89.

            You can learn to like foods, or at least tolerate them. Kids can’t realize this. Sometimes they outgrow it on their own, but knowing that trying a new food won’t kill you is a valuable life skill parents want kids to learn.

            [1] Not literally, but his diet is like an autistic kid’s.

    • Douglas Knight says:

      here and here is a randomized controlled trial showing that giving people an ounce of nuts each day causes them to have fewer heart attacks and fewer deaths. Giving people olive oil was even better. (Giving people nuts is a better intervention than telling them to eat nuts, because it is causally downstream.)

      This is by a very large margin the best study of the health effects of diet that has ever been done in the history of the world. It had some randomization problems and people are currently freaking out about it, but while it’s possible that they have secret information that they refuse to share, it’s probably just that they are bad at statistics.

      Nuts are diverse, with different fat profiles. I think that almonds are considered to have the best (a relatively high omega-3 to omega-6 ratio, though not much polyunsaturated fat, lots of monounsaturated). This study was walnuts+almonds+hazelnuts.

    • Nancy Lebovitz says:

      My feeling is that not much is known about what people actually eat, let alone how people used to eat, and the effect of food on weight and health.

      As a result, people invent examples of good or bad eating and guess at the results.

      I’ve heard that snack foods are engineered to be pleasant to eat without being satiating, which explains why it’s easy to go through a whole bag of chips or liters of soda. And that people were exercising less, but also eating less (in the 70s), until these engineered foods were developed. I give this a maybe.

      I have a notion that some fraction of the gain in weight is a result of dieting. I’ve seen a lot of anecdotes from people who lose weight by dieting, then gain it all back plus 25 pounds. Some people do this three or four times. And then (at least in the anecdotes I see), they stop dieting. It’s more common for their weight to stabilize than for them to lose weight.

      • Hyperfocus says:

        I agree that some portion of weight gain is caused by dieting. We’ve all heard that yoyo dieting is bad, but it wasn’t until I started seeing a bariatric specialist* that I understood why.

        Most dieters don’t eat enough protein. So while they lose fat, they also lose muscle, and muscle mass is the best way to boost your daily calorie burn. This is why low-fat lots-of-whole-grains-and-not-much-else diets start out with rapid weight loss that slows as you approach your target weight. You lose fat but you also lose your fat burners. Then you go off the diet and start eating junk again, and still don’t eat enough protein, so your muscle mass stays lowered, while your fat increases again. Eventually you get fat enough that you diet again, which is more difficult this time, and you lose even more muscle mass in the process. Rinse and repeat, and eventually you join the fat acceptance movement because it’s just not possible for you to lose weight without starving yourself.

        The moral of the story is eat lots of protein! Egg whites and fatty fish: eat ’em every day if you can! Don’t drink alcohol with food! Limit yourself to 100g of carbs per day to ensure slow and steady weight loss! I’m not a dietician, don’t take my advice without consulting an expert!

        *I’m not morbidly obese, but I’m 265lbs and would like to weigh 200lbs (I’m 6’1″), so I enlisted the help of someone who can make that happen.

    • Worley says:

      Be careful — you’re running into the fact that the concept of “a healthy food” resonates emotionally but doesn’t correspond to the real world. You can have “a healthy diet” or at least a diet that is healthier than another diet. But there is no food for which eating an unlimited quantity of it will always be better than the same diet without it.

  15. onyomi says:

    I’m against CW-specific bans because of the risk, even unintended, of banning opinion space in the name of banning bad behavior. If a topic is off-limits it should be off-limits for everyone. That said, in cases where a poster is good except for a particular bête noire they bring up inappropriately all the time, I can see the reasonableness of issuing issue-specific warnings or “soft bans,” so long as they are public and specific (not just “culture war,” but “Alice is warned to stop posting low-effort Trump swipes,” “Bob is warned not to post about horrible banned discourse for three months or a permaban will result”).

    • Edward Scizorhands says:

      We’ve had that before. A user was told to stop bringing up Ayn Rand unless they did a book report on Atlas Shrugged to show they read it. It was never tried again, maybe for good reason, or maybe Scott forgot about it.

      • Nick says:

        It was Jill:

        Jill, you are not yet banned, but you are forbidden to reference Ayn Rand, accuse other people of worshipping Ayn Rand, attribute everything you dislike to a conspiracy centered around Ayn Rand, or use Ayn Rand as a metonymy for any view you disagree with. I will lift this restriction if you read and post a book report on Atlas Shrugged.

        • Watchman says:

          Seems likely that a ban would be a kinder punishment (I’m yet to find a readable, in my opinion, political work of Rand’s).

      • bean says:

        That’s not quite the only case we’ve seen. We had another who was specifically warned to stop using too many weirdness points. Both of these quickly led to actual bans, although I’d say that EC (who sparked this latest round) was a somewhat higher-quality poster than either of those two. (At the very least, he was polite and didn’t cause huge fights every other OT.)

    • John Schilling says:

      If a topic is off-limits it should be off-limits for everyone.

      By that standard, CW topics have to be off-limits everywhere that isn’t willing to accept dumpster-fire-in-a-cesspool level dialogue, because there will always be people who want to drag the dialogue down to that level and they will go out of their way to find unspoiled fora to spoil. For productive dialogue on some topics, anywhere, you absolutely have to exclude some people.

      The question at hand is whether it is necessary to exclude those people absolutely, or whether we (by which I mean Scott) can make room for them in a limited non-CW capacity. I am skeptical that this would work very well in practice, but it might be worth a try.

      • Dan L says:

        This is more or less my position. There’s a line to be walked between recognizing that certain types of conversation are unlikely to further a site’s purpose and allowing for a heckler’s veto.

        On individuals with inconsistent contribution quality, I’m a fan of short-duration action on a hair trigger, e.g. 72-hour ban with no warning for a litigable infraction. This contrasts with waiting until someone’s net value goes negative, which can mean tolerating a lot of bad behavior from a regular. The hard question isn’t what erosion of norms they personally are responsible for, it’s what effortposts you missed because their authors went elsewhere.

        • bean says:

          On individuals with inconsistent contribution quality, I’m a fan of short-duration action on a hair trigger, e.g. 72-hour ban with no warning for a litigable infraction.

          I think this is probably a good idea. A 72-hour ban for a stupid comment is definitely more helpful than a 3-month ban after a dozen stupid comments with no action before that point.

          • Lambert says:

            +1

          • Randy M says:

            Which is also good parenting or teaching advice.

            Not the bit about banning from the premises for 72 hours, but the clear, consistent enforcement from the start.

          • Nick says:

            It is better, but it requires much more prompt action from Scott, which I don’t think he wants to do. Given that low administrative headache for him is one of the things we’re optimizing for, I’m not sure it’s a realistic option.

          • baconbits9 says:

            Which is also good parenting or teaching advice.

            Not the bit about banning from the premises for 72 hours, but the clear, consistent enforcement from the start.

            I think you would have to live in San Diego to get away with a 72 hour ban.

          • Randy M says:

            I think you would have to live in San Diego to get away with a 72 hour ban.

            Is there a story behind that?

          • Theodoric says:

            I don’t suppose Mark Kleiman would like to volunteer to be a mod?

          • Dan L says:

            @ Nick:

            It is better, but it requires much more prompt action from Scott, which I don’t think he wants to do. Given that low administrative headache for him is one of the things we’re optimizing for, I’m not sure it’s a realistic option.

            Unfortunately, I fear that goal directly trades off with the goals of active moderation*. As Randy alludes to, action is made most effective by shortening the feedback loop and minimizing false negatives. Trying to substitute by increasing punishment as a deterrence has mixed success and notable downsides. This can be confirmed by either a steely-eyed criminologist with decades of data, a competent schoolteacher, or a mediocre dog trainer. But that’s a different rant.

            A very notable benefit of preferring short-term punishment is that the lower stakes means false positives are less damaging. There’s less inherent need to litigate moderator decisions and the moderator has more opportunities to hear about it if they legitimately overstep.
            (I thought of a few techniques that could be used if community pushback continues to be a problem, but never needed to implement them when I was moderating.)

            *I think there’s a difference in philosophy between moderation that shapes the behavior of individuals and that which selects for a certain population. The aspirational goal of promoting positive contribution from a diverse population would necessitate the former, I think.

    • Paul Zrimsek says:

      I’ve thought before that highly specific Personal Topic Bans would be an improvement over full bans, but they might not be practical. Remembering a list of banned people is probably hard enough; this would involve remembering a matrix.

      In any case, I’m against tightening moderation any further until such time as Scott feels able to return to ideological even-handedness.

      • albatross11 says:

        On the other hand, an informal “Hey, why don’t you lay off the CW posts for a couple weeks, you’re getting to be a one-trick pony” might work out okay.

  16. Nancy Lebovitz says:

    https://www.thecut.com/2019/01/does-duolingo-even-work.html

    Claims that Duolingo is pretty useless. Anyone with experience one way or the other?

    • onyomi says:

      I think as a stand-alone tool it’s unlikely to get you anywhere, but might be a good way just to feel one is getting started and/or bone up on basic vocab. I think the sort of practice it offers is too decontextualized and divorced from practical situations in which you’d use a language. And so far as that kind of tool goes it seems to me not as good as Rosetta Stone or Pimsleur, though has the advantage of being free. As usual I will recommend live tutoring on italki and jumping in with reading and listening as soon as possible with Lingq.

    • Viliam says:

      My experience with Duolingo is “hit and run” — using it intensely for a week or two, then ignoring it completely for months, and again; which of course defeats the entire spaced repetition approach — so I can’t talk about how useful it is at its goal, but at least I have an idea of how it works.

      From my perspective, the article is “kinda true, but in a boring way”. It says that Duolingo is less efficient than being fully immersed in the foreign language environment. No shit, Sherlock!

      Then it complains about the choice of topics. The usual textbooks have lessons focused on conversational situations, such as “family”, “in a restaurant”, “in a shopping center”, while Duolingo has lessons more like “adjectives”, “numbers”, “past tense”. (Note: Duolingo also teaches those words within the context of entire sentences. It’s just that the sentences do not try to make a coherent story.) Maybe this is a valid criticism; I am not sure. On the other hand, I am happy that when I want to refresh the past tense, I can click on the lesson called “past tense”, instead of having to remember that the past tense was introduced in the lesson “visiting Grandma”.

      Speaking of the practicality of the lessons, I guess you can’t make everyone happy. Different people use language for different purposes; from my perspective “ordering food in a restaurant, small talk, arguing with a policeman about traffic violations” is not the central way to use language. I used to be annoyed when lesson 6 in a textbook is about ordering various kinds of meat in a restaurant – as an aspiring vegetarian, I often don’t even know what some of those words used to describe various ways of chopping and cooking meat actually refer to, so why would I spend an entire lesson learning their English versions? – but I guess for people whose preferred outcome of learning a language is “go for a vacation, eat in a restaurant, get wasted, and cause a traffic accident on your way home” this is one of the most important topics.

      My complaints about Duolingo would be completely different:

      They keep changing how the main page works. At some moment I was satisfied: the lessons were clearly marked as “freshly learned”, “learned long ago, needs refreshing”, and “not learned”; then they kept playing with colors and meanings, and I am not sure anymore what anything is supposed to mean. (I guess the attempt to dumb it down for an average user made it less useful for me; and I am not really sure the average user was actually made happier.)

      When you choose to exercise a random topic, it chooses a random lesson and gives you 30 exercises in a row from the same lesson. I would prefer to have 30 exercises from different lessons instead; my whole reason for clicking the “random” button was that I don’t want to proceed lesson by lesson.

      But none of this is addressed in the article.

    • J Mann says:

      I’ve been doing Duolingo Spanish for two years. IMHO, if you want to play a game on your phone, it will teach you more Spanish than playing Candy Crush would, and isn’t a bad way to build vocabulary, but you need to supplement it with something. I’m a big fan of the Living Language Spanish podcast, and of chatting online.

      I tried Rosetta Stone Spanish and didn’t find it any better.

    • rlms says:

      The only things it could even in theory be good for are vocabulary and beginner-level grammar (I don’t think you could seriously claim that the transcription/speaking/translation exercises are helpful in teaching you to understand and speak to people or translate actual texts, since in the real world language doesn’t come in bite-sized chunks and only contain vocabulary you’re familiar with). Is it good for those things in practice? Well, it’s better than nothing, and probably as good as a lot of language courses. Definitely worse than proper spaced repetition software and a grammar book in terms of efficiency, but being less efficient means you’re more likely to do it.

    • Winter Shaker says:

      I have kind of vaguely dabbled on Duolingo in a few languages, but the only language that I have studied only on Duolingo is Hindi – I got about two thirds of the way through the tree – and I now can remember very little that I could produce (though would presumably be able to recognise a fair bit more), and basically nothing that I might actually want to say to someone in the real world. One of the phrases I remember was one that translated as ‘What is tea? What is water? Who am I?’, which is… likely to be useful only in some very specific circumstances.

      (Luckily, there are people working on adding Hindi to LingQ, the Krashenite comprehensible-input-based site that Onyomi turned me on to and is recommending here, so hopefully that will soon be available as a better alternative, albeit not a free one)

    • fraza077 says:

      I’ve used Duolingo for the last 2 years or so for Spanish, and I wish I’d moved on to something else sooner. I can get the right answer so easily without really having learned anything, especially for multiple choice questions. The only really difficult exercise was translating from English to Spanish and having to write it out myself, which in the app was quite rare. I completed the course quite a while ago, and kept practicing, but am extremely far from fluent.

      I’ve switched to Anki, and I love it. Sure, I can’t ask Anki questions, and that can be frustrating, but fortunately I have some Spanish-speaking friends who help me with that. It’s so quick, because I don’t have to enter anything, I just indicate how difficult it was for me to come up with the answer.
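
      (For what it’s worth, if anyone’s curious what “indicate how difficult it was” turns into under the hood, here is a rough sketch of the SM-2-style scheduling rule that Anki-like tools descend from. It’s a simplification for illustration only, not Anki’s actual algorithm, and the function and variable names are mine.)

      # Simplified SM-2-style spaced repetition update (illustrative sketch, not Anki's exact code).
      def review(interval_days, ease, grade):
          """One review of one card. grade is a 0-5 self-rating; >= 3 means recalled."""
          if grade < 3:
              # Failed recall: start the card over, with slightly reduced ease.
              return 1, max(1.3, ease - 0.2)
          # Successful recall: nudge the ease factor (the classic SM-2 formula)...
          ease = max(1.3, ease + (0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02)))
          # ...and stretch the next interval multiplicatively.
          return (1 if interval_days == 0 else round(interval_days * ease)), ease

      # Example: a new card recalled with grade 4 on three consecutive reviews.
      interval, ease = 0, 2.5
      for _ in range(3):
          interval, ease = review(interval, ease, 4)
          print(interval, round(ease, 2))  # intervals stretch out: 1, 2, 5 days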

      • Winter Shaker says:

        For what it’s worth, you should try the Krashenite style and see if it works for you. For Spanish it’s pretty easy to get beginner to intermediate books of short stories that come with matching audio, such that you can listen and read at the same time, without having to stop and look up too many words. Much repetition is recommended; and, like Onyomi says, also book tutors for one-on-one lessons so that you can practice speaking with a real human interlocutor.

    • DinoNerd says:

      I’m not a connoisseur of language learning methods, but I’ve tried Duolingo – twice. It’s better than nothing, but I don’t like it.

      Good:
      – gave me a chance to practice voice-to-meaning and voice-to-spelling, which I wasn’t getting elsewhere
      – if you use Chrome, it can do text-to-voice exercises, but not very well
      – it was quite useful for attempting to resurrect my knowledge of a language in which I was once reasonably fluent, after 20 or more years with limited opportunity to read the language, and almost no opportunities to write or hear it

      Bad:
      – lots of memorizing the specific answer to use for this particular question – sometimes there are n possible translations, it only accepts one, and when the same word comes up elsewhere, it allows more or different choices. This is well beyond “meanings in context” and amounts to “sloppy coding”
      – system thinks that a word is a sequence of characters. “boy” and “boys” are different words. But two different parts of speech, with the same spelling, are the same “word”, even if the meaning is unrelated.
      – poor proof-reading
      – feedback on errors is eccentric, and sometimes incomprehensible. E.g. suppose you are asked to translate “you run”. In many languages, there are different words for *one* you and a group of “you” – e.g. “tu” and “vous” in French. There are also often multiple words potentially translated as “run” – e.g. “rennen” and “laufen” in German. So you try “du laufst” – except you can’t spell, and wind up with “du lauffst”. Its favourite choice for this question happens to be “Sie rennen” – so it tells you “du lauffst” should have been “Sie rennen” – even though it would have accepted “du laufst” – because it’s “too hard” in general for it to figure out which form you were trying to use. This was new breakage the last time I was on Duolingo; it precipitated me leaving again.
      – almost no explanation (e.g. of grammar) – it’s all “interpret this sentence” (or sometimes, a phrase).
      – material changes frequently, whereupon it winds up confused about what material you actually know, leading to rework. Also, there’s always a huge spike of typos etc. after any new material is released, and you can’t choose to stick with the old
      – meh gamification, with rewards you can’t use for anything

      Summary: maybe someone who learns differently than I do could learn a language from scratch with this, at least to some vaguely usable level. But they’d memorize a lot of errors in the process. With German, where I’d spent some time with a “German for reading knowledge” course, travelled in the country (attempting to use my bad German), and had a fluent friend to give reality checks, it was somewhat useful **on top of other methods**. With French – where I was once fluent – it was excellent for blowing off the rust. But I went from knowing a handful of Spanish words – to still knowing only a handful of Spanish words. (I never tried a language I knew absolutely nothing of.)

    • dndnrsn says:

      Good for vocab, terrible for everything else. Repeated exposure can help, but it has to be a lot of exposure, more than 15 minutes or half an hour a day. The app version doesn’t give enough grammar instruction – there’s a whole bunch of things in German where I never knew how they worked, and I basically just memorized the answer to something I’d gotten wrong so I could get through the lesson – but in an actual class I picked them up very quickly.

      The weird made-up sentences seem intended to be shared on social media.

  17. Yaleocon says:

    A QM question for anyone with a solid background in Everettian mechanics and Bell’s inequality. Hopefully the exposition of the question will make sense to someone; sorry if it’s too technical.

    So, it’s clear that singlet states can’t be straightforward superpositions. Two particles in a singlet state are guaranteed to have opposite spin if measured in the same basis. That is, a singlet state acts like A:(1/√2|up-x⟩𝛼|down-x⟩𝛽 + 1/√2|down-x⟩𝛼|up-x⟩𝛽) when measured in x. And it acts like B:(1/√2 |up-y⟩𝛼|down-y⟩𝛽 + 1/√2 |down-y⟩𝛼|up-y⟩𝛽) when measured in y. But measure A in y, or B in x, and you don’t get guaranteed opposite spin; so the singlet state is clearly neither. So particles in a singlet state aren’t in a superposition in any single basis at all. It’s almost like the singlet state “decides” which superposition to act like it’s in only once it’s measured.

    Everettians have a good explanation for why there isn’t a “measurement problem” when it comes to superpositions: all eigenstates are simultaneously real, and there is no “collapse”, just “decoherence” when you figure out which world you’re in. But again, singlets aren’t simple superpositions, and their problems run deeper. It seems like measurement in a particular basis forces them into “deciding” what kind of superposition they are. But that can’t be right (to many-worlders at least); clearly it’s the kind of “measurement effect” the MWI abhors. So what’s the Everettian alternative? What’s the explanation for what’s going on with singlet states?

    • eyeballfrog says:

      You’ve made an error–the singlet state is (1/√2|up-x⟩𝛼|down-x⟩𝛽 – 1/√2|down-x⟩𝛼|up-x⟩𝛽). That minus is a big deal. In the singlet state, angular momentum is zero in every direction. With the + sign, you have a triplet state, which only has zero angular momentum in a particular direction. Your A and B aren’t the same state–if you change A’s basis to the up-y/down-y basis, it won’t be B (it’ll be (1/√2 |up-y⟩𝛼|up-y⟩𝛽 – 1/√2 |down-y⟩𝛼|down-y⟩𝛽)). But the singlet state (1/√2|up-x⟩𝛼|down-x⟩𝛽 – 1/√2|down-x⟩𝛼|up-x⟩𝛽) has that form in every basis, and really will get opposite spins no matter what basis you measure it in.
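
      (If you want to sanity-check that numerically, here’s a minimal sketch, assuming numpy; the helper names and the particular angles are mine, just for illustration. It measures both particles along the same arbitrary axis and asks how often the two results agree.)

      import numpy as np

      def axis_basis(theta, phi=0.0):
          # "up"/"down" eigenvectors of spin along the axis (theta, phi) on the Bloch sphere
          up = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
          down = np.array([-np.exp(-1j * phi) * np.sin(theta / 2), np.cos(theta / 2)])
          return up, down

      def pair(a, b):
          # two-particle product state
          return np.kron(a, b)

      up_z, down_z = axis_basis(0.0)
      singlet = (pair(up_z, down_z) - pair(down_z, up_z)) / np.sqrt(2)     # the minus sign
      plus_state = (pair(up_z, down_z) + pair(down_z, up_z)) / np.sqrt(2)  # the plus sign

      def prob_same_outcome(state, theta):
          # probability that both particles give the SAME result when measured along this axis
          up, down = axis_basis(theta)
          return abs(np.vdot(pair(up, up), state)) ** 2 + abs(np.vdot(pair(down, down), state)) ** 2

      for theta in (0.0, np.pi / 4, np.pi / 2):
          print(round(prob_same_outcome(singlet, theta), 6), round(prob_same_outcome(plus_state, theta), 6))

      # The singlet column stays at 0 for every axis; the "+" state only does so along z.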

    • Eugene Dawn says:

      My understanding is that there’s nothing really special about singlet states here: the ambiguity you’re noticing is that the state 2^(-1/2) [ |U_x D_x> + |D_x U_x > ] can also be written, using the basis |U_y>, |D_y>, as 2^(-1/2) [ |U_y D_y > + |D_y U_y> ].

      What’s happening is not that your state isn’t a simple superposition: it is a superposition over the two entangled product states |U_x D_x> and |D_x U_x>. The real issue is that it is a superposition over other possible product states as well, for example |U_y D_y> and |D_y U_y>.

      This is not unique to singlet states: even a simple state like |U_x>, if expressed in the |U_y>, |D_y> basis, seems to be a non-trivial superposition in a way that |U_x> does not seem to be:

      |U_x> = 2^(-1/2) [ |U_y> + i |D_y> ].

      You might as well ask the same question about this state: if you were to measure it in the x-axis, you’d reliably see the state |U_x>; if you were to measure it in the y-axis, you would get 50% of the time that it’s |U_y> and 50% that it’s |D_y>. As before, whether this state is in a superposition or not seems to be decided only once you choose which axis to measure: if you only measure the x-axis, it looks like you have a trivial superposition, if you measure the y-axis, then it looks non-trivial.
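
      (A quick numerical check of that last claim, assuming numpy and the standard basis conventions, so these vectors may differ from the expressions above by an irrelevant overall phase:)

      import numpy as np

      up_z, down_z = np.array([1.0, 0.0]), np.array([0.0, 1.0])
      up_x = (up_z + down_z) / np.sqrt(2)
      up_y = (up_z + 1j * down_z) / np.sqrt(2)
      down_y = (up_z - 1j * down_z) / np.sqrt(2)

      print(abs(np.vdot(up_x, up_x)) ** 2)    # measured along x: always |U_x>        -> 1.0
      print(abs(np.vdot(up_y, up_x)) ** 2)    # measured along y: |U_y> half the time -> 0.5
      print(abs(np.vdot(down_y, up_x)) ** 2)  # measured along y: |D_y> half the time -> 0.5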

      This is sometimes called the ‘preferred basis’ problem: any pure state can be expressed in any of an uncountably infinite number of bases, but different choice of bases will give different apparent descriptions of the state: is it a classical state? a weird superposition? It depends on the choice of basis.

      As you point out, the standard Everettian answer these days has to do with decoherence: until you choose to measure it, there is not a good answer to which basis is to be preferred. However, when you get out your measuring device, your measuring device will work by becoming entangled with the qubit you’re trying to measure. The idea behind decoherence is that the measurement device acts as an environment to which the qubit becomes entangled, and due to the nature of the measuring device (i.e., that it has two distinct, essentially classical states corresponding to the two measurements you can make) this selects a natural basis in the qubit system such that when regarded in this basis, the behaviour of the qubit looks like classical probabilities.

      So the key thing to note is that decoherence doesn’t kick in until the qubit gets entangled with an appropriately “measurement-like” environment, and measuring devices are just the sort of environment to do the trick. If you like, the presence of the measuring device imposes a preferred basis, and so the qubit decides which superposition to act like only once it becomes entangled with a measuring device.

      Hopefully now, it’s clear what the Everettian explanation to the entangled state is going to be: just like with simpler superpositions, we have the choice of regarding our state as a superposition in any number of different ways before decoherence occurs–once we start making a measurement though, the measuring device entangles with the system, and the internal structure of the measuring device will pick out a preferred basis in which we will see the effects of decoherence.
      Since the state you mention is an entangled state, we imagine two measuring devices, each of which becomes entangled with the state, but the idea is the same: the choice of which measurement each of our two measurers chooses to perform will determine the possible internal states of their respective measuring devices, and so the joint state of the two measuring devices is an environment with which the original system can become entangled, allowing decoherence to take place w.r.t. the basis picked out by the internal states of the two devices.

      This is quite long-winded, but hopefully the point is clear: the behaviour you indicate is not actually unique to the state you mention and doesn’t really require entanglement: it’s simply a result of the fact that you can choose multiple bases in which to express a superposition; without a choice of basis, it’s unclear what superposition the state will “decide” to act like. But the presence of an appropriate environment (such as a measuring device) to which the state becomes entangled picks out a preferred basis, and determines how the world will branch.

      In some respects, this doesn’t look too different from the Copenhagen interpretation: we’ve replaced “a state doesn’t collapse until it’s measured” with “the world doesn’t branch until it’s measured” – but branching is still arguably a simpler description of what’s going on than state collapse, and with decoherence we can at least point to what is necessary for a measurement to have occurred: the state must have become entangled with a suitable environment that ‘prefers’ a certain basis.

      • lightvector says:

        Or to simplify and summarize – whenever any two particles interact, that interaction in general may entangle those two particles along a particular basis, with the basis depending on the relative position and orientation of those particles and the mechanism by which they’re interacting.

        So when your measuring device interacts with the particle, it will entangle with it along a basis that depends on how the measuring device is oriented or what kind of device you use, the same as when anything else interacts with the particle. Exactly as desired: “measurement” is not anything special and works like all other particle interactions.

        • lightvector says:

          Not sure if self-replies violate comment etiquette here, but just to add an addendum thought: If all of this still seems weird to you, then to shoot in the dark at the possible intuition that might be at the root here:

          Perhaps you may need to discard any ontological intuition that there is a single correct way to identify the possible “states” that the world can be in. As Eugene explained, even whether something is in a superposition at all or whether it’s in a pure state (e.g. whether there are two worlds, or whether there is only one) can depend on your choice of basis! Since different interactions will all be “in” different bases, there is no globally canonical way to choose one. This is one way that “many worlds” is an imperfect name – it can suggest such intuitions that don’t quite make sense.

          Or in the specific case of the singlet, it’s not as if anything special happens when the singlet “chooses” between being a superposition of x-up/down versus being y-up/down; those are literally the same thing in this case. And it may happen that the x basis or the y basis is more convenient to describe the next particle interaction, based on the orientation, etc. of that particle, but even so you could describe it in the other basis too if you liked – nature doesn’t care.

      • Yaleocon says:

        the state 2^(-1/2) [ |U_x D_x> + |D_x U_x > ] can also be written, using the basis |U_y>, |D_y>, as 2^(-1/2) [ |U_y D_y > + |D_y U_y> ].

        I don’t think this is correct. (Do the transformation out, they’re not the same.) The transformation only works when you substitute subtraction for addition, as eyeballfrog pointed out above. With that accounted for, my concern vanishes (because all that’s left is the preferred basis problem, which I’m familiar with and which is far less troubling). Thanks!

  18. johan_larson says:

    I’m having trouble thinking of all the reasons why someone might carry a pair of handcuffs. Luckily, I have helpful and creative friends who can be counted on to help me.

    First one’s free, kid: they found the cuffs on the street and are taking them to the lost-and-found.

    • Yaleocon says:

      A policeman, obviously
      A criminal (burglar/kidnapper/rapist), as a way to restrain victims/interlopers
      A prostitute carrying a pair as bondage gear (and others expecting to use them the same way)
      An eccentric patriot prepared to make “citizen’s arrests”
      A protester about to shackle him/herself to something
      The house manager of a show, procuring a pair as a prop for the performance
      Someone carrying several pairs in preparation for a local three-legged race (though that approach could be very uncomfortable)
      A dog owner, as an easy way to attach the handle of a leash to a bike rack (Maybe? I don’t have pets)

    • AlphaGamma says:

      Someone who is about to transport (or has just transported) cash or valuables in a briefcase handcuffed to them.
      A travelling handcuff salesperson.

    • Aapje says:

      A magician, like Harry “Handcuff” Houdini.

    • Nancy Lebovitz says:

      Going on a visit for an SMBD session– the person they’re visiting doesn’t have handcuffs or doesn’t have that kind of handcuffs.

    • Watchman says:

      Criminal memorabilia collector?
      Scrap metal merchant?
      Someone who just knows they look good in handcuffs?

      I mean, I assume anyone carrying handcuffs must have a reason to be so doing. Whether they want to explain themselves to an inquisitive police officer or not is another matter entirely.

    • Radu Floricica says:

      Lockpicking as a hobby, if you want a really innocent one.

      • Randy M says:

        Ostensibly innocent, anyway.
        Anyone who actually takes up lock-picking as a hobby probably has a reason for doing so, few of which are likely to be law-abiding, amateur magician aside.

        • Nick says:

          I’ve known lots of folks who wanted to learn to pick locks, and they didn’t mention any reasons beyond playful mischief. I’m half interested, not enough to actually pursue it, but one reason holding me back is that I don’t think I want the temptation!

        • Randy M says:

          I’ve known lots of folks who wanted to learn to pick locks, and they didn’t mention any reasons beyond playful mischief.

          They wouldn’t, would they? 😉
          Lock your doors; consider insuring your valuables.

          • Nick says:

            Oh, in the interest of full disclosure, I should mention I also played a locksmith in a campaign. The first plot hook had me hired by the Church to crack an old vault that contained some ancient artifact. Really fun character to play, but I’m afraid he was too uptight to get up to anything naughty. 😀

        • Nornagest says:

          Most of the people I know who’ve learned to pick locks are nerds who think it’s cool.

          I learned how back in college, from a member of my fencing team, and never used it for anything more nefarious than getting into the storage locker with our equipment before the professor came with the keys.

          • Randy M says:

            I’m sorry, there’s just no good reason for a civilian to have lockpicks of any kind. They are only tools used to rob people, and should all be confiscated.
            /

            (Previous tongue-in-cheek assertion largely retracted, while still giving everyone in this thread the squint eye)

          • Nick says:

            We used to have a problem with teachers getting accidentally locked out of classrooms in my high school. When this happened to my social studies teacher, he just picked the lock. So there you go, picking locks raises test scores!

        • bean says:

          I pick locks a little, although I haven’t done it in a while. Because locks are interesting and it’s a fun skill to have. No ulterior motive.

        • Nancy Lebovitz says:

          Feynman had an interest in lockpicking, and I think it isn’t uncommon for young geek guys.

          Explosives are a more common interest, or at least were.

        • Protagoras says:

          I was under the impression actual thieves overwhelmingly just break stuff (often windows, but while some locks are impressively sturdy, many are actually not that hard to break if you attack them with the right tools). That was the case on the few occasions I was stolen from, anyway. Lockpicking is, I suppose, quieter, but takes more time and requires you to develop the skill.

    • honoredb says:

      They have intermittent psychotic breaks, can feel them coming on with a few minutes’ notice, and have a history of misbehavior when under the influence.

      They’re on the way to win a bet about not succumbing to temptation and are exploiting a loophole.

      They’re on the way to a still-life drawing class and wanted a really evocative object.

      They’re improvising a solution to a crafts project.

      Last time they pitched a tent it blew away, maybe this will help?

      They’re trying to improve their foot dexterity by forcing themselves to use their feet in place of their hands every so often.

    • Loris says:

      A man who fears losing control, and carries them to be reassured that he can restrain himself before that happens. (Real case; saw this on a TV program some years ago.)
      A prankster intending to perform a practical joke on the groom on his stag night.
      Similarly, a prop to embarrass the bride at a hen night.
      A pairing device as part of a pub crawl competition.
      As an expedient short sturdy chain with handholds for use on some sort of flying fox / zip line.
      Actually, as an expedient way of attaching stuff to stuff.
      As a fashion accessory (very punk).
      As part of a costume. Cosplay etc.
      Technically – a prisoner, because they are under restraint and have no choice in the matter.

      And of course all the other random reasons for which the device or its components might come in useful.

      Why did you need a list of all the reasons anyway?

      • b_jonas says:

        > As part of a costume.

        Correct. An example is tom7’s prisoner costume in which he ran a marathon in 2011, documented at “http://radar.spacebar.org/?month=5&year=2011”

    • JPNunez says:

      Bicycle fan, lost his lock and grabbed the most similar thing available.

    • Nornagest says:

      Assuming they’re not a cop, nine times out of ten it’s kinky shit. Outside chance of a costume or prop. Unlikely to be anything malicious – I’m not saying that doesn’t happen, but zip ties would work just as well while being more portable and less suspicious.

    • Well... says:

      The handcuffs are being carried unknowingly, so there is no reason.

      A guy who makes, sells, repairs, and/or collects handcuffs is transporting product from one place to another.

      A prop person is carrying the handcuffs to the set/stage of a production. Because they are a prop.

      The handcuffs are being carried but not because the person carrying them is going to use them. He just needs to move them somewhere: he’s letting someone borrow them and has agreed to drop them off, or he’s moving to a new house and is bringing them with him, etc.

      The handcuffs belong to a very specialized martial arts instructor.

      The handcuffs are evidence and are being transported by a non-officer policeman to a courtroom.

      The handcuffs are an heirloom and are being transported by a lawyer to a will recitation. (I don’t know if it actually works this way outside of the movies.)

      The handcuffs are part of a museum exhibit and are being transported by the curator or exhibit designer.

      The handcuffs are metaphorical and are carried by each of us.

    • helloo says:

      Surprised no one’s mentioned fashion.

      Part of a trial or ritual.

      Remodeled into a pair of large glasses frames

      There’s always the wide category of art

      Improvised shackle (like as climbing gear or to lock a wire gate)

    • fion says:

      If you’re a werewolf on a full-moon evening.

  19. ana53294 says:

    The only thing clear to me after the crash of the Boeing 737 Max was that Airbus 320 Neo would get more orders, and indeed, they cannot cope (FT) with all the orders they are getting.

    Does the escalation in tariffs against Airbus have anything to do with helping Boeing after they have been harmed by the accident? I know that the conflict between the EU and the US over subsidies to civil aviation is ongoing, and the cases have been in the WTO for more than a decade. The EU gives illegal* subsidies to Airbus; the US gives illegal* subsidies to Boeing; this is true, and will continue to be so (and how harmful these subsidies are will be in WTO arbitration forever).

    *According to the WTO.

    • mfm32 says:

      Your analysis of the A320 orders is wrong, and I strongly believe your prediction is wrong. The A320 backlog existed long before the MAX grounding. The MAX has, last I checked, a roughly similar backlog. I don’t know if Airbus has taken any orders for the A320 since the grounding (probably), but I seriously doubt any are related to the grounding. Most airlines have only 737s or A320s to maximize the benefits of fleet commonality. Except for airlines that resulted from mergers of dissimilar fleets (e.g. United), mixed fleets are rare and fleet switches are even rarer.

      Undoubtedly there will be more grandstanding by MAX operators to either extract additional concessions from Boeing, try to get out of MAX orders they don’t want for other reasons, or both. Don’t mistake that posturing for reality. Airlines and airframe OEMs are highly sophisticated actors engaged in a long-running, high-stakes negotiation.

      • cassander says:

        As I recall, the 737 backlog is smaller and the 737 production rate is higher than the a320, but you are correct that the problem long precedes the max and the current crash is unlikely to seriously move the needle much.

    • bean says:

      Multi-year backlogs have been a fact of life in the airline industry for the past couple decades. To a round number, they hover around 5 years, and the manufacturer does what they can to keep it that way. Boeing did recently step down the 737 line from 52/month to 42/month because of the delivery freeze, but I expect it will go back up when they get the fix to the fleet.

      I’d agree with mfm32 that this is likely to see the airlines trying to turn the screws on Boeing for new orders, and in a few cases, the fleet decision could go to Airbus instead. (A lot of big airlines have both types, although most small to medium sized carriers have one or the other.)

      • mfm32 says:

        If you look at the big airlines with mixed fleets, all or almost all will be the result of mergers of airlines that each had single-type fleets. A very few small airlines have switched fleets historically, and during the switch they operated mixed fleets. But that’s an extreme corner case and even then only a transitory one.

        The only exception I can think of is pre-merger American, which bought A320s and 737s to get out of its crippling MD-80 problem ASAP. Had Boeing been able to meet American’s demand in a timeframe that worked for the airline, I strongly suspect Boeing would have won the whole order.

        Common fleets generate a very long list of very valuable financial and operational benefits. Airlines will go to great lengths to maintain them.

        • bean says:

          You’re right. I looked at several different cases, and you’re right about the root of all of the different fleets. Now I’m wondering why all of the Airbus and Boeing airlines decided to get married. (Seriously, I can’t think of a merger that hasn’t created a horribly mixed narrowbody fleet, except, I guess, for SWA-AirTran.)

          (Legacy United did operate both types at once, but they never bought 737NGs, and retired the last ones before the merger. Also, there’s Lufthansa, who has both the 747-8 and A380, but they’ve always been advocates of the Pokemon school of aircraft procurement.)

  20. Plumber says:

    Near my bed I’ve a pile of books, some from the library, and some I’ve owned for a while, many of which I read years (even decades) ago but only dimly remember, and I’m asking for suggestions on which to open right now:

    The Broken Sword by Poul Anderson

    The Dying Earth by Jack Vance

    Stormbringer by Michael Moorcock

    The Magic Goes Away by Larry Niven

    The Hour of the Dragon by Robert E. Howard

    The Call of Cthulhu and Other Weird Stories by H.P. Lovecraft

    The Shadow of the Torturer by Gene Wolfe

    American Character by Colin Woodard (non-fiction from the library)

    Viking Age by Kirsten Wolf (non-fiction from the library)

    Slow Cooker Revolution by the editors of America’s Test Kitchen (a cook book)

    Hawkmoon by Michael Moorcock

    The Coming of Conan the Cimmerian by Robert E. Howard

    The Stealer of Souls by Michael Moorcock

    Let’s Bring Back by Lesley M.M. Blume (non-fiction)

    Big Trouble by Matt Forbeck (an ‘Endless Quest’ Choose-Your-Own-Adventure-ish book)

    King Arthur Pendragon by Greg Stafford (this one’s a big game rules book)

    So which one shall I pick?

    • markk116 says:

      “The Call of Cthulhu and Other Weird Stories by H.P. Lovecraft” is the only one of these that I’ve read, so I can’t recommend it above all the others, but I can tell you that this one is really good.

    • Silverlock says:

      The Shadow of the Torturer is a terrific book. I remember rereading passages from it just to revel in the quality of the writing.

    • johan_larson says:

      I’ve heard good things about The Dying Earth, but I haven’t read it myself.

      • Watchman says:

        Excellent book, from memory, although I can’t actually recall any of the stories, which is odd.

    • FormerRanger says:

      As already recommended: “The Call of Cthulhu,” “The Shadow of the Torturer,” “The Dying Earth.” All three are on my re-read often list. One nice thing about them is that all have sequels or related books/stories, so if you like one, there is more where that came from. The same is true of the various Moorcocks, but I haven’t re-read them in a long time, alas.

    • sfoil says:

      I would rate The Shadow of the Torturer as the best overall, but it’s a pretty heavy read and of course you’re committing yourself to the rest of the New Sun books.

      I’ve been wanting to read The Broken Sword but haven’t gotten around to it. An exemplar of the lost art of standalone fantasy novels, by all accounts.

      Dying Earth is great because of how influential it is (especially on D&D, and not just for the magic system), is a work of incredible imagination, and is quite short.

    • carvenvisage says:

      “Shadow of the Torturer” is the kind of book some people say is mind-blowing, but that kind of experience might be more interesting for young people. If you’ve read/heard of “The Stranger” by Albert Camus, I’d compare it to that, in that it’s a weird, trippy, immersive, first-person experience that’s (I think) intended as a journey that could change your perspective as much as a fun adventure or riveting tale.

      Conan stories are like action movies – good, well-choreographed action movies. The Hour of the Dragon is a novel and the other is a collection of short stories, so if you wanted to see how much you enjoy them the latter would be a good place to start. (Also, Howard IIRC only wrote one or two Conan novels, if that affects things – maybe read The Hour of the Dragon second.)

      The Dying Earth is a collection of adventures with a lot of subtle dark/sardonic humour. If you like those latter things, I recommend it like a holy grail; if not, then still quite highly.

      • Nornagest says:

        I think Conan is best read in publication order. That’ll start you off with The Phoenix on the Sword, which is chronologically one of the later stories, but you’re missing a lot of the subtext if you don’t think of the character as someone you know is going to hack and slash his way onto a throne at some point. You won’t get much continuity, but that’s fine — relatively few characters carry over, and the sense is of an old warrior telling tales at random from his life, which is probably the best way to think of these stories anyway.

        It’ll get a little formulaic about a third of the way in, but that’ll pass. The other weird patch is “Beyond the Black River”, which reads more like a Western than a fantasy.

        • carvenvisage says:

          I think Conan is best read in publication order. That’ll start you off with The Phoenix on the Sword, which is chronologically one of the later stories, but you’re missing a lot of the subtext if you don’t think of the character as someone you know is going to hack and slash his way onto a throne at some point. You won’t get much continuity, but that’s fine — relatively few characters carry over, and the sense is of an old warrior telling tales at random from his life, which is probably the best way to think of these stories anyway.

          Listen to this guy, he has a cooler way to think about it.

          It’ll get a little formulaic about a third of the way in, but that’ll pass.

          You don’t need to mainline them, though. They’re a bizarre/alien place you can go – you wouldn’t necessarily go to Japan or Iceland every month even if you think they’re awesome. It’s almost enough just knowing they exist.

          The other weird patch is “Beyond the Black River”, which reads more like a Western than a fantasy.

          Does it? There’s a frontier, but most of the action happens in the jungle beyond it (as per the title – “the black river” is the frontier), which turns out to be full of bad juju. The association I would have drawn is to something like Warhammer not-40k’s Lustria – hostile jungle, ancient magic, savage humanoid people not liking outsiders intruding.

          (these ones are more or less human, but they’re more people of the jungle than primitive people in the jungle)

    • Le Maistre Chat says:

      The Broken Sword is amazing.
      I love Lovecraft, but I don’t know the exact contents of that anthology you have.
      The Dying Earth and the Wolfe are very good. Conan is up there.
      I liked the premise of The Magic Goes Away but don’t remember the writing being that good.

      • Nick says:

        Wikipedia has the contents of the Lovecraft anthology. “At the Mountains of Madness” and “The Case of Charles Dexter Ward” are two omissions that stick out to me. The sequel anthology has some greats: both those, plus “The Music of Erich Zann,” “Pickman’s Model,” and the essential “Dunwich Horror.”

        • Le Maistre Chat says:

          I have the Barnes & Noble leatherbound Lovecraft with every work of fiction he wrote under his own name (plus the occasional piece of ghostwriting, like “Beneath the Pyramids” for Harry Houdini).

          • Nick says:

            All my Lovecraft I’ve read online, but I have one of those Barnes & Noble Classics—Les Miserables, I think? They’re neat, and I like looking through them when I visit the stores, but given the price and the seller I feel like I’m the guy buying a fake Rolex. Might as well get Penguin Classics or Dover or Modern Library, you know?

          • sfoil says:

            Some of the Barnes & Noble hardbacks are tacky, and they’re not as high quality as e.g. Easton Press (of course, given the cost), but the B&N Lovecraft collection in particular is aesthetically appropriate, pretty exhaustive, and has excellent interstitial commentary from S.T. Joshi.

    • Well... says:

      I think you’ve found a very polite and pleasant way to brag about your book collection. 😀

      • Plumber says:

        @Well…,
        More my hoping to find kindred tastes, as my wife (who studied literature in college) calls my books “trashy paperbacks” as in “Throw out those trashy paperbacks already!”, and my co-workers give no indication of reading at all (though Game of Thrones the television show is spoken of highly by them).

        When I brag about my reading it’s more 19th century histories of Guilds, Conrad and Steinbeck than it is Howard, Moorcock and Niven.

        But this is much more fun than going “eeny-meany-minny-moe”!

        I went with Anderson’s The Broken Sword over other contenders (mostly because it got some votes, I remember that I liked the ’71 version when I read that one or three decades ago, and the type is larger and it’s easier for me to read than all but one of my Howard books, the game book, and a library book), but I see that Wolfe, Vance, and Lovecraft had a lot of votes as well, so those are next!

        Thank you!

    • Plumber says:

      @Atlas,
      That’s very kind of you to say!

    • Nancy Lebovitz says:

      If you’ve read The Book of the Long Sun and The Book of the Short Sun, what did you think of them?

      • Plumber says:

        @Nancy Lebovitz,
        I haven’t read any of those but by Gene Wolfe I’ve read a bunch of short stories, Knight and most of Wizard plus the beginnings of Ares, Pirate Freedom, and There Are Doors.

        Wolfe seems more “literary” to me than say Niven, or Anderson.

        • Nornagest says:

          Oh man, if you haven’t read it then Shadow of the Torturer is definitely what you should be reading. Knight and Wizard are good for what they are, but New Sun is on a whole ‘nother level. It might be the single book that’s most influenced the way I look at fantasy. Definitely in the top five.

          Vance might be as good in terms of pure use of language, but Wolfe’s easily got him beat for depth and complexity.

  21. Le Maistre Chat says:

    Italy didn’t become a nation-state until 1870, and there’s a slogan that’s been central to Italy’s nation-making project since before then: Fatta l’Italia, bisogna fare gli Italiani.
    … this translates to “We have made Italy, now we must make Italians.” Oh baby.

    • Machine Interface says:

      That seems in accordance with the nature of Italian nationalism as a culture-based rather than ethnic-based nationalism. Anyone can join the Italian nation, regardless of the accidents of their birth, as long as they adopt Italian culture and language. This is fairly similar to French, British and American nationalisms*, which indeed could have adopted similar mottos.

      *: that is, it’s the dominant conception; I am not denying that there have been and still are advocates of ethnic-based nationalism in all those countries, just like there were advocates of culture-based nationalism in countries where the ethnic-based conception eventually predominated (Imperial Germany, Eastern and South-Eastern Europe…)

    • vV_Vv says:

      This refers to the fact that Italians had, and to some extent still have, significant linguistic, cultural, economic and even genetic differences between geographical regions.

      This has been a constant source of controversy in Italian politics ever since, with claims that the concept of Italian nation is artificial, accusations of oppression and exploitation between various regions, revanchism for the pre-unification states and even secessionist movements recurrently appearing both in the north and the south of Italy.

      • Le Maistre Chat says:

        Right, the national struggle has been to make and keep people patriotic to the concept of Italy rather than their region or, say, “workers of the world.”

    • Worley says:

      The same can be said of France, and other nation-states in Europe. With France, the kings of France consolidated their territory, and all French governments since have been trying to homogenize the cultural identity of France. IIUC, you still aren’t allowed to teach in a school in the same province where you grew up. But at least 20 years ago, regionalism was still quite strong. Someone commented about the EU, “First I am Provincial. Second I am French. Perhaps third I am European.”

      • Aapje says:

        Someone commented about the EU, “First I am Provincial. Second I am French. Perhaps third I am European.”

        Ironically, the EU is so diverse that this statement which argues for the existence of strong localized cultures that are hard to unify, itself makes the mistake of attributing a unified meta-culture to Europeans.

        The regionalism of people in Amsterdam tends to be centered on the city, while the regionalism of Dutch rural citizens tends to be centered on the province.

  22. Bugmaster says:

    If you’re going to implement “Culture War Bans”, you’re going to need to be very clear on what “Culture War” means. To use an arbitrary example, let’s say that I’ve posted a link to an article stating that glacier loss is proceeding much faster than predicted. Am I waging culture war?

    • ilikekittycat says:

      Without being able to define culture war, just forbidding someone from commenting in the Hidden Open Threads and “Things I will regret writing” posts seems like it would have the desired effect. I could be wrong, but it doesn’t seem like most of the notorious posters or over-the-line comments are being bred in the typical mainline blog topic threads or Visible Open Threads.

      • 10240 says:

        Bad idea, as they would let their culture warring urges run wild in the non-CW threads instead. Especially if we can’t define CW, or they don’t know how it’s defined. While if they know to recognize what is CW, then banning them from making CW comments should be enough.

        • Plumber says:

          I may have a bad read on what is and isn’t ‘Culture War’, but lately the posts that seem the most ‘warrior’ to me are usually in the “no culture war” threads rather than the Hidden ones.

    • aashiq says:

      I disagree, because of Goodhart’s Law. Fear of being culture-banned will be more effective when the line is fuzzy, and Scott can judge on a case-by-case basis. If there is a clear line, it will be gamed, and people will be angrier if they get banned while technically not waging culture war under the earlier definition. This is not an open society run by the rule of law, but a dictatorship run by Scott.

      • Bugmaster says:

        In our current meatspace world, many people would prefer not to live in dictatorships where their lives are subject to the ruling monarch’s arbitrary whims. Some even risk their lives to leave such places for more lawful shores.

        • LukeReeshus says:

          “In our current meatspace world…”

          This is a blog / online forum. And I don’t know what Goodhart’s Law is, but aashiq’s point is solid. “Culture-warring” need not have a set definition.

          Indeed, it’s better if it doesn’t. Because if Scott thinks there’s too much culture-warring going on down here—that is, if he thinks it’s warping the ideal of this, his, blog—then he’s well within his rights to cut down on it. Moreover, his ability to do so will be hampered by any strict, legal definition of such.

          “So it’s arbitrary!” you say.

          Yes. And that’s the point. Because I, personally, trust Scott to arbitrate on this issue. I’m confident, at the very least, that he can distinguish between culture-warring and any good-faith discussion on a given culture-war-related topic.

          • Bugmaster says:

            then he’s well within his rights to cut down on it

            Well, yes; it’s his site, he’s within his rights to do anything with it for any reason whatsoever. He could turn it into a self-help forum for aspiring circus clowns, or something. The question is not “what can Scott do?” but rather “what is the smart thing for him to do?”, with the implication that Scott is amenable to rational discourse and the pressures of his commentariat, at least to some minor extent.

            I’m confident, at the very least, that he can distinguish between culture-warring and any good-faith discussion…

            I am not. This is not a slight on Scott; I am not confident in anyone’s capabilities to such an extreme extent.

          • Scott has the right to do as he wishes with his blog, but he asked for opinions and we have been giving them.

            I do not trust Scott to always make correct decisions, any more than I trust myself to. I in particular do not trust him to always make correct decisions if the decisions are invisible to the rest of us, so that he will have no feedback to tell him if they are incorrect.

          • LukeReeshus says:

            I do not trust Scott to always make correct decisions, any more than I trust myself to.

            “Always” seems like a pretty high standard, even for Scott.

          • LukeReeshus says:

            This is not a slight on Scott; I am not confident in anyone’s capabilities to such an extreme extent.

            Not anyone’s? Really? Not even your own?

            You don’t believe you’re capable of witnessing an exchange and observing, “Yeah, that’s just blatant culture-warring. The sweeping tone, the flat prose, all those subsidiary points inserted to widen the argument instead of refine it…”?

            You haven’t noticed stuff like that, and disliked it?

            I mean, I understand your concern when it comes to marginal cases, but surely that’s a discussion worth having after we cut down on the blatant stuff, right?

          • @LukeReeshus:

            There are posts which are pretty clearly culture warring. But I don’t trust myself to never misinterpret as such one that isn’t.

        • Viliam says:

          The usual online alternative to dictatorships is rule-lawyering.

          I don’t agree with all Scott’s decisions, but I prefer them to endless debates about whether comment X did or didn’t technically follow some rule Y.

          (Precise rules don’t stop people from being assholes; they just make them argue whether “asshole” and “assh0le” are the same thing, if the former happens to be explicitly listed as a reason for ban, but the latter does not…)

          • ilikekittycat says:

            +1.

            In 25 years I have seen successful dictatorship web sites, where the vision/voice of the original website auteur is so clear that most of the rules wouldn’t have even affected how most posters were gonna post and it feels like benign neglect. I have seen unsuccessful dictatorships where everything is drama and purges are done in secret. I have seen terrible rules-lawyer sites with democratic feedback where everything gets bogged down on what subparagraph 3B of the “no parting shots or moderator sass” clause means.

            I’ve never seen a good, functional site that feels good to post on where there are a ton of rules and rules-lawyering and democratic feedback contradicting the people in charge. It’s an empty quadrant.

          • Nornagest says:

            Yeah. I used to wonder about that, back when I did more Petty Internet Tyrant work, but then I thought about it a bit and realized that the RL institutions with tons of rules usually have decades to centuries’ worth of experience in finding the edge cases and filing the rough bits off, and large budgets and plenty of personnel dedicated to interpreting them. And a lot of them still don’t work that well.

            With that in mind, it’s not too surprising that half-assed ad-hoc rules ginned up by a half-dozen laymen in their free time tend to cause more problems than they solve.

          • Nick says:

            @Nornagest I saw on the subreddit once that the SQLite project adopted the Rule of St. Benedict for its code of conduct. I think that’s one way to avoid the problem of half-assed ad hoc rules!

          • johan_larson says:

            I don’t know. The Rule of St. Benedict is really lofty and demanding. So much so that I can’t help but suspect anyone claiming to follow it, even imperfectly, of either cluelessness or deceit.

            I can totally believe that trying to follow it will make you a better person. But claiming to be trying to follow it puts the rest of us in the position of deciding whether you are an aspiring saint or a plain liar. And while I have not conducted a census, there are probably more of the latter around. That suggests it is best to follow the rule silently.

          • BBA says:

            Codes of conduct have never sat well with me. Unless you’re talking about actual criminality, it’s a judgment call as to what’s acceptable in a free discourse and what’s grounds for a ban. Context, up to and including years of experience with the parties involved, matters. So if you’re running a space that’s small enough for one person to police, make it explicit that it’s a dictatorship and get on with it.

            See also, the CW thread on the SSC subreddit and how its moderators and their neutral, reasonable-sounding rules ended up making a place…like that.

        • Edward Scizorhands says:

          If you are stuck someplace, you want to know you can influence the outcomes.

          Lots of blogs, each one run as a dictatorship, is the best solution, since I can easily switch blogs. I never go to Cory Doctorow’s blogs because he deletes people that merely disagree with him. (Fair enough, I have more important things to do than worry about him.)

        • aashiq says:

          Agree, and I agree that people can up and leave if they think Scott is being too arbitrary.

          However, it is not always possible to reduce some notion down to a concrete rule. Even in a society based on laws, there is room for both specific rules and more general “standards”. For example, in Jacobellis v. Ohio, the standard for obscenity was stated as “I know it when I see it”, and common law often appeals to what a “reasonable person” would do. To restrict ourselves to only specific rules on a topic as amorphous as culture war is to remove a valuable tool from our repertoire.

          Another point is that I believe a forum where Scott decides what is best will be more pleasant to read than endless lawyering by the patrons. Scott is the only one correctly incentivized to do what is best for the blog. In addition, I trust his ability to identify culture war more than his ability to be a good lawyer.

          Last, I prefer a world with many competing blogs run under different philosophies of governance to all blogs mired in legalistic debate.

      • Jiro says:

        On the other hand, clear lines can be beneficial because they can prevent the definition from slipping over and over again. And Scott will be less tempted to make biased rulings when the bias has to be laid out for everyone to see and can’t be covered up by claiming it violates one of the rules that everyone violates.

    • chalst says:

      It depends on how they are implemented. You could leave it to users to tag their own posts as CW/non-CW, perhaps by referencing a special @culturewar user. Besides the value of having a social requirement for users to think about whether their posts have CW content, that opens the way to implementable consequences: the most practical I can think of (based on my glance at the WordPress plug-in API) is a cool-off period, a dialog form for such postings that asks for content warnings, or a default for such comment threads to be hidden.

    • dick says:

      I think 90% of the time people know culture war when they see it, and for the other 10%, that’s why there’s a warning.

  23. broblawsky says:

    I would also like to +1 the idea of CW bans. It might be best to administer public warnings, though.

    • Etoile says:

      Maybe one could have reddit-like flair for commenters with a “yellow card” warning, for example? There could also be flair for “top comment” in a given two-week period (a trophy of sorts that migrates) and other positive labels.

    • vV_Vv says:

      +1 for the public warnings.

    • Jaskologist says:

      Public warnings are important. The lack of warnings in some recent instances (and the older Reign of Terror) when banning people for politely expressed bad opinions was a big piece of what made it so offensive. If it’s a new name that’s crapping all over a thread, that’s one thing, but if they’re otherwise following the rules it’s a betrayal of the values this place claims to hold when you ban them without warning.

  24. FrankistGeorgist says:

    What would the long-term effect be of a society adopting Uterine succession? I know it’s existed in various cases, but I have trouble wrapping my head around the inheritance and the incentives such a system would create.

    To clarify:
    The Society remains patriarchal. Men govern and serve as head of the household.
    Succession is by primogeniture.
    Inheritance is founded on the notion of Mater semper certa est: wealth and titles pass from mother to daughter.
    So the most common path of inheritance will go from a man through his sister to his nephew.
    Men rely on their sisters rather than their wives to produce children of their house.
    Incest is discouraged, although cousins may marry.

    So the first implication I can see is that an aristocratic family will want to secure both a son and a daughter, minimum. Their son will be trained to govern the family, and the daughter will be trained to breed. This seems pretty similar to the incentives aristocrats have in a patrilineal system. But marrying off your daughter seems to have different implications. She’s not just immediately property of her husband, she’s still producing your family. Sons, however, don’t propagate the family, even though they get to govern it.

    In a way this feels like a separation of powers. Almost certainly an unstable one, but let’s say it’s really popular and not threatened by neighboring purely patrilineal systems.

    So obviously if you want tight familial control of wealth, you’ll want to marry cousins quite a bit. Which seems true of Aristocracy already in most places. But can there ever be dynastic intermingling? Can one family secure a web of marriage alliances and come to control the region?

    Obviously the longevity of one’s bloodline is always a toy feature of Aristocrats. In this case you can trace an endless fractal Matryoshka doll of mothers corded umbilically back through time. Do new houses ever arise? Are houses forced to multiply through countless cadet branches, or is there some incentive I’m not seeing to keep things tightly bound?

    Does a Duke’s sister live with him even when she’s married? Her husband brought into her family rather than her going away? Would two families of the same rank ever marry or would that be anathema to them? God forbid we throw a caste system on top of this. I have trouble keeping track of this.

    My understanding is something like this occurred in Kerala in India, and somewhere in Africa? But beyond just references to these things, I’m interested in how a society’s economy, family structure, and politics might be influenced by families structured along Uterine succession. I’ve read descriptions in the past, for instance, of how European societies had different incentives where inheritance was split between multiple children or reserved for the eldest.

    Any reading on the subject (or advice on how to find some kind of inheritance simulator) would be much appreciated.
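
    On the simulator question: below is a minimal toy sketch in Python of how one might start. Everything in it (three children per generation, five generations, the rule that the eldest daughter carries the title while the eldest son governs) is an assumption made up purely for illustration, not a model of any historical system.

      # Toy sketch of uterine (matrilineal) primogeniture; all rules and numbers
      # here are illustrative assumptions, not a reconstruction of a real system.
      import random

      class Person:
          def __init__(self, sex, mother=None):
              self.sex = sex          # 'F' or 'M'
              self.mother = mother
              self.children = []

      def have_children(woman, n):
          """Give a woman n children of random sex."""
          for _ in range(n):
              woman.children.append(Person(random.choice("FM"), mother=woman))

      def heir(matriarch):
          """Eldest daughter carries the title; eldest son (if any) governs."""
          daughters = [c for c in matriarch.children if c.sex == "F"]
          sons = [c for c in matriarch.children if c.sex == "M"]
          return (daughters[0] if daughters else None,
                  sons[0] if sons else None)

      # How often does a line die out for want of a daughter within 5 generations?
      random.seed(0)
      extinct = 0
      trials = 10000
      for _ in range(trials):
          matriarch = Person("F")
          for _ in range(5):
              have_children(matriarch, 3)   # assume three children per generation
              title_holder, governor = heir(matriarch)  # governor unused in this tally
              if title_holder is None:
                  extinct += 1
                  break
              matriarch = title_holder
      print("lines extinct within 5 generations:", extinct / trials)

    With three children per generation the chance of no daughter is 1/8 each generation, so roughly 1 - (7/8)^5, about 49%, of lines fail within five generations under these made-up assumptions; adding adoption of a sister’s or cousin’s daughter would be the obvious next rule to model.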

    • Eric Rall says:

      Do new houses ever arise?

      Probably, along the same process we see in historical patrilineal societies: “new” men/women join the nobility (as clients of higher nobles who create them as vassals, or by buying or conquering a noble estate and having their de facto status recognized by the existing social structure), cadet branches become seen as separate lines, or a son (or a daughter-in-law, or a granddaughter through a male line) inherits for want of a female heir.

      • markk116 says:

        Or a massively rich but landless noble family bribes a house to accept their son, who then rules the house and integrates the noble family in the house so deeply that they become indistinguishable.

    • Michael Handy says:

      While Sparta did not have the system you propose, I think in a society with martial goals it would have a similar effect of land being concentrated into a few female lines, as women tend to die less in pre-industrial societies. Regencies everywhere.

      A fun variant would be having female-led elective primogeniture (i.e. the woman can disinherit), which would basically mean a council of angry matriarchs with enormous power.

      • markk116 says:

        Fast forward a few millennia and suddenly you live in a democracy where only women can vote and only men can hold public office.

        • Viliam says:

          Would it differ significantly from a democracy where women have the majority of votes, and men have the majority of public offices? 😀

    • markk116 says:

      In the patrilineal system, daughters get married off and are often sent to live with their new family. I think in this system the men would be moved. I think this would create a really interesting dynamic, because if you marry your son off to the neighbouring kingdom, suddenly your family controls both areas completely.

      I think that would make accepting a marriage proposal from the other perspective a lot more hazardous. In a patrilineal system, a bride can be used to forge an alliance, in the matrilineal system, the groom is used to seal one. Either this would mean that nobody married out ever and large landmasses remained fractured until one kingdom can muster absolute military might, or one family can use this dynamic to snowball, marrying off multiple male heirs every generation, each acquiring a new slab of land for the kingdom.

    • Watchman says:

      For this system to be enforced it would require a different underlying social organisation than in historically patriarchal societies with descent of property through the male line. That much is clear from the simple observation that otherwise the dominant male would bypass his sister’s offspring in favour of his own, so for the uterine system to work there have to be constraints. The most obvious would be that men can’t hold property, just use it, but that might not fit the definition of uterine inheritance. Legal constraints might work, but if the powerful men opposed a customary practice, I am not sure that a legal authority would fare much better.

      The best solution I can offer is that the continuation of uterine succession would require a female-descent-defined clan-based society, so that each family was dependent on the support of other, nominally related families, and where the clans would be incentivised to keep land within their own lineages as opposed to letting it pass into a different lineage by father-son inheritance. The need for clan support, and possible clan claims on the land, would perhaps incentivise compliance with law and custom.

    • bullseye says:

      Assuming you have lots of families following the same rules and intermarrying with each other, I can see this going two ways:

      1. You keep your daughters at home and bring in their husbands in order to keep the family on the estate. So most of your men are only family by marriage, and one of them is the Duke. Duchess is the hereditary position, and whomever she marries is Duke. Women who seem likely to inherit would have lots of men competing for their hands.

      2. As above, except that the Duke is family by blood; “Duke’s mother” is the hereditary position. When a man becomes Duke he moves back from his wife’s family estate to his mother’s, and his own children are raised on the “wrong” estate.

      • March says:

        3. Both your (where the salient ‘you’ is female) sons and daughters stay at home because they’re more likely to be loyal to the family they grew up in and more likely to be good at governing a family if they have been able to take part in its governing for a long time as adults. Husbands and wives either don’t live together or only part time. Or marriages are arranged between older children (more likely to inherit the line/the top dog governing position) and younger children who are basically born without a chance to inherit anything and can be sent off to other households. (That is if anyone even cares where the Y chromosome comes from anyway.)

        The only thing this society can’t really be is patriarchal, since men invest their lineage-building efforts in their nephews, not their sons, and don’t end up ruling their direct descendants. It can still be a male supremacy, but it’d be a matrilineal avunculoarchy or something.

  25. Tatterdemalion says:

    A more interesting version of Birtherism: if Obama were two years older than he actually is, would he still have been eligible for the presidency?

    • Evan Þ says:

      You mean, if he’d been born in the (incorporated) Territory of Hawaii rather than in the State of Hawaii? The Constitution doesn’t specifically address that point, and no President was born in similar circumstances. But, the historical English context of the phrase “natural-born citizen/subject” points to the distinction being that the citizen/subject was born in the realm rather than being born outside it and then naturalized by Act of Parliament. So given the Territory of Hawaii was governed by the Constitution and owed allegiance to the United States, I would say yes he would be a natural-born citizen. Congress seems to agree, since they passed a joint resolution affirming that McCain – who was born in the nonincorporated Panama Canal Zone to citizen parents – was eligible.

      However, I can see an argument for the other conclusion: the Territory of Hawaii was not one of the United States with its own distinct sovereignty and eligible to be represented in Congress; it was merely an external territory belonging to the United States under the complete jurisdiction of Congress. That would fly in the face of a century of jurisprudence, but I personally wouldn’t dismiss it on first impression.

      (Note that Obama would not gain citizenship due to his parents’ citizenship, either in this hypothetical or in reality. His father was not a citizen; while his mother was a citizen, the law in place at his birth said such a baby would only gain citizenship if his mother had lived in the United States for N years after age K. His mother was less than N+K years old when he was born, so this was obviously impossible. Therefore, Obama is an American citizen solely because he was born in Hawaii.)

      An interesting hypothetical; thank you!

      • Eric Rall says:

        Congress seems to agree, since they passed a joint resolution affirming that McCain – who was born in the nonincorporated Panama Canal Zone to citizen parents – was eligible.

        McCain isn’t the only losing major-party Presidential nominee who wasn’t born in a state. Barry Goldwater was born in the (incorporated) Arizona Territory, and I don’t think there were any serious questions raised about his eligibility (although the extreme long-shot nature of his candidacy may have made any such discussion moot).

        I did a quick check, and was surprised I didn’t find anyone else. 19th century candidates seemed weighted heavily towards the older states (although not necessarily disproportionately, since most of the population lived east of the Mississippi (especially on the East Coast and in the Old Northwest/current midwest) until well into the 20th century), and candidates from newer states seem to have been born in older states and moved to the territories (or newly-admitted states) in childhood or early adulthood. Again, not that surprising in hindsight, since the western states were settled much more via internal migration rather than organic growth of the original cohort of American pioneers.

        • brianmcbee says:

          Ted Cruz was born in Canada, and although people tried to bring it up as an issue, it didn’t go anywhere. He didn’t get his party’s nomination though.

      • faoiseam says:

        According to Wikipedia, “Goldwater was born in Phoenix in what was then the Arizona Territory, the son of Baron M. Goldwater and his wife, Hattie Josephine “JoJo” Williams.”

        Also “During his presidential campaign in 1964, there was a minor controversy over Goldwater’s having been born in Arizona three years before it became a state. [110]” The link for [110] is dead, and the Internet Archive shows a version that does not substantiate this. Given that Romney’s father was born in Mexico, I think people back then were not as careful about the Natural Born Citizen clause.

    • Worley says:

      Well, Obama’s mother was a citizen, so he was a citizen at birth regardless of where the birth took place.

      But in regard to place, I have a memory that some odd fellow ran for president in his state just so that he would have standing to sue regarding Obama, McCain (who was born in the Canal Zone), and some third-party candidate (who had yet another oddity of birth). I assume the outcome of the case was that the US District Court ruled that all three qualified as “natural born”, as otherwise there would have been big headlines about it.

  26. Reasoner says:

    +1 for the idea of CW only bans. I’m in favor of people experimenting with moderation in general.

  27. AlesZiegler says:

    In previous open threads we had some good explanations of quantum mechanics concepts.

    With that in mind, I feel emboldened to ask about the so-called many worlds interpretation of quantum mechanics, which I know only from popularizations. And popularizing works make it look, frankly, dumb. So I am looking for a steelmanned explanation.

    Wikipedia, which for those purposes is not reliable, explains it thusly:

    “Many-worlds implies that all possible alternate histories and futures are real, each representing an actual “world” (or “universe”). In layman’s terms, the hypothesis states there is a very large—perhaps infinite[2]—number of universes, and everything that could possibly have happened in our past, but did not, has occurred in the past of some other universe or universes.”

    Those parallel universes, which together form multiverse, are, however, unobservable.

    To me this seems like if Isaac Newton, after he invented his equations correctly predicting movement of celestial bodies, declared that those bodies are pushed around by invisible demons who precisely follow his equations.

    Newton did not know what causes gravity, just as we apparently do not know what causes certain quantum phenomena, but Newton famously refused to engage in unconfirmable speculation on the subject, at least according to the foundational myth of modern science.

    I realize that I painted a crude caricature and I probably should apologize in advance to proponents of the multiverse hypothesis, but this thing has bugged me for some time and I really do not know of any other forum where I could get a steelmanned version of it.

    • Clutzy says:

      I am not an expert, but this is also something that concerns me about many modern physics hypotheses, including many worlds, string theory, dark matter, and dark energy.

      Once you go deep enough there is something falsifiable that tethers them (for some of them), but it seems we are very far away from testing such things.

      • AlesZiegler says:

        For the record, I do not think that dark matter and dark energy are on the same level as parallel universes. There is clear evidence that those things exist. Of course the evidence might be wrong, but that is a different problem.

        String theory is incomprehensible to me, which is not evidence of anything except my own intellectual limitations.

      • eyeballfrog says:

        This might help clear up a few things about those.

        Dark matter is invoked to explain a number of gravitational phenomena which strongly suggest the existence of mass we can’t see. For example, the orbital periods of stars in the galaxy are very different from what they would be if the galaxy consisted of only the visible matter. The conclusion is that either general relativity, a theory so accurate it can predict the bending of radio signals from the GPS satellites to your phone to within a few meters, is wildly wrong at galactic scales, or there’s a lot of stuff out there we can’t see.

        Dark energy is also invoked to explain gravitational phenomena, most notably the accelerating expansion of the universe. It suggests that the universe has a small but constant intrinsic curvature, which corresponds to a small vacuum energy density. This isn’t really that out there of an idea–quantum field theory also suggests that the vacuum should have an intrinsic energy, though it doesn’t say how much. Heuristic arguments get an answer that’s absurdly enormously larger than the measured value, though, so that connection is still being worked on.

        Many worlds is a mathematical formalism for doing quantum mechanics. The idea of parallel universes is more of a metaphor here than an actual description of what’s going on. But the underlying formalism gives the same answers as other formalisms for quantum mechanics, and we’re very, very certain quantum mechanics is correct.

        String theory is bullshit. Don’t let anyone tell you otherwise.

    • smocc says:

      I really dislike the name “multiverse theory,” as I think it implies almost the opposite of the point of Everettian quantum mechanics. I’m going to call it the Everett interpretation from here on.

      Quantum mechanics says that there are many more possible states reality can be in than we observe. For example, we only ever observe particles having spin up or spin down, but quantum mechanics requires us to believe in states like “spin up plus spin down” or “spin up plus three times spin down.” These are called “superposition states.”

      The question of interpretation of quantum mechanics is the question of why, if superposition states are real, we never observe them.

      Copenhagen-like interpretations posit a mechanism called collapse whereby the state of the universe changes when you make a measurement with a random component. For example, if at time t1 the state of the universe is “spin up plus spin down” then when you make a measurement at t2 the new state of the universe is either “spin up” or “spin down”. Why collapse happens or how it happens can be considered at varying levels of detail.

      The Everett interpretation points out that we don’t need to posit collapse at all because your consciousness is part of the state of the universe. The real state before and after measurement looks like
      state(t1) = "i don't know what the spin is; spin up plus spin down"
      state(t2) = (i know the spin is up; spin up) plus (i know the spin is down; spin down)

      We say that your consciousness becomes “entangled” with the state of the spin. Why the state never splits into
      state(t2) = (i know the spin is up plus down; spin up plus spin down)
      is explained by the mathematics of “decoherence,” which follows from the basic axioms of quantum mechanics. No collapse needed.

      So you see there are not really separate universes. It’s just that the state of the universe is a huge sum of different entangled consciousnesses, “(i am alive and know particle 1 is here) plus (i am alive and know particle 1 is there) plus (I am dead and particle 1 is here) plus …” You can’t observe the other universes because there is no you external of the universe. There isn’t even a you external of the particular part of the huge sum that your current consciousness state occupies.

      I used to really dislike the Everett interpretation, but I’ve come around to seeing some of its good points recently. For me the key insight was the one I started with: your consciousness is a physical process and so it is just another variable that can take different values in the big sum that makes up the state of the universe. Any description of different states has to include a description of what conscious observers are seeing. This is what allows Everett to explain the paucity of observed states without collapse.

      * I’m still wary of Everett because I have philosophical reasons for not liking thinking of my consciousness as just another physical process, but at least I understand what it’s saying now.

      • smocc says:

        The question you may be more interested in is whether we really have to posit the existence of unobserved states like “spin up plus spin down.”

        The answer is that as far as we know yes we do, unless you are willing to give up the notion of locality. Locality is the principle that the state of the universe at one point is only influenced by its immediate neighborhood, at least for short times.

        Some people do like the Bohmian interpretations of quantum mechanics where instead of positing the existence of superposition states you posit the existence of a real “wavefunction.” However, this wavefunction is still only observable through its effect on other particles.

        What’s more, there is a mathematical proof that the weird probability distributions that quantum mechanics can produce are incompatible with “local, real” classical theories, where both those terms have technical meanings. This is called Bell’s Theorem, and most physicists accept that so-called “Bell test” experiments rule on the side of quantum mechanics. The Bohmian interpretation is decidedly non-local; the wavefunction has to know about the position of every particle everywhere all at once to know how to evolve in time.

      • AlesZiegler says:

        Thanks!

        So, I am not sure I understand it correctly, but if I do, your explanation of Everett posits no parallel universes. Do you, by any chance, know how it happened that this interpretation of quantum mechanics is associated with them?

        • Eugene Dawn says:

          The phrase “parallel universes” I think is being used here in two different ways; a less loaded term for them is something like “other branches of the superposition.” The thing that, as far as I can tell, most people like about Everettian QM is that it is, in a certain sense, just taking what the mathematical formalism says seriously. The formalism says that if you have a particle in the superposition |U> + |D>, referring to an equal mixture of spin up and spin down, and you have a measuring device that can measure the spin, ending up in the state |MU> (for “measures up”) or |MD> (for “measures down”), then when the device measures a particle in such a superposition state, it ought to end up entangled with the particle, producing the state

          |U>|MU> + |D>|MD>. That is, in the branch of the superposition where the particle is up, the measuring device will register as measuring spin up, and similarly for the branch where the particle is spin down. But we never seem to observe measuring devices in these superpositions, so that’s weird. The Copenhagen view proposes that somehow, at some stage, this state must collapse, but it’s not very clear how or why this collapse should happen. The Everettian view says: the reason you never see these superpositions is that you yourself are a measuring device, and so when you observe a particle in superposition, you too go into superposition. The same rules of QM that govern how a measuring device becomes entangled with a particle apply equally well to you: if you observe a particle, you end up in the state

          |U>|EU> + |D>|ED>, where |EU> and |ED> are the quantum states describing you experiencing the particle as spin up and as spin down, respectively.

          Superpositions don’t collapse; they swallow things up. The reason you don’t see them is that you get trapped inside just a small part of them. It’s sort of the same reason you don’t see the curvature of the Earth: your perspective is limited in a way that traps you in a particular point of view.

          Whether or not the two branches of the superposition, |U>|EU> and |D>|ED> ought to count as parallel universes is a different question. My understanding is that Everett himself downplayed this interpretation, possibly under the influence of his advisor Wheeler, who didn’t want him to estrange Niels Bohr and the Copenhagen faction who still wielded a lot of power; but there’s some evidence that Wheeler understood the implications of his point of view. I think the idea that the different branches ought to be regarded as truly different worlds owes to Bryce DeWitt, who was one of the first physicists to really latch on to and popularize the Everettian idea.

          But as I say, the ontological status of these branches of the superposition doesn’t really fit with the popular notion of a parallel universe: for one thing, it ought to be possible for two different branches to re-merge and interfere with each other, as in the two-slit experiment. Whether this is a feature that is intuitively conjured up by the phrase “parallel worlds” I leave to your judgement.
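
          For anyone who wants to see that entangling step numerically, here is a minimal numpy sketch of my own (the basis vectors and the explicit 1/sqrt(2) normalization are choices added for illustration):

            # Minimal illustrative sketch: build the entangled particle+apparatus state
            # (|U>|MU> + |D>|MD>)/sqrt(2) with numpy. Basis choice and normalization are
            # assumptions added for this example.
            import numpy as np

            U, D = np.array([1.0, 0.0]), np.array([0.0, 1.0])    # particle spin basis
            MU, MD = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # apparatus pointer basis

            psi = (np.kron(U, MU) + np.kron(D, MD)) / np.sqrt(2)  # 4-component joint state

            # Probabilities of the joint outcomes (U,MU), (U,MD), (D,MU), (D,MD):
            print(np.round(np.abs(psi) ** 2, 3))  # -> [0.5 0.  0.  0.5]
            # The apparatus never reads "up" while the particle is down (or vice versa),
            # yet neither subsystem has a definite state on its own.

          Tracing the apparatus out of psi then leaves the particle in a 50/50 mixed state, which is where the decoherence story picks up.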

        • ADifferentAnonymous says:

          Yes and no. When smocc says that (according to many worlds) the universe enters the state “(i know the spin is up; spin up) plus (i know the spin is down; spin down)”, that means the universe actually contains an instance of you who “knows” the spin is up and one who “knows” the spin is down. If you’d bet money on that spin, there’s a rich you and a poor you. Both are equally real. “Parallel universe” maybe isn’t quite the right word for this, but it’s not too far off, either.

          It legitimately is kind of crazy, but I find the arguments in favor compelling. To me, non-many worlds interpretations are sort of like if Newton presented the theory of gravitation and then said “But of course you can’t actually go to the Moon, that would be crazy, there’s an invisible wall or something”.

          • fion says:

            It legitimately is kind of crazy, but I find the arguments in favor compelling. To me, non-many worlds interpretations are sort of like if Newton presented the theory of gravitation and then said “But of course you can’t actually go to the Moon, that would be crazy, there’s an invisible wall or something”.

            I really like this analogy. 🙂

      • vV_Vv says:

        is explained by the mathematics of “decoherence,” which follows from the basic axioms of quantum mechanics. No collapse needed.

        Maybe it’s me, but I’ve always found explanations of decoherence somewhat handwavy. Do you know a good one?

        • Eugene Dawn says:

          What level of explanation are you looking for? When you say “handwavy” do you mean lacking in mathematical rigour?

          • vV_Vv says:

            What level of explanation are you looking for?

            Density matrices and stuff. I’m not a physicist, but I know the basic math of quantum mechanics, especially in the context of quantum computation and quantum information.

            Wikipedia has something, but it seems to pull assumptions out of nowhere (e.g. the orthogonality of the environment states corresponding to the einselected basis).

            When you say “handwavy” do you mean lacking in mathematical rigour?

            Yes.

          • smocc says:

            @vV_Vv, if you find one, let me know. I have Nielsen and Chuang, a standard intro to Quantum Information, sitting around somewhere but I’ve never dug into it.

          • Soy Lecithin says:

            @vV_Vv, my understanding is that decoherence is the off-diagonal elements of a density matrix going to zero. What counts as off-diagonal is, of course, basis dependent, so decoherence is a basis-dependent concept and depends on the context. I don’t think it’s a precise term (outside of some particular context).

            Sometimes there is something that we’d really like to think of as classical (say, the state of the pointer on some piece of lab equipment, or the brainstate of a scientist making a measurement). Then the basis relative to which decoherence is accounted will be related to how this “classical” thing couples to the quantum system. For example, in quantum computing people talk about the “computational basis” which in practice might be the basis in which the computer is set up to measure output states. Then in this context when someone says decoherence they might mean off-diagonal elements of the density matrix going to zero when the density matrix is written in the computational basis.

            At least, this is just the impression I get from how people use the word.

            Quantum foundations people sometimes try to find some general principle for choosing a basis for defining decoherence. This is what the einselection stuff is about. I don’t understand any of that though.
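
            As a small illustration of “off-diagonal elements going to zero”, here is a toy sketch of my own: a qubit density matrix averaged over random phase kicks (the kick width and sample count are arbitrary choices), whose coherences decay while the populations stay put.

              # Sketch (my own illustration): off-diagonal elements of a qubit density
              # matrix decaying under random phase kicks (pure dephasing). The kick
              # width and averaging are arbitrary choices for the example.
              import numpy as np

              rng = np.random.default_rng(0)

              def dephased_rho(sigma, samples=20000):
                  """Average |psi><psi| for (|0> + e^{i phi}|1>)/sqrt(2) over random phi."""
                  rho = np.zeros((2, 2), dtype=complex)
                  for phi in rng.normal(0.0, sigma, samples):
                      psi = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2)
                      rho += np.outer(psi, psi.conj())
                  return rho / samples

              for sigma in [0.0, 1.0, 3.0]:
                  rho = dephased_rho(sigma)
                  print(sigma, np.round(np.abs(rho[0, 1]), 3))  # off-diagonal shrinks toward 0
              # The diagonal entries stay 0.5 in this basis; only the coherences decay.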

        • soup says:

          I wrote this as a reply to Soy below, but it can at least partially answer your question. Can probably dig up references of concrete calculations but it gets complicated fast.

          There are basis-independent aspects of decoherence, at least if you can decompose your quantum system into subsystems (a choice of basis, but a very natural one). It can turn pure states with rho^2 = rho into statistical mixtures with Trace(rho^2) < 1.

          Intuitively, pure states correspond to systems where we have a good handle on what the wave function is, and mixed states arise when we use a statistical mixture of wavefunctions.

          To give a concrete example, imagine we have a nice isolated spin pointing along the z axis:

          psi = |1>

          This is a pure state, rho = ((1,0),(0,0)) in the up-down basis.

          Now imagine the spin flips and emits a microwave photon, creating an entangled state (assuming an equal amplitude to flip or not; this could be arranged by bringing the spin-flip transition into resonance with a photon mode for a certain time):

          psi = |10> + |01>

          The second index corresponds to the photon mode occupation. If we keep track of the emitted photon, we need to use pure states and a 4x4 density matrix. We could do this by e.g. storing the photon in a resonator for the ~microsecond duration of some quantum process. It would be possible to undo the entanglement too. If we lose the photon (if it flies off, as photons can do), then future measurements on the spin alone only probe the reduced density matrix obtained by tracing over the photon states. A good (easy) exercise is to prove this.

          The reduced density matrix in this case is rho = ((0.5,0),(0,0.5)), with Trace(rho^2) = 0.5. This behaves as a statistical mixture of spin up and down rather than the superposition |1> + |0>, meaning that certain future manipulations you might want to do (e.g. rotation to point purely along a different axis) will fail, unless more information is gained about the spin state. This process of entanglement with uncontrolled environmental degrees of freedom therefore shows up as noise in attempts to control a given quantum system.
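
          Here is a quick numerical check of that exercise, as a sketch of my own; the 1/sqrt(2) normalization, left implicit in the shorthand above, is added explicitly.

            # Sketch: verify that tracing out the photon from (|10> + |01>)/sqrt(2)
            # gives the maximally mixed spin state. Normalization added explicitly.
            import numpy as np

            spin_up, spin_down = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # |1>, |0>
            no_photon, one_photon = np.array([1.0, 0.0]), np.array([0.0, 1.0])

            psi = (np.kron(spin_up, no_photon) + np.kron(spin_down, one_photon)) / np.sqrt(2)

            rho_full = np.outer(psi, psi.conj())       # 4x4 pure-state density matrix
            print(np.trace(rho_full @ rho_full))       # -> ~1.0 (pure)

            # Partial trace over the photon: reshape to (spin, photon, spin, photon)
            # and sum over the photon indices.
            rho_spin = np.einsum('ipjp->ij', rho_full.reshape(2, 2, 2, 2))
            print(rho_spin)                            # -> [[0.5 0. ] [0.  0.5]]
            print(np.trace(rho_spin @ rho_spin))       # -> ~0.5 (mixed)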

          • Soy Lecithin says:

            I wouldn’t say that decoherence is just any change from pure state to mixed state. For example a classical bit subject to classical noise isn’t something I’d call decoherence, but this is still a pure state [[1,0],[0,0]] going to a mixed state [[1/2,0],[0,1/2]]. What I’d call decoherence, at least, would be a superposition turning into a statistical mixture, and “superposition” is a basis dependent concept.

            The second paragraph of your arxiv link has this idea in mind, I think. “Decoherence is a pure quantum effect” distinguished from classical noise. But in order to distinguish quantum from classical you need to have some sense of what counts as classical, maybe some states in your hilbert space that are distinguished as “classical,” a basis relative to which you can have a sense of superposition.

            Decomposition into subsystems isn’t quite a choice of basis, either. If you have a tensor factorization of the hilbert space, that will rule out some bases, but will always still allow infinitely many choices of basis compatible with the factorization.

          • soup says:

            Hey. Nice comments. You’re right that there is some basis dependence to decoherence. I also will say I don’t have a very precise definition of the term. I think (maybe thought) of it as an increase in the entropy of the density matrix due to entanglement with the environment, but on further reading I realized this might not be typical, and others make a distinction between energy loss and decoherence. Do you know how this distinction is made quantitative?

            Of course measurement outcomes are not going to depend on the basis we describe the process in, and decoherence, or entanglement with the environment, or whatever we want to call it, is going to affect our ability to predict/control these outcomes. (I’m pretty sure you agree, but this was basically what I was going for with the original post)

            A few other comments:

            For the classical bit, if we rotate the initial pure state into [[1/2,1/2],[1/2,1/2]], (physically rotate it, not just change our basis!) it would quickly return to a mixed state. Would have to go through an actual calculation for e.g. a capacitor in a macroscopic superposition of charge states coupled to some bath but that’s my guess. So the system-environment interaction plays a role in selecting privileged bases. A simple case of this is systems which are allowed to reach thermal equilibrium with their environment so that rho=exp(-H/T)/Z is diagonal in the energy eigenbasis. On the other hand, do the same thing for a single charge on a superconducting dot, and there is fuller access to the Hilbert space for practical times.

            My comment about choice of subsystem also implying some choice of basis was that I chose to trace over the eigenstates of one subsystem whereas I could have e.g. traced out half of the Bell states which also span the Hilbert space of two spin-1/2.

            But I’d like to know more about what the tensor product structure implies about the choice of bases 🙂 I really haven’t thought about it carefully at all … does the tensor product structure imply something different than e.g. the types of bases I can make in four dimensional vector space? (I guess it also has a tensor product structure … you can tell I’m not a mathematician).

          • soup says:

            Ohhhh I get what you meant about the decomposition into subsystems. I need the tensor product structure to even define the partial trace. Totally spaced on that sorry!

            So it definitely doesn’t make sense to trace out “half of the Bell states.” Oops!

        • soup says:

          If you are still interested, I can recommend this:
          https://arxiv.org/pdf/1404.2635.pdf
          as a good review.

          Starts out with some general comments and has discussions on how to model decoherence (equations for time evolution of reduced density matrix) in sec III, IV.

      • kaathewise says:

        Why the state never splits into
        state(t2) = (i know the spin is up plus down; spin up plus spin down)

        Isn’t it the perfectly valid state before the experiment, i.e., what you call state(t1)? Obviously, you are already entangled with the system, since your future state correlates with the future state of the particle and the measuring device.

        After the particle is measured it indeed becomes

        state(t2) = (i know the spin is up; spin up) plus (i know the spin is down; spin down)

    • uau says:

      I think the most basic-level argument for why the many-worlds interpretation is preferable can be summarized as follows:

      The famous double-slit experiment shows that to calculate physical phenomena, you need to consider all the paths a particle could take. Thus, any theory that produces correct results needs to have the mathematical machinery to consider all the possible outcomes and their interference in parallel. Then, you can show that this machinery is already enough by itself to explain observed results – you get parallel branches of “the cat is alive, I see no superposition” and “the cat is dead, I see no superposition”, both of which match what people experience.

      At this point, trying to say that there actually isn’t a parallel world where the cat is alive while you see its dead body is not a simplification. The simplest mathematical theory which explains quantum interference produces “parallel worlds”. To avoid having them, you need to add extra constructs on top of your theory, like “wavefunction collapse”, which are not supported by any physical evidence (as the theory already predicts physics at least as well without them).

      • Viliam says:

        This. All the weirdness of many-worlds interpretations is also in the collapse interpretations… they just add something like “but then a miracle collapse happens, and the world becomes non-quantum again” or “however, this is all just a magic mathematical formalism that makes our calculations correct; it doesn’t refer to anything real”.

        Quantum physics implies the existence of parallel “states”. The only remaining question is how large can the differences between the states get, and what happens when for all practical purposes the different states stop interacting (because the larger the difference between the states, the less interaction there is).

        Collapse interpretations say “at some unspecified moment, only one state remains and all the other states disappear”; many-world interpretations say “the differences between the states can get arbitrarily large, and all states remain”.

        • Garrett says:

          How is this any different from eg. rolling dice in a casino? The distribution of possible outcomes for a particular game is well known before you arrive on-site. And up until the dice come to rest their actual value isn’t known. But once they do, they have a defined value. Does this also somehow mean that there is a universe in which every different result occurred as well?

          • lightvector says:

            With classical probability, (e.g. idealized dice games), probabilities are always positive and sum up additively between different branches, so there is no difference between positing that the universe splits, or positing that at each point of randomness a fixed one of the possible outcomes happen and no others.

            With quantum mechanics, though, instead of each state having a probability at any given point, each state has an “amplitude”. Amplitudes behave a little differently than probabilities. They can have different phases, as in the phase angle of a complex number, and cancel out. So for example, imagine from state A you’re 50-50 to go to B or C, and from B you’re 50-50 to go to E or F, and from C you’re 50-50 to go to F or G. Classically, that means the final probabilities of E-F-G would be 25-50-25, but with quantum mechanics it is possible that the 25% going to F from B and the 25% going to F from C have opposite phases and cancel each other out, so depending on the phase angle of each split/interaction (which you would additionally need to specify), the final chances of E-F-G could be 50-0-50, for example. This is called interference. (This is also a little simplified, but follows the spirit of the math.)
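
            To make that concrete, here is a tiny numerical sketch of my own; the specific phase assignment (a minus sign on the C-to-F branch) is an assumption chosen so that the F amplitudes cancel, and since the toy branching above isn’t exactly unitary, the probabilities are renormalized at the end to compare ratios.

              # Toy version of the A -> B/C -> E/F/G example. Phases are my own
              # illustrative choice; probabilities are renormalized because the toy
              # branching (as noted above) is simplified rather than exactly unitary.
              import numpy as np

              amp = 1 / np.sqrt(2)              # each 50-50 split has amplitude 1/sqrt(2)

              A_to_E = amp * amp                # via B
              A_to_F_via_B = amp * amp
              A_to_F_via_C = amp * (-amp)       # opposite phase on this branch (assumed)
              A_to_G = amp * amp                # via C

              p = np.abs(np.array([A_to_E,
                                   A_to_F_via_B + A_to_F_via_C,   # amplitudes add first
                                   A_to_G])) ** 2
              print(p / p.sum())                # -> [0.5 0.  0.5]
              # The classical calculation (probabilities add, no phases) gives 25-50-25.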

            So with quantum mechanics, due to the mechanism of interference, having branching and superpositions actually has observable consequences compared to saying that at each point of probability a specific thing, and only that thing, happens. And of course, reality appears to behave consistently with the version where you do have superpositions, so that’s what you have to explain.

            From there, it’s not too large a step to the Everett view, which basically says that if arbitrary particles can be in superposition, and you yourself are a human-shaped collection of particles, you yourself should go into superposition when you interact with the particles as well.

            If you dig further into the math, you’ll also find that there’s not necessarily a discrete notion of different “branches” either; this is also a simplification of the actual view, which is closer to something like: you have a continuously evolving amplitude distribution in a *very* high-dimensional space. Particular “blobs” of amplitude we might colloquially identify with “this branch” or “that branch” using some particular basis, but it’s not as if nature says “this is the point where you branched”; in reality it’s more of a continuum than that.

          • uau says:

            That’s why I mentioned the double-slit experiment. It demonstrates that physics doesn’t work this way.

            If you send particles one at a time, they still produce an interference pattern. Consider the state of the system after the particle has passed through one of the slits, but before it hits the target. If you could describe the situation as “50% probability the system is in a state where the particle went through slit A” and “50% probability the system is in a state where the particle went through slit B”, then there could be no interference pattern – in either case the system would be in a state where the particle simply went through one slit, the existence of the other slit would be completely irrelevant, and there would be nothing to produce interference.

            You need to be able to say “the system is in a state where the particle went 50% through slit A and 50% through slit B, and these alternatives interfere like this”. This is like parallel worlds: you need to be able to say that the “true state of the multiverse” contains 30% world A, 30% world B, and 40% world C. You need a mixture that has specified portions of each world, not just to pick a single world with specified probabilities.
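            A rough numerical sketch of that difference (a toy far-field Python model I’m adding for illustration, not a description of any real experimental setup):

            import numpy as np

            # Each slit contributes a unit-magnitude amplitude whose phase depends on the
            # path length from that slit to a point on the screen.
            wavelength = 1.0
            k = 2 * np.pi / wavelength
            slit_separation = 5.0
            screen_distance = 100.0

            x = np.linspace(-30.0, 30.0, 7)  # a few positions on the screen
            r_a = np.sqrt(screen_distance**2 + (x - slit_separation / 2) ** 2)
            r_b = np.sqrt(screen_distance**2 + (x + slit_separation / 2) ** 2)
            amp_a = np.exp(1j * k * r_a)
            amp_b = np.exp(1j * k * r_b)

            # "50% went through A, 50% through B" as mere ignorance: add probabilities.
            ignorance = 0.5 * np.abs(amp_a) ** 2 + 0.5 * np.abs(amp_b) ** 2   # flat 1.0 everywhere

            # Superposition of both alternatives: add amplitudes, then square.
            superposition = np.abs((amp_a + amp_b) / np.sqrt(2)) ** 2         # fringes between ~0 and ~2

            print(ignorance)
            print(superposition)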

          • Soy Lecithin says:

            Well, here’s one difference with dice rolls. Classical uncertainty like the uncertainty in the outcome of a dice roll isn’t fundamental. Classical uncertainty has to do with our ignorance. If you knew everything about the die’s shape, and exactly how it was thrown, and exactly what the microscopic properties of the table were, etc., you’d be able to predict the outcome, in principle. This isn’t the case with quantum uncertainty. With quantum uncertainty the nonrealized outcomes aren’t simply outcomes we can’t rule out due to our ignorance. Even an omniscient being couldn’t predict the outcome. Quantum uncertainty is fundamental.

        • carvenvisage says:

          “however, this is all just a magic mathematical formalism that makes our calculations correct; it doesn’t refer to anything real“

          That could never happen in science. If the model works, it must correspond directly to the underlying phenomena, or at least so we have to assume lest we get our empiricist card revoked.

          https://en.wikipedia.org/wiki/Deferent_and_epicycle

    • muskwalker says:

      Tangentially: is it correct to assume that “all possible worlds” does not include “all imaginable worlds”? It seems obvious to me that it shouldn’t, though I have seen popular/lay arguments that assume it does.

      • You are correct.

        It means all worlds that evolve from the initial conditions with a non-zero probability.

        • silver_swift says:

          Which, to be fair, is pretty damn close to “all possible worlds (that obey the same laws of physics)”

        • Jaskologist says:

          But those non-zero probabilities include all the atoms in my body spontaneously teleporting 5 meters to the left, right? So in practice, most of the things we could imagine, including the very unlikely universe containing a superhero who flies by quantum coincidence, would still be possible worlds.

          • carvenvisage says:

            including the very unlikely universe containing a superhero who flies by quantum coincidence, would still be possible worlds.

            I don’t think it’s “flying” if the underlying mechanism is coincidence rather than volition.

          • Jaskologist says:

            In a highly unlikely but still theoretically possible world, those coincidences line up exactly with his volition.

          • Evan Þ says:

            Also including the unlikely universe where all the water at one point in the Red Sea spontaneously teleports away leaving a dry pathway from one coast to the other, and where the nearby water does not drown that pathway for several hours.

            Hmm…

          • Nancy Lebovitz says:

            This seems like a good excuse to mention David Drake’s The Dragon Lord. There’s magic which works by pulling moments from other universes. A dragon can’t live on earth, but you can have something which might as well be a dragon by pulling an appropriate series of dragon moments from other universes.

            The information-handling required isn’t addressed, and as I recall neither is what happened to a dragon who’d had a moment snipped out.

            It’s also an early example of a nasty version of a standard story– Arthur and his knights are a bunch of thugs. The two viewpoint characters are somewhat better but not good guys.

          • Viliam says:

            Yes, “possible worlds” include all kinds of technically lawful miracles. But the less likely they are, the smaller the amplitude of the result.

            Any particle in your body can spontaneously teleport anywhere, but the probability that one specific particle teleports 5 meters to the left is quite small; and to teleport your entire body, that is exponentially less likely, where the exponent is the number of particles in your body.

            So, the worlds with miracles are there; but all of them combined are still only a negligible part of the whole.
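            To put a rough number on “exponentially less likely” (a back-of-the-envelope sketch; both figures below are assumptions chosen only to show the scaling, not measured values):

            import math

            # If a single particle "teleports 5 meters to the left" with some tiny probability p,
            # then N particles doing it together has probability p**N, so the log-probability
            # scales linearly with N.
            p_single = 1e-30      # assumed per-particle chance, absurdly generous
            n_particles = 1e27    # rough order of magnitude of particles in a human body

            log10_p_body = n_particles * math.log10(p_single)
            print(log10_p_body)   # about -3e28, i.e. a probability of roughly 10**(-3e28)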

          • Randy M says:

            But the less likely they are, the smaller the amplitude of the result.

            Could you tell me exactly how likely? I’m going to a party later, and would like to make the atoms of the hostess’ dress jump a few meters to the side.

      • smocc says:

        As David Friedman says.

        The time evolution of quantum mechanics still conserves energy and momentum and the like, so “all possible worlds” doesn’t include ones where energy conservation is violated. Or charge conservation, or any number of other forbidden things happen.

        There’s a caveat whereby energy can appear not to be conserved if the system was in a state that did not have a definite value of energy to begin with.

    • Douglas Knight says:

      First, what does it mean to interpret a physical theory? It means to ask what an observer sees. But to interpret QM usually means asking not the absolute question but the relative one: why it seems so close to classical mechanics.

      Traditionally you get a QM model by starting with a classical model and performing a transformation to it. This gives a family of models with a parameter, Planck’s constant h. The deviation of QM from classical is supposed to be of size h. When h goes to zero, you are supposed to recover the classical model. What you actually recover in, say, von Neumann’s formalism, is not a single world evolving according to classical mechanics, but a probability distribution over all possible worlds, each of which is evolving independently. Since they evolve independently, they are perfectly “parallel” in exactly the way that @smocc condemns.

      These parallel classical worlds aren’t true. Of course they aren’t true, because classical physics isn’t true. The question was “in what sense is QM approximately classical,” so these parallel worlds are exactly what we have to talk about. QM is approximately a probability distribution over classical worlds, which evolve approximately according to classical physics. Here “approximately” means proportional to h. The interactions between the worlds are proportional to h.

      • AlesZiegler says:

        @Douglas Knight

        This is a really, really great answer, thanks to which I feel I finally get it. Thank you!

        Let me try to rephrase it, so you people can check whether I got it wrong:

        So, the apparent paradox is the result of a misunderstanding. The equations of quantum mechanics do not have a single classical solution, but this is fine and does not mean there are multiple classical worlds, because our world is not a classical world. Our world is a quantum world, and there is only one.

        • Douglas Knight says:

          I’m really not sure what you mean. Time evolution is deterministic. And it’s linear, so the only ways that the parallel universes interact are (1) one universe spawns a cloud of others and (2) interference, because it’s not a probability distribution but an “amplitude,” a detail that lies outside of my sketch. [(2a) even if all amplitudes are positive, probability is non-linear in superposition, and (2b) because amplitudes can be negative, cancellation]

          A shot in the dark:
          In canonical quantization, where we start with a classical system, the (pure) states are named after the states of the classical system. But these names are only approximately (i.e., to order h) correct. People often describe QM as Schrödinger evolution followed by collapse to a pure state; as a chain of discrete steps through classical space. But classical mechanics is false, so there is a metaphysical error in claiming that these are classical states. But they are h-approximately like the classical states, and that’s OK. A much more popular related complaint is that this depends on the choice of basis.

          • AlesZiegler says:

            @Douglas Knight

            I tried to be metaphorical, clearly too much. I understand that the QM model is in certain contexts closer to what we observe than the classical model. Your comment made me realize that interpretations of the QM model are attempts to reconcile it with the classical model, not with “reality”. In this context the Everett interpretation makes intuitive sense.

            Perhaps this should be obvious, but I don’t know anything about physics beyond what I learnt in high school and from popularizations.

          • sharper13 says:

            Disclaimer: I’m pretty amateur when it comes to quantum physics, so I’m looking for explanations/citations, rather than an argument here.

            Other than that’s the way we primitively observe/experience time ourselves, what makes you believe time is linear, rather than corresponding to an actual physical dimension? (I assume you mean by linear that time’s arrow only points one way and can only progress forward little by little.)

            I tend to think of time as being like solid objects. Sure, they appear solid to us when we observe them because of how our perception is constructed, but in reality they aren’t actually solid and are instead something like 99.9999999999996% empty space. It’s the energy in objects and the interacting forces which make things feel solid to us.

            Conceptually, something/someone outside of time (4-dimensional where we are 3-dimensional, I believe, is the usual description) would be able to perceive the past and the future simultaneously (so not have a present), while we only perceive the present while (poorly) remembering the past and (even more poorly) predicting the future.

    • carvenvisage says:

      To me this seems as if Isaac Newton, after he invented his equations correctly predicting the movement of celestial bodies, had declared that those bodies are pushed around by invisible demons who precisely follow his equations.

      I think that’s what all of the theories are. Except that the demons are built on something like epicycles: a system of kludges to get working results, rather than something like gravity.

      • AlesZiegler says:

        Well, I disagree with that, but this is already a long thread, so I do not want to get it sidetracked into a discussion of what the scientific method is.

  28. Walter says:

    I was scoffing at the idea of an RPG group with 8 players elsewhere on the site, but the question stuck in my mind. How universal is my experience? How do other folks do this?

    So, tabletop players of SSC, a few questions about your campaigns.

    On your favorite campaign that ‘succeeded’ (that is, proceeded to its story conclusion, or lasted at least, let’s say, a year):
    1. How often did/do you meet?
    2. How many players in the group, counting the DM/GM/HHG/Whatever?
    3. How long is a typical session?

    Thanks for responding!

    My answers are:
    1. Once a week
    2. 4 players is best
    3. ~6 hours.

    • Randy M says:

      I don’t think I’ve ever had a campaign last a year (I only started playing after college). But I’ll answer anyway:
      1. Weekly, with probably one cancellation per month
      2. 4.5 players (my wife tended to fall asleep at 10:00).
      3. 5 hours

    • woah77 says:

      My answers are for Tabletop:
      Weekly; 5 (four PCs and a GM); 3-4 hours, for what I’ve run.
      Best ever was:
      Weekly, but systems were different on consecutive weeks (two parallel games); Roughly 7 at the table (average closer to 5, but frequently enough 7); and 4-5 hours.

      For LARP (which is an RPG group, but very different style), it was
      20-30 players, once a week, for 4-5 hours. But, again, very different dynamics.

    • Le Maistre Chat says:

      1. Weekly, with about one cancellation per season.
      2. 4-5 including DM.
      3. My face-to-face DM and I both start to run out of mental energy after 4 hours (we’re talking 5E and ACKS here). I ran a 3.5 campaign that lasted more than 16 months and I tried to push players to dedicate 6-hour blocks because I had exploration and enemies prepared so far in advance of the game’s slow combat system.

      • Nick says:

        1. Weekly, with about one cancellation per season.

        Our ACKS campaign has had very few cancellations, and those came with good advance warning, and that is so nice.

    • Nick says:

      1. In my college group, our longest running campaign has been running for four years now. It was intended to run every two weeks, but we had long breaks due to summer and, following graduation, more long breaks due to everyone being unreliable as shit. It’s probably every four weeks on average.
      2. It’s varied dramatically. When the campaign started there were nine of us, I think. That quickly grew to, I am not shitting you, about nineteen, which lasted for all of one session before we rebelled against this ridiculous state of affairs. It was split into two groups of about eight running parallel campaigns, and then shrunk naturally to seven. Since graduation it’s been five.
      3. Five to six hours, but with a food break.

      Nineteen is only the second largest campaign I’ve been in. We ran a Maid one shot with a solid twenty five people. It required the DM to run a scene with about four or five people at a time while all the rest f%^&ed off to parts unknown for a while. It actually didn’t work half badly because the DM kept each of these moving really quickly, but practically it still meant letting folks sit around distracted for long periods of time.

      • Nick says:

        Addendum: Is it just me or is it a lot, lot harder to listen to more than one person in a voicechat compared to meatspace? Interrupting another player is always rude, of course, but at least in person it’s usually to have a brief side conversation or whatever. This doesn’t seem to work in voicechat at all, and I can’t just ignore anybody either, except by muting them I guess.

        We had a problem player back in college who would bring a mandolin and sit there playing it during the session. (This at least had the excuse of being in character, since he was a bard.) It was annoying but at the same time pretty easy to ignore. But he tried it on voicechat once and it made it completely impossible to hear anyone. Anyway, this definitely makes playing the game harder for me.

        • dndnrsn says:

          Much harder, and it’s much harder to start talking because of the lag, I think. It’s much harder to play over a voice connection.

    • Nornagest says:

      My longest-running campaign, back in college, met weekly for 3-4 hours. It peaked at 7 players early on, but stabilized at around 4 (sometimes 5, we had a couple of inconsistently available players) within a few sessions.

      This was a 3.5 game, so it ended up being pretty slow going; once we gained a few levels, a big fight could easily take the whole session. And I strongly suspect my DM was doing a lot of fudging to keep it that low, although she was usually pretty good at hiding it.

    • broblawsky says:

      I ran a D&D 5e campaign that lasted about a year and a half, concluded successfully, met every other week on average, had 4 people including me, and ran ~4-5 hours per session.

    • sandoratthezoo says:

      Nice callout to Nobilis.

      Most of the campaigns of my local group have been theoretically every two weeks, probably landing more like every three weeks once you take into account schedule sync. They’ve generally had 6 players + the GM, and sessions have been 5-6 hours.

      We have at times had more players — one Star Wars game technically had 10 PCs (plus the GM), but never all at once, and some of them really very rarely.

    • Le Maistre Chat says:

      campaign that ‘succeeded’ (that is, proceeded to its story conclusion, or lasted at least, let’s say, a year)

      … one thing that I’ve learned is that, if you follow the assumption that you’re telling the story of the PCs going from Level 1 to the height of their power, 5E gives you fewer sessions to proceed to that conclusion than 3.x, which in turn gives you fewer than Old School D&D. 5E official play is based around leveling up after every 8 hours of play. 3.x had XP awards that would get PCs to Level 20 after 250 encounters that each drained 25% of their daily resources. Old School means that if the recommended 75% of XP comes from treasure, a party of 4 has to kill ~266 men and take their stuff to get to Level 3.
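      A back-of-the-envelope version of that last figure (a sketch only; the specific XP values are my loose recollections of old-school numbers, treated here as assumptions rather than citations):

      party_size = 4
      xp_to_level_3 = 4000       # assumed cumulative XP a character needs to reach level 3
      treasure_share = 0.75      # recommended share of XP coming from treasure
      xp_per_man = 15            # assumed XP for defeating one ordinary 1 HD man

      xp_needed_from_kills = party_size * xp_to_level_3 * (1 - treasure_share)
      print(xp_needed_from_kills / xp_per_man)   # ~266.7 men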

    • Bugmaster says:

      My answers are “1) about once every two months, online, 2) three people, and 3) 6 hours”. The biggest problem with running long and/or large campaigns is, in my experience, scheduling. Due to real-world concerns, it is nearly impossible to get multiple people in the same place at the same time, even if the “place” is virtual. Sure, one could always plan a campaign with scheduling conflicts in mind, so that players can be rotated in and out as needed; but this quickly becomes boring for most people, who no longer feel like they have any impact on the story.

      • Le Maistre Chat says:

        Sure, one could always plan a campaign with scheduling conflicts in mind, so that players can be rotated in and out as needed; but this quickly becomes boring for most people, who no longer feel like they have any impact on the story.

        What I would do: the story is the impact a large group of adventurers has on the world. They have a ship that requires at least 30 rowers. When a session ends with the active players away from the ship, anyone playing next session who wasn’t there last time covered the distance off-screen and comes charging to the rescue.

        • Bugmaster says:

          Yes, this is the way we would usually handle the issue… but like I said, this severely dilutes each person’s investment in the story. It also creates a costly time investment — for both the GM and the players — that is required to keep everyone reasonably up to speed.

    • DinoNerd says:

      It’s been quite a few years, but IIRC, my all-time favourite gaming group was something like this:
      – once a week
      – a lot more than 4 players – we had more than 4 even when we had 2 missing, which wasn’t unusual
      – but this is where my memory gets fuzzy.
      – not sure how late we played; certainly at least 4 hours.

      Added info
      – we generally played 1st edition AD&D + Unearthed Arcana
      – we all had miniatures for our characters, and we enjoyed doing our battles as a table top battle setup, though we didn’t take things to the same extremes as those whose game *is* the tabletop battles alone
      – I think we may have had some kind of setup where the main GM got breaks by having an alternate GM run their own campaign. At least one of those experimented with 2nd edition rules. (Might have been alternate groups, or in their own block of time, or even intermittently. I really only remember the main campaign.)
      – All this was happening in the early 1990s. The main campaign was very high level; I’d joined later than most, and nonetheless had a druid that was getting very close to the level where the rules stipulated an explicitly limited number of druids of that level in the world, and I’d have been fighting an NPC to level.

    • dndnrsn says:

      The longest campaign I’ve run that was definitely successful ran for… perhaps 15 or 16 months? I didn’t keep track of dates of play in my notes. However, the group itself has been meeting for over five years.

      1. Once a week. I’d say a game only gets cancelled once every 2 or 3 months.
      2. The group plays with minimum 3 including GM. The most people in the room at once was 7, current max is 6. I think that one GM plus 5 players is the most that can be handled well.
      3. Anywhere between 3-6 hours, 4 or 5 more normal.

      This is with multiple GMs running multiple campaigns, but never with GMs trading off responsibility for the same campaign. “Oh hey man I have a cool idea for an adventure, make a PC and I’ll have my guy go somewhere else” kills campaigns dead in my experience. I also find that having a set time every week to meet is vastly superior to trying to meet once a week and hash out when on an ad hoc basis.

    • A Definite Beta Guy says:

      Scheduling once a week just seems to be impossible, even with a group of 4-5. People seem to load up their schedules with stuff all the time. It’s not even a kids thing: the people who are hardest to get hold of are the ones without kids, who seem to want to party every weekend and can’t stay beholden to any plans (we’ve basically cut those people out, since they clearly don’t think the group is a priority).

    • bean says:

      Once a week, regularly scheduled. I’ve been playing with the same group for about 6.5 years now, first in person, and now online. We cancel reasonably often because we don’t like playing with people missing, although that’s not a hard-and-fast rule. Usually the GM and 4 players, although we’ve had 5 on a couple of occasions. Typical session is only about 2 hours these days because we usually play weeknights, and have somebody who gets off work late.

    • J Mann says:

      1. We aim for biweekly, but push back if anyone’s unavailable, so we can do 1-2 times per month.
      2. Six counting the GM.
      3. 3.5-5.5 hours

      IMHO, large parties (>5) in DnD 5e are a challenge because the game isn’t designed for them, but a good GM can cope. The biggest challenges are

      a. Balancing combat encounters, because the “action economy” means that a group becomes exponentially more powerful as you add members, which requires large groups on the other side, or monsters that are effectively groups because they can do several things at once. It also means that concentrating fire on individual members can be super-deadly.

      b. Keeping people from being bored – if you have 7 players roleplaying, a few need to cool their jets for quite a while until things get around to them. If you have 7 players in combat against 4-10 opponents, it’s even harder to keep things moving.

  29. Nancy Lebovitz says:

    Suppose you took Hera’s offer to rule Asia. What would you do as ruler?

  30. Well... says:

    The logic of voluntary exchange is pretty sound: by giving up something they want less, to acquire something they want more (with relative levels of “want” demonstrated by the terms of the exchange and the fact that they agreed to it), both parties in a voluntary exchange are better off afterward. I considered myself a libertarian for about ten years between my late teens and late 20s, and I would cite this concept a lot, especially in college when I sometimes got into debates with left-wingers.

    But what I can’t remember knowing is how this concept, at least when used in arguments for a more laissez-faire system (or against a less laissez-faire one), accounts for things like buyer’s remorse, or human irrationality, or the fact that people’s circumstances sometimes change so that what may have been a mutually beneficial exchange at the time it was made becomes non-beneficial soon after. Also, consumers often have very poor information and accessing good information is often a chore.

    When I was a libertarian I’d probably have shrugged and said “Too bad. It’s just one more reason to educate yourself, learn to think critically, and be more rational, and it’s good that we have selective pressure to do those things” but now that doesn’t feel like a sufficient answer.

    So, how would libertarians here address these issues?

    • sandoratthezoo says:

      I think that the track record of external organizations knowing what’s a good deal for an individual, better than said individual, is pretty bad. Sure, your own understanding of your wants/needs is irritatingly imperfect. Some large, impersonal rule-based organization’s understanding of your wants/needs is (usually) terrible.

      Exceptions exist for particularly incompetent individuals and particularly straightforward wants/needs (like, “don’t hurt me,” and note that even that gets complicated quickly).

      EDIT: This may be culture-war? Not sure. Seems like it could rapidly go that direction. Fine with my post being deleted.

      • Well... says:

        Like JohnNV’s answer, I would file this answer under “Voluntary mutual exchange isn’t perfect but it’s better than the alternative”.

        But it’s got me thinking: are there any times when some type of relatively centralized social decision-making infrastructure does actually have that society’s best interests in mind better than the average individual member of that society?

        The example that comes to mind is the Amish, in which a central body (the elders of a given community) creates rules (in the form of ordnungs (ordnungen?)) governing the kinds of exchanges members can participate in (e.g. you can ride in a car, but may not own or drive one). Has this system actually hindered the Amish, or is it key to their success? Or is it irrelevant?

        David Friedman is both a libertarian and has studied the Amish, so his answer would be especially valuable.

        • J Mann says:

          Well, in terms of outlawing transactions, I think you can think about it in a few categories.

          1) The transaction is believed to be net harmful to society, and arguably to at least one party to the transaction. Possibilities include gambling, sales of alcohol and drugs (to minors or to anyone), sex work, vote buying, and murder for hire. As a libertarian-ish thinker, I approach these skeptically, but I believe there are some transactions in that class.

          2) The transaction is believed to be so frequently abusive to one of the members that it requires regulation. Examples would include mandatory return periods for home or car purchases, outlawing payday loans, etc. I’m particularly suspicious of these.

          3) We think we can streamline the transaction through regulations. For example, regulations requiring disclosure of GMOs, disclosure of various home terms, requirements for a simplified or consistent disclosure form, etc.

          • Le Maistre Chat says:

            Possibilities include gambling, sales of alcohol and drugs (to minors or to anyone), sex work, vote buying, murder for hire.

            I’m interested in how to handle these cases. Since I’m not a dogmatic libertarian, here’s my current thinking:

            Gambling: maybe ban? Casinos prey on people who can’t do math, letting their dopamine receptors hurt them. People who know how to win get banned by the casinos, so it only seems fair to ban the casinos from the people who don’t.
            Sales of drugs and alcohol: dogmatically legalizing heroin seems likely to have worse net consequences. Making beta blockers a controlled substance is excessive. Prohibition of all addictive drugs has some famous failures. So less of this seems good.
            Sex work: don’t ban.
            Vote buying: Well it’s not illegal when the candidate offers you money…
            Murder for hire: Yeah ban this.

          • 10240 says:

            dogmatically legalizing heroin seems likely to have worse net consequences.

            I’ve read in a few places that opioids are not actually that harmful if they’re pure and the dose is controlled. Most of the harm of black market heroin comes from adulterants, uncertain potency, and the effects of regular injections. If this is true, the harms of prohibition are massively greater than the benefits:
            • Overdose because of the uncertain potency of black market opioids (because of adulterants or because different opioids may be passed off as heroin), and (I guess) because the difficulty for a drug addict to accurately measure a powder. In a legal market, dosage is more accurate. (AFAIK the recent surge of overdoses in the US was caused by making it harder to get prescription opioids, so many addicts turned to the black market.)
            • Harm caused by adulterants.
            • New synthetic opioids may be cheaper, and they may be more harmful, their effects may be less studied, and they may require more frequent administration.
            • I think people inject in a large part because heroin is expensive due to being banned, and less is needed for the same effect with injecting than with other routes of administration. Frequent injections cause a risk of thrombosis, and increase the harm caused by any impurity.
            • I presume illegality is also the reason some people who inject share needles, transmitting infections (indirectly also affecting others).
            • The high cost of the drugs causes additional problems to drug users, and it may also cause them to commit crimes that harm others.
            • Indirect harms caused by gang activity, such as shootouts that sometimes hurt innocents as well.
            • The cost of incarceration of drug users and dealers to the state, to the criminals, and to their families; incarceration also makes them more hardened criminals, and a conviction (even without incarceration) makes it harder to get employment, possibly causing them to commit more crimes.

            Even if it isn’t true that pure opioids of a controlled dose are not that harmful, it’s quite likely that all the above harms outweigh the benefits of prohibition. And, in particular, harm to others than the drug users is made much greater by prohibition, and IMO that should count much more than the harm to those who voluntarily make the decision to use drugs.

          • Casinos prey on people who can’t do math, letting their dopamine receptors hurt them.

            I don’t do casino gambling but I have friends who do. I don’t think it’s that they cannot do math but that they are willing to pay, on average, for the fun and excitement.

          • Cliff says:

            Then there are those of us who made tons of money gambling online and were quite annoyed when the SAFE Port Act was passed and Neteller shut down.

            I think for drugs, gambling, any vice: most people enjoy it without any problem. Some people have problems, and those people should be helped. Maybe it makes sense for the industry to pay for some of that.

          • danridge says:

            @Le Maistre Chat I used to think the same about gambling in terms of why people do it, then I watched this Louis Theroux documentary on gambling in Las Vegas. Going into it, understanding that he was looking at people who gamble often and treat it as a serious hobby, I assumed it was going to be all people counting cards and arbitraging any place where odds temporarily go against the house. Instead it’s just businessmen on weekend trips losing money on roulette and a woman who has essentially retired to the casino to slowly destroy the inheritance she would be leaving her son on penny slots.

            Anyway, I think in general gambling makes more sense to treat, as David Friedman does, as a transaction, paying money for the atmosphere, some ‘free gifts’, and the experience. It kind of doesn’t seem any less seedy, but heavy gamblers have to know they’re not making any kind of smart investment, just renting a very expensive space where they get to imagine they could strike it rich without doing any work, and it still works as intended for them if they never get rich. It probably makes sense to model this as an addiction, as strong as any other that doesn’t become a physical dependency, but it’s not as cut and dried as, “These casinos are masking the fact that gambling at slots is, on average a losing enterprise, and are fleecing otherwise intelligent and hard-working people.”

        • sandoratthezoo says:

          I think that the success of governing bodies relates to:

          1. Small size
          2. Closeness to the governed domain
          3. Homogenous governed community
          4. Release valve/alternatives

          Note that Amish central bodies don’t actually have coercive government power, and I think this is a key part of them. Also note that #1-3 are a very ordinary argument, but I’d like to focus some part of the discussion on, “What do you lose by having a small, close, homogenous community” rather than “How can we make sure everyone exists in a small, close, homogenous community.”

        • The example that comes to mind is the Amish, in which a central body (the elders of a given community) creates rules (in the form of ordnungs (ordnungen?)) governing the kinds of exchanges members can participate in

          In principle, for most Amish congregations, any change in the Ordnung is by unanimous consent of the members. There is presumably some social pressure to go along with the changes that the clergy propose.

          And there is nothing to prevent an individual who disagrees with the Ordnung of his congregation from joining a different one. The Ordnung is specific to the congregation, and congregations are generally from 25 to 40 households. Whether he can change congregations without physically moving depends on whether he is in a community with overlapping congregations.

      • Le Maistre Chat says:

        I think that the track record of external organizations knowing what’s a good deal for an individual, better than said individual, is pretty bad. Sure, your own understanding of your wants/needs is irritatingly imperfect. Some large, impersonal rule-based organization’s understanding of your wants/needs is (usually) terrible.

        This. Don’t expect a system of libertarian exchanges to be utopian: we’re never going to have a perfect social system before the eschaton. Just compare it to known alternatives in the material world.

    • JohnNV says:

      I’m mildly libertarian. I think my response is that yes, consumers often make mistakes that they later regret. But at least they have a vested interest in getting the answer right and trying to satisfy their own preferences. It’s hard to believe a third party with no idea who the consumer is could possibly make decisions for the consumer that result in better outcomes. After all, no government organization could possibly analyze every consumer-merchant interaction on an individual basis and rationally decide whether the consumer was making a wise decision; we probably make hundreds of these decisions per day. So the best a government could do is make broad, sweeping rules that categorically ban (or mandate) certain types of transactions that it believes aren’t in the interests of one party or another. The question is how often those bans prevent people from making decisions they would later regret, compared with how often the bans just get in the way of people doing what they actually, genuinely want.

      • Well... says:

        I’ll file this under “Voluntary mutual exchange isn’t perfect but it’s better than the alternative”.

        Another consideration is how an exchange affects third parties. For example, if a large enough percentage of car buyers opt for automatic transmissions, car manufacturers respond by discontinuing manual transmissions in their new lineups, causing most car buyers in the future to not have the option at all.

        • 10240 says:

          Producing two different kinds of transmission, instead of just one, indeed has a fixed cost, so if too few people prefer manual, then manufacturers may indeed drop it. However, if the minority who prefer manual are willing to pay enough extra for it to cover the cost of maintaining its production, then car makers are going to keep producing it. If they aren’t willing to pay enough, that suggests that the benefit of driving a manual car (for those who prefer it) is less than the cost of keeping producing manuals. Then making those who prefer automatic worse off in order to maintain the production of manual cars (e.g. by requiring a certain percentage of cars sold to be manual, or to require manufacturers to sell both at the same price) would be both unfair and bad for society overall.

          Also, if I buy an automatic car, I’m not making those who prefer manual worse off compared to if I don’t buy a car at all (which I have the right to do, and which should, IMO, be considered the baseline). It might only make them worse off compared to me buying a manual.

          • John Schilling says:

            Considering the number of models of automobile in production, and indeed the number of discrete engines and powertrains, I don’t think you get manual transmissions going away unless there is a truly overwhelming consensus among drivers that manual transmissions are not wanted. There may not be a manual transmission option available in every product line, but that’s to be expected and should not be a problem.

            And if desire for manual transmissions becomes a sufficiently small niche that it won’t even support a handful of specialized models, then so be it – it has never been a market failure that tiny niche demands don’t get the benefits of high-rate mass production, and it certainly isn’t something that ought to be blamed on all those thoughtless automatic-transmission drivers buying the cars that they want without considering the “harm” they are causing to third parties by not instead subsidizing niche demand through buying stuff they don’t want.

      • The Big Red Scary says:

        “The question is…”

        Under the labor law of various European countries, when you go on a business trip you must receive a per diem, either from the organization sending you or from the one receiving you. This appears to be an example of a law meant to protect workers, but in my case it can become a damned nuisance. I would much prefer that the employer be obligated to offer a per diem, but that I not be required to take it, since the per diem is often so generous that it becomes the limiting budgetary constraint on the length of what could be a longer and more useful business trip.

        In a similar vein, one summer as a student, I had two jobs at my college, tutoring and research. On paper, I was working too many hours, so human resources complained to my research advisor, who tried to explain to them that I wasn’t being exploited since my duties consisted of lying on the sofa, staring at the ceiling, and thinking, which I would be doing anyway. In the end, I was forced to drop half my tutoring load to keep the research job, making me poorer but not significantly less “exploited”.

        • ana53294 says:

          Why couldn’t you drop the research? You would still be doing it, right?

          Or was the research better paid than tutoring?

          • Randy M says:

            He had to do the research no matter what, I think. So if he can get paid for it, it’s probably better to keep that and drop the other; same money and less workload.
            Assuming equal pay; tutoring probably paid less unless it was a group session.

    • rlms says:

      Defining “beneficial” in a somewhat unusual and in extremis circular way that includes lying in a gutter ODing is one way round it.

      • Cliff says:

        Yeah, well, step 1 of this thought experiment would be asking whether more or fewer people would do that in Libertarian-land.

      • 10240 says:

        No one intentionally overdoses. At most, one may decide to enjoy a drug, and accept some risk of overdose while doing so, in which case that entire risk-benefit profile can be considered beneficial, if we assume that someone’s voluntary choice should be automatically considered beneficial to that person. That assumption is not unique to libertarians, but also used by preference-utilitarians among others.

        Alternatively, if we are unwilling to define a voluntary choice as beneficial when it’s detrimental in the eyes of some outside observer, it can be argued that what should matter is not how well off you end up, but how well off you can end up if you make the right choices, assuming that the information necessary to make the right choices is given to you. That is, we shouldn’t make people who are making the right choices worse off, just so that people who voluntarily make bad choices can’t hurt themselves.

        • Garrett says:

          > No one intentionally overdoses.

          Some of the professional opinion I’ve heard on the radio in my neck of the woods, impacted by the opioid crisis, is that around 40% of OD fatalities are suicides.

    • J Mann says:

      A few answers:

      1) Transactions don’t have to be awesome – it’s enough if no one else is better than the person involved at recognizing the best choice given the alternatives. This comes up with payday lending, lottery playing, etc. It sounds to someone who’s not in that situation like those are terrible deals, but when you look closely, (a) there are often benefits relative to the alternatives that central planners don’t understand, and (b) outlawing the legal transactions we find abusive often drives people to worse, illegal alternatives.

      2) Lack of knowledge is often itself rational. I think Bryan Caplan has written a lot about “rational ignorance”: given the cost of knowledge, it never makes sense to have perfect knowledge, and there’s an optimal level.

    • Nicholas Weininger says:

      There is a notion of “euvoluntary exchange”, lately popular among some libertarian philosophers, which captures some of the objections you mention by imposing stronger conditions than “merely voluntary” exchange. It then asks whether and when, given that euvoluntary exchange is just by the standard logic you mention, more loosely voluntary exchanges can also be determined to be just. Example paper:

      http://people.duke.edu/~munger/euvol.pdf

    • Clutzy says:

      As an attorney, I can say there are some old common law concepts that could be grafted onto libertarian exchanges to deal with something like this. This is probably not practical for your average grocery store purchase (although there are covenants applicable to that as well), but the common law does have concepts of uncontemplated windfalls, and doctrines to deal with them. Something as silly (or not) as a sow (or cow, I forget which was in the case we learned) that was assumed barren becoming pregnant could be cause for rescission. Obviously there have always been complex covenants that travel with land sales, like the guarantee that you won’t unexpectedly have a hidden termite nest in the basement, etc.

      One thing is true, which is that the law usually has tried to punish only the dishonest or the appearance of dishonesty, and stupidity alone has rarely been a reason for rescission (outside of those the courts deem unable to manage their own affairs, like Lennie from Of Mice and Men).

      • A cow—Rose of Aberlone.

        Any legal system has to have rules of interpretation, since there is never enough fine print to cover all things that could happen.

    • 10240 says:

      people’s circumstances sometimes change so that what may have been a mutually beneficial exchange at the time it was made becomes non-beneficial soon after

      People make decisions based on (roughly speaking) the expected value of the benefits of different choices (as well as risks etc.). So in terms of probability distributions, the exchange is still mutually beneficial when it’s made.
      Also, if the exchange later becomes bad for both people, they can voluntarily undo it. If it becomes bad for one party (A), and it’s still good for the other party (B) but less than it’s bad for A (in monetary terms), then A can pay B to undo it.

      Also, consumers often have very poor information and accessing good information is often a chore.

      That’s at most a reason to require sellers to give more information, not a reason to outright ban certain exchanges.

      Arguably it shouldn’t even be necessary to require giving certain information; just make it so that if information is given on a product’s packaging, it’s considered legally binding. Then, if consumers want to see some information, they should prefer to buy products that give it, so companies should have an interest in disclosing it. (At least the products that are among the best on a certain metric should have an interest in disclosing it, and then consumers should assume the worst about a product that doesn’t disclose it.) That said, requiring giving information has little downside, and it’s possible that otherwise companies wouldn’t give out enough information out of inertia, so I can accept laws that require giving certain information. (Though even that is not costless; see those annoying cookie notifications that everyone agrees to anyway.)

      When I was a libertarian I’d probably have shrugged and said “Too bad. It’s just one more reason to educate yourself, learn to think critically, and be more rational, and it’s good that we have selective pressure to do those things” but now that doesn’t feel like a sufficient answer.

      Why not? Would you not prefer being given the information you need to make a decision, being given a recommendation, but being allowed to make the final choice?

    • IrishDude says:

      Warranties and return policies help with information asymmetries and regret. Amazon has fantastic return policies, and a few times that I’ve complained they have given me a complete refund and told me to keep the product. Customer satisfaction leads to return business, giving an incentive for good companies to address concerns about regret.

    • Squirrel of Doom says:

      Not sure what you’re asking. All systems are vulnerable to human irrationality. In Libertarianism, it only harms the irrational person. In a system where you’re ruled by others, you’re harmed by them when they’re irrational.

      The information problem is a general critique of a naive belief in perfect free markets that always produce perfect outcomes in every instance. We don’t need anything near that to outcompete every other system. Just look around you!

  31. ADifferentAnonymous says:

    Request for the next survey: some sort of measure of attachment style. I hypothesize that avoidants will tend to have more libertarian politics.

  32. hash872 says:

    How did soldiers in the pre-modern age survive conflicts? I’m especially thinking of sword fights. If you’ve ever watched any HEMA or Kendo or Dog Brothers (which is fantastic, BTW), in any melee it’s impossible for the fighters not to hit each other dozens of times in close contact. If two swordsmen face off without heavy armor, it seems like both men would be bleeding heavily in under a minute, and at least one would be dead soon after from shock and blood loss. Defense where a swordfighter smoothly parries every strike is just for the movies. If you’re in close with a melee weapon, you’re going to get hit.

    Then, with pre-modern/nonexistent medicine and ignorance of germs, anyone who didn’t die of blood loss on the battlefield was at a very high risk of dying of infection days afterwards. So, was the lifespan of every medieval soldier just a year or two? Did most sword-bearing men who went into battle die in their teens or 20s? How were there any old, experienced soldiers?

    Yes yes, heavy armor deflects swords. But I suspect that only the very wealthy could afford such a thing. Also, if heavy armor were really that widespread on the battlefield, swords would have gone out of favor, replaced by maces and other blunt clubbing instruments targeted at the head. Even a glancing blow from a heavy mace to a metal helmet is likely a KO (the helmet might even make it worse by clanging).

    So, did everyone in the medieval era die in their first few battlefield engagements? How could anyone survive multiple melee conflicts?

    • woah77 says:

      They didn’t have sword fights? Almost everyone on the field used spears? Most battles were archers picking off people who charged the opposing front line with sticks, while the other line had sticks and archers shooting at your front line. Why? Because swords and heavy armor are both expensive, and not many had them. Lots of people did use blunt objects for just the reason you cite. Most soldiers used polearms and clubs because they are cheap and because they are easy to use.

      • Watchman says:

        This. Good armour was likely cheaper than a good sword. So sword fights with decent blades were rare.

        In an actual melee the chance to properly swing a sword (they weren’t as useful as a dagger would be for stabbing) would be limited by the press of bodies as well. Think a rugby maul (or a football pushover touchdown for those who don’t know rugby) with hundreds of people involved…

    • Randy M says:

      How could anyone survive multiple melee conflicts?

      Running away. Or, alternatively, watching the other side run away. Most ancient battles were decided by a rout, or even just a show of force.
      Not being in the front helped a lot too. And if you were, giant shields held by you and the two men to your sides were a big part of it. See Boudica vs the Romans.

    • Lambert says:

      A) Armies don’t fight like pairs of individuals, and they tend not to use swords much.
      B) Unless you’re fighting pikemen or foot companions, you’re only in great danger when right at the front of the formation (or if you run away).
      C) Big-arse shields

      >swords would go out of favor for maces and other blunt clubbing instruments, targeted towards the head.

      That did happen to an extent during the medieval era. Another option was to wrestle the opponent, pin them down, then go to town with a dagger on the joints in their armour, eye holes, &c.

    • DragonMilk says:

      Pre-modern age is a long time (I assume you mean before antibiotics?).

      As for infections, no joke, pissing on wounds and cauterization are fairly helpful. I’m sure others know of various other techniques, but you may overestimate the risk of infection. Think of the bleeding various predators suffer when interacting with resisting prey. Sure, some may die of infection, but most go on to live and form scars without medical treatment.

      • hash872 says:

        But even knowing to cauterize or piss on wounds would require germ theory, which didn’t develop till the 19th century, right? It would be intriguing if uneducated, illiterate medieval peasants had a crude ‘naive biology’ that intuitively knew about infection, centuries before science did.

        • Conrad Honcho says:

          It might work for reasons they don’t understand, but they’d do it anyway. Urine is a readily available source of water for cleaning a wound, and fire stops bleeding.

          • Clutzy says:

            Also, honey was used as a salve for wounds going back to Egypt. A lot of ancient medicine was BS, but some of it was accidental effectiveness that persisted.

            Like, the Greeks didn’t need to understand the biological pathways of Coniine to know Hemlock would kill Socrates.

        • quanta413 says:

          I think you have a common misconception that I used to have too. I blame it on scientists/science popularizers. They like to claim that science causes totally sweet technology and so you should definitely favor more money for science. But basically, you can accomplish a lot in a fairly empirical manner without a theory of the tiny things that make stuff up. Often you have to.

          Or even a theory of the big things.

          For example, people built steam engines first. Then physicists invented thermodynamics. Humans built bridges, temples, etc. without knowing Newton’s laws. Humans had metallurgy for a very long time. And not a clue about atoms or anything like that.

          A lot of the time, even if you do know the underlying theory, working with a toy phenomenological model works better. A lot of condensed matter physics (the study of semiconductors, fluids, granular materials, basically properties of bulk matter) works with phenomenological models rather than trying to compute how big things should work from our knowledge of quantum mechanics.

          A lot of great science is a more rigorous method of trial and error with only a tentative connection to underlying theory. Although ideally you can build towards some solid theory from there.

    • John Schilling says:

      The word you’re looking for is “shield wall”. Also armor, but not everybody could afford that, while shields and pointy sticks were cheap. A large shield, a long pointy stick, and mates you can trust, will beat any amount of flashy swordsmanship when it comes to Not Dying.

      If the shield wall breaks, throw away the shield and run, and hope you were the first one to think of that strategy. Too bad for your mates that they were fool enough to trust you.

      If you couldn’t manage to be part of a shield wall, you wanted javelins, a light shield, and a fast horse, or javelins and a light shield and for your enemies to not have horses. In either case, don’t weigh yourself down with other gear: maybe just sandals and a helmet and a short sword if you can afford it. Throw the javelins from the longest possible distance, then stay out of reach of anyone who can hurt you.

      Substituting a bow for the javelins allows you to keep a somewhat greater distance, but it means you can’t use the shield and so probably isn’t a net win in survivability even if it does more damage to the enemy.

      If you can afford the shield + pointy stick combo with the full armor option, and you can afford a large horse, you might as well call yourself a “knight” and ride the horse to the battlefield. But expect to dismount and form a shield wall for any serious fighting. Tactical horsemanship is for chasing down an enemy who decides to run away, or for running away yourself; if you impale your horse on the pointy sticks of an enemy who didn’t decide to run away after all, you deserve what you get.

    • Eponymous says:

      I recall (from a lecture) that typical casualty rates in battles between Greek phalanxes were on the order of 10-20%. This would be most of the people in the front lines on the losing side, and some on the winning side. Back ranks usually made it out when they broke and ran.

    • zzzzort says:

      Fun history fact: Roman veterans were given land when they retired, both as payment and to help Romanize conquered territory. Several European cities at one point or another had military retirement communities.

      • vV_Vv says:

        Indeed many cities in Europe (e.g. Manchester) have names ending in a derivative of castrum: the Roman military camp or fortress.

        • Watchman says:

          Yes, but all a cester or chester (or Welsh caer) name meant is that identifiable masonry (which would have to be Roman period) was standing when the English (or Welsh) name was coined. Woodchester, Gloucestershire was named for a Roman villa, for example. The Roman colonies in what is now England were Gloucester and Lincoln, both indicating their status by preserving the Latin word colonia in some form in their names, and Colchester, which was presumably named from the walls and civil buildings that may have stood long after the Romans left, at least according to the archaeology done there in the 1980s. So chester-type names aren’t madly significant in identifying colonies.

    • uncle stinky says:

      So I just got sidetracked on AskHistorians looking into this. A cursory search didn’t uncover what you seek, but this account of the duel between Bazanez and Lagarde Valois was too good not to share. I’ll poke around there a bit further and see if I can find anything more specific.

    • hash872 says:

      I like all of the responses I’ve gotten so far, but I suspect that another part of it is simply that the poor & desperate died in massive numbers in warfare (or duels or from bandits or just from various interpersonal conflicts). Life was nasty, brutish and short, and fighting aged males died in large numbers- the end. Sort of like how childbirth had a much higher chance of being fatal back then.

      I remember reading something where the CIA, which was giving anti-tank weapons to Afghan insurgents fighting the Soviets, figured out that the average lifespan of an Afghan anti-tank gunner was something like two weeks on the battlefield. Probably not that different from (some) medieval combatants.

      • woah77 says:

        I mean, yes. This was history. A lot of people didn’t see 30. You fought wars all the time and they lasted forever. (forever meaning you could be born after it started and it’d still be happening when you died) Two weeks sounds on the short end. The unlucky lived at least 3-6 months because it took that long to get to the battle.

        • Lillian says:

          In wars before the 20th century, the majority of casualties were due to disease. The typical unlucky sod who went off to war and never came back took sick of dysentery and shat himself to death without ever coming in contact with the enemy.

          • Matt says:

            You imply death but say ‘casualties’.

            Do you mean the majority of dead soldiers from diseases, or do you mean the majority of soldiers dead + too sick/injured to fight due to disease?

          • Lillian says:

            Both. The majority of the dead died of disease, and the majority of the disabled were disabled due to disease. The latter should be obvious when you consider that sick people don’t just drop dead, and some sick people recover. For example, take the French during the Crimean War (taken from Wikipedia for convenience):

            135,485 total casualties
            8,490 killed in action
            11,750 died of wounds
            75,375 died of disease
            39,870 wounded

            You can see that nearly 80% of the dead died from taking sick. However, nearly all soldiers who died of illness must have been disabled at some point. If we count the wounded as disabled by combat and those who died of disease as having been disabled by it first, then disease still accounts for about 65% of the disabled. When you account for those who were disabled by sickness but did not die, the proportion should be even higher.
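
            A quick arithmetic check of those two proportions, using only the figures quoted above (a minimal Python sketch; the grouping of “disabled” is just the interpretation described in the previous paragraph):

            killed_in_action = 8_490
            died_of_wounds = 11_750
            died_of_disease = 75_375
            wounded = 39_870

            total_dead = killed_in_action + died_of_wounds + died_of_disease  # 95,615
            total_casualties = total_dead + wounded                           # 135,485, matching the listed total

            # Share of the dead who died of disease: ~79%
            print(f"{died_of_disease / total_dead:.0%} of the dead died of disease")

            # Counting the wounded as disabled by combat and disease deaths as disabled
            # by disease first (the sick who recovered would only raise this): ~65%
            print(f"{died_of_disease / (died_of_disease + wounded):.0%} of the disabled were disabled by disease")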

        • Aapje says:

          @woah77

          Wars back then were very low intensity though. War often could only be fought during certain periods. There was a lot of marching and waiting between battles.

          • woah77 says:

            I feel like you just agreed with me. Or at least the notion I was trying to get at. Even if you were the unlucky sod who got stabbed on the battlefield, it’d be at least 3-6 months before you saw battle from the “start” of the war.

    • uncle stinky says:

      This post is getting nearer. Selected quote- “In pre-gunpowder combat, the battlefield was in many ways less lethal than it was today. The majority of casualties would likely happen when one side broke and fled, which would allow the victors to pursue them and kill them more easily. Movies portray pre-gunpowder warfare as a giant meat grinder, where two sides smash into each other and rip each other into shreds for hours. This is not how combat really worked in the period. The most convincing model among historians today is that battles happened in short “pulses” rather than one giant slog. Groups would advance, engage briefly, and whoever was starting to lose would break off to recollect themselves and prepare to try again”

    • vV_Vv says:

      If two swordsmen face off,

      The typical conflict wasn’t two swordsmen facing off, it was more like two lines of men butting their shields while trying to poke, hack and bash each other, mainly with spears, axes and hammers. Swords, if they were carried at all, were mostly used as side weapons, much like the pistols that modern infantrymen carry.

      And they almost always had some kind of armor made of various materials from hard vegetable fiber (e.g. this Coconut fiber armor from the Kiribati culture of Micronesia) to leather, to metal scales, chain mail or segmented plates. You might object that a light armor might not have been able to stop a clean, well aimed and powerful strike with a sharp blade, but in a real battle strikes were rarely clean and blades quickly lost their edges and points.

      Yes yes, heavy armor deflects swords. But I suspect that only the very wealthy could afford such a thing. Also, if heavy armor was really that widespread on the battlefield, swords would go out of favor for maces and other blunt clubbing instruments, targeted towards the head. Even a glancing blow from a heavy mace to a metal helmet is likely a KO (the helmet might even make it worse by clanging).

      Full plate steel armor was developed relatively late, and war hammers and heavy swords such as the Zweihänder, as well as increasingly powerful projectile weapons such as the crossbow and firearms were developed in response to it. Eventually firearms became sufficiently powerful, accurate, reliable and cheap to mass produce that they made armor irrelevant until the modern plastic-ceramic armors, and in fact soldier mortality skyrocketed in that era.

      • Eponymous says:

        Swords, if they were carried at all, were mostly used as side weapons, much like the pistols that modern infantrymen carry.

        The gladius was the main weapon of the Roman legions.

        • vV_Vv says:

          Sort of.

          If I understand correctly, the default tactic of Roman legionaries was to first engage at a distance by throwing darts (plumbatae) and javelins (pila). The Roman pilum had a shank designed to bend on impact, making it difficult to pull out of wounds or shields. Once the enemy ranks had been thinned out and partially de-shielded, the Romans formed a shield wall and engaged at close range with their short swords (gladii).

          Usually it worked well, except when facing heavy cavalry (cataphracts), who were essentially immune to projectiles and could easily mow down shield walls.

        • John Schilling says:

          The gladius was the main weapon of the Roman legions.

          The gladius was the main weapon of the Roman legions only if you consider the pilum to be the secondary weapon. But, insofar as heavy infantry combat was decided by the clash of shield walls, the bit where one side would have the sort of second-rate shield wall you can form when you’re using swords instead of spears and the other side would have a ragged line of men whose former wall of front-rank shield bearers are now mostly lying bloody on the ground asking “weren’t our shields supposed to be javelin-proof?”, is not exactly secondary.

          Approximately 75% of a Roman soldier’s offensive weaponry, by weight, consisted of extra-heavy armor-piercing javelins designed to counter shield walls and critical to the success of Roman legions in combat.

          • Eponymous says:

            In context, I meant primary *hand to hand* or close-quarters weapon.

            My comment should be read as giving a one-sentence counterexample to a specific claim, not a full description of Roman combat tactics.

        • Le Maistre Chat says:

          It seems to me that the Roman killer app was the discipline to make peltasts (javelin men) lock together into a shield wall, and even a testudo immune to arcing arrows. Using the gladius when they ran out of javelins was probably an ergonomic compromise.
          Polybius has a famous passage answering his reader’s incredulity that men with short swords could defeat a pike phalanx.

          • Eponymous says:

            Polybius has a famous passage answering his reader’s incredulity that men with short swords could defeat a pike phalanx.

            Thanks for the reference. I will have to check it out. I have wondered about that myself.

            I’ve heard the claim that a Greek phalanx would defeat a Roman legion head on given flat terrain, but the greater flexibility of the Maniple formation allowed the Romans to outmaneuver the phalanx, either by flanking them or falling back to rougher terrain. Not sure if that is correct.

          • Eric Rall says:

            I’ve heard the claim that a Greek phalanx would defeat a Roman legion head on given flat terrain, but the greater flexibility of the Maniple formation allowed the Romans to outmaneuver the phalanx, either by flanking them or falling back to rougher terrain. Not sure if that is correct.

            This might be based on the Battles of Cynoscephalae and Pydna. In the former, a Roman multi-legion force defeated a similarly-sized Macedonian army (primarily made of phalanx-style infantry), in part by detaching part of a legion to exploit a break in the Macedonian line and attack some of the Macedonian phalanxes in the flank. In the latter (about 30 years later in a different war between Rome and Macedon), the Romans won by retreating over rough terrain, which disorganized the Macedonian units, allowing the Romans to counterattack individual phalanxes in the flanks.

    • Eric Rall says:

      If you’ve ever watched any HEMA or Kendo or Dog Brothers (which is fantastic BTW)- in any melee conflict it’s impossible for them not to hit each other dozens of times in close contact.

      I’ve seen various people on youtube (IIRC, Lindybeige, Matt Easton, Shad, and Skallagrim) talking about this as a major unrealistic feature of modern SCA fighting, reenactment, and HEMA-based sparring: the surviving historical sword-fighting manuals place a much higher emphasis on not leaving your opponent openings to hit you, compared with the way most modern people spar with practice swords.

      So- was the lifespan of every medieval soldier just a year or two? Did most sword-bearing men who went into battle die in their teens or 20s? How were there any old, experienced soldiers?

      Battles were relatively rare in pre-modern warfare: wars were mostly fought by sieges (a large force sits outside a fortification and waits for the smaller force inside to starve and surrender or for the attackers’ engineers to batter the fortification into rubble) and harrying (looting and burning undefended and unfortified parts of the enemy’s territory). In the Hundred Years War, for example, Wikipedia lists 56 “battles” over the course of 116 years, and a solid majority of the “battles” listed are actually sieges, naval engagements, or very lopsided affairs where a small force got in the way of a much larger one (at relatively little risk to each individual soldier in the larger force, and it looks like it wasn’t uncommon for most of the smaller force to run away successfully). So maybe one major pitched land battle every 4-5 years even in wartime, and not the same forces engaged in each battle, so an individual soldier might only fight in 1-2 major pitched battles over the course of a 20-year career.

      There’s also a huge spread in terms of training and equipment in pre-modern armies, and the highly-trained professional soldiers with good equipment aren’t necessarily going to be fighting each other directly. And it wasn’t uncommon for the hottest fighting to be done by lower-quality troops (militias, peasant levies, etc) while the higher-quality troops were held in reserve. The Romans in particular made this a formalized practice: the Triarii (older, experienced soldiers, equipped with spears at least in the Republic period — I’m not sure if the practice carried over to the Empire) would be held in reserve and only used if the less-experienced soldiers were in serious danger of losing. So the soldiers doing the most fighting, and especially the most fighting against equal-or-better-quality opponents, would be the kinds of soldiers who would see 0-1 battles in their career, while the professionals who might see several battles in their careers wouldn’t necessarily fight in every battle they appeared in, and when they fought, would often be fighting opponents whom they severely outclassed.

      Also, if heavy armor was really that widespread on the battlefield, swords would go out of favor for maces and other blunt clubbing instruments, targeted towards the head.

      They did. Romans (at least in the late Republic and early Empire) used swords as primary melee weapons, but in medieval/renaissance eras, spears, polearms, and lances were the standard primary weapons. Swords were either specialized weapons (e.g. the giant Zweihander swords used to knock spears aside and create an opening for your buddies to stab in their own spears) or sidearms (used as a backup when you lose your primary weapon or find yourself in a situation where it’s unsuitable).

      • hash872 says:

        But still, outside of open war you had interpersonal conflicts and duels and drunken arguments that turned into fights and bandits and quarrels… supposedly the homicide rate was so much higher in medieval times than in modernity. Seems like a professional soldier or other tough guy would at least be in several armed conflicts of one kind or another in his life.

        • Eric Rall says:

          The homicide rates in 14th century London and 15th century Amsterdam were both in the neighborhood of 50 per 100,000 (source), or about 1 murder per 2000 person-years. That’s high compared with modern murder rates (about double murder rates in New York City in the early 90s), but probably not as high as you’ve been thinking. Even if those deaths were concentrated in a particular class of people (e.g. men with soldierly/tough-guy backgrounds), they’re still more likely to die of ordinary causes (disease, age-related degeneration, accidental injury, etc) than violently.
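
          For a rough sense of scale, here is a back-of-the-envelope conversion of that rate into lifetime risk (my own sketch; the 40-year exposure window and the tenfold concentration factor are assumptions, not figures from the source):

          annual_rate = 50 / 100_000   # ~1 homicide per 2,000 person-years, as above
          years = 40                   # assumed adult exposure window

          # Chance of being murdered at some point over those years
          lifetime_risk = 1 - (1 - annual_rate) ** years
          print(f"general population: {lifetime_risk:.1%}")   # ~2%

          # Even if the risk were concentrated tenfold in a soldier/tough-guy subgroup,
          # most of that subgroup would still die of something other than violence.
          concentrated_risk = 1 - (1 - 10 * annual_rate) ** years
          print(f"tenfold-concentrated subgroup: {concentrated_risk:.1%}")   # ~18%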

          I’ve heard that duels were typically fought to first blood, so one combatant would be uninjured and the other wounded (most commonly a superficial cut to the arm or leg*). Based on recorded homicide rates, I’d also expect there to have been some kind of generally-accepted limits on brawls to keep them from getting too far out of hand (social norms like “don’t be the first to pull out a knife” or “you don’t need to kill the other guy, just knock him down and claim victory if he doesn’t try to get up”).

          (*) This changed in the Renaissance, when long, stabbing-optimized swords (rapiers and smallswords) became the standard weapons sword-armed civilians would carry and use in duels. With a rapier, a deep stab to the torso becomes a lot more likely as the first wound, and a punctured lung or perforated intestine is very likely to be a mortal wound without antibiotics and modern surgical techniques.

      • edmundgennings says:

        This answer seems the right one. If getting hit means a good chance of dying, then one will put a lot more effort into not getting hit.

      • sharper13 says:

        My experience (for what it’s worth) with single combat is that when facing a real knife or sword with a real opponent, tactical distance becomes a much more pronounced concern. As a result, either the foolish guy gets hammered quickly, or else both experienced opponents do their best to stay out of range of the other guy’s weapon unless they perceive or create a sure-fire advantage.

        That translates to larger, yet still relatively evenly sized, groups: if you retreat just a little bit and your direct opponent moves to close the distance again, he suddenly finds himself in range of your neighbors in the line, who are now flanking him. So he’d tend to avoid that situation as well.

        Without coordination and trust between members of a line of battle (which is what makes soldiers so much more deadly than a group of warriors, if you get the distinction), you’re either in a brawl or the smaller side is running pretty quickly.

        The other factor to take into account is the evolutionary one. There’s a reason the highest percentage of those killed tended to be the first-timers. If you haven’t figured out how to best avoid getting hit, then you’re much more likely to die or be seriously wounded. Once you have, your longer-term survival rate is much higher than your first-battle survival rate would imply.

        BTW, it’s not just melee weapons. People behave much less recklessly over time in real combat with firearms than they do when playing paintball or first-person-shooter video games. Knowing it’s “for real” and you can’t actually respawn makes you much more cautious. That’s one factor I liked about the old America’s Army game: it was designed so that if you died, you were dead. No re-spawning during that round. Made people at least a little bit more likely to use more realistic tactics.

        • Aapje says:

          Operation Flashpoint: Cold War Crisis was a precursor to America’s Army and was fairly revolutionary in its realism. Injuries would not heal and you had very few saves, so you had to be very careful.

          It was so realistic that an army simulator was derived from it, Virtual Battlespace Systems 1 (VBS1).

          • dndnrsn says:

            I found OFP and its successor games extremely tedious, because while they aimed at realism, they still demanded action-movie heroics of the player character.

            There was one, otherwise extremely undistinguished, shooter game (I mean really undistinguished; it was like playing a big map online shooter with a bunch of bots) that had an interesting frame narrative – the protagonist wasn’t the PC, but rather, was a journalist. The PCs were nameless soldiers, and when you got killed, you respawned as another soldier. Meant you could have a coherent narrative focusing on a character, without the “Private Wilson is dead; war’s over” nonsense.

          • Aapje says:

            Presumably, to avoid that and still have a decent experience, you need to play as a (trained) team, which was how VBS1 was used.

            I used a save-cheat, allowing me to save more often 😛

    • nobody.really says:

      I have a vague notion that people with armor and swords didn’t fight people with armor and swords; they attacked the armorless people on foot. You wanted to CAPTURE the guys wearing armor, ‘cuz they could fetch a fine ransom. Likewise if you wore armor and were captured, it was understood that you’d be preserved and ransomed. Outside of a tournament, only a fanatical knight would betray his class and the rules of chivalry by doing mortal combat with another knight.

      At the Battle of Agincourt, Henry V showed his loyalty to the English, not to his class, by declaring that he would not be ransomed–and later by ordering the execution of most of the French knights that had been captured. (Ok, there was also the small matter that the captured French troops outnumbered the English soldiers, and it was unclear whether the battle was over, but let’s make a virtue of necessity….)

      • Watchman says:

        One odd feature of Agincourt is the number of important nobles captured, the sort with really good ransom value (chivalry as a set of values was effectively very polite piracy). This meant there was little point in keeping the non-noble knights alive (the majority of knights were household retainers, not nobles themselves), as they had minimal ransom value and little chance of being ransomed anyway, since the household monies were going to have to pay for the lord’s ransom first. Henry was clearly a rational thinker.

        • Lillian says:

          You are grossly mischaracterizing the nature of Henry’s decision to order the lower ranking prisoners executed at the Battle of Azincourt. What happened is that towards the end of the battle Ysembart d’Azincourt and his personal troops used their knowledge of the local terrain to get around the English line and mount a successful attack on their baggage train.

          At the same time this happened, the large and still fresh French rearguard started moving in a manner that appeared to the English as though they were preparing to enter the battle. In reality, they were making ready to withdraw from the field. However if they had advanced to engage, it would have left the English in a position of facing attack from two directions while their forces were intermingled with a vast number of prisoners who could have taken the opportunity to take up the weapons strewn about the field and resume fighting.

          Also, men-at-arms didn’t have “minimal value” as you say; the ones with minimal value would be common soldiers who were not even household retainers. (Though none were captured, since the French men-at-arms left them in the rearguard.) Pretty much any free man captured in battle had some value to someone. Moreover, people were willing to hold men for years while their families saved money to pay for the ransom. Were it not for the precarious situation, there would have been no rational reason to execute any of the prisoners, and indeed they would not have been.

          In short, Henry ordered the slaughter of the lower-ranking prisoners because of tactical concerns rather than economic ones. There were simply too many of them to control while fending off a two-pronged attack. He was indeed being rational, but not in the manner you describe.

    • sfoil says:

      “Pre-modern” (pre-gunpowder, I assume) covers an awful lot of time. But the answer you’re looking for is shields, mostly.

      You’ve also been misled by the sparring matches you’ve watched. Real combatants were much more careful about minimizing their own vulnerability, since the penalty for being hit was a likely-fatal injury rather than just getting whacked with a stick. Likewise, the need to minimize exposure is a major culprit in the well-documented plummet in small arms accuracy in modern combat compared to range settings.

      A few other factors: the average soldier, even a professional, spent the overwhelming majority of his time doing something other than fighting in battles.

      The need to keep wounds clean was well understood, so while infection was more dangerous than it is today, preventative measures against it were known and taken. In fact, there was a general understanding that hygiene and sanitation were important, even if all the mechanisms weren’t scientifically understood. A gunpowder-era example: the soldiers in Stonewall Jackson’s army weren’t permitted to eat in their tents.

      • INH5 says:

        Likewise, the need to minimize exposure is a major culprit in the well-documented plummet in small arms accuracy in modern combat compared to range settings.

        And to be clear about the magnitude here, if you watch helmet cam combat footage from places like Syria or the Ukraine, there is a lot of firing blindly in the general direction of where you think the enemy is. This kind of thing tends to get left out of movies, TV shows, video games and so on because, well, it’s boring.

        • Aapje says:

          Suppression of the enemy is an important concern. This is a major reason why suppressors are not that often used in the military. Generals want the enemy to be afraid to fire accurately at them, while they (often in vain) try to get their own soldiers to aim in the face of enemy fire.

          • sfoil says:

            FWIW, my personal but not uninformed opinion is that suppressors aren’t used because they cost money, add weight, and because the National Firearms Act has retarded their refinement if not their development.

          • woah77 says:

            Also because they don’t actually make firearms that quiet, so it’s not like nobody can hear you. The distances required to make your firearm effectively quiet with a suppressor mean that you are also far enough away for echoes and other interference to make it difficult to pinpoint your fire even without one.

    • Yes yes, heavy armor deflects swords. But I suspect that only the very wealthy could afford such a thing.

      Shields, however, are cheap.

    • carvenvisage says:

      How did pre-modern age soldiers survive conflicts? I’m especially thinking of sword fights. If you’ve ever watched any HEMA or Kendo or Dog Brothers (which is fantastic BTW)- in any melee conflict it’s impossible for them not to hit each other dozens of times in close contact.

      That’s partially based (a) on lack of incentive: you don’t die in real life if you die in HEMA, and (b) on lack of skill: even the most dedicated HEMA practitioners are not people whose lives depend on it, nor do they dedicate themselves to it that intensely.

      Nor is martial ability and skill any longer something that brings brilliant or ambitious people glory en masse. The people who would have tried to make their way by physical fearlessness and brilliance of coordination in the past are not the same demographic that dabbles in, or even devotes itself to, HEMA nowadays. If you have godlike reactions and focus, amazing physical attributes, or an unparalleled ability to devote yourself to a craft, there are still life paths you can pursue to gain fame, glory, and riches, so why would someone like that devote themselves primarily to historical (clue’s in the name) swordsmanship instead?

      >Defense where a swordfighter smoothly parries every strike is just for the movies.

      You don’t need to parry every strike; you need to parry one or zero strikes in order to land a disabling blow of your own.

      _

      >hit each other dozens of times in close contact

      There are stories of this happening though. A few points/ideas:

      -Getting “hit” doesn’t mean getting destroyed. If you’re serious about killing someone or getting killed you’ll ideally become a bit less attached to your flesh than you otherwise would be.

      -I wouldn’t be surprised if professional soldiers had stronger regenerative capacities because of general vitality and practice for minor wounds.

      -I remember reading that professional gladiators in Rome used to be fatter than we’d expect, because the fat functioned as a layer of protection (sort of like a walrus’s skin).

      -Speaking of armour… armour! Most HEMA bouts simulated unarmoured combat.

  33. DragonMilk says:

    Don’t want to tread into CW-territory so I’ll keep it personal when it comes to taxes. Curious if anyone else was surprised to see an additional bill rather than refund. I have always gotten a refund and was going to switch to H&R block, but joke’s on me. No refund to switch for.

    1. I’m salty about SALT. Because I’m in NYC, I lost over $11k of deductions due to the $10k cap.
    2. Increasing the standard deduction hurts donors on a relative basis. I think it increased from $6,350 to $12k. Seems like a tacit form of welfare, though not positioned as such; essentially it means the number of dollars of giving that provide no tax benefit has increased.
    3. My tax attorney friend said that there were various exemptions for itemizers that were eliminated, and that some likely applied to me.

    All that said, the effective tax rate along the way could have been lowered, but I suspect not by an amount that makes up for the 11k SALT deduction difference alone.

    • Edward Scizorhands says:

      There are a bunch of deductions that should never have been deductions in the first place. SALT is one of them. Mortgage interest is another. Health insurance being tax-free is also a weird artifact that escaped JFK’s and Reagan’s tax reforms, and has bitten us in the ass ever since.

      Even when taking advantage of those deductions, I knew they needed to go.

    • hash872 says:

      Re: 1. The ultimate irony of the Trump administration so far is that Paul Ryan & the Republicans passed a progressive reform to the tax code that redistributes money from the high income crowd- a Sanders-like policy. (I say this as someone that’s going to have to write a $50k+ check to the IRS this year)

      • AISec says:

        It’s worth bearing in mind that the guy that designed the tax reform for Trump was a globalist Democrat (Gary Cohn). The Freakonomics guy recently interviewed him and he claimed that that’s not a bug, it’s a feature.

        • J Mann says:

          Some of the complaints about it are that it seems designed to punish the blue-state rich more than the red-state rich, but I think most of the changes are good policy.

          • baconbits9 says:

            It does appear to be designed both to increase taxes on heavily blue areas and to avoid giving them credit for it. Lots of Democratic tax proposals discuss the rich paying for the services, which these ones don’t.

      • I was struck after it passed by the degree to which people were complaining about features, such as restrictions on the deductibility of state taxes and mortgage payments, that mainly targeted the rich, in a bill that was routinely described as helping the rich.

        • Eric Rall says:

          That might be in part a sign of equivocation between different senses of the word “rich”. In particular, there are a lot of people in what might be termed the “professional” class (successful mid-career programmers, doctors, lawyers, etc) who consider themselves “upper-middle class” while still having far above-average incomes, and are substantially affected by the limits on those deductions, but who may still favor higher taxes on people substantially richer than they themselves are.

          The self-perception as “upper-middle class” is somewhat understandable based on 1) norming off of your peers, who are likely disproportionately in similar fields and income brackets as yourself, 2) high-paying professional jobs being disproportionately concentrated in very high cost-of-living areas (e.g. the Bay Area and New York City), where a well-into-six-figures income doesn’t buy much more than what would be an upper-middle-class standard of living in most of the rest of the country, 3) there being a substantial separation between a “professional class” income/wealth level and the income/wealth level of, say, an executive or founding owner of a major company.

          • The mortgage interest deduction is now limited to $750,000. Someone paying more than that on his mortgage had better have an income that is more than “well-into-six-figures.”

          • The Nybbler says:

            The mortgage interest deduction is now limited to the interest on a mortgage with a principal amount of $750,000 or less. That’s easily achieved with a “well-into-six-figures” income. The $10K SALT cap is even easier to hit; that would be the property tax on a ~$300,000 house in my town.

          • Eric Rall says:

            I’ve been hearing a lot more complaints about SALT than the mortgage limit, which doesn’t surprise me since the SALT cap is a lot easier to hit, and since existing mortgages are grandfathered at the old limit ($1M in principal), so only people who took out new loans in 2018 with a principal balance between $750k and $1M are affected.

          • @Nybbler:

            You appear to be correct. My error. Mea Culpa.

    • zoozoc says:

      I live in a relatively high-tax state (Oregon), but I didn’t notice anything change with my taxes. Of course I do have 2 kids and don’t make a ton of money, so I benefited from the $2k-per-child credit.

      I think the issue is that you live in NYC. It never made sense to me why the federal government should subsidize high-tax areas.

      • meh says:

        The thousand other things they subsidize make sense to you?

        • quanta413 says:

          It’d be nice to cut almost all of those things too. Let the reaping begin. Farm subsidies first.

        • zoozoc says:

          I think “thousand” might be a bit of an overstatement, at least as far as personal taxes go. It seems like most of the major “subsidies” make sense. The two I especially agree with are (a) long-term capital gains and (b) children. I think the government should encourage long-term financial investment in the economy as well as encourage productive members of society to have children.

    • SamChevre says:

      I’m pleased with the loss of the SALT deduction (I thought it was bad policy), and losing it meant I also lost the charitable deduction (which I’ve argued before shouldn’t be itemization-dependent) and the mortgage interest deduction (another bit of bad policy).

      Net/net, it cost me a little money–but I’m feeling like the Pole wishing for a Mongol invasion.

    • Nornagest says:

      I’d been itemizing in previous years, and now I’m taking the standard deduction, but the biggest itemized line item was state taxes, which are capped now. It worked out to be about the same, and I got a small refund from the feds, which is what I was shooting for.

      The bigger surprise was a big (~$1000) refund from my state, which I wasn’t aiming for and still can’t really make sense of. Not going to argue, though.

    • The Nybbler says:

      We got a bill this year instead of a refund, but we weren’t overly surprised. Our mortgage interest is going down, and along with the SALT limit that means we used the standard deduction ($24K MFJ). But in previous years we’ve hit AMT anyway, which also eliminated the SALT deduction.

    • Eric Rall says:

      I owe about $140k. But that’s because I sold a house last year in the Bay Area and realized a decade’s worth of capital gains, and I expected and budgeted for the tax bill.

    • J Mann says:

      Big tax bill, but that was mostly the result of good fortune and basically expected.

      I will say that the effort of tracking deductions, then finding that the standard deduction is now high enough that there was no point, is one of those good fortune things that’s going to get old. I imagine in a year or two, I’ll just quit keeping track of visits to Goodwill, etc.

      I don’t expect my charitable activity or financial structure to change much. It would arguably make sense to shift towards “bunching” (e.g., giving five years of charitable contributions at a time in order to take the standard deduction four years and itemize one), but it’s way too much trouble at my level.
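
      For anyone curious about the arithmetic behind bunching, a rough sketch (illustrative numbers only: I’m assuming the $24K married-filing-jointly standard deduction, $15K of other itemizable deductions, $10K of annual giving, and a flat 24% marginal rate):

      STD_DEDUCTION = 24_000     # married filing jointly
      OTHER_ITEMIZED = 15_000    # hypothetical SALT cap plus some mortgage interest
      MARGINAL_RATE = 0.24
      ANNUAL_GIVING = 10_000
      YEARS = 5

      def total_deductions(donations_by_year):
          # Each year, take the larger of itemizing or the standard deduction.
          return sum(max(STD_DEDUCTION, OTHER_ITEMIZED + d) for d in donations_by_year)

      spread = total_deductions([ANNUAL_GIVING] * YEARS)                       # give every year
      bunched = total_deductions([ANNUAL_GIVING * YEARS] + [0] * (YEARS - 1))  # give it all in year one

      print(f"spread out: {spread:,} of deductions over five years")   # 125,000
      print(f"bunched:    {bunched:,} of deductions over five years")  # 161,000
      print(f"extra tax saved by bunching: ~${(bunched - spread) * MARGINAL_RATE:,.0f}")  # ~$8,640

      The gain comes entirely from the four years in which the standard deduction would otherwise swallow the donation.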

    • etheric42 says:

      Since withholdings changed this year, wouldn’t a better question be to ask if people were surprised at their taxes going up or not?

      A few months ago all my news feeds were people shocked that their refund went down, or that it did not go up as much as they were expecting. But at no point in the articles did they ever say if the total taxes paid went up or down (although one article had a tax advisor quoted as saying tax refund amount is different than taxes paid, it did not bother to go into more detail).

      My refund went up and my withholding went down. But I also have two kids, a wife, live in a state without income tax, and have never had enough money for it to make sense to itemize.

    • Reasoner says:

      2. Increasing the standard deduction hurts donors on a relative basis. I think it increased from $6,350 to $12k. Seems like a tacit form of welfare, though not positioned as such; essentially it means the number of dollars of giving that provide no tax benefit has increased.

      Alternate years of taking the standard deduction with years of donating 2x as much?

    • IrishDude says:

      Total federal taxes reduced by 30% on an apples to apples basis (controlling for changes in income and number of dependents from 2017 to 2018). Two grand tax credit for each kid, doubled standard deduction, and lower rates across the board led to the decreased tax obligation.

      My deductions have been lower anyway since I paid off my mortgage in 2017, so the SALT cap didn’t hurt me and I would have likely taken the standard deduction for the first time this year even under the old tax law (last year I itemized but it was marginal benefit over the standard deduction).

    • baconbits9 says:

      We paid significantly less in taxes this past year (several thousand) despite earning more than last year due to the changes + adding another deduction to our family. We are right around the 6 figure mark in earnings plus another 15k or so in rental income (with lots of deductions against that).

    • DinoNerd says:

      I’m “upper middle class” in an area with high state taxes. I don’t know (yet) how much my total tax bill went up – but Turbo Tax suggests I’ll be writing a $10K check to the feds, which will probably also result in penalties for under-withholding/not making quarterly estimated tax payments. (My automatic withholding claims no exemptions whatsoever; next year I’ll have to withhold an extra $420 per paycheque. But some of that may be the result of the withholding tables being slanted to increase people’s take-home pay, as was announced to be the plan at the time.)

      At the risk of cultural wars – I don’t object to a tax increase per se. I object to one that appears to me to be targeted at residents of specific states, and that was furthermore sold as a tax decrease. And it would have been nice to have had useful guidance on what withholding levels to set.

      • quanta413 says:

        At the risk of cultural wars – I don’t object to a tax increase per se. I object to one that appears to me to be targeted at residents of specific states

        Everyone likes their own carve-out and wants someone else to pay up. As for why tax policy sucks: you have met the enemy, and he is you. You may as well be complaining that the tax increase was targeted at rich people. I doubt taxes went up for poor people in NYC or California.

        There is no sensible policy reason the federal government should lower the tax burden it imposes because the state and local government has a higher tax burden. Almost everyone who claimed SALT made more than 100,000 a year. That makes them high income. And the standard deduction doubled anyways, so the change probably doesn’t bite much until you go higher than that.

        The mortgage interest deduction is poor policy too.

        And I’ll be moving to California after finishing graduate school in a couple months to get whacked by the changes too, so it’s not like I’m not going to get bitten by the change.

        EDIT:

        And it would have been nice to have had useful guidance on what withholding levels to set.

        I think this is a valid complaint though. Even if the info was somewhere (technically in the worst case it could be worked out after some tedious pain), it’s the sort of thing it’d be nice to get a piece of mail about or a big advertising push.

        • Randy M says:

          And I’ll be moving to California after finishing graduate school in a couple months to get whacked by the changes too, so it’s not like I’m not going to get bitten by the change.

          You are expecting to make more than 100,000 two months after graduating?

          • woah77 says:

            This is not unheard of. My brother (who didn’t even graduate) was making more than 100,000 in SF at a mobile game company.

          • John Schilling says:

            Yeah, I’ve hired people at $100K right out of (graduate) school. I think the average in STEM world is a bit lower than that, but $100K isn’t three- or even two-sigma high for a starting salary with an advanced degree.

          • Randy M says:

            Sigh. I’ve done nothing with my life.

            Yeah, well, other than that I mean.

          • quanta413 says:

            Possibly immediately, possibly within a few years. There’s a large amount of uncertainty here. I could make more or less. I’m not deeply invested in an exact salary number, so depending on my options I’d give up a significant amount in return for a more flexible schedule that lets me have more time with my kids when I have them, etc. After adjusting for cost of living, though, 100,000 in California is like 75,000 where I live now, which I’d bet is closer to the median U.S. cost of living. Which is a great salary, but not as crazy as a six-figure salary.

            I’ve had friends with somewhat less technical degrees than me be making in the six figures at the first job they land after depositing.

            If I could go back in time, I might not get the degree, though, for two reasons. I spent 7 years making 20,000 a year when, if I had focused on job searching or taken internships instead of doing abstruse research during the summers of my college degree and then doing more of that in graduate school, I could’ve taken a nice job at ~60,000 in California 5-7 years ago. But more importantly, I would’ve had a better chance to get married earlier and have kids earlier.

        • brad says:

          There is no sensible policy reason the federal government should lower the tax burden it imposes because the state and local government has a higher tax burden.

          That’s a much stronger claim than “on balance it’s a bad idea”. For example, it’s plausible that higher state and local taxes substitute for some federal spending. Or that the total tax rate in e.g. California is an overall net negative for the country and we’d collectively be better off if the federal government lowered it.

          • quanta413 says:

            I understand that in a theoretical discussion you are correct. But in the sense of how U.S. spending is actually apportioned, I don’t buy it as being a meaningful argument.

            The carveout was there because upper middle class and richer people liked it. Now it’s gone. If the combined tax burden in California is too high, they should lower their own taxes. Having grown up there, California government is not famous for its efficiency or good spending habits.

          • John Schilling says:

            There’s definitely a perverse incentive in a federal system where the states are told to set whatever level of taxation and spending they feel is appropriate and the federal government effectively covers ~1/3rd of that by subsidizing the taxpayers and thus cutting off part of the “this is too much taxing, knock it off” feedback loop.

          • The Nybbler says:

            Theoretically, anyway. But in NJ that feedback loop doesn’t really exist. It’s just government claims it needs more money, government increases taxes, government fails to do anything useful with said taxes, government claims it needs more money. If anyone objects the word “schools” usually shuts them up.

    • S_J says:

      About whether taxes increased or decreased:

      My taxes owed came out a little less, but my withholding had decreased by more than that…so I owe money to the Feds.

      My previous itemized deductions came very near to the new standard deduction, so the change in deductions didn’t change the Taxable Income very much.

      Before the change in standard deduction, I wanted to itemize interest-on-my-mortgage and the values for property tax. Now I don’t care. Also, my local-and-State-taxes are far below the new cutoff for deductibility…but since I’m using the standard deduction, I don’t care.

      It does grind me, however, that the withholding done by my employer left me owing a noticeably large amount of money to the Feds. (Large enough that the tax software recommended that I file Form 2210. That form must be filled out if withholding was less than 90% of taxes owed. Some penalty may apply for under-withholding, but it will likely not apply in my case: my withholding was between 80% and 90% of taxes owed.)

  34. Lambert says:

    So I’ve got bored with conversations about the relative merits of various translators of various classics, and have thus resolved to become a polyglot.

    It seems sensible to start at the beginning of the Western Cannon.
    So does anybody know any good resources for learning to read Classical Greek?

    I’ve found this, so far: https://lrc.la.utexas.edu/eieol/grkol/50 (wow, the Scythians were massive stoners)

    • Nick says:

      It depends on what you’re trying to read. Most courses teach Attic Greek, so you’ll be able to translate Plato but will have a good deal of difficulty with the epics. We used Groton’s From Alpha to Omega in my undergrad, which suits me very well but not, perhaps, many others—it’s heavy on grammatical explanations verging on the technical.

      Read the chapter, take notes, practice verb and noun forms regularly throughout the week by writing paradigms from memory, and assign yourself a selection of sentences at the end of the chapter. Normally I’d recommend you leave yourself some for review later, but Groton’s pretty good about reusing tricky word forms or grammatical features you learned at a pace approximating the forgetting curve, so just doing the ten into-English and 5-10 of the into-Greek sentences each chapter should suffice. I’ll grade them for you if you like.

      If you make it through about 30 chapters of that, though, plus the chapter on μι-verbs, you should be able to stumble your way through guided translations. We translated from Steadman’s edition of the Symposium and it was quite good, but beware that there will be quite a few typos in the notes—I sent him about 30 from the Symposium at the end of the semester, and we didn’t even translate the whole thing. But these typos will rarely actually trip you up. If it’s the epics you want, try his Iliad or Odyssey books instead.

    • Lambert says:

      Recitation/singing and calligraphy would also be useful skills.
      (for some definition of the word useful)

    • Nornagest says:

      It seems sensible to start at the beginning of the Western Cannon.

      Canon. They’re both serious business, but the canon is the standard you adopt, and the cannon is what you enforce it with.

    • aho bata says:

      You’ll find a few grammars of Greek here to mix and match from. I recommend the Babbitt or Smyth for Attic if you’re after completeness.

      If you get bored, there’s a few hundred other languages to browse through, including every classical language ever.

  35. proyas says:

    The Great Filter hypothesis posits that we haven’t detected intelligent aliens because they all go extinct, but how can this happen once the aliens establish self-sustaining colonies in multiple star systems? Even if their home planet in their home system exploded, the colonies in the other systems would be unaffected. I don’t see how every remnant of an interstellar civilization could just disappear.

    • Unsaintly says:

      The point of the Great Filter hypothesis is that the filter happens before a civilization reaches the “interstellar” level. It’s attempting to explain why we don’t see signs of alien life – past or present – so the durability of an interstellar civilization isn’t evidence against it; the claim is that civilizations don’t get that far in the first place.

    • woah77 says:

      Mass Effect had an excellent answer for this: They get hunted by some extragalactic entity that consumes sapients. Which is to say: traversing the stars is a noisy affair, there ain’t no stealth in space, and a quieter hunter could eliminate a civilization without too many troubles assuming a reasonably large timescale.

      • Eponymous says:

        Then why isn’t this hunter consuming all the available resources we see lying around? You have to posit pretty weird preferences for this dominant civilization that it decides to just sneak around and assassinate any other sapients. Some sort of extreme Gaia cult, at an intergalactic scale.

        • woah77 says:

          I mean, maybe it needs organic minds to fuel some kind of collective and they need to evolve to a certain level to be useful? I was positing an example of how a starfaring society could still go extinct. The epistemic status of this is “Gotten from a video game”. There could be all sorts of explanations for why, but that probably is the least useful question to ask oneself. The better questions are “How likely” and “what might be done”

          • Eponymous says:

            My reply was eaten by the filter. Basically: see Yudkowsky’s “Generalizing from Fictional Evidence”. But in this case generalizing from a game might be fine, since one solution to the FP is that we’re in a sim which is a game, and we haven’t seen anyone else yet because all the players started out at the same tech level.

        • Aftagley says:

          Well, in the Mass Effect canon,

          ** Spoilers for a 10 year old game series **

          the hunter species (called Reapers) were actually rogue AI programmed to periodically cull the galaxy of all species capable of interstellar travel. Since that culling was their only purpose, and the time horizon between their “harvests” wasn’t long enough for any species to develop technology capable of threatening them, they didn’t need to consume any more resources beyond what they needed to keep themselves functioning.

          • Eponymous says:

            Gotcha. By “rogue” I assume you mean they screwed up and the AI wiped them out?

            So the aliens solved the control problem well enough to successfully limit the AIs to not consuming all available matter (the most straightforward way to guarantee they don’t miss any interstellar civs), but not well enough to avoid getting wiped out themselves? That’s a *very* narrow part of outcome space, exactly corresponding to what makes for a good story.

          • moonfirestorm says:

            Aftagley is incorrect: the Reapers weren’t originally programmed to destroy intelligent races. In fact, the species stems from an AI that was programmed to solve the problem that synthetic and organic intelligences inevitably end up killing each other. It apparently didn’t find a solution, destroyed its creators, and processed them into the first Reaper.

            The Reapers transform the “harvested” intelligent species into more Reapers (a Reaper takes the form of a large capital ship) and apparently couldn’t reproduce in a satisfactory form without them. So wiping out the galaxy doesn’t actually help them, they just lose their prey.

            They were actually trying to speed up the cycle a bit: the FTL system in-game is a network of jump gates that were originally created by the Reapers. This helps species develop and uplift each other, and then also lets the Reapers cut the species off from each other when the harvest comes. The center of the gates is a large space station that tends to be a galactic hub, and is also where the Reapers first emerge in a harvest, which helps throw the organic species into disarray.

            Things started falling apart for the Reapers when one species managed to have a group survive the harvest, monkey with some of the Reaper technology, and pass enough information to the next generation (us) that they knew about the harvest in advance.

          • Aftagley says:

            Yep, going deeper into canon:

            A long time ago there was an alien species that could dominate other intelligent life via some kind of psychic control, which the humans of Mass Effect call “leviathans”. This species ran a galactic empire based on mental repression. Their control wasn’t total, however, and the subject species kept some degree of independence of action, but not of motivation. So, they could live life and develop technology in a relatively normal fashion, but couldn’t consider trying to overthrow the Leviathans.

            Unfortunately, a trend emerged among these slave races: at some point they would all develop artificial intelligence. This artificial intelligence would then slaughter the race that created it and try to take over the universe/maximize paperclips/whatever. This meant the Leviathans were constantly (on a galactic timescale for immortal psychic aliens, I mean) having to go to war with AIs.

            They won these wars, but eventually got tired of the constant warfare. They didn’t want to just kill every other species, since their way of life required a galaxy full of mental slaves, but everything they tried to stop them from creating AIs failed: eventually the client race would develop AI, then it would destroy them.

            Eventually the Leviathans realized they couldn’t solve this problem on their own and (in a massive oversight, imo) decided to build an AI to help them think of a solution. The AI then decided the proper strategy to reduce AI risk was “kill all species as soon as they develop the technology necessary to leave their home planet.” This would keep a stable of slave races around, but end the potential for AI risk.

            Unfortunately for the Leviathans, the AI chose to begin its mission by culling the Leviathans, although one or two of them survived in hiding.

          • moonfirestorm says:

            I’m not sure I had ever seen the details of the leviathan civilization before. Where did those show up? EDIT: found it, looks like there was a Mass Effect 3 DLC I didn’t know about.

            It’s sort of weird that the Reapers aren’t actually enforcing this solution though. The “hide in the galactic depths, then come forth to destroy all” strategy comes far too late for actually accomplishing the AI’s objective. Heck, we even have an AI risk situation with the geth, and that was hundreds of years before the Reapers came.

        • zzzzort says:

          One line of thinking is that having a dominant civilization kill all the other civilizations is plausibly a stable equilibrium, and possibly the only one or one of a few. So eat or be eaten; eventually the universe will spit out a civilization suited to destroying all the other ones. This doesn’t predict that they should be stealthy (other than stealthy things being better able to eliminate other civilizations). Nor does it say anything about resource utilization, but seeing as our civilization is projected to have a plateauing population the assumption that all civilizations would try to expand as much as possible seems tenuous anyway.

          • Eponymous says:

            but seeing as our civilization is projected to have a plateauing population the assumption that all civilizations would try to expand as much as possible seems tenuous anyway.

            A temporary reprieve, I’d wager. Malthus/Darwin are not so easily evaded.

        • Nornagest says:

          That’s a very good question, but it’s one that Mass Effect infamously failed to come up with a good answer to.

          • Nick says:

            Wasn’t the answer just “lying dormant outside the galaxy until the appointed time”? It seems consistent with the (definitely weird) values that the Reapers have.

        • Clutzy says:

          Why would the hunter want resources as opposed to safety and a lack of rivals?

      • Tenacious D says:

        If we’re getting into fictional territory, it’s strongly hinted that the final book of The Expanse series will involve facing a great filter risk.

    • Eponymous says:

      I think you are (roughly) correct, and therefore it is likely that the great filter (if one exists) lies in our evolutionary past.

      Except I would slightly amend your argument to say that the real problem is that we are on the verge of the singularity. Thus the timeline to launching an interstellar civilization is quite short (though whether it is “our” civilization remains to be seen).

      (Incidentally, I favor another explanation for the Fermi Paradox).

    • dick says:

      Many of the things that would cause a species to wipe itself out aren’t mitigated by building a colony on a distant planet.

      • proyas says:

        Like what?

        • dick says:

          I was thinking of societal ills caused by overcrowding, e.g. wars over resources; you can’t ship people off-planet fast enough to overcome them.

          • albatross11 says:

            Even a Mars colony has some problems as species-survival insurance.

            Case #1: The Mars colony is a smallish outpost, like the Antarctica research stations now. If Earth blows itself up/finds itself knee-deep in grey goo/has everyone die from a hobbyist-made plague, then they’re just the last to die.

            Case #2: Mars is a major part of humanity, with millions of humans living there and substantial political, cultural, and economic power concentrated there. At this point, Mars is potentially also a target for whatever goes wrong on Earth. The two sides fighting it out on Earth may have their counterparts on Mars, or Mars might even be one side of the war (think of the world of _The Expanse_). Alternatively, the recipe for nukes in your kitchen gets out and is read on Mars as well as Earth. It wrecks civilization both places.

          • John Schilling says:

            But we’re not talking about Earth and Mars in this context; we’re talking Earth and Alpha Centauri. If there’s an outpost at Alpha Centauri, then it pretty much has to be self-sufficient and isn’t going to automatically die out just because Earth isn’t there any more. And even if there’s a large and thriving civilization at Alpha Centauri, it isn’t engaging in regular trade with Earth, isn’t likely to be a target of Earth’s self-inflicted wrath, and would be difficult for Earth to eliminate even if it wanted to.

          • dick says:

            I think the assumption is not that the home planet blows up the colony, it’s that if the home planet nukes itself then it’s likely the colony will as well. You can think of counter-examples, but they’re the exceptions that prove the rule.

          • John Schilling says:

            I’m not sure how to make any sense at all of that last sentence. There are no examples, and “the exception that proves the rule” is not reason.

            Furthermore, we are postulating essentially autarkic colonies founded by people who decided to leave their homeworld and never return, and whose subsequent interaction with that homeworld (and any other colonies) consists of electronic communications with a decade or so of latency and perhaps occasional immigrants who have also decided to leave their homeworld and never return. The colony may fail on its own. But if the colony succeeds on its own, it is far from obvious that a radio message from the homeworld saying “OBTW we’re blowing ourselves up now” would result in the colony doing the same. A shipload of refugees from home might have that result, but even that is far from certain – and since the radio message (or radio silence) would precede the refugees by decades, if you are going to assert that it is certain or even likely that the arrival of a refugee ship would have that effect then presumably the colonists would have the same understanding and would use those decades to prepare.

          • dick says:

            I’m not sure how to make any sense at all of that last sentence. There are no examples, and “the exception that proves the rule” is not reason.

            Then I will be more explicit. Suppose the human race founds a colony on Alpha Centauri, and then the Earthlings manage to destroy themselves somehow. What’s the reason for believing the colony on Alpha Centauri won’t meet the same fate?

            Perhaps it’s different somehow. Maybe what destroyed Earth was a shortage of strontium, which AC has plenty of. Maybe what destroyed Earth was religious warfare, and AC was colonized by fleeing atheists. Maybe what destroyed Earth was food riots, and shortly afterward the scientists of AC invented the Star Trek replicator.

            Those are the exceptions. The rule, the default assumption, is that what destroyed Earth was not specific to Earth, it was just humans being human and Moloch being Moloch, and there’s no reason to assume AC won’t eventually meet the same fate.

            (Note that I’m not arguing that human colonies will necessarily perish; just that if the first one does, it’s reasonable to assume the others will)

          • quanta413 says:

            I don’t see much reason to believe humans will wipe themselves out though. Most extinctions are probably due to either being outcompeted by close relatives for your niche or due to astronomical events like asteroid impacts. The asteroid impact type scenario worries me. That or just not leaving earth ever. For the first one, I’m not sure I could care about such a distant possibility as to which human descendant species won.

            A lot of things people talk about as if they are extinction risks kind of aren’t. For example, global warming is laughable as an extinction risk for humans even in a realistic worst case. The worst case may or may not suck for humans for a couple hundred years, but it’s super far from an extinction scenario.

            Even if all colonies eventually perish, that’s not necessarily a reason to think extinction of all humans and their descendants would happen until something like all the suns stop fusion. If the extinction rate is significantly lower than the founding rate of new colonies, then humanity survives as long as it gets through the risky early period, when the number of colonies is still small enough that one unlucky stretch could wipe out all of them at once.

          • John Schilling says:

            Then I will be more explicit. Suppose the human race founds a colony on Alpha Centauri, and then the Earthlings manage to destroy themselves somehow. What’s the reason for believing the colony on Alpha Centauri won’t meet the same fate?

            Well, mostly it will be that whatever caused the Earthlings to destroy themselves will be on Earth, about twenty five trillion miles away. Also the fact that the people on Alpha Centauri will be not Earthlings, and indeed selected (probably self-selected) for being very unlike typical Earthlings. And the bit where the destruction of human civilization on Earth will have given them a detailed advance warning of the possible threat.

            Those are the exceptions. The rule, the default assumption, is that what destroyed Earth was not specific to Earth

            I do not agree with this assertion, and I certainly do not cede this argument merely because you claim your victory is the “default”.

            Prove it.

          • dick says:

            Prove it.

            Er, I thought we were trading opinions about hypothetical future events. If you think our colony will necessarily be the sort of place that doesn’t do self-destructive Bay of Pigs shit then fair enough, you’re a smart guy who knows lots of stuff, I don’t think you’re provably wrong. It just doesn’t look that way to me; it seems intuitively obvious that P( colony 2 blows itself up | colony 1 blows itself up ) should be substantial.

          • Nancy Lebovitz says:

            I think the point is that the people on the colony are from the same species as people on earth, and are at risk for making the same mistakes.

            You could probably make a good story about the colony getting news of the disaster up to the end or near it, and what they do to not let that happen.

          • dick says:

            Two thoughts I’d add, after pondering over breakfast:

            1. I’m making the case that, supposing Earth manages to off itself, the smart money suggests that our colony(s) will be likely to follow the same route, and John feels the opposite, one reason being that the colonists will have been selected somehow and thus be a somewhat different sort of group. It occurs to me that this seems just as likely to make them more susceptible to collapse as less. The colonists must either be selected by flawed humans (e.g. the AC Corp’s Colonist Selection subcommittee) or some emergent process (e.g. one political faction fleeing another); neither is guaranteed to produce a more stable/robust/generally better group of humans than the home planet.

            2. This being SSC, I must point out that my use of “Bay of Pigs shit” was metonymy for the sorts of human folly that might lead us to destroy ourselves, please do not interpret it as “I think the Bay of Pigs was a genuine near-extinction event and anyone who thinks it was actually not that big a deal should come at me bro”.

          • quanta413 says:

            I’m making the case that, supposing Earth manages to off itself, the smart money suggests that our colony(s) will be likely to follow the same route, and John feels the opposite, one reason being that the colonists will have been selected somehow and thus be a somewhat different sort of group. It occurs to me that this seems just as likely to make them more susceptible to collapse as less. The colonists must either be selected by flawed humans (e.g. the AC Corp’s Colonist Selection subcommittee) or some emergent process (e.g. one political faction fleeing another); neither is guaranteed to produce a more stable/robust/generally better group of humans than the home planet.

            I think you probably don’t even need for them to go extinct in similar ways. You were right that they just have to go extinct for some reason. I’m not big on the human folly reason, but maybe asteroids of sufficient size collide often enough with planets, supernovas go off, showers of electromagnetic energy in the right bandwidth from nearby astronomical phenomena occur often enough etc. We could roughly estimate how often those things should annihilate life on a planet. I’m not aware of any species wiping itself out, but no other species is quite like humans either so estimating the odds on that is just a big ?????. Something close to a species driving itself extinct has probably happened at some point on earth even if it’s a weird example, but I dunno what the odds are.

            But if we’re talking loss of all humans, we need to have some sort of guess about the colonization rate. That combined with the extinction rate of a colony is going to determine the odds humans and their descendants go extinct everywhere. If the colonization rate is much higher than the extinction rate, many colonies may go extinct but not the human species. If the extinction rate is comparable to or higher than the colonization rate, then humans almost certainly go extinct.

    • JPNunez says:

      I assume the Great Filter is that there are hard relativistic limits to movement/communication, so establishing important colonies on other stars will always be a hard proposition. If somehow the home planet goes kaboom for whatever reason, small colonies on distant stars may find it difficult to keep progressing in science until they have successfully colonized their own planet, which may take millennia. If anything, smaller colonies will be more susceptible to going kaboom and never recovering than the home planet.

      • Eponymous says:

        I mean, we aren’t *that* far away from possible futures in which a wave of self-replicating space probes expands outward at an appreciable fraction of the speed of light, transforming all available matter into whatever concept of Eudaimonia we manage to transmit to our successors. At that point, interstellar distances become pretty small.

        Sure, we might avoid that fate. But if you want to posit a great filter lying ahead of us, it doesn’t have a lot of time left.

        • eyeballfrog says:

          As is common in this community, you are greatly overconfident in your belief that the singularity will happen.

          • Eponymous says:

            I did say “possible” future. Though your inference that I think a near Singularity probable is correct, even if not strictly implied by my words.

            I’m curious why you think a Singularity is unlikely.

            (I suppose the Fermi Paradox is evidence against a near Singularity.)

          • Kestrellius says:

            Von Neumann machines don’t require a singularity. I don’t think they even require AGI.

        • JPNunez says:

          I wonder if paperclip maximizers are detectable from several light years away.

          But you know what’s detectable with current, non-clip-maximizer technology? delicious delicious inhabitable planets. Well, right now we can _kind_ of detect them, but that technology will only get better, even in the near term, while reliable self-replicating probes still look a little iffy.

          What I am getting at is that civilizations capable of self replicating probes _probably_ already detected us, so why aren’t we tiled in paperclip form yet? Maybe we already were detected, and a probe is coming our way, in which case we are fucked, but on the timescale of a star like the sun the Milky Way should have been colonized a few times over already, so that’s not the Great Filter.

          Self replicating probes capable of tiling the galaxy sound hard to be honest. The slower they are, the more things can go wrong over that long time.

          • Nancy Lebovitz says:

            “Habitable planet” is probably too vague. If you want a shirt-sleeve or near shirt-sleeve environment, only a small fraction of planets which are habitable for someone will be suitable.

          • JPNunez says:

            Fair. It’s more of a “planets with some chance of hosting life of some form”. But still. That only makes the Earth more desirable from afar, as the only planet in thousands with shirt-sleeve environment. If interstellar civilizations don’t give much of a fuck about the human (?) rights of dinosaurs, at least one of them should have sent their self replicating probe our way.

            Assuming of course, universal shirt sleeves dress code here.

          • as the only planet in thousands with shirt-sleeve environment.

            Shirt-sleeve environment? Do you realize that planet is so cold that water vapor is a liquid–amazing heat drinking stuff. Sometimes even solid if you can believe it.

            (Channeling Hal Clement)

          • Peffern says:

            Oh man, an Iceworld reference? I loved that book!

          • JPNunez says:

            I am a little on the fat side, so I will wear a polo shirt at anything above 15C (59F)

          • proyas says:

            What I am getting at is that civilizations capable of self replicating probes _probably_ already detected us, so why aren’t we tiled in paperclip form yet? Maybe we already were detected, and a probe is coming our way, in which case we are fucked, but on the timescale of a star like the sun the Milky Way should have been colonized a few times over already, so that’s not the Great Filter.

            Maybe advanced aliens respect the value of organic life, so they don’t destroy Earth even though they could.

            Maybe these advanced aliens are machines, or exist in some other non-organic form (pure energy?), so Earth’s climate is not any more hospitable to them than a barren planet like Mars. Hence, there’s no special reason for them to colonize Earth.

            Self replicating probes capable of tiling the galaxy sound hard to be honest. The slower they are, the more things can go wrong over that long time.

            I think there are ways they could be designed to be extremely reliable and resistant to malfunction.

            https://dpconline.org/handbook/technical-solutions-and-tools/fixity-and-checksums
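
            To illustrate the fixity idea in the most minimal way (this is just a sketch, not the procedure in the linked handbook, and every name in it is hypothetical): a probe refuses to copy itself unless a checksum of its stored blueprint still matches the digest computed before launch, so corrupted copies don’t propagate.

            import hashlib

            def sha256_hex(data: bytes) -> str:
                """Hex digest used as a fixity check for stored data."""
                return hashlib.sha256(data).hexdigest()

            # Hypothetical blueprint payload and its known-good digest, computed before
            # launch and carried redundantly (e.g. in several independent memory banks).
            blueprint = b"design data for the next probe"
            launch_digest = sha256_hex(blueprint)

            def safe_to_replicate(stored: bytes, expected_digest: str) -> bool:
                """Only replicate if the stored copy still matches the launch-time digest."""
                return sha256_hex(stored) == expected_digest

            print(safe_to_replicate(blueprint, launch_digest))            # True
            print(safe_to_replicate(blueprint + b"\x00", launch_digest))  # False: even one corrupted byte is caught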

          • Eponymous says:

            But you know what’s detectable with current, non-clip-maximizer technology? delicious delicious inhabitable planets.

            Or even more obviously, all those stars wastefully burning away in the night sky. Anyone out there that meant business would want to tap into that energy.

            What I am getting at is that civilizations capable of self replicating probes _probably_ already detected us, so why aren’t we tiled in paperclip form yet?

            Right. This is the hard version of the Fermi Paradox. If advanced civs with reasonably high probability undergo intelligence explosions and start tiling their future light cones with paperclips, why do we find ourselves in a (relatively) old universe, not paperclipped?

            There are several possible answers, of course. (The universe isn’t actually old; we are paperclipped; advanced civs super rare; intelligence explosions much harder than they seem; I’m wrong about something).

          • acymetric says:

            Right. This is the hard version of the Fermi Paradox. If advanced civs with reasonably high probability undergo intelligence explosions and start tiling their future light cones with paperclips, why do we find ourselves in a (relatively) old universe, not paperclipped?

            There are several possible answers, of course. (The universe isn’t actually old; we are paperclipped; advanced civs super rare; intelligence explosions much harder than they seem; I’m wrong about something).

            Other possible answer: the terminal goal of this advanced civ isn’t conquest/paperclip tiling.

            Also, depending on how far away they are (wouldn’t need to be too far), assuming they’ve noticed our planet, they might not even have seen evidence that we exist yet (“hey, that planet way over there looks like it could potentially support life!”). An advanced civ might well have noticed our planet, but they more than likely wouldn’t have noticed us yet assuming they are limited by the speed of light.

          • Eponymous says:

            Other possible answer: the terminal goal of this advanced civ isn’t conquest/paperclip tiling.

            That’s basically what I meant by saying we’re already paperclipped.

            If you get a superintelligence, it just turns its future light cone into whatever it wants; i.e. something high in its preference ordering. Call this thing “paperclips”. Therefore if we lie in the future light cone of a SI, we are paperclipped.

            Maybe we got a local deity that has weird preferences, and this is it.

            An advanced civ might well have noticed our planet, but they more than likely wouldn’t have noticed us yet assuming they are limited by the speed of light.

            My notion of what it’s like in the future light cone of an SI mostly excludes explanations along the lines of “it hasn’t noticed us”.

            Heck, even a pre-IE advanced race would presumably be eating all stars within travel distance pretty quickly.

          • JPNunez says:

            If they are intelligent machines, maybe they’d be building computronium around the stars. Maybe they wouldn’t touch Earth, but I feel we’d notice the Technocore building a Halo or Ringworld around the sun, to say nothing of a Dyson Sphere.

            IIRC astronomers have searched for Dyson Spheres by looking at stars with suspicious IR emissions or something, and came up empty handed.

      • proyas says:

        I assume the Great Filter is that there are hard relativistic limits to movement/communication, so establishing important colonies on other stars will always be a hard proposition.

        Somewhere out there, there must be stars less than 1 light year apart that both have habitable planets orbiting them. Such a distance could be traveled with entirely feasible space technology.

        • Randy M says:

          Somewhere out there, there must be stars less than 1 light year apart that both have habitable planets orbiting them

          I’m not sure this is necessarily true. There’s lots of stars, yes, but also lots of constraints on habitability and vast distances for those stars to fill. I’d want to see numbers before assuming this had to be so.
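
          A rough back-of-envelope, for what it’s worth: the 0.004 stars per cubic light-year figure below is roughly the density in the Sun’s neighborhood and certainly doesn’t hold across the whole galaxy, so treat this as an order-of-magnitude sketch rather than an answer.

          import math

          density = 0.004      # stars per cubic light-year; rough local value, assumed galaxy-wide
          n_stars = 2e11       # order-of-magnitude star count for the Milky Way
          r = 1.0              # separation of interest, in light-years

          # Expected neighbors within r of a random star, treating stars as uniformly scattered
          neighbors = density * (4 / 3) * math.pi * r ** 3
          pairs = n_stars * neighbors / 2   # each close pair counted once

          print(f"~{neighbors:.3f} expected neighbors within {r} ly of a given star")
          print(f"~{pairs:.1e} such close pairs in the galaxy")
          # Roughly 1-2% of stars have a neighbor within 1 ly on these assumptions, i.e. on the
          # order of a billion close pairs -- but how many of those have even one habitable
          # planet, let alone two, is the real unknown.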

          • albatross11 says:

            It changes depending on whether we mean “habitable” in the sense of:

            a. Mars and the Moon, places we could colonize but never walk around outside.

            b. Antarctica, the Sahara Desert, the top of Mt Everest, places we could colonize and live and even go outside sometimes, but where we’d still need a lot of technology to survive for long.

            c. What America and Australia were to the Europeans, places we could just go colonize and live with relatively limited technology or hardship once we got there.

            I assume (c) is very unlikely. We have (a) in our solar system, so it doesn’t look so improbable. I have no idea whether (b) is at all likely.

          • Randy M says:

            Yes, that’s true, I wasn’t considering places that could merely house a fragile, barely self-sufficient outpost as habitable.

            If we can’t have an ecosystem that we can be a part of, eventually the system will fail.

        • JPNunez says:

          Maybe? But it’s probably rare, and just banking on astronomical coincidences across two stellar systems does not make a galactic empire. Maybe they are out there, posting about how their two-system empire is seemingly alone in the galaxy.

      • Kestrellius says:

        As I said upthread: your civilization’s lifespan is limited by the lifespan of your star(s), and more broadly by the lifespan of the universe. A star you haven’t colonized isn’t just sitting there — it’s destroying itself and the computation its energy could perform. The more stars you enclose in Dyson swarms, the more consciousness you can support before the universe ends.

        Personally I think consciousness is good, even if we can’t communicate with it; and I suspect there’ll be enough people who agree with me to get some colony ships out there. After all, if there are people who believe that more of them is good and people who don’t believe that, and the contest is proliferation…well, one would expect the ones who want more people to win.

    • Randy M says:

      The great filter largely implies that civilizations go extinct prior to the interstellar civilization stage. It’s possible to imagine scenarios where far-flung empires would go extinct (brutal civil war, or some resource required for interstellar travel being extremely rare and easily exhausted, for starters), but I think placing extinction (or crippling) events earlier in the timeline is more likely.

      • proyas says:

        brutal civil war

        But the brutal civil war only works as a Great Filter event if it ends with the whole alien civilization being destroyed, as in, all of them dying. That’s implausible and becomes ever more so as you assume more star systems and planets belong to the alien civilization (e.g. – odds increase that one or more star systems will stay neutral, or will be too far away from the fighting to be hurt by it).

        Look at Earth’s history. There have been countless devastating civil wars, but none where both sides destroyed each other to the last person simultaneously.

        or some resource required for interstellar travel being extremely rare and easily exhausted, for starters

        If you’re implying that there might be a resource that enables superluminal travel, then yes, I agree that its exhaustion would pose major problems to an interstellar alien civilization, but it wouldn’t by any means lead to them dying out. Also, there’d be nothing to stop them from continuing to expand their civilization, but at sub-light speeds.

        • Randy M says:

          Right, these scenarios could plausibly end an empire, but they wouldn’t necessarily, so I don’t think a search for a single great filter should look there. But as one more failure mode among many, they might possibly contribute to a dearth of interstellar life.

          There have been countless devastating civil wars, but none where both sides destroyed each other to the last person simultaneously.

          None were undertaken by civilizations capable of interstellar flight. Possibly there exist planet-destroying weapons.

          • Kestrellius says:

            The one (interstellar flight) does tend to imply the other.

          • Randy M says:

            True, true, and it may be easier to create a planet-destroying missile than to create a planet-destroying missile with life support systems attached!

        • JPNunez says:

          There hasn’t been a civil war where the nation was extinguished to the last man, but civilizations have disappeared mysteriously all the same. The Mayas come to mind, and the Mycenaeans. Probably many more. I don’t think it’s impossible for a civilization to die out in a civil war, particularly with more powerful weapons. Maybe they do not kill 100% of the population, but the survivors may not last for long, or develop a distaste for advanced technology, or just become less technologically advanced for a long while.

          • Winter Shaker says:

            civilizations have disappeared mysteriously all the same. The Mayas come to mind

            Classical Mayan civilisation declined severely from its peak, but their descendants are still there.

    • Matt says:

      Great filter idea that will destroy (completely) an interstellar civilization.

      Assumption: Travel back in time is possible.
      Assumption: There aren’t infinite parallel universes. There’s just one, and if you go back in time and change something, it changes the one-and-only. You can destroy yourself, you can change yourself, you can change the future, and the past, but the new history is the only history unless/until it is changed again.

      Eventually, even if you establish guidelines, monitors, and laws, somebody will go back in time and make it so that your society is unable to develop time travel. Someone from your society is going to keep going back in time until prevented, and the only thing that 100% prevents anyone in the future from ever going back and changing anything ever again is that they change the timeline such that time travel is not developed. Any other outcome will result in more fiddling with the past. Anyone in your interstellar empire that has sufficient technology will eventually wipe out the empire itself.

      • fion says:

        It sounds like you’ve chosen one of the time travel models that has paradoxes. If I go back in time and kill myself, who was it that killed me?

        • acymetric says:

          I’m hesitant to jump in here, because I definitely don’t have any real knowledge of various theories of time-travel, paradoxes, and causality, but my intuition:

          You did. Your future self (that traveled to the past) continues to exist along with whatever you brought back with you (time machine, perhaps)? This would be true even if you went back further and killed your grandparents. The future changes, and you are not born, but you are now part of history and exist as part of the past (which is your new present).

          • Matt says:

            That’s what happens under my assumption. Otherwise, time travel probably doesn’t work as a Great Filter candidate.

            Of course, I think it’s unlikely that humanity ever gets to travel back in time at all, but

            1) If it can work
            and
            2) If it works this way
            then
            3) It’s a plausible Great Filter candidate.

            It has a really nice effect (for the purpose of explaining the Fermi paradox) of allowing sapient species to develop up to the point of time travel and then erasing themselves before they can go interstellar, particularly if time travel tech is approximately as hard as interstellar travel. That the speed of light is a plausible obstacle to both problems implies that that assumption isn’t unreasonable.

            Maybe the reason we can’t go faster-than-light without infinite energy is that the process of going back in time with faster-than-light travel needs all that energy to create a new universe.

          • acymetric says:

            Something I feel is under-explored related to time travel (and that relates to your point):

            Time travel is also interstellar travel (well, at least interplanetary travel). Not only do you have to pinpoint the time when you arrive, you have to pinpoint where the planet (Earth) will be when you arrive there, and plan your landing accordingly, and fairly precisely lest you end up:

            1. In space
            2. In the wall of a building
            3. Somewhere deep under the crust of the Earth
            4. Somewhere in the ocean (under the surface)

            Time travel is roughly as difficult as landing in a target with a 1m radius on Mars, except that you are also somehow navigating time in addition to the regular spatial dimensions involved.
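
            For a sense of scale (a hedged illustration, not a claim about how a time machine would actually behave): even in a simple Sun-centred frame, Earth’s ~30 km/s orbital motion alone carries the “target” a long way during a short jump, and the Sun itself is doing roughly 230 km/s around the galactic centre. The deeper problem is that there is no privileged rest frame, so “the same place” isn’t even well defined.

            ORBITAL_SPEED_KM_S = 30.0    # Earth's speed around the Sun (approximate)
            GALACTIC_SPEED_KM_S = 230.0  # Sun's speed around the galactic centre (approximate)

            def displacement_km(jump_seconds: float, speed_km_s: float) -> float:
                """How far the target point moves, in the chosen frame, during the jump."""
                return jump_seconds * speed_km_s

            for label, seconds in [("1 minute", 60), ("1 hour", 3600), ("1 day", 86400)]:
                orbital = displacement_km(seconds, ORBITAL_SPEED_KM_S)
                galactic = displacement_km(seconds, GALACTIC_SPEED_KM_S)
                print(f"{label}: ~{orbital:,.0f} km (orbital) / ~{galactic:,.0f} km (galactic)")
            # Jumping back just one hour with "fixed" Sun-centred coordinates already misses
            # Earth by ~100,000 km; a day misses by millions of kilometres.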

          • Randy M says:

            I think bringing in those considerations makes it a computational nightmare, so in fiction I’m okay with handwaving at gravity and saying that you stay in the same relative position on earth.

          • Matt says:

            Well, a time machine that’s not also a space ship would have that problem. If you’re already in an interstellar craft, going back in time is just a navigation problem that’s probably not too much harder than FTL travel itself. You want to stay in open space the entire time and not run into anything.

            If your time machine is a DeLorean driving in a parking lot, you’re gonna have a bad time.

          • acymetric says:

            @Randy M

            You might still end up stuck in a wall, or, if you weren’t on the ground floor, suspended in mid-air if the building you were in doesn’t exist in whatever time you travel to.

          • Nick says:

            I believe this was a plot point in Michael Crichton’s Sphere.

          • Randy M says:

            @acymetric That’s certainly true, and if you go back to the days of the dinosaurs you probably drop into the ocean due to continental drift.
            There’s the Terminator method, where your little sphere of time displacement either over-writes, or swaps with whatever was at the target destination.

          • Doctor Mist says:

            Time travel is also interstellar travel (well, at least interplanetary travel).

            Well, so is ocean travel, or bicycle travel. But gravity and momentum keep you pretty tied to your context; I don’t see why time travel would have to be any different.

            Pretty much any of the remotely plausible time-travel mechanisms (e.g. a Tipler cylinder) serves incidentally as an anchor point — you can’t just follow a closed timelike curve anywhere you like.

        • Matt says:

          Do you prefer one where, if you go back in time to kill yourself, your action results in the creation of a new universe?

          • fion says:

            I guess I prefer the one where as soon as you arrive in the back-in-time time, you create a new universe. The universe where you built your time machine and left from is without you, just as if you’d died. The universe you arrive in has two of you (until one of you dies, whether or not by the hand of the other).

            But “prefer” seems like a weird choice of word here. Obviously time travel is impossible, but the one you posed sounds logically inconsistent to me.

          • Matt says:

            Ok, but there’s an inconsistency in your version too, right? You power your time machine with a relatively small amount of energy from Universe 1, and the process results in the creation of Universe 2, which is a copy of Universe 1, let’s say, 1 second earlier. Isn’t it going to take every bit of energy from Universe 1 to create Universe 2? This is a conservation of mass/energy problem.

            Universe 1 can’t still be there, right?

          • Nornagest says:

            Dave: You went back in time? But I’ve run calculations, and the energy expenditure is –
            Daughter: Roughly equal to the total energy in the universe. Yeah.
            Dave: So how did you –
            Daughter: We siphon it from other universes where they probably don’t want to exist as much.

          • Nick says:

            There’s an episode of Stargate Atlantis where the team tries drawing energy from alternate universes to power their stuff. Unfortunately, it turns out they want to exist pretty bad too.

    • John Schilling says:

      As others have noted, the Great Filter pretty much has to apply before a species starts colonizing other star systems. Only a subset of plausible star-colonization behavior patterns are vulnerable to inconspicuous extinction on an interstellar scale. That subset may overlap strongly with what we imagine ourselves doing over the next few centuries and have been glorifying via e.g. Space-Opera science fiction, but it isn’t a perfect overlap and in any event the range of plausible behaviors is rather larger than SF normally allows for.

      If the universe generates many technological civilizations, and nothing gets around to extinctifying them before they develop starflight, some of them will survive and some other ones will go down in a blaze of really conspicuously blazing glory.

      • Doctor Mist says:

        The exception is if the Great Filter is something that destroys the entire universe, though that just reduces to the unsatisfying explanation that we are the first (in our universe).

    • helloo says:

      Why does an interstellar civilization need the “self-sustaining colonies” part?

      They might all need to rely somewhat on the parent system (possibly a purposely created reliance, or some unique entity) and, following its fall, be unable to adapt quickly enough to continue existing.

      Second – perhaps there are limits on their growth that kick in beyond the planetary scale but below the intergalactic scale.
      Perhaps their terraforming ability is limited, and the kind of planet they require is only really common in whatever pocket of space they are in, and too sparse outside of it.

      • JPNunez says:

        It depends a lot on whether they keep researching useful science. Maybe at some point you just hit a wall where even full planet sized research projects won’t do you any good, so if you lose the mother planet, colonies still have all-the-possible-science anyway, and they keep on trucking.

        But there’s also the chance that there’s just no economic upside to massive projects like colonizing another planet 10 light years away and stuff never gets done.

        • albatross11 says:

          Indeed, it’s possible that it never really pays off to get a substantial chunk of humanity off our planet, and then one fine day something happens to wreck our planet, and a century later there are no more humans left.

          • JPNunez says:

            I think there’s a lot of economic benefit to, say, terraforming Mars and building colonies around the different planets. That saves us if the Earth goes kaboom due to someone’s finger slipping on the nukes or whatever.

            The question changes if we are talking of colonizing even Proxima Centauri, and chances are Trisolaris is not really inhabitable for us.

        • Randy M says:

          But there’s also the chance that there’s just no economic upside to massive projects like colonizing another planet 10 light years away and stuff never gets done.

          My opinion is that there is in fact no economic upside, unless near light speed travel is invented, since trading with a colony that it takes a generation to reach is not going to be personally profitable to anyone, and moreover you can probably develop substitutes in the meantime, and the expense of the voyage would be, pardon me, astronomical.

          Or it would be economically useful if there exists some extra terrestrial unobtanium which we somehow realize a use for from all the way over here, ala Avatar.

          Colonization is basically taking out a fairly expensive life insurance policy on your species, with the exception that no one you know personally will benefit from it.

          I’m writing a novel which starts from that premise, and I concluded that it had to basically be a vanity project of a vastly wealthy visionary.

          • helloo says:

            It could still be an exploration-based venture, or be used to escape prosecution/create new nation/get rid of the useless third.

            Plus, you are making two assumptions with that –
            There’s no great increase in lifespans if not immortality
            There’s no great increase/appetite in automation of the colonization level.

          • Randy M says:

            It could still be a exploration based venture

            Yes, but not one that benefits the sending nation economically.

            or be used to escape prosecution

            Perhaps, but only if an extremely advanced and wealthy group was on the receiving end of the persecution. But that’s just the ‘insurance policy’ aspect writ small.

            create new nation

            It will by definition create a new nation, but how will that benefit the people fronting the cost?

            get rid of the useless third.

            No. (Although I did chuckle at the reference) Aside from the difficulty of getting two billion people onto one spaceship, or building and launching millions of spaceships, in a generation or two you’ll have replaced them and you’ll still have people of below-average utility. Colonization can’t be a cure for overpopulation.

            Plus, you are making two assumptions with that –
            There’s no great increase in lifespans if not immortality
            There’s no great increase/appetite in automation of the colonization level.

            True, in some kind of scenario where lifespans are increased by an order of magnitude, very many things change and I can’t really speak to all the implications (many of which will depend on the particular details of how that came to be).
            By automation of the colonization level, you mean some sort of singularity? Similar to above, that will certainly change things in unpredictable ways. Barring a post-scarcity society, I think the distance still precludes really profiting much from interstellar trade. Anything you can do without for a century, you can probably find substitutes for in that timeframe.

          • JPNunez says:

            With long long lifespans, some things become more plausible. You can just ride out your expensive and slow spaceship to the next star and arrive while old. Still, making machinery that lasts thousands of years is a tough problem, even with intelligent beings aboard who can repair it (and who have to carry spare material).

            Methuselah aliens colonizing the galaxy is an interesting variation, though it still doesn’t explain where they are. Maybe they just don’t build detectable megastructures. Then the Fermi Paradox answer is that the old aliens just don’t want to contact us.

          • helloo says:

            It will by definition create a new nation, but how will that benefit the people fronting the cost?

            The people fronting the cost ARE the people that want to create a new nation.
            As in, the costs/repercussions/moral objections of establishing a new nation on the home planet will be greater than those of founding a new space colony.

            This was meant to refer to the non-economic reasons humans had for establishing colonies in America.

            By automation of the colonization level, you mean some sort of singularity?

            No. Rather, making the establishment of colonies an automated process. The most obvious one is via replicating robots. In those cases, it can allow exponential growth, or at least a cheaper unmanned process.

          • Randy M says:

            No. Rather making the establishment of colonies an automated process. The most obvious one is via replicating robots.

            Not sure I count that as a ‘colony’ but for reasons of the original topic it might qualify. Unless those robots are preparing a home for humans or sending back the unobtanium, that’s just us depositing some self-replicating junk across the expanse.

            As in, the cost/repercussions/morality in establishing a new nation on the home planet will be more costly than finding a new space colony.

            Just because it is more costly to establish a new nation on earth doesn’t mean you have the resources to establish a possibly less costly one elsewhere. It’s the people already in possession of nations or equivalent who will be able to fund such ventures.

            But I think that still falls under “no economic upside.” I don’t mean that there are no reasons to colonize; just that I don’t foresee trade being one of them given the transit time and cost involved. All remaining reasons are basically ideological, since the colonists probably won’t improve their lives any by going (by virtue of giving up much of their lives to get there, or else the destination being close enough to reach in a lifetime but pretty inhospitable).

          • Eponymous says:

            I will note that empirically living creatures have a strong tendency to fill all available space and increase their population to the extent possible given available resources, and evolutionary theory implies this tendency will be essentially universal.

            Yes, a self-aware species can formulate an ethical system and then take steps to codify and preserve those values (protecting them against further drift/selection), and these values might not fully support unchecked expansion (though, since they evolved, they will presumably be the sort of values that facilitated growth in the ancestral environment).

            However, it’s extremely likely that the vast majority of advanced species will engage in expansion and interstellar colonization. The reason is simply population pressure: resources (partly raw materials, but mainly energy) in any star system will be limited. Population growth is exponential (barring fine balancing — and there are good evolutionary reasons for growth to win out). At some point you run out of available resources. To be concrete, this might take the form of enclosing the local star in a Dyson sphere/swarm, until all solar energy is tapped.

            At this point (or realistically, long before) some locals will start thinking that all those other stars out there offer a lot of free untapped resources, with the only cost being transport. Some combination of the desperate and the entrepreneurial will set out.

            Once it begins, the expansion will be inexorable and indeed quite rapid, driven by a simple mathematical fact: growth rates of any local population (e.g. around a given star system) are exponential, but available space (and therefore resources) in the future light cone is cubic in time. Thus for any positive growth rate, population pressures will far outstrip reachable uninhabited star systems, and the civ will continuously send out new colony ships.
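
            A toy comparison of those two growth laws, with made-up but arguably “reasonable” constants (the growth rate, frontier speed, star density and per-system capacity are all assumptions, not anyone’s real estimates); the point is only that any fixed exponential eventually outruns a cubically growing frontier:

            import math

            growth_rate = 0.001   # 0.1% per year population growth (assumption)
            frontier_v = 0.1      # expansion speed as a fraction of c (assumption)
            star_density = 0.004  # usable systems per cubic light-year (rough local value)
            capacity = 1e10       # people supportable per system (assumption)
            pop0 = 1e10           # starting population

            def population(t_years: float) -> float:
                return pop0 * math.exp(growth_rate * t_years)

            def reachable_capacity(t_years: float) -> float:
                radius_ly = frontier_v * t_years
                systems = star_density * (4 / 3) * math.pi * radius_ly ** 3
                return max(systems, 1.0) * capacity

            for t in (1_000, 10_000, 20_000, 50_000):
                print(f"t={t:>6} yr  pop={population(t):.2e}  capacity={reachable_capacity(t):.2e}")
            # With these constants the exponential overtakes the cubic frontier somewhere between
            # 10,000 and 20,000 years, and after that no sub-light expansion speed can keep up --
            # which is the population-pressure argument above.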

    • Chalid says:

      The only future Great Filter hypothesis that I find vaguely plausible would be that it is impossible to develop really high tech without making mass destruction really easy. Society wouldn’t last very long if anyone who was having a bad day could whip up a nuke or planet-devouring black hole in 15 minutes. This is somewhat explored in Vernor Vinge’s book “Rainbows End”.

      It is of course easy to imagine many plausible past Great Filters.

      • albatross11 says:

        Isn’t this part of Bostrom’s argument? From his Sam Harris interview, I think his example was that human civilization probably couldn’t have survived if you could make a Hiroshima-sized nuke in your kitchen with easily-found ingredients.

        I’m not 100% sure this is correct–you can imagine cultures evolving that could survive this ability, but they might be awful in many other ways. In any near-term space colony, I think you’d have something a little like this. Someone going crazy and opening the airlocks/starting a fire/dumping poison into the air supply might kill off a big chunk of the colony’s population. One solution to that would be that anyone who seems even a little weird or “off” gets locked up or spaced.

      • albatross11 says:

        One thing that’s kind of worrying w.r.t. the Great Filter is that it looks rather like substantial animal intelligence has evolved multiple times here on Earth. Not just us and other primates, but also in elephants, dolphins, corvids, wolves, etc. (And I think octopi are considered quite intelligent–that’s another species that’s not even a vertebrate!) There’s also a very different kind of intelligence that’s evolved multiple times on Earth–eusocial species. That might (or might not) be an alternative path to some kind of technological civilization, though it’s hard to imagine what it might look like. But it’s worth noting that large-scale war, farming, and herding were all invented by eusocial insects a long, long time before humans arrived on the scene. All this makes it look to me like probably evolving substantial animal intelligence isn’t so hard, once you’ve got complex multicellular life.

        • Nornagest says:

          I have the feeling that intelligence as in raw problem-solving ability probably isn’t as important here as abstract language. It looks very probable to me that you can’t build anything like technological civilization if you don’t have language, even if you can fish for termites or escape aquaria like a boss. And as far as we can tell that really has only evolved once, quite recently; lots of species communicate in some fashion, and a lot can even learn the meanings of a limited set of human words, but nobody, even our closest relatives in the great apes, seems to be able to use them with anything like the structure and generality that we do.

          On top of that, it’s exactly the sort of lateral breakthrough that we’d expect to be evolutionarily rare: learning lots of “nouns” would have steep diminishing returns in the wild without a “grammar”, yet there isn’t a clear evolutionary advantage to building the first steps of one.

    • Reasoner says:

      The Great Filter hypothesis is about why Earth hasn’t already been colonized. It would be pretty quick (on an interstellar time scale) to spread exponentially through the galaxy using self-replicating Von Neumann probes and colonize every habitable planet, but apparently it hasn’t been done, despite hundreds of billions of stars in the Milky Way which could have birthed an interstellar species. Something must be stopping this from happening. Either intelligent species arise rarely, or their development reliably gets arrested permanently before they reach this phase.

      • Bugmaster says:

        It would be pretty quick … to spread exponentially through the galaxy using self-replicating Von Neumann probes

        You are implicitly assuming that interstellar self-replicating Von Neumann probes are possible, which IMO is a huge assumption. Given our current level of technology, we couldn’t even begin to imagine how to build one of those things; and I’m not convinced that the laws of physics do not outright prevent it — unless, of course, you sneak in molecular nanotechnology or superhuman AIs or some other science-fictional shortcut.

        • Edward Scizorhands says:

          In that case, the Great Filter is ahead of us, and we will be stuck on this planet (or in this star system).

          From what we know currently, it should certainly be possible to colonize other planets, and if we can do that then we can colonize (slowly) other star systems. But maybe there are Hard Things we just don’t understand yet.

          • acymetric says:

            Do you mean we can colonize any planet or that we can colonize a planet given (some set of requirements)? If the former, our expansion is on the scale of at least a couple hundred years to reach and colonize each system.

            Assuming the latter, colonization of the next livable system is probably closer to the scale of all recorded human history to reach and colonize the system.

          • Edward Scizorhands says:

            My first “colonize other planets” was “colonize other planets in our star system.” Not that we can necessarily colonize any hell-scape planet we find.

            Say we colonize Mars in 2050, and most of the rest of the solar system by 2300. Then a group launches a fast ship to Proxima Centauri b at 5% of the speed of light and it gets there in around a century. (We can probably get people living that long.) They establish at 2400, and take 300 years to build up the wealth to start the process over, colonizing other bodies in their system. That’s about 700 years to cover 4 light years, or 17 million years to go from one edge of the galaxy to the other.

            Maybe there is some reason that humans can’t do this: maybe everything else in the inner solar system turns out to be uninhabitable (and it’s too big a leap to go from Earth to the outer solar system, to say nothing of other star systems), or maybe humans just can’t live long enough, or we started in a particularly bad region in the galaxy such that all the planets in nearby star systems are hell-scapes that are too challenging for “baby’s first interstellar mission.” But those reasons would also need to apply to every other species out there.
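
            A quick check of that arithmetic, assuming the ~700-year hops simply chain end to end across a ~100,000 light-year disk (both round numbers):

            years_per_hop = 700           # colonize, build up, launch, and travel ~4 ly (from above)
            ly_per_hop = 4
            galaxy_diameter_ly = 100_000  # rough diameter of the Milky Way's disk

            hops = galaxy_diameter_ly / ly_per_hop
            total_years = hops * years_per_hop
            print(f"{hops:,.0f} hops, ~{total_years / 1e6:.1f} million years edge to edge")
            # 25,000 hops and ~17.5 million years -- the "17 million years" figure above.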

          • acymetric says:

            @Edward Scizorhands

            But what motivates this constant, rapid expansion? The time scale is too long for us to rely on “adventurous spirits” who do it “not because it is easy, but because it is hard” unless you find a way to instill that across generations (some fanatical religion or something maybe). Otherwise the expansion is going to be a lot slower because it will be driven mostly by need.

          • Edward Scizorhands says:

            Unless expansion is actively suppressed (which is possible), things work out fine if only 1% of the society wants to expand. That is how most expansion works, anyway. 99% of the people stay home and less than 1% colonize.

            Reaching the next star system is probably doable in one lifetime, if not for humans then for some other species.

          • acymetric says:

            So, gen A decides to head to Proxima Centauri b and starts colonizing. Gen B and C were probably born on the ship, with no choice in the matter and are probably doing most of the work actually colonizing. I find it more likely that the future generations would build a ship to leave Centauri and go back to Earth than that they would say “well, this desolate, borderline uninhabitable planet has been fun, but let’s embark on another generations-long journey to the next one!”

            The only way I buy expansion to other systems is the discovery of a way to travel between points in space such that the travel takes an insignificant amount of an individual lifespan to do so.

            Consider also that even if you have a group committed to doing this, and their future generations also stay committed, the risk of catastrophic failure killing them all at some point along the way is probably relatively high.

          • The only way I buy expansion to other systems is the discovery of a way to travel between points in space such that the travel takes an insignificant amount of an individual lifespan to do so.

            Some version of suspended animation gives you that.

          • John Schilling says:

            But what motivates this constant, rapid expansion?

            Who said anything about “constant”?

            If it takes ten thousand years for the average colony to grow to the point where it is capable of building starships in their spare time, and even then only once every thousand years does the random-walk of local politics, sociology, and economics give rise to an oppressed and/or adventurously spirited minority population desperate and resourceful enough to launch a single interstellar colony mission before reverting to apathy and hedonism, and if colony ships are limited to 0.01c and ten light-years maximum range and new colonies have a 90% failure rate…

            …the Milky Way is still fully colonized roughly half a billion years after the first technological civilization develops starflight.

            The Milky Way is approximately twelve billion years old. Even if we assume that the first generation of stars(*) were too metal-poor to support life-bearing worlds, that still gives us ten billion years to evolve a technological civilization and colonize the galaxy. Fermi’s question stands.

            * Called “Population II” because someone guessed wrong
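
            Reading those numbers as a crude serial chain of colonies (the real process would be a branching wavefront, so treat this as an order-of-magnitude sanity check rather than the model itself):

            maturation_yr = 10_000   # years for a colony to become starship-capable (from above)
            yr_per_launch = 1_000    # one colony mission per millennium (from above)
            failure_rate = 0.9       # 90% of new colonies fail (from above)
            hop_ly = 10              # maximum range per mission (from above)
            ship_speed_c = 0.01      # cruise speed as a fraction of c (from above)
            galaxy_ly = 100_000      # rough diameter of the Milky Way

            launches_per_success = 1 / (1 - failure_rate)       # ~10 tries per surviving colony
            years_per_hop = (maturation_yr
                             + launches_per_success * yr_per_launch
                             + hop_ly / ship_speed_c)           # growth + retries + travel time
            crossing_yr = galaxy_ly / hop_ly * years_per_hop

            print(f"~{years_per_hop:,.0f} years per 10 ly hop")
            print(f"~{crossing_yr / 1e6:,.0f} million years to cross the galaxy")
            # ~21,000 years per hop and ~210 million years end to end: the same ballpark as the
            # "roughly half a billion years" figure above.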

          • acymetric says:

            @DavidFriedman

            Some version of suspended animation gives you that.

            Granted, and we are probably closer to that than the alternative, although I’m not sure how legitimately close we are.

            @John Schilling

            Who said anything about “constant”?

            I was responding to Edward Scizorhands, who proposed a much faster rate of expansion, which (at the time scales we’re talking about here) I would call more or less constant.

            Your proposal, which I suspect is a conservative take on your part, is more reasonable. The “great filter” in that case is simply time. There are a nigh-infinite number of ways for a civilization to collapse, especially a fledgling colony civilization traveling through deep space or even after reaching their destination planet. The odds of the civilization making it that far are just incredibly low not because of any single cataclysmic type of event but essentially because over the course of half a billion years attrition ends up outrunning expansion.

            Call it Murphy’s law of Interstellar Colonialism

          • Eponymous says:

            I mean, we’re talking about exponential growth here. That’s going to fill up space pretty quick for any reasonable constant you pick. And the highest constant dominates here, so if civs differ the one with the highest growth rate will just take over everything.

            And the patterns of colonization people are talking about above strike me as insanely slow compared to what is likely — even assuming no intelligence explosion.

          • John Schilling says:

            There are a nigh-infinite number of ways for a civilization to collapse, especially a fledgling colony civilization traveling through deep space or even after reaching their destination planet.

            If by “civilization” you mean an individual planetary or system-level colony, then sure – which is why my model allowed for 90% of colony civilizations to collapse before ever getting around to launching even a single starship. And you could up that to 99% or even 99.9% if you allow for the tiny handful that make it, to launch starships once per century or decade rather than once per millenium. The collapse of planetary civilizations isn’t a showstopper if there are lots of planetary civilizations to work with – and if interstellar travel is a marginal proposition, then “…and we get to loot the remains of a Lost Civilization for sure!” is probably going to push it over the top for recolonization missions, so probably not much ground lost in the long term.

            If by “civilization” you mean the set of all planetary or system-scale colonies descended from the same source, then, particularly for the hypothetical where interstellar travel is difficult and rare, I disagree with there being a nigh-infinite number of ways for interstellar civilization to collapse, because it wouldn’t be a single civilization in the sense that we normally use the term and the gulf of interstellar space would make for a most effective firebreak against a nigh-infinite number of possible civilization-collapsers.

          • Vermillion says:

            I find it more likely that the future generations would build a ship to leave Centauri and go back to Earth than that they would say “well, this desolate, borderline uninhabitable planet has been fun, but let’s embark on another generations-long journey to the next one!”

            @acymetric that is the exact plot of KSR’s Aurora, pretty good book as I recall.

          • moonfirestorm says:

            that is the exact plot of KSR’s Aurora, pretty good book as I recall.

            Also the plot of Stephen Baxter’s Ark, although it’s important to note that in both books there’s something wrong with the colony, so it’s not so much “reaching out again from a successful colony” as “this attempt to colonize isn’t going to work, let’s just go back”. Ark actually goes in all three directions, with some going back, some staying on the problematic world, and some going forward to a third planet.

          • Bugmaster says:

            @Edward Scizorhands:
            I actually fear that you are right; although, in the best-case scenario, the Great Filter might be something like our Sun dying, which won’t happen for a good long time.

            As far as I understand, it should be possible to set up human presence on the Moon, or perhaps even on Mars, given incremental enhancements to our current technology. However, I am far from convinced we will ever do it; the costs involved seem to be much higher than any government or corporation is willing, or able, to pay. China might go to the Moon, though, just to spite the US — but I doubt they’d ever maintain a permanent presence there.

            Traveling to Alpha Centauri would take on the order of 100 years, and that’s just for a robotic probe. No present human institution operates on such time-scales; unless maybe you count dictatorships whose only relevant goals are “stay in power” and “keep being a dictatorship”, not “travel to other stars”.

          • John Schilling says:

            The costs involved seem to be much higher than any government or corporation is willing, or able, to pay.

            “Seem” means that you are basing your cost estimates on observation.

            And the only sort of manned(*) space flight activity anyone has had a chance to see is the sort that has been done either A: under an explicit mandate to deliver the most spectacular possible results in the fastest possible time without regard to cost, or B: exactly the same way it was done last time so that nobody can be blamed if anything goes wrong, and with an explicit mandate that no price is too high for “safety”.

            This may give a misleading impression as to the plausible cost range.

            * Very nearly the only sort of unmanned space activity, for that matter.

          • Edward Scizorhands says:

            Colonization doesn’t need to happen via government, although governments must allow it. If you let enough centuries pass, eventually private groups accumulate enough wealth to do it on their own.

          • Bugmaster says:

            @John Schilling:

            “Seem”, means that you are basing your cost estimates on observation.

            Er… yes? What else should I base them on? Logical deduction from first principles?

            @Edward Scizorhands:

            If you let enough centuries pass, eventually private groups accumulate enough wealth to do it on their own.

            What is their incentive to actually do it? Why spend all that accumulated wealth on a long-shot blue-sky project, when you could instead invest it in reliable short-term gains?

          • Randy M says:

            What is their incentive to actually do it?

            The money men would have to see it as a philanthropic expense and have enough corporate control to start the project and keep it going long enough to build, stock, crew, and launch the ship–which may be difficult because it could take many years.

            Perhaps a recent near-miss from an asteroid or a nuclear strike motivates someone to use their funds on such a venture, or maybe they just want the renown.

            I don’t think they should expect universal acclaim for doing so, however. A lot of people are going to see it as irrevocably wasting resources that could have been spent otherwise.

            As far as colonists go, once the project is funded I don’t think it would be a problem to find some willing to go, frozen or as breeders, whether from a desire for fame, adventure, or escape.

          • Edward Scizorhands says:

            You are living at a time when two billionaires are fighting it out with space companies, including the world’s richest man (still, post-divorce).

            Why is Bill Gates trying to cure polio? Aren’t there better returns somewhere else?

            Maybe 50 years from now, when there are even more of them, none of the billionaires will be interested in space. Fine. Wait another 50 years, and there will be even more billionaires (in real terms). Oh, they all want to cure Alzheimer’s? Fine, wait another 50 years. Eventually, unless man goes extinct or the government confiscates everyone’s property or disallows space travel [1], you are going to get someone with enough drive to make it happen.

            Also, while looking up billionaires, I found out that Kylie Jenner is the world’s youngest “self-made billionaire.” Never mind the life support, launch me off this planet now.

            [1] Those are actual possibilities, and if aliens are common I’m sure a lot of them got taken out of the space race through one of those three methods. But if aliens are common, then you only need one to get past that and wallpaper the galaxy.

        • Eponymous says:

          You are implicitly assuming that interstellar self-replicating Von Neumann probes are possible, which IMO is a huge assumption. Given our current level of technology, we couldn’t even begin to imagine how to build one of those things; and I’m not convinced that the laws of physics do not outright prevent it — unless, of course, you sneak in molecular nanotechnology or superhuman AIs or some other science-fictional shortcut.

          Um — since superhuman AI is obviously allowed by the laws of physics (it would be trivial to selectively breed super-intelligent humans starting from actually existing historical geniuses, which puts a lower bound on possible intelligent agents a decent step above the current human level; then factor in running these super-humans on faster hardware and we’re already at a pretty superhuman level, and we’ve barely gotten started on possible improvements!), saying VN probes are not permitted by the laws of physics “unless…you sneak in …superhuman AIs” is simply incoherent.

          You’re claiming that a thing is actually physically impossible (an insanely high bar to prove!) unless we posit a thing that plainly is physically possible. So…it’s physically possible?

          Besides, there are plenty of biological replicators. We’re essentially colonies of said replicators, and we can build spaceships! Heck, we’ve sent out interstellar probes *already* — how hard would it have been to throw a cell culture on board?

          So…still confident VN probes are *physically impossible*?

          • Bugmaster says:

            Um — since superhuman AI is obviously allowed by the laws of physics (it would be trivial to selectively breed super-intelligent humans starting with actually existing historical geniuses

            I guess you and I have very different definitions of “trivial”, and perhaps “superhuman”. I will grant you that a genius could technically be considered “superhuman” — as in, several sigmas above the mean — but that’s not what it would take to create a reliable Von Neumann probe. When I say “superhuman”, I’m thinking in terms of “several orders of magnitude”, assuming such a concept is even coherent.

            Then factor in running these super-humans on faster hardware…

            Which we currently have no idea how to even begin researching; and, again, I’m not at all convinced that it’s even possible without some sort of self-replicating molecular nanotechnology… which may, in turn, be impossible. And no, running Google Maps on a really big cluster doesn’t count.

            You’re claiming that a thing is actually physically impossible

            I said that I was not convinced that it was possible, not that I was convinced it was impossible. You are the one who is proposing a self-replicating probe that can not only survive interstellar distances, but also make perfect copies of itself out of raw materials, such as rocks and interstellar hydrogen, every 100 years or so (if you’re lucky). The burden of proof is on you.

            how hard would it have been to throw a cell culture on board

            I don’t know, how hard is it to maintain a livable environment for hundreds of years between stars? Also, how hard is it to engineer a cell culture that actually does something useful… such as generating computer hardware out of rocks in the vacuum of space? You tell me, you seem to know how to make one!

          • Eponymous says:

            @Bugmaster:

            I apologize if my initial comment came off as a bit sharp.

            Before engaging on this topic further, I think we should clarify terms a bit, since I’m not interested in arguing about the meaning of words.

            I gather from your comments here and elsewhere that you are a technological pessimist (particularly compared to me). That is a perfectly coherent position, and I’m quite happy to discuss it further.

            However, the post I was originally responding to made (what I take to be) a *much* stronger claim, that Von Neumann probes and superintelligent AI (and MNT) are not merely difficult, but might plausibly be *physically impossible* — that is, literally not allowed by the laws of physics, à la perpetual motion machines or FTL signalling.

            My comment was responding specifically to that claim. However, if that’s not your true objection, and you’re merely asserting a more generic kind of technological pessimism, I’ll be happy to continue the conversation in that vein instead.

          • Bugmaster says:

            @Eponymous:

            I gather from your comments here and elsewhere that you are a technological pessimist

            I bet you hear this all the time, but still, I consider myself more of a realist 🙂

            that Von Neumann probes and superintelligent AI (and MNT) are not merely difficult, but might plausibly be *physically impossible*

            I am not claiming absolute certainty that superintelligent AIs and Von Neumann probes are physically impossible; but, currently, I’m about 60–80% convinced of this — depending on the specifics.

            For example, I am fairly sure that “gray goo”-style molecular nanotechnology is impossible. True, molecular replicators do exist — that’s what we’re made of — but the energies required to do the same thing outside of water-based chemistry (or some equivalent), and in much shorter timeframes, are just too large.

            On the other hand, constructing an interstellar probe is pretty easy — you can launch a wrought-iron cannonball quite a long way, with the right rocket. However, making that probe do anything useful is much harder, depending on what you want it to do. Making a probe that can create multiple copies of itself may be prohibitively difficult, depending on what you want to make it out of — and that’s assuming that you’ve solved the problem of perfect self-replication in the first place, not to mention survival over hundreds of years of exposure to hard vacuum and radiation.

    • Bugmaster says:

      The Boring But Practical ™ answer is that technologically advanced intelligent life is much less likely to arise than optimistic futurists (and science fiction authors) tend to think. It is entirely possible that we humans are the only technological civilization in the Milky Way. Even if there are others like us, the diameter of our galaxy is about 100,000 light years, IIRC, and we only invented radio telescopes about 80 years ago.

      The really sad thing, as I see it, is that the Universe is probably teeming with intelligent life, relatively speaking… and we will most likely never see it. The Andromeda Galaxy is 2.5 million light-years away. There could be an alien there right now, writing a post much like this one… and in 2.5 million years, we could possibly read it… except we won’t, because its light will be too weak by the time it reaches us.

      I am reasonably sure that, for all intents and purposes (and barring some sort of a magical FTL engine), we are alone in the Universe… and so are all the other intelligent species.

      • Randy M says:

        I think you are correct.

      • RavenclawPrefect says:

        I think you’re neglecting timescales here. Aliens don’t have to exist right now and only right now; we can look at aliens on the other side of the Milky Way who were sending out signals 50,000 years ago, even if they’re now long gone.

        And Andromeda isn’t actually that hard to get to/from for technologically mature civilizations, at least not much harder than colonizing the Milky Way is (see this to-scale diagram). Get your Von Neumann probe sped up to a decent fraction of c, wait a few million years, and have it start up the process in the new galaxy. Sure, 10 million years is a while, but the universe is billions of years old. Nothing in the evolutionary history of Earth suggests it was rushing to the finish line of sapience any faster than every other biosphere; aliens from Andromeda who got to multicellular life a few hundred million years earlier would have no trouble turning all available matter in the Milky Way into whatever configurations they liked.

        Space is big, but so is time. It doesn’t seem implausible that there are civilizations with a billion-year head start on Homo sapiens, which is enough time to traverse the entire Virgo Supercluster.
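        (A rough sanity check on that last claim, using my own round numbers rather than anything stated in the thread: the Virgo Supercluster is roughly 110 million light-years across, so crossing it within a billion years only needs an average expansion speed of about

        v \approx \frac{1.1 \times 10^{8}\ \text{ly}}{10^{9}\ \text{yr}} \approx 0.11\,c,

        which is comparable to the cruise speeds proposed for probe concepts like Project Daedalus.)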

        • acymetric says:

          Get your Von Neumann probe sped up to a decent fraction of c, wait a few million years, and have it start up the process in the new galaxy. Sure, 10 million years is a while, but the universe is billions of years old.

          Nobody is answering why an intelligent species would do this. For fun?

          • woah77 says:

            I mean, if I had control over society, the answer would be both “Because we can” and “Because I want to see what’s out there”

          • acymetric says:

            @woah77

            But…you won’t see what’s out there. Nobody in your civilization will, just the Von Neumann probe.

          • woah77 says:

            You mean, no one alive today would. Assuming my society is stable enough to survive for 10 million years (that’s a big if, but I’d be willing to operate under it), my offspring millions of generations later will see what’s there.

          • acymetric says:

            How will they see it? Have you developed some kind of massively powerful communication device that can transmit information that far?

          • woah77 says:

            We’re talking Von Neumann-style probes. If one is insufficient, an array of them could easily transmit that far. Light doesn’t lose energy with distance (the signal just spreads out), so you just need enough of them for it to be easily detected. The bigger problem for things like the Fermi Paradox is why we can’t see any craft traveling. There isn’t any stealth in space, and anything accelerating to a significant fraction of c will shine like a star (and I’m not using that lightly; it takes substantial energy to get up to those speeds).

          • Edward Scizorhands says:

            There are lots of old men who plant trees whose shade they know they shall never sit in.

            If you needed the whole society to work together to build a von Neumann probe to get to the next galaxy, then “why would they do that?” is a good question. But I don’t think that was part of the thesis. It’s much easier to argue “someone will do this” than “no one, anywhere in the universe, will do this.”

            There are people trying to do all sorts of weird things that don’t fit your model of them. It doesn’t mean they aren’t doing them. It means your model is wrong.

          • Lambert says:

            The question isn’t how to build the telescope.
            The question is what intensifiers to put before ‘large telescope’.

            Given that it only takes 15 inches of telescope to make out individual stars in Andromeda, signalling back shouldn’t be too hard for a Type II civ.
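            (A rough check on that 15-inch figure, using my own round numbers rather than anything stated in the thread: M31’s distance modulus is about \mu \approx 24.4 and its brightest supergiants sit near absolute magnitude M \approx -9, so they appear at roughly

            m = M + \mu \approx -9 + 24.4 \approx 15.4,

            while the usual limiting-magnitude rule of thumb for a 15-inch (about 38 cm) aperture under dark skies gives

            m_{\text{lim}} \approx 7.7 + 5\log_{10} D_{\text{cm}} \approx 7.7 + 5\log_{10} 38 \approx 15.6,

            so the brightest individual stars really are just within reach of an amateur instrument.)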

        • Bugmaster says:

          As I mentioned in the thread above, I’m not even convinced that Von Neumann probes are physically possible. And I will absolutely grant you that, assuming molecular nanotechnology and superintelligent AIs are more than just science fiction (which, again, I doubt), there could be civilizations out there who have mastered them. In fact, there could be tons of such civilizations… way outside of our light-cone, because the probability of such things happening is extremely low.

          I mean, look at us: we could colonize Mars in the next 50 years if we really wanted to, but it doesn’t look like we want to. And we have a huge leg up on all those other aliens — we actually exist!

          • RavenclawPrefect says:

            I agree with that conclusion; I assign fairly high probability to the hypothesis “advanced civilization happens with a probability that is nonzero but too small to be likely to reside within our lightcone”.

            For Von Neumann machines, the existence of humans seems to suggest that there are no fundamental limitations to their existence (the only added capability of a Von Neumann machine is the ability to accelerate more effectively from place to place – self-replication and material/habitat fabrication we can already do). As for AGI, I doubt I can provide better arguments than e.g. Nick Bostrom in Superintelligence, but I don’t think it’s a prerequisite for galactic colonization.

    • Watchman says:

      The obvious contention is that we are amongst the most advanced species, if not the most advanced, and we’re not seeing anything more advanced because it wasn’t there when the light set off. We know species capable of interstellar travel exist, because we’re here. We know we haven’t seen evidence of species actually travelling between solar systems (albeit with the question of whether we would recognise it if we did). And we know of no apparent reason yet why we can’t follow Voyager beyond the heliosphere. All of this suggests interstellar travel (not necessarily FTL) is possible and that we’re as close to it as anyone. There is no filter, just the fact that intelligent life has developed no further than humanity.

      It’s clearly a hypothetical position, but note the basic point here is that in the universe there has to be a first species to travel between stars, assuming that this can be done. Why not us?

      • fion says:

        I find your answer appealing, but I think it’s unlikely we’re at the forefront of an advancing universe. The Earth is only 4.5 billion years old. The first stars formed (relatively) very shortly after the beginning of the universe, about 13 billion years ago. Of course, you need a few stars to go supernova to get heavy elements, but I still think it’s likely there are many stars with a big head start on us.

        • woah77 says:

          That’s kinda the point of the Fermi Paradox. If we believe that our star/system is nothing special, why don’t we see any other starfaring civilizations? Now, it’s entirely possible that something happened, or that the circumstances to support life only became possible around 5 billion years ago. Or that the seeds of life took several billion years to form in space, that all planets capable of supporting life were seeded within a very short time frame (relative to the billion-year time scale), and so evidence of farther-away civilizations hasn’t reached us yet because they’re only tens of thousands or hundreds of thousands of years ahead of us.

          Obviously that seems unlikely without something even older being the prime mover, but that would suggest an alien intelligence staying hidden or one that has already gone extinct. Since we haven’t looked at any planets outside our system that closely, we have no idea what might have once been.

          • Nancy Lebovitz says:

            There might be unusual requirements, like having a relatively large moon.

          • JPNunez says:

            Generally speaking, I don’t think it’s that farfetched that supercivilizations would leave Earth alone because they want to let aliens develop by themselves.

            But I also suspect that supercivilizations capable of making that decision would also build detectable megastructures. Maybe it’s too early to detect them, tho.

        • Edward Scizorhands says:

          I always see “we are the first” as just a slightly altered case of “we are alone.”

      • Douglas Knight says:

        The great filter has a precise meaning. In your scenario, there still is a great filter, it’s just that it’s behind us, rather than in front of us. You need a filter to explain why we’re the first, despite our being late compared to how easy it looks for intelligent life to develop. If you think it is hard for intelligent life to develop, that reason is the filter.

    • There is one category of explanations for the Great Filter that I don’t think has been discussed. Perhaps one of the effects of the sort of developments that make possible an interstellar empire is that people don’t want one any more. They discover the nature of reality, conclude that life is not worth living, and kill themselves. Or they learn how to wirehead really well, and do it. Or they become Buddhist philosophers and switch to a life of contemplation. Or …

      • Randy M says:

        They discover the nature of reality, conclude that life is not worth living, and kill themselves.

        That seems quite unlikely on a civilizational level. Is there anything you could learn about reality that would make you conclude that?

        Or they learn how to wirehead really well, and do it.

        This one on the other hand I totally buy. Imagine people using futuristic VR to avoid going crazy in the close confines of lengthy space flight, and then neglecting all duties because the VR is too enticing compared to the drab reality of life in a small metal tube or desolate colony.

      • Nancy Lebovitz says:

        Or they find out that the really interesting things are inside your head, so there’s no point in expansion.

      • Edward Scizorhands says:

        I could easily imagine a universe where most species simply decide not to expand.

        But it only takes one to decide to expand, and a billion years later the galaxy is covered.

  36. Internet problems I know how to solve but don’t understand: reddit occasionally has this problem when they update the site, where Chromium-based browsers won’t load it properly and it suddenly goes to a white page; but if you delete your reddit cookies, that fixes the site for good (well, until the next update), even though you’d surely reacquire the same cookies, right?

    • Nornagest says:

      Sounds to me like something on the server side is refusing to serve anything to a client with a cookie it considers invalid.

      • acymetric says:

        Yeah, possibly a change to what is stored in the cookie (or how it is stored) to authenticate or even just to populate options… I would expect the site to handle that gracefully, though; maybe it depends on your browser/settings.
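        To make that hypothesis concrete, here is a minimal sketch of how the failure mode can arise. This is purely hypothetical code, not anything from reddit; the cookie format and function names are invented for illustration. The point is that after an update the front-end only understands the new cookie format, an old cookie makes the renderer throw (hence the white page), and clearing cookies means you are issued a fresh cookie in the new format rather than “reacquiring the same” one.

        ```typescript
        // Hypothetical sketch of the failure mode described above, not reddit's actual code.
        // Assume an update changed a preferences cookie from "theme:dark" to JSON, but the
        // updated front-end assumes every existing cookie is already in the new format.

        interface Prefs {
          theme: string;
        }

        // Post-update parser: only understands the new JSON format.
        function parsePrefs(cookieValue: string): Prefs {
          return JSON.parse(cookieValue) as Prefs; // throws on the old "theme:dark" format
        }

        function renderPage(cookieValue: string | undefined): string {
          if (cookieValue === undefined) {
            // No cookie at all: fall back to defaults and (in real life) set a fresh
            // cookie in the new format, which is why deleting cookies "fixes" the site.
            return "rendered with default prefs";
          }
          const prefs = parsePrefs(cookieValue); // old cookie -> exception -> blank page
          return `rendered with theme ${prefs.theme}`;
        }

        // Visitor who still carries the pre-update cookie: the render blows up.
        try {
          console.log(renderPage("theme:dark"));
        } catch (e) {
          console.log("white page of death:", (e as Error).message);
        }

        // Same visitor after clearing cookies: defaults are used and a new-format cookie follows.
        console.log(renderPage(undefined));

        // A more graceful implementation would catch the parse error and fall back to
        // defaults instead of letting the exception take down the whole render.
        ```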

  37. proyas says:

    A question for English, Scottish, and Welsh people who live in the British Isles: Do you think Irish people look different from your own ethnic group? If yes, then what is different about them? Are there multiple “subgroups” of Irish people (e.g. (I’m making these up as examples) – short redhead, brunette with gray eyes and big forehead)?

    If you think the Irish have a distinct appearance, are you sure it isn’t due to different clothing and hairstyle preferences that are more common in Ireland, or to culturally-rooted differences in facial expression (e.g. – maybe you can recognize Irish on sight merely because they tend to smile more/less than your ethnic group)?

    • I’m English, but I have a quarter Irish blood, and my mother, who is half Irish, has extremely stereotypical Irish looks: the red hair, strong forehead, high cheekbones, turned-up nose, freckles. I have inherited some of these features (though I have dark – almost black – brown hair, minus the grey, and my freckles are very minor). I’m not sure how common that stereotypical look is, but it’s definitely a thing that exists, and it was played up in caricatures of the Irish in a similar way to caricatures of Africans. My guess is that it’s more that only Irish people are very likely to have these looks, but that the vast majority of Irish people don’t look this way and are largely indistinguishable from other inhabitants of the British Isles.

    • Tarpitz says:

      I’d say rather that some British people look Germanic or Nordic in a way I wouldn’t expect an Irish person to. I don’t think there is an Irish appearance that is distinct from most white Brits.

      I’m white, 5/8 English, 3/16 Scottish, 1/8 Swiss and 1/16 Irish, have darkish brown hair that goes a bit wavy if it gets long enough and a beard that goes a bit ginger in the summer, and think my appearance would make me a thoroughly unremarkable native of any country in the British Isles.

    • rlms says:

      I don’t think any British national group can be reliably recognised (it’s not possible to do that even at the more granular level of European countries) but some people definitely do look [nationality].

      • I think if you go to the level of “Northerners” versus “Meds” then you start to see clearer distinctions.

        • rlms says:

          I can definitely distinguish between e.g. French and German with significantly better than chance accuracy, but that’s still in the realm of educated guessing rather than anything that could be done consciously.

    • Simulated Knave says:

      Defining things a little more concretely here might help.

      For example, my great-grandfather was Irish. Because he was born in Ireland. HIS father, however, was Scottish, and they were Scottish for ages before that.

      So do you mean “can you spot people from Ireland?” Or do you mean “can you spot people whose ancestors are mostly from Ireland for the previous few centuries?”

      • Rowan says:

        They said “ethnic group”, and it’d be a bit silly to just call anyone born in Ireland “ethnically Irish”.

      • proyas says:

        So do you mean “can you spot people from Ireland?” Or do you mean “can you spot people whose ancestors are mostly from Ireland for the previous few centuries?”
        Yes to the second. I define “Irish” people as people who credibly claim to be “pure Irish” or something like that.

    • fion says:

      I’m Scottish. I don’t really think Irish people look different. I guess, yeah, there are some features that are more or less common, but I feel like everywhere’s such a mixture these days that I can’t really tell.

    • carvenvisage says:

      Question not for Irish people?

      If you think the Irish have a distinct appearance, are you sure it isn’t due to different clothing and hairstyle preferences that are more common in Ireland, or to culturally-rooted differences in facial expression

      Pretty sure yeah:

      -it seems like dark hair is the norm rather than mid or lightish brown like for whites in England (also in Scotland)
      -and red hair is much more common
      -maybe green eyes too?

      This could be regional*, as apparently Ireland has historically had less movement between places than other countries. (And even in England you can recognise people to some extent by region.)

      (*I mean, I could also be mistaken, it could be regional even if my impression is accurate)

      There are also some looks that seem very distinctively Irish, in the same way that not every male Canadian or Scot is a giant with a boyish face, but you don’t seem to get that type in numbers anywhere else.

      For example the guy on the right, and a black-haired “intense scraggly-looking survivalist” type that banning dog-fighting wasn’t fair on. (Lots of others too; those are just the ones which I saw the most there and the least elsewhere.)

      _

      Some variance could be due to other non-genetic factors, though. For a mundane example, no one in England plays hurling or Gaelic football, and I think you’re liable to end up with a different gait, body, attitude, etc., if you play these (at least the first) instead of football/cricket/rugby.

  38. pacificverse says:

    I strongly support your use of advertisements to monetize your excellent website. It warms my heart to know that corporations are paying your website in my place.

  39. gwern says:

    Those of you who don’t use ad-blocker may notice some more traditional Google-style sidebar ads. I’m experimenting to see how much money they make vs. how much they annoy people. If you are annoyed by them, please let me know.

    I’m not sure this is a good idea. Considering the extent to which you make money from the Amazon affiliate links in your book reviews & from Patreon, ads could very easily cannibalize much more of your revenue than they deliver. Look at past estimates of ad effects: https://www.gwern.net/Ads#replication Your revenue from a single person via Patreon or affiliates could easily exceed the chump change of $30/month or whatever you’d get from AdSense.

    • RavenclawPrefect says:

      Seconded that this seems somewhat risky. As an existing fan of the blog, I won’t be affected at all and will keep my adblock off, but I think many new readers may find it a slight impediment to getting attached to the blog if they’re perusing some random post; it also removes the reaction where people are pleasantly surprised to find a site with uniformly high-quality ads that you don’t want to block. Unless it’s extremely lucrative, I’d expect this to be a negative-EV gamble.

    • Douglas Knight says:

      I think Scott has to do the experiment to a-lieve that the number really is $30/month.

  40. Deiseach says:

    Professor Friedman, you may be interested in this as I think you were asking about aids to maintain cognition during aging?

    Local institute of technology has research out: carotenoids and omega-3 in combination are good for the brain!

    Improved life expectancy worldwide has resulted in a significant increase in age-related diseases. Dementia is one of the fastest growing age-related diseases, with 75 million adults globally projected to develop the condition by 2030. Alzheimer’s disease (AD) is the most common form of dementia and represents the most significant stage of cognitive decline. With no cure identified to date for AD, focus is being placed on preventative strategies to slow progression, minimize the burden of neurological disease, and promote healthy aging. Accumulating evidence suggests that nutrition (e.g., via fruit, vegetables, fish) is important for optimizing cognition and reducing risk of AD. This review examines the role of nutrition on cognition and AD, with specific emphasis on the Mediterranean diet (MeDi) and key nutritional components of the MeDi, namely xanthophyll carotenoids and omega-3 fatty acids. Given their selective presence in the brain and their ability to attenuate proposed mechanisms involved in AD pathogenesis (namely oxidative damage and inflammation), these nutritional compounds offer potential for optimizing cognition and reducing the risk of AD.

    …We have shown that lower carotenoid concentrations (in both tissue and blood) are associated with poorer cognitive status in healthy individuals (Feeney et al. 2013, 2017) and that patients with AD are deficient in these nutrients (Nolan et al. 2014). Furthermore, our research has observed improvements in cognition following carotenoid supplementation in healthy individuals (Power et al. 2018). Moreover, the presence of ω-3 FAs in the brain and their potential role in neuroprotection and synergy with carotenoids have led us to investigate these nutritional compounds in conjunction with the xanthophyll carotenoids. Our previous interventional work with AD patients observed no benefit of carotenoid supplementation to cognition (Nolan et al. 2015). However, recent pilot work in which individuals with AD were supplemented with a formulation containing a combination of carotenoids and ω-3 FAs yielded positive results in terms of improved biochemical response and carer-reported improvements in the ability of AD patients to perform daily activities, with specific improvements noted in memory, sight, and mood (Nolan et al. 2018). Future work will examine this finding in a larger study (the re-MIND trial), which is currently underway.

    So we all need to start eating our leafy green vegetables, our red, yellow, and orange veggies and fruits, and our oily fish a lot more!

  41. J Mann says:

    “Let me know what you think.”

    IMHO, most people want operating this blog to be a congenial experience for you – more feedback on what kinds of comments cause you grief would probably be helpful. I’d be up for experimenting with a subject-matter ban, or some kind of yellow-card system that lets you warn people who are causing you grief.

    On the other hand, the more transparency there is, the more incentive there may be for people to game the system, so some kind of PM to offenders might be a good idea too.

    • Randy M says:

      The only problem I have with the system is that a ban after a private warning could look quite capricious. But Scott can mitigate that by making warnings public after the fact–“Banned for discussing Space combat in Ian Bank’s novels after explicit warning via e-mail on 3/12/2027” or whatever.

      • Nick says:

        It’s Iain, depraved outgroup!

        I have the same concern about the private warning looking capricious. If there’s an upside I see, it’s that folks getting warned in private and then “saving face” by apologizing gracefully is a good thing.

        • Randy M says:

          *Sad Snoopy soundtrack plays*
          I had posed as one of you for so long, only to blow it now on a botched in-joke.

        • Incurian says:

          It’s Iain

          Wow I never noticed that. I guess I’m not as big a fan as I thought.

      • nobody.really says:

        “Let me know what you think.”

        IMHO, most people want operating this blog to be a congenial experience to you….

        1. I had the same reaction: Dude, focus on administrative convenience!

        For almost four years, I was a regular commenter at First Things. True, I often expressed more progressive views than many of the authors. Then David Azerrad posted a commentary decrying a college “Diversity Awareness Activity” for simply shaming and stigmatizing wealthy, heterosexual, white males for having “unearned privilege.”

        I wrote a post concluding that the exercise seemed to be designed to help students recognize the range of circumstances and backgrounds reflected in the student body, helping incoming students overcome a natural tendency to project their own circumstances onto people they don’t know. And yes, some of the questions in the exercise acknowledged differences based on wealth, some on sexual orientation, some on race, some on gender. But I noted that there were also questions based on other criteria—and, moreover, that acknowledging difference does not imply stigmatizing difference. In conclusion, I stated, “So here’s the real question: How can we talk about differences when merely raising the topic causes people to close their ears?”

        With that, I was banned from the site—and my posts were removed from that discussion.

        It kinda sucked. (Someone had responded to my post by saying “Ah, nobody.really, my old gay arch-nemesis.” I was able to respond, “HEY! Who says I’m old? Or gay? Or arch? (Ok, you have me there….)” I was kind of proud of that one—and sorry when the moderators removed it.)

        But that’s the nature of editorial pages and blogs: The publisher gets to pick what does and doesn’t get published. I commend Alexander for an abundant concern for fairness—but ultimately, editorial discretion is a judgment call, and I fear he’ll make himself crazy trying to adjudicate every friggin’ post.

        2. Then again, I cannot fathom the mind(s?) that drive this blog. Alexander writes faster than I can read. So perhaps my mere mortal concerns are of no consequence.

        • Aapje says:

          And yes, some of the questions in the exercise acknowledged differences based on wealth, some on sexual orientation, some on race, some on gender.

          They go a lot further than this. For example, take this question:

          “For every dollar earned by white men, women earn only 72 cents. African American women earn only 65 cents; and Hispanic women earn only 57 cents to the dollar. All white men please take 2 steps forward.”

          This question makes the implied claim that earnings reflect the privilege difference between men and women, and that this difference is twice as bad as the various other privileges they note, which merit only one step.

          These are extremely subjective and ‘culture warry’ claims that heavily depend on a certain worldview. Many people dispute these claims, for example because:
          – they think that earnings often require sacrifice and a (large) part of the gender earnings gap is caused by men making more of these sacrifices
          – they believe that earnings are much less relevant than actual material well-being, which is much more equal between men and women due to large wealth transfers
          – they believe that low earnings can reflect (forms of) privilege (for example, people choosing low earning careers or becoming a stay at home mother because they get financial support)
          – they believe that the gender earnings gap is not reflective of the overall privilege difference between men and women

          I merely note that these rebuttals exist to illustrate the subjectivity and ‘culture warriness’ of the question. If you actually want to discuss the extent to which the gender earnings gap reflects gender privilege, I suggest making a comment in one of the OTs where the culture war is allowed.

          I wrote a post concluding that the exercise seemed to be designed to help students recognize the range of circumstances and backgrounds reflected in the student body, helping incoming students overcome a natural tendency to project their own circumstances onto people they don’t know.

          This may be true for questions like: “All those raised in homes with libraries of both children’s and adult books, please take one step forward.” In that case, people are asked to display their supposed privilege by stepping forward if they personally had a certain experience. People of all races, genders, etc could step forward or backward for that question.

          The gender earnings question that I presented earlier doesn’t ask for the experiences of the specific person. It simply declares that white men earn more and thus demands that people sort themselves according to a stereotype. So it asks people to project a stereotype on themselves and then for others to judge this stereotype.

          Imagine a similar exercise for crime which includes these two questions:
          – If you have ever stolen something, please take one step forward.
          – Black people are three times as likely to be arrested as white people. All black people please take 2 steps forward.

          Do you see the problem?

          and, moreover, that acknowledging difference does not imply stigmatizing difference.

          Stereotypes are acknowledged group differences…

          A fairly recent study found that exposure to multiculturalism increases race essentialism. So acknowledging group difference may actually increase the belief that those differences are innate and thus unchangeable.

          Furthermore, studies do suggest that pointing out stereotypes may actually increase the application of stereotypes.

          Other studies have found that stereotypes tend to reflect reality (to some extent), so pointing out stereotypes may simply remind people that such stereotypes are useful.

          In conclusion, I stated, “So here’s the real question: How can we talk about differences when merely raising the topic causes people to close their ears?”

          I have seen a lot of complaints by Social Justice advocates that people who are “merely raising the topic” or “merely asking questions” are disingenuous trolls who don’t honestly want to engage, but want to derail the discussion or such. Some seem to consider statements like yours as passive aggressive trolling themselves, using a false claim of victimhood to deter the policing of important social norms.

          Stating very subjective claims as objective truths can also be perceived as trolling.

          If your comment(s) were similar to the one you made here, I can see how you might be perceived as a troll by people with a similarly low tolerance for such comments.

        • nobody.really says:

          Above, Aapje offers some more-than-cogent thoughts on a college “Diversity Awareness Activity”–and on the nature of comments in general. I concur that this isn’t the place for hashing out the merits of the college exercise. I only wish we could have had that discussion on First Things.

          For purposes of this discussion, arguably Aapje’s most relevant comment is this:

          If your comment(s) were similar to the one you made here, I can see how you might be perceived as a troll by people with similar low tolerance for such comments.

          This suggests that Aapje might join me in saying that Alexander need not maintain some high barrier to banning commenters that provoke him. Alexander has already acknowledged the strain of dealing with cultural war comments. This suggests that Alexander’s enthusiasm for maintaining this site is a scarce commodity, and policies should be devised with the goal of shepherding that resource prudently.

          Culture warriors will never lack for forums for combat. This web page needn’t be one of those forums.

          • Aapje says:

            Culture warriors will never lack for forums for combat. This web page needn’t be one of those forums.

            A feature of the culture war is that some (extremist and one-sided) culture warriors aggressively try to force others to discuss topics in a way that fits with their ideology, rather than depend on mere persuasion.

            Of course one can (try to) avoid that by yielding these topics to those extreme culture warriors, or by only debating them in the way those warriors demand, but that has serious consequences for the debate. It means that other viewpoints, ones that don’t fit either side of the dominant partisan divide, are not expressed. This especially includes moderate viewpoints, which don’t fit ideologies built on highly simplified explanations. Furthermore, it segregates society into ideological ghettos where other viewpoints may not be expressed, creating bubbles in which biased and/or false views are not challenged.

            There are indications that:
            – many segments of society feel unable/unsafe to express their views
            – these segments are more moderate than those who speak up
            – these segments also feel unable/unsafe to influence discussion norms

            So this leads to extremist and one-sided speech becoming dominant, which in turn causes fear among the moderate, who then feel forced to support radicals who fight against other radicals.

            It seems to me that yielding to (extremist and one-sided) culture warriors, even just by opting out, then makes things worse. Unless one believes in accelerationism, this seems to have bad outcomes.

            This web page needn’t be one of those forums.

            Many people, including me, seem to believe that CW topics are discussed here in a way that they are not discussed pretty much anywhere else, and thus that this forum provides a fairly unique variant that is worth preserving.

            Alexander has already acknowledged the strain of dealing with cultural war comments.

            While that is true, there is evidence to suggest that the comments themselves are not so much of a strain as the backlash is. Perhaps that cannot be alleviated without surrendering.

          • albatross11 says:

            +1

            There are a million places on the internet where CW topics are discussed in terms of owning the libs or driving off the nazis, but not so many where people from different viewpoints politely discuss their differences in good faith.

            There is also a whole thread of thinking that says that taking part in, listening to, or even allowing such discussions to take place is evil and should be punished. People who accept that idea are pretty adept at using the heckler’s veto to make such conversations very difficult to have. But I don’t think those of us who don’t accept that idea are obliged to go along.

            It’s notable that a bunch of IDW types have managed to pack auditoriums and get large numbers of subscribers for podcasts/YouTube channels based on trying to have exactly that kind of conversation. That suggests that there is a substantial number of people who want to hear those conversations, and who are frustrated at the ability of the hecklers’-veto crowd to shut them down.

    • dick says:

      My 2 cents’ worth: Scott should just ban people who seem like they ought to be banned, and not worry much or put too much effort into it, lest it become too much work. I’m more worried about the “this is exhausting so I’m shutting it down” failure mode than the “forum is overrun by trolls” failure mode. (This may be partly because I was a big fan of Andrew Sullivan’s blog, which met the same fate, though for different reasons.)

      • zzzzort says:

        I’d rather take a few years of Andrew Sullivan’s comments than a lifetime of, say, Marginal Revolution’s.

  42. Nabil ad Dajjal says:

    One thing that you may not have considered is that some of us are using fake emails.

    WordPress has had several large email leaks before, and a lot of the culture war discussion here is firmly in get-you-fired territory. Long-time pseudonymous commenters have probably left enough breadcrumbs for a determined doxxer to find our real identities, but at least that requires work on their part. A dump of WordPress accounts and linked emails, though, is searchable: an attacker could take one of the dozens of “I rated SSC posters for comment frequency and political alignment” lists users here have made over the years and ID every single right-leaning commenter dumb enough to use their real email in a matter of minutes.

    • Nick says:

      Seems to me the solution is asking folks to at least use email accounts without a real name attached. And if Scott sends the email and it bounces, he can just ban them instead.

      I think that’s workable for the regulars, but 1) it’s yet one more layer of “I just started posting here and don’t know all these obscure rules!” and 2) it still suffers from all the problems that the emailing scheme has in the first place.

    • Aapje says:

      Some time back our emails were visible on the WordPress page. That made me change my email address to a semi-obfuscated one. I will still get mail sent to that address, but others may have used fake addresses.

    • Edward Scizorhands says:

      I use a real address that I could, in theory, check. But I never do.

    • The Element of Surprise says:

      How hard would it be to set a flag for the user, so that when they log on or write a comment a small red banner appears above the input text field?

      • Bugmaster says:

        Normally, I’d say, “pretty easy”. But this is WordPress, so I’m going to go with, “functionally impossible”.
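        For the front-end half at least, here is a rough, hedged sketch of how small it could be. It assumes (my inventions, not anything WordPress actually provides) that the theme exposes the per-user flag as a data-cw-banned attribute on <body>, and that the comment textarea keeps WordPress’s conventional #comment id; the genuinely hard part Bugmaster alludes to is getting WordPress to expose that flag in the first place.

        ```typescript
        // Hypothetical front-end sketch only; the hard part is getting WordPress to expose
        // a per-user flag at all. Assumes the theme marks flagged users with an invented
        // data-cw-banned attribute on <body>, and that the comment textarea keeps
        // WordPress's conventional id of "comment".
        function showCultureWarBanner(): void {
          const flagged = document.body.dataset.cwBanned === "true"; // assumed flag
          const commentBox = document.querySelector<HTMLTextAreaElement>("#comment");
          if (!flagged || commentBox === null) {
            return;
          }
          const banner = document.createElement("div");
          banner.textContent =
            "Reminder: you are currently banned from culture-war topics on this blog.";
          banner.style.background = "#c00";
          banner.style.color = "#fff";
          banner.style.padding = "0.5em";
          banner.style.marginBottom = "0.5em";
          // Place the banner directly above the comment input field.
          commentBox.insertAdjacentElement("beforebegin", banner);
        }

        document.addEventListener("DOMContentLoaded", showCultureWarBanner);
        ```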

  43. bean says:

    Biweekly Naval Gazing links post:
    The Battle of Manila Bay is the most famous naval engagement of the Spanish-American War, a shattering American victory over a Spanish fleet that could barely float.

    I’ve taken a look at the best of naval fiction, and gotten some excellent suggestions from my commenters.

    So You Want to Build a Battleship continues with a look at the surprisingly complex process of getting a completed hull into the water.

    I’ve already linked my 4/1 post on the Philadelphia Experiment, but I’ll throw it in here, too.

    The destroyer is currently the most powerful combatant in most navies, but it wasn’t always this way. I’ve looked briefly at the history of this type of ship, from its origins in the torpedo boat to the present.

    My long and winding design history of the battleship has finally reached the pinnacle of the breed, the magnificent Iowa class.

    Museum Review: The Tulsa Air and Space Museum. Not a bad way to kill a few hours, but not worth going far out of the way for, either.

    And Naval Gazing is having its usual open thread.

  44. baconbits9 says:

    I’m considering a “culture war ban” for users who make generally positive contributions to the community but don’t seem to be able to discuss politics responsibly. This would look like me emailing them saying “You’re banned from discussing culture war topics here for three months” and banning them outright if they break the restriction. Pros: I could stop users who break rules only in the context of culture war topics without removing them from the blog entirely. Cons: I would be tempted to use it much more than I use current bans, it might be infuriating for people to read other people’s bad politics but not be able to respond, I’m not sure how to do it without it being an administrative headache for me. Let me know what you think.

    If you are going to do all this work, why not take your list of names and post them once a week in an open thread saying “these are people I think can do better in culture war discussions” and see if they can police themselves appropriately? This gives them a chance to behave better in different ways, and takes less work.

    • albatross11 says:

      Having that feedback be offline/invisible also undermines the quality of discussions a lot – X, Y, and Z don’t respond to some CW discussion because they’ve been told not to, rather than because they don’t really have strong feelings about the matter.

  45. theodidactus says:

    Who would win in the following race:

    * A beetle driving a spider
    * A viper driving a stingray
    * A spider driving a mustang
    * A stingray driving a jaguar
    * A mustang driving a viper
    * A jaguar driving a beetle.

    Assumptions:

    * the animal has some suitably comical way to actually work the controls, and has set its mind, however small, on actually winning the race. The stingray has a tank full of water.

    * the race is a street race, in an urban environment, on a course that the animal knows very well, or at least has traveled through dozens of times prior to the race. (In the case of the spider we’ll say she’s run a miniature version of the race. In the case of the stingray we’ll say he’s done an underwater course very like this race.) It is in a suitably busy city like Los Angeles, but is conducted in the dead of night and with reasonable certainty that the police will not interfere. The race is 10 miles long and features multiple turns.

    * You can use whatever sub-model of car, above, that you wish, but you must make a good-faith argument that this is the “iconic” or archetypal sub-model of that particular car

    * You can use whatever sub-species of animal, above, that you wish, but you must explain why its behavior patterns, in particular, would contribute to it winning the race.

    • chrisminor0008 says:

      What?

      • theodidactus says:

        I mean I think my question is pretty straightforward.

        • Enkidum says:

          “Pretty” is doing a lot of work in that sentence.

        • AuralAlias says:

          Some people lack the proper education to contemplate the most important issue facing the well being of humanity this century. Only with time and patience can we hope to bring others beyond their current mental and physical limitations and eventually transcend science and technology.

      • Simulated Knave says:

        Basically: what if animals that share names with cars drove cars that are named after animals? And then raced each other?

        Capitalizing proper nouns would have helped a lot in the clarity here.

        • Gobbobobble says:

          Each relevant proper noun is a hypertext link, which is a pretty clear signal that it’s not necessarily referring to what you’d expect. Not the OP’s fault if you don’t bother to even hover over them.

    • Protagoras says:

      I’m going to go with the mustang driving the viper. Horses are really the only animal on the list evolved for long distance overland travel, and they’re quite trainable as well. And the mustang has a decent vehicle. The jaguar is stuck in a crappy car, and I have a hard time believing that limited intelligence won’t cripple all the others.

      • Aapje says:

        It probably would have to be a convertible Viper, otherwise he probably wouldn’t fit (or are there mustang ponies?)

      • broblawsky says:

        Agreed. The mustang driving the viper has the best combination of superior intelligence and an actually powerful car. The mustang also has the best-developed brain for dealing with obstacles, unless the spider is a Portia spider.

    • Nick says:

      I think it’s down to the controls. If pressing a tiny gas pedal sends dopamine directly into that little bug brain, then I don’t see why a beetle or spider strictly can’t win. Though as far as learning the track I’m not sure that only dozens of times’ practice suffices.

    • AuralAlias says:

      The beetle in a spider would do really well in a course with many turns and would likely have reflexes good enough to avoid an accident and any obstacles in the way. Top contender.

      The viper in a stingray, while in one of the most suitable cars for this contest, would likely not do very well due to its demeanor. I imagine short periods of aggressive and precise driving, followed by a long, slow, deliberate pace. This is a second-place combo, no matter who wins.

      The spider in a mustang would have a lot of trouble avoiding any crowds along the sides of the road. I mean, that much horsepower in the rear wheels with an inexperienced driver is just a bad combo. But on a more serious note, the spider would likely do really well towards the beginning of the race, then sit somewhere and wait to trap its opponents.

      A stingray in a jaguar. What the shit man. I don’t even know about this one.

      A mustang in a viper. Similar issues to the spider in the mustang, but coupled with a more intelligent creature with a history of racing. If the mustang can keep it on the road, or there are a lot of straights, this is the clear winner.

      A jaguar in a beetle. Not gonna lie: a reasonably easy car to drive, an intelligent creature capable of strategic planning and execution; this is the most underrated contestant. A first-place finish here depends on how well the others are able to execute.

      • theodidactus says:

        Your point on the spider is inspired. I’m wondering how successful a strategy it would be to “get out ahead” of the other racers, hide in an alley, and then ambush them.

        My knowledge of the animals involved outweighs my knowledge of the cars, but it strikes me that the spider is actually driving an ideal car for this.

    • Quixote says:

      The jaguar driving the beetle clearly wins. The beetle may be among the slowest of the cars mentioned, but the jaguar is by far the smartest of the animals mentioned. Not only is it the smartest, but its skillset is well suited to the race. The jaguar can navigate a complex, obstruction-rich jungle environment, which mirrors the obstructions it’s likely to encounter in the urban environment. The jaguar is a predator and can model other minds to some extent, and is mindful of things like speed, intersecting paths, and lines of sight. Additionally, the jaguar is among the fastest-moving of the animals considered. I doubt that most animals would be capable of tracking movement at even a small fraction of the speeds their vehicles are capable of.

      I think the jaguar is the only animal that finishes the race without crashing. If the jaguar has any competition, it’s the mustang, which is also fast-moving and also has some ability to model other minds as a social herd animal. I think the jaguar still has the edge though, as it’s better equipped to deal with obstructions and the mustang is more likely to spook.

    • The Nybbler says:

      The mustang would win an open daytime race, but I think a dark and twisty city environment is no good and he wrecks.

      Stingrays and snakes don’t seem to be adapted to mazes, so they’re not going to do so well.

      At least some kinds of spider seem to have the brain for it, and a Mustang will do.

      Dung beetles, at least, can do celestial navigation.

      The jaguar should also have no problem, but the VW Beetle isn’t up to it. So finishers are

      1) The spider in the Mustang, narrowly beating

      2) The beetle in the Spider

      coming in comfortably ahead of

      3) The jaguar in the Beetle.

      The others fail to finish.

    • fr8train_ssc says:

      This is silly, but I agree with the consensus that seems to argue the mustang in the Viper and the jaguar in the Beetle would be the contenders for first.

      Now, if there was a Cougar driving Kurt Busch’s Stewart-Haas Racing Chevrolet Impala SS I would argue that pair as the winner.

      • theodidactus says:

        you only think it’s silly because you speak from a position of privilege. For some of us, this is not just a hypothetical problem. Some of us deal with this problem every day.

    • acymetric says:

      * A beetle driving a spider

      Wrecks because it can’t see where it is going

      * A viper driving a stingray

      Wrecks because it can’t see where it is going

      * A spider driving a mustang

      Wrecks because it can’t see where it is going

      * A stingray driving a jaguar

      Wrecks because it can’t see where it is going

      So we’re left with two contenders:

      * A mustang driving a viper
      * A jaguar driving a beetle.

      Both have good enough night-vision (and presumably our urban area is reasonably well lit) so day/night doesn’t come into play. The viper is the superior car. On a straightaway or NASCAR style oval track the horse wins going away.

      I’m inclined to believe our horse will be competent at drifting around corners and give the horse the win in the urban environment as well.

      • Enkidum says:

        I think this is the correct answer (at least the eliminating most of the animals part, I defer to your car expertise). I don’t think the visual systems of these animals could handle the kind of inputs necessary to steer a car at speed.

        Hmmmm… more detail. What you care about when driving has low spatial frequencies. You can run over small things, and you don’t care about the fine details of large things, as what matters is that they are large. So you don’t need high resolution vision per se. I think most of what you’re doing when driving a car uses rods, as opposed to cones, and that gets rid of the need for a large amount of cortical hardware. (Rods are best for fast-moving stimuli, but they have very little in the way of ability to provide high-resolution information.)

        But you need to be able to both predict the consequences of your movements and respond to unexpected changes in visual inputs very rapidly, and I don’t think any of the non-mammalian animals listed could do that. These are all animals (well, with the exception of the stingray, which I don’t know much about) that spend most of their lives moving at well under 5 km/h, with very occasional hard-wired responses to rapid movement.

        The best option would probably be a bird that is used to flying through forests in packs.

        Epistemological status: I’m a sort of vision scientist, but I don’t really know all that much about non-human vision, and not much beyond David Attenborough-level stuff about animal behaviours, so take all this with thirty metric tons of salt. But that’s my best stab at it.

        • woah77 says:

          So what you’re telling me is we need a Firebird driven by a Falcon for a real contender?

          • theodidactus says:

            I got to thinking about this problem when I considered human adaptations to driving. (Truthfully, several years ago; this is from an old collection of interesting problems I proposed to a friend.)

            Humans, when I consider them in the abstract, really shouldn't be capable of doing the kinds of things that drivers do. At the very least, it mystifies me. While I CAN drive competently, even in cities and over long distances, I've never really been able to get over the fear of being in a vehicle moving far faster than I'm really equipped to move (I believe XKCD has a good comic on this). Driving long distance is enjoyable, but I've never really enjoyed driving in even very small cities. There's too much to account for, and my brain doesn't feel like it can keep up.

            And it's not an issue of practice either. I drove for 5+ years before essentially giving up. I've never been in an accident, but I'm always afraid I'm gonna be.

          • Enkidum says:

            @woah – I spent a while trying to think of a bird-named car and couldn’t come up with one, but obviously a Firebird would do it.

            @theodidactus – I’d guess one of the main reasons why we can drive well is that we’ve managed to create such a predictable driving environment. And when it isn’t predictable, accident rates increase substantially (cf. much of China or India vs anywhere in the western world, I don’t have the stats but I’m pretty sure they support this). Generally speaking, we have half a second or more to respond to any sudden changes (and if we don’t, we should increase our following distance – this is why tailgating is bad). And that’s plenty of time for our visuomotor systems. It’s almost certainly plenty of time for the visuomotor system of a spider or whatever to get out of the way of a looming stimulus, but in those cases (I assume) the stimulus-response channel is completely hardwired, so there is only a very specific class of stimuli it will respond to, in a very predictable way every time. There are spiders like the Portia that use relatively high-resolution visual information to plan complex manoeuvres, but this takes them minutes to process, not milliseconds. So yeah, unless someone who knows more about animal behaviour and visual systems (which wouldn’t be that hard to find) says otherwise, I’m gonna stick with the mustang or jaguar being the only ones that are actually in contention.

          • woah77 says:

            @Enkidum I cheated and used google to find one. Or rather the second one, since I knew that the falcon was a name of a car.

          • AuralAlias says:

            Or a Road Runner driving a Reliant Robin?

            Or my favorite, an Eagle(AMC Eagle/Eagle Talon) driving a Tercel. Male falcons are called tercels/tiercels.

    • AG says:

      Loser: a pill Bug in a Reliant Robin

      rollin’, rollin’, rollin’

    • Watchman says:

      All the earlier answers are missing one key piece of data. Watching the car chase scene in Bullitt has demonstrated that the Volkswagen Beetle is undoubtedly the fastest urban car available. However fast the participants in the chase go, the same green Beetle keeps getting to places before them (and it's clearly not racing, as it lets the obviously-in-a-rush cars past repeatedly).

      So, equipped with the driver probably best suited to understanding racing, albeit one probably better at stalking and catching a victim than at being a frontrunner, the Beetle is clearly the best car and should win.

    • raj says:

      The type of car seems mostly irrelevant, because the challenge lies in training an animal to actually be able to drive a car in any kind of satisfactory way. I don't think you could actually do it under normal circumstances; certainly anything but the mustang and jaguar is right out.

      Of course, you could kind of cheat and specify that the controls directly interface with the animal's brain in a way that maps 1:1 with its normal locomotion. Then in terms of car it has gotta be a Stingray or Spider (the weight of the Mustang being significant).

      • Enkidum says:

        Even with the brain interface (they’ve actually kind of done this with cockroaches and little robot vehicles, I think), I’d still rule out anything but the mustang and jaguar, along with @acymetric above.

        • brianmcbee says:

          Yeah, it has to be one of the mammals. I’m concerned about the horse, though. They don’t have binocular vision do they? How much of a handicap would that be?

  46. Bobobob says:

    First time I've tried to start a topic in an open thread here, so be gentle. Non-CW topic.

    I've been imagining what the musical landscape would look like today if Mozart and Schubert had each lived another 50 years, well into their 80s. My guess is that Mozart would have gone on producing more masterpieces in his accustomed vein (masses, symphonies, concertos), but would have remained “stuck” (if you can call it that) in classical mode.

    But Schubert…if Schubert had lived another 50 years, I think music today would be *completely* different, in all genres. Who knows what he was on the verge of producing, given his last two piano sonatas and the unfinished symphony (the short movements in those piano sonatas point to a kind of proto-jazz, if you listen just the right way). Schubert dying at 31 was like Bernhard Riemann dying at 40, with consequences we can’t begin to fathom. Thoughts?

    • Nancy Lebovitz says:

      I don’t think it’s imaginable. It’s like trying to imagine what would have happened if the Holocaust hadn’t ended that cohort of Hungarian geniuses.

      These days, you can have a computer program which can write what sounds like Bach on an off-day, but that's not like being able to conceive by any method what Bach would have written if he'd had another decade.

    • eyeballfrog says:

      Speaking of such things, I’ve always wondered what the symphony that Scott Joplin was working on when he died would have sounded like. The guy made a lot of effort to go beyond the piano tunes he was best known for, but so much of that is lost.

    • AG says:

      I disagree that Mozart would have been “stuck.” He was pushing new instruments and for the inclusion of more “vulgar” elements like ballet in opera, so I think he would have revolutionized music and blurred the lines between high and low art much sooner. Going big and Romantic seems like something that he would embrace, and just imagine how it would have been if Beethoven had gotten to be his pupil after all.

    • Evan Þ says:

      I read an alternate history novel that postulated that Mozart would’ve moved out of music, become a general in the Austrian Army, and helped suppress the French Revolution.

      In other words, like Nancy says, we have utterly no idea what he would’ve done.

    • indianbadger1 says:

      Re: Mozart, the story I have heard is that Mozart wanted, I mean really wanted, to write more operas. But operas, unlike concertos and symphonies etc., require a commission from an opera house, and he was looking outside Vienna for that. Prague loved him and commissioned some of his last operas. As an opera fan, I regret all the operas that Mozart was unable to write more than his instrumental stuff.

    • Nicholas Weininger says:

      Richard Taruskin’s article on the roots of Stravinsky’s tonality discusses Schubert’s work– in particular the Mass in E Flat– as evidence that, though he doesn’t put it this way, if Schubert had lived another fifty years he might have produced something in a Stravinskian direction.

      https://www.jstor.org/stable/831550

      My bet is that “really late” Schubert would have been more like Messiaen; there's a dreaminess to Messiaen's stuff that Schubert seems to have been reaching toward as well.

  47. Is it possible to become resistant to pain through training? The sci-fi concept of wireheading is a hedonist's dream, but I find it repulsive and scary; I don't want to become an addict. What I would like instead is if there were a device that could cause pain through an interface with the brain, so I could A: slowly dial up the level of pain, and B: get used to a given level of pain without any physical damage. The best way to cause pain without damage currently is electricity, which is why it's sometimes used for torture, but it's not perfect and I'd assume repeated electrical shocks, even if weak, would still cause nerve damage over time. To create a true super spy, we need a device for training pain resistance that hacks directly into the brain and bypasses physical trauma.

    I want to become totally resistant to pain so that I never reveal the location of the rebel base during WW3.

    • Vitor says:

      I recall reading somewhere that strong pain actually causes you to become more sensitive to the same kind of pain in the future. Your brain interprets this pain as an important signal and therefore strengthens the neural pathways involved over time.

      In my experience, doctors are very aggressive when treating severe pain (i.e. treating pain as a serious condition that needs to be immediately addressed, beyond just concerns for the comfort of the patient). The explanation I’ve been given about this is that it gets increasingly difficult to reduce pain with any amount of medication once it has been allowed to be at a very high level for a prolonged period of time.

      So, it’s not clear to me at all that the kind of exposure training you’re suggesting is the way to go to increase pain tolerance. If you do find a recipe that works, I’d love to hear about it though.

    • a real dog says:

      If you ever accidentally electrocute yourself you’ll notice that electricity causes extreme muscle spasms, which causes far-reaching damage and can even kill you via rhabdomyolysis. So that might not be a recommended approach.

      You could also skip the entire process and acquire whatever weird gene this lady has: https://www.bbc.com/news/uk-scotland-highlands-islands-47719718

    • Jon S says:

      Intense exercise, done in a regimented manner, certainly can build up a related version of mental toughness. I’m not sure how well the skill would transfer to enduring other types of pain.

    • Joseph Greenwood says:

      Not disagreeing with other posters, but in the short term if I experience extreme pain and then milder pain that would still normally bother me, I mostly shrug and move on because it feels not-that-bad by comparison.

    • dyfed says:

      Exercise is probably your best bet for learning to endure pure pain. Electricity is right out.

      But the reason you talk under torture is not pain, it’s fear of permanent mutilation. (Unlike the movies, villains do not take pains to make sure the star’s good looks and physical capacity aren’t ruined.)

      If you want to withstand torture, the important thing is to lie convincingly, and with enough verifiable truth mixed in that they believe they’ve recovered information.

      Under prolonged torture, you will give up everything you know, everything you believe, and many lies you make up as well. Information gained through torture is not reliable, so sophisticated opponents will rarely use it, and unsophisticated opponents might be fooled by deception.

      The important thing here is to make them believe that you have reached your breaking point and have yielded all useful information as early as possible, before you are seriously maimed. So it’s much more important to be a good actor than some sort of glutton for punishment. That said, the courage and nerve to act well under pressure are partly a product of physical fitness, so exercise is still important.

      • But the reason you talk under torture is not pain, it’s fear of permanent mutilation.

        Are you sure? It seems like being prepared to die isn’t that uncommon a level of bravery. If you’re prepared to die then surely how you look or function matters little, whereas pain requires you to be able to withstand a very primitive mechanism that grasps ahold of your logical circuits so that it can confabulate reasons for you to give in.

        Information gained through torture is not reliable

        It’s true that false information has been extracted through torture in many highly publicized cases (recently in relation to the Iraq War), but it seems like militaries the world over swear by torture and have to be restrained by human rights tribunals in order to soften the torture to the point where it’s only waterboarding. It makes me wonder whether the cases where false information was gathered actually outnumber the cases where true information was gathered.

        • albatross11 says:

          I’m pretty skeptical about most speculation about the effectiveness or proper strategy to resist torture, since the people speculating virtually never actually have any data or personal experience.

        • woah77 says:

          It seems likely to me, and I’m not an expert, that it’s not that there isn’t a lot of really bad information gained, but that the goal of torture is to seek confirmation of information the torturers suspect and to demonstrate a willingness to “do whatever it takes” which signals to your opponent you will be ruthless when you finally find them. Of course, this takes the torturer not asking leading questions, which sounds like it takes an experienced person doing it. Because we regard it as “inhumane” we’ve reduced the pool of experienced people performing the questioning, thereby increasing the bad information gotten.

        • dyfed says:

          >If you’re prepared to die then surely how you look or function matters little

          No. Death is much sweeter than mutilation and degradation. People who are seriously tortured often commit suicide to escape. There are qualities of life such that all but extreme outliers prefer death.

          >Militaries over the world swear by torture

          Not as an information gathering tool. There is a terrific gulf between behavior that is common and behavior that is effective. Competent intelligence agencies have reams of historical data proving that, under real torture, most if not all information yielded is factitious. It will not surprise you to learn that not everything that even advanced Western militaries do is competent, necessary, or useful from an intelligence perspective.

          Torture as psychological warfare tool or torture as morale device is outside the scope of this discussion.

          • Douglas Knight says:

            Competent intelligence agencies have reams of historical data

            Do they? How do you know? Why do you trust information about intelligence agencies?

            Which intelligence agencies do you think are competent?

        • Randy M says:

          Are you sure? It seems like being prepared to die isn’t that uncommon a level of bravery

          Prince Humperdink would like a word with you.

      • edmundgennings says:

        I imagine the checkability of information is crucial. “Give us the codes to the laptop right here” is not something the person is going to be able to lie about. Ask what is the contingency plan for these particular circumstances, and the torturee is going to sing an account that may have nothing to do with reality.

        • Edward Scizorhands says:

          > Give us the codes to the laptop right here

          > what is the contingency plan for these particular circumstances

          True. Those are different and I think those are conflated on purpose.

          If you want an anti-torture policy because torture is amoral, then good for you. I think it’s amoral, too. Tell me where to sign on. But it’s obviously useful, and insisting that it isn’t useful is just setting yourself up for torture to be allowed if/when people decide that it works.

          Also, drug addiction. Give someone a meth addiction and they’ll sing for you.

          • fion says:

            Sorry to be that guy, but do you mean “amoral”? It sounds like “immoral” is closer to what you mean.

      • John Schilling says:

        But the reason you talk under torture is not pain, it’s fear of permanent mutilation. (Unlike the movies, villains do not take pains to make sure the star’s good looks and physical capacity aren’t ruined.)

        There’s certainly some torture that matches that model. But there’s also e.g. waterboarding, which pretty clearly isn’t going to mutilate anyone but is reportedly very very effective. And there seems to be a disturbingly large arsenal of “enhanced interrogation techniques” specifically designed to not leave any physical evidence of torture. So I am left to conclude that A: an awful lot of torture is in fact based on pain or something pain-adjacent, not fear of mutilation and B: it works, at least for skilled torturers and C: it’s very difficult to train people to resist it.

        • rlms says:

          You missed that it’s the villains, which obviously Americans are inherently unable to be.

          • Nornagest says:

            It’s not just the American military, though. The infamous Coca-Cola trick (which might be apocryphal) is usually linked with the Mexican federales.

    • psmith says:

      I don't think it really works like that (pain is merely a special case of leverage), but given your stated goals you may find some interesting material at painscience.org and in the bibliographies of John Sarno and Howard Schubiner (whose book our host reviewed). And a lot of that in turn ties into the predictive processing literature. tl;dr: no brain, no pain.

    • Paper Rat says:

      In my experience, some kinds of pain are way easier to get used to than others. It seems that resistance to cuts, bruises, mild burns and other tissue damage can be built semi-reliably through training/exercise.

      The other stuff has to do with internal organs, teeth, electric shocks and such, and it doesn’t seem to build any resistance at all. I also seriously doubt anyone would willingly train for this.

      As for the torture proof spy, achieving that level of fearlessness and pain tolerance would also likely make the spy prone to questionable decision making in situations that don’t involve torture (hopefully a majority of situations you want your spy to deal with), which might compromise the main objective. So in the end a good old cyanide pill or some kind of brain frying switch might be a more prudent solution.

  48. deciusbrutus says:

    Here’s an idea that’s technologically more complex:

    Democratize the censoring process. For specific users that would have bans-lite, add a button next to ‘report’ that logged-in users see that says ‘remove’ and one that says ‘keep’. Remove the comment if the ‘remove’ votes exceed a specified number and the ratio of remove:keep votes exceeds a specified ratio (in the most restrictive case, “if there are more than zero votes and the ratio of remove:keep exceeds 0” is the same as ‘anyone can cause this comment to be removed’).

    Absolute number is important because acting purely on ratio would give infinite weight if the first person happens to want to delete; ratio is important because that’s closer to what we actually want, easy to measure, and hard to cheat.

    Don’t reveal the score, unless you want discussion of the score to be important.
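
    A minimal sketch of that threshold-plus-ratio rule in Python, just for concreteness; the function name and the particular cutoff values are made up for illustration, not anything Scott has proposed:

        def should_remove(remove_votes, keep_votes, min_removes=5, min_ratio=3.0):
            """Hide a comment once enough people have voted to remove it AND
            removers outnumber keepers by a wide enough margin. Both cutoffs
            are illustrative placeholders."""
            if remove_votes < min_removes:
                return False  # absolute-number check: one angry reader isn't enough
            # ratio check, written as a multiplication so keep_votes == 0 is safe
            return remove_votes > min_ratio * keep_votes

        # e.g. should_remove(6, 1) -> True, should_remove(6, 4) -> False,
        #      should_remove(2, 0) -> False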

    • Enkidum says:

      I, for one, am extremely opposed to this. This is not a democracy, and shouldn’t be one. We’re here at Scott’s pleasure, and if he decides to burn the place to the ground on a whim, that is his prerogative.

      EDIT: Just to be clear, I am not advocating him burning the place to the ground, nor do I think the proposal amounts to that. I do think there should still be the possibility of discussion over bans, but I don’t think there should be any assumption that said discussion would have an impact.

      • Matthias says:

        Yes, but any self respecting evil dictator also has some henchmen. And pretends to indulge in democracy when it suits him.

    • Aapje says:

      That greatly incentivizes brigading.

    • silver_swift says:

      Those ‘keep’ and ‘remove’ buttons would immediately become ‘agree’ and ‘disagree’ buttons.

    • JPNunez says:

      Democratizing the process will end in one side winning, maybe by underhanded tactics. In the end, Scott may or may not end up happy with the result, and if he isn't, he will have to go back to taking matters into his own hands. Let's skip to the end and let him take matters into his own hands.

      If people want reddit-like systems, there’s a CW forum that split off the old CW thread, at reddit.

    • DinoNerd says:

      Every place I've ever seen that has a kind of “dislike” button that leads to posts disappearing automatically has had a problem with self-appointed censors policing the attitudes expressed. Usually it's just fan children who object to anything that might possibly be taken as disagreeing with any aspect of the site owner's behaviour. (An example here would be suggesting improvements to the order in which posts appear.) Certainly disagreeing with Scott on something more substantive would be right out. Less commonly, it's CW vigilantism.

      The set of posters we have here might not do that. And I haven’t seen this except in combination with some kind of “like” button, visible scores, and people motivated to increase their positive scores. But I’d still be wary.

      • Aapje says:

        The set of posters we have here might not do that.

        Outsiders may use it to attack certain people/ideas.

    • eqdw says:

      Upvotes. You’ve invented upvotes.

  49. fion says:

    Typo: “This is the twice-weekly hidden open thread” -> “This is the fortnightly visible open thread”

  50. johan_larson says:

    The recent college admissions scandal and the continual drumbeat of stories about the extreme lengths students and parents go to to secure admission to a small number of top colleges has me wondering if there isn’t a better way to do all this. Right now, prestigious colleges are known for being hard to get into. But they are not known for being hard to get through. I expect the instruction offered by a college such as Yale is of high quality, but it has no particular reputation for particularly high standards or difficulty.

    What if we turned that around? Could one run a prestigious institution that admitted pretty much anyone, but set the bar high academically, with a demanding curriculum and uncompromising grading standards? I would expect a large freshman class each year composed of bright-eyed young men and women, and a much smaller graduating class of stainless steel motherfuckers who made it through the entire course of study successfully. (How much smaller? 1 in 4 candidates admitted to US Navy SEAL training makes it through the course, though that’s after passing pretty demanding preliminary qualifications. If you let anyone try, might 1 in 10 make it through? 1 in 20?)

    The great advantage of doing things this way is that there would be no point in trying to impress anyone up front. Looking impressive after high school really wouldn’t get you anything, if you didn’t have what it took to make it through the actual studies. Trying to cheat your way through would probably also be quite difficult since by senior year or so the remaining students would be quite a small group, known personally to the rest of the class and the professors. If someone else sat an exam for you, it would be obvious.

    Is there some reason this wouldn’t work? Or might there be some great downside I am missing?

    • pozorvlak says:

      A friend who studied there tells me that ETH Zurich (11th in the 2019 THES ranking, 7th in the QS ranking) works this way, with something like a 50% dropout rate after the first year alone. However, I think going to a tough university that you might drop out of is much lower-risk in Switzerland than it would be in the US – fees at ETH are about 1600 EUR/year, and I think there are also maintenance grants for Swiss citizens.

      Another effect I’ve heard about: academic staff have little willingness or time to help struggling students, because most of them are going to drop out anyway, and there’s an “if you can’t stand the heat, get out of the kitchen” attitude. I think this is a downside: probably lots of them could overcome their difficulties and get through the whole course with a bit more help. But I guess it depends what you’re selecting for – the ability to survive an uncaring environment unaided is a useful thing, but it’s not the same as intellect or scientific potential.

      • Vitor says:

        I studied at ETH, and this information is correct. I think it’s a good system, given that the cost of attending one year and failing is reasonable.

        The main reason for this system being in place is that every Swiss student who possesses a matura (the highest-tier high-school diploma; we have two other tiers) has a right to be immediately admitted to any undergraduate program at any Swiss university (with some exceptions for studies that have hard capacity constraints, like medicine).

        Also, for the record, professors and TAs have lots of time and willingness to help students along, and it's mostly students systematically failing to take advantage of office hours, etc. The whole system is just set up in a manner that requires a lot of personal responsibility, in the sense of “here's a list of requirements to pass, do whatever you want with this information.”

        • pozorvlak says:

          Also, for the record, professors and TAs have lots of time and willingness to help students along, and it’s mostly students systematically failing to take advantage of office hours, etc.

          Oh, excellent – I’m delighted to be corrected on that point 🙂

        • ana53294 says:

          In my University, many professors had a rule that while you were very welcome to attend office hours, you were not welcome to office hours the week before an exam, if that was the first time you used office hours.

          The last few weeks, office hours tended to be done in groups, because of the huge deluge of students coming at the last moment.

          So the real advantages of office hours (one-to-one tutoring, personalised explanations) can only be enjoyed by students who started studying at the beginning of the semester, not in the last few weeks.

          I've never met a student who managed to get into university and started studying when the semester started who didn't manage to pass all the courses eventually.

      • Douglas Knight says:

        The typical US state school engineering program is pretty similar. But the students drop out to the rest of the university, so the number is not publicized so much.

    • Murphy says:

      I'd point to the Irish CAO system.

      It’s not perfect but it means that if a college’s course has 100 places and 300 people want to get in, the places go to the 100 people who scored best in the leaving cert exams.

      The washout rate in my course was about 40-50% per year. We started with ~160; 78 made it to second year, and about 50 made it to third. Only about 30 of my original classmates graduated with me. (There weren't many washouts between 3rd and 4th, to be fair.)

      Though I have some issues with it: the disability support service basically dragged one student unaffectionately known as “brick” through despite him not particularly understanding the material or doing the work.

      But for most students they mostly weren’t afraid to let people fail. Personally I didn’t find the course too hard, but I have a skewed view because I took to the subject like a duck to water.

      We didn't have the weird “extracurricular” bullshit where US students have to pretend they spend their summers teaching orphans to fly or something. If you get a perfect score you can pretty much go where you like; in theory a course might get overwhelmed by people with perfect scores applying… but last year, out of 55K exam takers, only 0.2% got a perfect score, and if all 157 of them applied to the same course I imagine the uni would somehow make extra places or something to keep all the top 157 people.

      I think it's partly because how uni is funded in Ireland changes the risk profile. The state covers the cost of one degree (and has lots of negotiation power with the colleges), so someone can fail out of a degree without ending up financially ruined.

      I think you get a degree of risk compensation, with people being unwilling to inflict brutal punishments: failing someone who's $400K in the hole is socially harder than doing the same to someone who isn't, even if they both deserve to fail.

      Also, people are going to be risk averse: I wouldn’t want to gamble the price of a house on maybe-possibly-perhaps having a 1 in 10 chance of graduating.

      So it kinda makes sense that some of the most expensive unis fail people less.

      From talking to a French doctor I know: apparently some French medical schools have a battle-royale model, with almost no selection on who gets into first year, but then only about the top-scoring 1/10th get to move on to second. This creates a brutally competitive environment, because helping someone might push them above you. He had some stories about people getting locked in toilets during the exams and similar (because one way to increase your chances is to eliminate someone who's definitely in the top 10% from the running)

      • Aapje says:

        He had some stories about people getting locked in toilets during the exams and similar (because one way to increase your chances is to eliminate someone who’s definitely in the top 10% from the running)

        I’d expect a biological attack (releasing a virus or such), given that these are medical schools.

        • Paul Brinkley says:

          I would have expected being locked in the toilet of people who live on Irish food to have qualified as a biological attack, but apparently not.

      • ana53294 says:

        Medicine is very competitive in Spain also, although you just need to pass exams. However, grades affect the specialization you will be able to get, as well as getting into the desired hospitals. So the medicine part may be to blame more than the university.

      The worst/weirdest thing I've heard about medical students in Spain is that they write their notes in green ink, because that way they can't be photocopied.

        • Murphy says:

          can’t be photocopied.

          I remember the old code books that used to come with video games back before networked DRM with black-on-black writing to try to make them hard to photocopy.

          Of course modern scanners have no issue even with black on black.

          • ana53294 says:

            They can be color photocopied, but that's very expensive. Black-and-white photocopying converts the green ink into greyscale, and the resulting grey is very faint.

            You can do it, though: you first have to scan the green-ink text, then convert it into greyscale and boost the contrast so the text is legible. That is still a lot of work, and cannot be done on a photocopying machine.
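
            For concreteness, here's a minimal sketch of that scan-and-enhance workflow in Python using the Pillow library; the filename and the contrast factor are just illustrative placeholders:

                from PIL import Image, ImageEnhance, ImageOps

                # "notes_scan.png" is a hypothetical scan of a page written in green ink
                scan = Image.open("notes_scan.png")
                grey = ImageOps.grayscale(scan)                  # the green ink comes out as a faint grey
                dark = ImageEnhance.Contrast(grey).enhance(3.0)  # boost contrast so the text prints legibly
                dark.save("notes_printable.png")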

            The only things modern scanners cannot scan are euro/dollar bills, and in Spain, University diplomas (because those are printed in the Royal Mint, with all the bells and whistles).

          • Murphy says:

            Get a cheaper scanner; some of the expensive ones respect the EURion constellation and lock up

            https://upload.wikimedia.org/wikipedia/commons/thumb/a/ac/EURion.svg/1200px-EURion.svg.png

            but most won’t.

            Also, fun game: print the EURion constellation on a tshirt and wander tourist locations to get it in peoples photos.

            Anti-photocopier stuff is really out of date; most people I know now would just scan a document and keep it digital.

          • bean says:

            Also, fun game: print the EURion constellation on a tshirt and wander tourist locations to get it in peoples photos.

            I don’t think that will work. Even the wrap-around effect of a shirt will probably distort it enough to make it useless. It’s designed to counter copying, not photography.

      • Etoile says:

        Re: price of a house: the factors that make college so expensive aren’t fixed. If you’re making massive systemic changes, you can expect a massive shift in demand for college, lending behavior, value of the degree in the market…. All of which can have unpredictable and interesting – and perhaps salutary – effects on the cost of college.

    • Aapje says:

      An obvious downside is the cost incurred by those who drop out, especially late in the program. In a diploma society, where almost graduating is worth little more than never having studied, this is very bad. This is not as bad for US Navy SEAL training, because the military pays for the training.

      So ideally you want to set a very high bar very early and then make everything after that easier than the bar. A bit of quick googling suggests that US Navy SEAL training works this way as well, with most dropouts being at the beginning.

      • johan_larson says:

        So, the curve would have to look something like this:

        500 enter first year
        100 enter second year
        75 enter third year
        60 enter fourth year
        50 graduate

        I guess it would suck pretty hard to be one of the ten who entered fourth year successfully, but didn’t graduate. Being one of the 400 who washed out in first year wouldn’t be quite so bad.

        • Murphy says:

          I feel that the problem here is the washout-quotas.

          What do you do under your system if 400 people surpass the normal standard for getting into second year?

          Perhaps you have an unusually good intake one year or perhaps they pull together, cooperate and help each other surpass the standards. Doesn’t really matter how but they surpass the standards.

          There’s a difference between setting a high standard and letting people wash out if they don’t meet it vs treating it as a tournament for the aggrandizement of the institution involved.

          It sounds like you're treating the washout rate as some kind of metric of quality.

          I can set low standards to enter and then fail a thousand illiterate fools who didn't study, but that says almost nothing about the quality of my course.

          Similarly, if 500 hard-working future Nobel prize winners turn up at my door and I fail 90% of them because I'm trying to paint my course as having “high standards”, then I'm simply doing my job badly.

          • albatross11 says:

            Slightly-related anecdote: Terman did this big study of mathematically precocious youth (basically people who did really well on the SAT math section when they were quite young). He tested a bunch of kids, followed the ones who were above his cutoff, and got the kind of results you'd expect (lots of those kids went on to get PhDs in technical subjects, many ended up as professors, inventors, etc.) But no Nobel prizewinners. Two kids who took his test got science Nobels, but both just barely missed his cutoff.

          • Murphy says:

            I think Nobels run into a few statistical issues.

            They're so rare, and the number of scientists and papers so large (with a certain amount of politics and chance thrown in), that while Nobel prize winners tend to be very smart, if you simply sample a few hundred of the smartest people in the country your odds of hitting a Nobel winner are pretty low.

          • albatross11 says:

            And yet both Feynman and Shockley were in the group tested, but didn’t quite make his cutoff. Which is just another way of saying that test scores strongly correlate with brilliance, but aren’t the same thing, and a lot of other stuff goes into the kind of innovation that leads to a Nobel.

          • Douglas Knight says:

            Shockley and Alvarez, not Feynman.
            It was a systematic study of California children and Feynman was not from California. Also, I think he was too young, born in 1918, compared to 1910-1911 for Shockley and Alvarez. Also, the test was the Stanford-Binet test, not math-focused. (Terman wrote the Stanford-Binet test—that’s why it’s called Stanford—based on earlier work of Binet and Simon.) I think that the Termites were tested in elementary school, too.

            There is a separate Feynman story, although it is oddly lacking in detail and corroboration. But it claims to be a high school test, which would be more predictive.

            And you may be confusing this with the much larger SMPY study (1971–), which uses the SAT in middle school.

    • deciusbrutus says:

      You would have to scale your freshman class size way up to make admissions not very competitive. That would result in lowering the quality of the freshman education.

      Also, the freshman year becomes the admissions process, except that now you have to fail a large percentage of the freshmen by design.

      • johan_larson says:

        Well, I did take part in something like that at the University of Waterloo. The Mathematics faculty, which includes a number of departments (Pure Math, Applied Math, Statistics, and Computer Science), offers advanced versions of the first- and second-year required courses. They let anyone who is admitted try it, and you can drop down to the regular courses at any time. I took the advanced courses in the 1A semester, and we went from 200-some students to maybe 20 taking the final exam. I think 10 people showed up for the start of the next term. I decided to drop down to the regular mathematics stream at that point.

      • brad says:

        At some point don’t you end up with something that’s indistinguishable from an admissions exam and now you are back to the old system? I mean if 5000 people are going to be taking what amounts to a MOOC and then are only allowed to continue on to the “real” courses if they get a high enough score on the final exam, what kind of system is that really?

        • johan_larson says:

          Why would first year have to be somehow fake? It could teach real material. It could grade real material. Obviously if you’re expecting to flunk well over half the class there has to be something of a mismatch between what the students were expecting and what they got, but people routinely have inflated views of themselves.

          • brad says:

            Because 5000 students in a course is sufficiently different from a class with 30 students to be a difference in kind rather than degree.

        • deciusbrutus says:

          You could have the first years be distributed and delegated, and provide a path for people who don’t pass the actual admissions test to continue education afterwards.

          See also: Transfer schools.

    • brad says:

      Is there some reason this wouldn’t work?

      What do you mean by work? It’s become clear to me over the years after a lot of discussions that people have very different ideas about what goals colleges are supposed to be maximizing or even on a more basic level who should decide what goals colleges (in general or a college specifically) should be serving. Worse yet, many people seem to think their own private answers to these questions are so obviously correct that they don’t need to be stated. In the US the whole thing is further complicated by the dichotomy between public and private schools.

    • dndnrsn says:

      Who you know is just as important as what you know. More time spent surviving the gruelling sink-or-swim university means less time making friends who can get you a job down the road. This sort of model would work better for some subjects than others. Comparing it to SEALs runs into the problem that once someone gets through SEAL training, they are presumably a SEAL, employed by the same government that trains them. If someone goes to Sink or Swim U and gets a history degree – is the university going to be employing them?

      • Murphy says:

        I think the idea is an institution trying to have a “brand” for high quality graduates.

        I know some utter incompetents with the same degree I have, if I walk into a company that hired one of them in the past, my degree is going to have all the value of used toilet paper in their eyes.

        So some people would like the opposite: a hard to get degree that’s a real sign of being good at the subject.

        So let's imagine your company hired 3 Yale graduates who all share great stories about the wild parties and weird ceremonies involving a pig… but unfortunately none of them are actually very good at the things they're supposed to be experts on.

        So you hire someone from quality-brand-university, and they massively overperform in their field of expertise but lack any blackmail videos from porcine-related ceremonies with the other great and good.

        If the job in question doesn't actually involve much skill, or could actually be done by low-skill individuals, or the skills involved have primarily nothing to do with the course you ask for in the job ad, then the Yale grads may be the ones you're really looking for, and the requirements are just there to save you time looking through a big pile of CVs.

        I'd argue that in many companies this is the common case: many positions demand grossly overqualified individuals.

        But if a lot of money actually hinges on the person in question knowing their stuff inside and out, if it actually matters: you probably want to hire the guy from quality-brand-university.

    • theodidactus says:

      Some law schools work this way…it’s not a great setup, for precisely the reasons people articulated below: imagine throwing 3 years of your life into this super-competitive melee (where everyone is encouraged to dial it up to 100%, sacrifice everything to moloch, etc, just so THEY don’t drop out) just to drop out in the last year.

      I have no doubt the resultant class would be excellent at whatever you were training them at, but the losers are just utterly wrecked: you’ve just sucked many years of a student’s life away for nothing, possibly less than nothing. A college dropout might be a worse signal than simply never having gone.

      …and frankly, your institution wouldn't fare well. People would prefer to go to the magical happyland where everyone gets A's. As long as those “elite” institutions exist (a lot of ultra-elite law schools are pass/fail: https://lawschooli.com/law-schools-passfail/), the monster students who can get in anywhere will prefer to go there. You seem aware of the fact that the system currently has several big players that provide immense social rewards for merely getting in… unless you have some solution for stopping that, the best will go there, and your cutthroat academy will be left with the average and the worst.

    • rlms says:

      but it has no particular reputation for particularly high standards or difficulty

      It doesn’t? (Genuine question, I’m unfamiliar with the US system). If that’s true, why do people not just use acceptance letters for signalling purposes rather than going through the costs of actually attending Yale?

      • theodidactus says:

        because employers select for conscientiousness, reliability, and frankly neurotypicality, and they think that actually graduating signals that, and that merely waving an acceptance letter around signals that you don’t understand the process.

        Like dating, a lot of the admissions/job hunting game is a complex series of interactions where various unspoken criteria matter, but you have to act like they DON’T matter. Consider the signal that posting a SAT/GRE/LSAT score on a resume sends.

      • johan_larson says:

        It doesn’t? (Genuine question, I’m unfamiliar with the US system).

        It doesn’t. Generally speaking the top US colleges are known to be very difficult to get into, but once you’re in, you’ll almost certainly make it through.

        Harvard is probably the most prestigious US college, and there 96.6% of students graduate within six years.

        https://www.collegefactual.com/colleges/harvard-university/academic-life/graduation-and-retention/#secGraduation

        Even MIT, which is known for being difficult, has a 91.4% six-year graduation rate.

        https://www.collegefactual.com/colleges/massachusetts-institute-of-technology/academic-life/graduation-and-retention/

        • theodidactus says:

          and virtually all the highest-ranked law schools and (I believe) med schools are pass/fail, with very high pass rates

        • rlms says:

          I might have misread you. High graduation rates are not inconsistent with high difficulty if your students are heavily selected. I wouldn’t expect Yale to be hugely more challenging to its students than Average College is to Average Student, but I’d be surprised if Average Student could get through Yale in one piece (which is how I interpreted your original comment).

          • johan_larson says:

            My hypothesis is that the academics at Generic Ivy are somewhat more demanding than at Average U, but Generic Ivy is dramatically more selective in admissions than Average U. The typical Average U student would struggle at Generic Ivy, but would make it through. The typical Generic Ivy student would make it through Average U and it would be easy but not trivial.

            But these are very much guesses on my part. I’ve never had an opportunity to compare Calc I as offered at Average U to Calc I at Generic Ivy.

          • quanta413 says:

            Some of the students are heavily selected for academic ability but a big chunk aren’t. The son or daughter of the rich and famous can pick an easy major and get through even if their ability is far from stellar.

          • pozorvlak says:

            @johan_larson: I can’t answer your question directly, but I did my undergrad mathematics degree at a university in the top 5 of the 2019 THES world rankings, and TAed calculus courses at one that’s near the bottom of the top 100. The material covered in corresponding courses was very similar, but the top-5 school taught it in a more demanding way: problem sheets usually skipped the straightforward check-you’ve-understood-the-definitions questions and went straight to the “can you use this material in novel ways to solve unfamiliar problems?” questions. The best students at the top-100 university would have done fine in the top-5 course, but I think the less able students wouldn’t have made it through.

        • bean says:

          I don't think this is at all hard to explain. The people who get into those schools are the ones who never got a B in high school and spent their summers teaching orphans to play classical didgeridoo. The vast majority of those people are going to pass any quasi-reasonable course of study, so unless the school is trying as a matter of deliberate policy to fail more, you're going to see 90%+ graduation rates. Less-selective schools are the ones with students who are genuinely borderline in ability, so I'm not surprised that the elite schools have graduation rates higher than the freshman retention rates at most schools.

          • ProbablyMatt says:

            I think you also have to account for the sizable minority of students who didn’t get in (solely) on academic skills or academic-adjacent skills. There are a few legacy admits, recruited athletes, and so forth who are generally still good students but certainly have gotten B’s before.

            There are two ways I think the university addresses this. The first is that you can tone down your academic workload by carefully choosing courses. The second is that instead of failing students they just give them C’s even if their scores are significantly lower than the rest of the class. So the university does have to make some room in order to avoid failing anybody but the top students can still challenge themselves.

            Employers who recruit heavily at ivy league schools know this too so they don’t just hire anyone who graduates.

          • bean says:

            Schools have been giving underqualified students degrees for decades by having a subset of courses/majors which are significantly easier than normal. This is most visible in the form of the majors the big athletic schools have for their star athletes (I think the University of Missouri has Breathing Studies), but I'm sure that the Ivies have something similar. And they're reasonably selective with their legacies, too. It looks like being a Harvard legacy only gives you a ~40% chance of admission, and given the known heritability of intelligence and the like, I'd expect that the top 40% of Harvard legacy students are capable enough to avoid flunking out, even if they tend to be concentrated in the easier majors and get mostly Cs.

        • JPNunez says:

          So what’s the deal here? Is Harvard too easy? Or somehow the selection is very good, only accepting people who are very likely to graduate?

          I get the argument that selection is demanding, but they seem to still be taking in legacy students in droves. Edit: a quick Google says that legacy is just a plus, but that they get so many candidates that it makes little difference. The selection explanation seems enough.

          • Randy M says:

            Harvard would probably also suggest the possibility that their teaching is so excellent that few fail to gain the minimum necessary knowledge. Probably some of each is true, in a ratio that fluctuates slightly but is smoothed over by corruption.

          • JPNunez says:

            @RandyM

            Have they ever said something even close to that? I am curious.

          • Randy M says:

            Seems I was wrong, they chalk it up to screening:

            The College’s graduation rate is normally 98 percent, among the highest at American colleges and universities. Everyone admitted to Harvard has the ability to complete all academic requirements successfully.

            Although perhaps the meaning changes depending on whether you emphasize “admitted” or “Harvard”.

          • JPNunez says:

            Now I am wary of the selection explanation if Harvard themselves are promoting it.

          • aashiq says:

            There isn’t one level of difficulty across the whole school. From my experience, athletes and legacy admits tend to take easier classes. For example, at Harvard there are a number of intro math classes, ranging from the quasi-remedial, to the legendary Math 55. Students who are struggling self-select into easier majors, or easier tracks within the same major.

          • JPNunez says:

            @aashiq

            Ah, that makes sense.

        • Besides what other people said, I’m sure there is the effect that dropping out of Harvard has higher costs. Once they get in, their marginal students have a much higher motivation to stick with it than those at a state school.

      • StableTrace says:

        People in this conversation should know that you can actually check difficulty standards yourself since a lot of university courses post their exams and homeworks online. For example, here are some of Harvard’s multivariable calculus homeworks. At a glance, the problems do seem tricky compared to what you might have to do at other universities–see for example problem 5 on hw 1 or problem 2 on hw 18.

        • Chalid says:

          Just noting that that’s the easiest version of the course. Math majors (and others interested) sort into various harder tracks.

    • bean says:

      I expect the instruction offered by a college such as Yale is of high quality, but it has no particular reputation for particularly high standards or difficulty.

      This is simple enough to explain, given how hard it is to get in. If Yale is 10% more rigorous than average, but the people who get in are 20% better, then the people who go there aren’t going to come away reporting more difficulty than an average student has at an average college.

      My school sort of did what you're suggesting, although in a less extreme format. They keep winning “best value college” awards and the like, and as a result, freshman enrollment is skyrocketing. Instead of trying to filter people out before they show up, the administration has decided to do it through Chem I and the various Calc classes. This works, but it does mean that, for instance, student housing is always packed for the first few months of fall semester, until enough people drop out to relieve the problem. My roommates in junior and senior year both dropped out at Christmas (due to problems with math, not me, I think) and I ended up with a free single room both times. But that would never have happened if they'd left in September. The local hotels might not have even been cleared by then.

      I think you’re going to see a lot of weird problems from this kind of extreme wash-out regime. At any given time, most of the people around will be doomed freshmen, and that population is going to fluctuate a lot. I’d be surprised if everyone didn’t get a room to themselves in the spring because of attrition, which is going to do terrible things to your housing budgets. Likewise, campus culture isn’t going to be very stable.

      For that matter, don’t a lot of majors (not colleges) basically do this with a weed-out course or two? I know that in my Intro to Aerospace class, there were basically two populations: those who were going to pass fairly easily on their way to an AE degree, and those who were doing it for the third time, hoping to avoid their destiny as Engineering Management majors.

      • rlms says:

        If the content per semester was the same but delivered in two fewer weeks, that indicates higher difficulty.

      • Edward Scizorhands says:

        He said they covered less material.

        Calculus I really will be the same everywhere. I remember hearing someone laugh that, on a campus tour of MIT, they were covering the same things in the Calc class he dropped in on as he was studying in his (college-level) high school, and this was proof that MIT was a joke.

        But why should it be any other way? Even if the students were on average 10% better, you still have a lot of students that are only 5% better and then students who only shine in a place that isn’t Calculus or Organic Chem.

        I can probably be convinced that all the elite schools do better is skim out, pre-admission, the people who can’t hack it. But an intro class isn’t the way to do it.

        • Nick says:

          At my university, the honors calc classes were made more difficult not by covering material faster but by covering proofs rather than worked problems during lecture.

        • Edward Scizorhands says:

          @Nick: same here, and I forgot I was in one of those classes.

          The point of the basic Calculus I class is to teach the calculus you will need for all the other classes. Even if your students were 10% smarter, there is no need to make it 10% harder or 10% faster. They’ll do the homework in less non-classroom-time, and have more time for other things.

      • Douglas Knight says:

        That anecdote would be more convincing if you’d taught orgo at two schools. Your perspective as a student is very different.

        Similarly, Edward’s friend learns very little from dropping in on calculus. Virtually everyone who goes to MIT has had HS calculus with nominally the same syllabus, but 2/3 of them are asked to retake it (though 1/2 of those retake it at 2x normal speed, which was already 2x the speed of almost all elite schools, and has been since before entering students had calculus). The purpose of calculus is to provide tools for later classes, but later classes at MIT will demand a lot more from calculus than at other schools.

      • Tarpitz says:

        I can’t speak to the US experience, but when I was at Oxford I was expected to write 8 philosophy essays and 4 French literature essays, and translate 8 passages from English to French and 4 from French to English, in each 8 week term. A close school friend at a highly-ranked non-Oxbridge university did three essays in each of his much longer terms. I got around 4-5 hours of tutorial time (one-on-one or in a very small group) every week. He got almost none. And the vast majority of the teaching was outstanding. I really believe that Oxbridge is both much harder and much better at transmitting subject knowledge than other British universities, at least at undergraduate level. And that’s to say nothing of the extracurricular and networking opportunities.

        • Gobbobobble says:

          I got around 4-5 hours of tutorial time (one-on-one or in a very small group) every week. He got almost none.

          In the US it is largely upon the students to avail themselves of professors’ office hours and to seek out other tutoring resources as necessary (though obviously there will be some differentiation between schools). Would you mind contextualizing a bit how it works across the pond? Like does Oxbridge make a point of pushing everyone to go to 1:1 lessons or do they just attract the students most motivated to do so? Do you mean office hours aren’t really a thing at the non-Oxbridge schools?

        • AlphaGamma says:

          @Gobbobobble: One of the main methods of teaching at Oxbridge is the small-group class (referred to as a tutorial at Oxford or a supervision at Cambridge). Typically this is 2-3 undergraduates being taught by one supervisor- sometimes an academic, more often a graduate student or postdoc.

          These are compulsory, all students go to them- each course has a certain number of supervisions attached, and they are arranged at the start of the course. Attendance at lectures, by contrast, is optional.

        • Tarpitz says:

          My tutorials were overwhelmingly taught by academics, not grad students or post-docs. I didn’t get the impression that was atypical, but perhaps that was a Merton thing rather than an Oxford thing.

        • rlms says:

          Might be a humanities thing.

        • Gobbobobble says:

          Oh wow, that is quite different. Thanks!

      • Garrett says:

        Side question, but what is supposed to make o-chem so difficult? I’ve now graduated and am considering going back and taking it for personal entertainment. I already have an engineering degree and have done the common engineering chemistry courses.

        • Edward Scizorhands says:

          I struggled with my organic chemistry class until something clicked in my head about just what they were supposed to be teaching and just what I was supposed to be learning. They didn’t spell it out but, like integration, there are a series of tricks and you need to learn to pattern recognize them. I went from a C-student to an A-student in the course of a week.

          Also, I think a large part of it was stress and anxiety.

    • savebandit says:

      The difficulty lies in getting people to pay for that kind of a setup. Right now a lot of students see the university as knowledge transfer, where they’ve done what they need to by getting accepted, coming to class, doing homework, etc. The college has vetted them in the admissions process as being able to do college-level work, so now it is the job of the college to transfer knowledge and earn the tuition they asked for.

      I think most people pay lip service to the idea that you should be thrown out for not meeting standards, but practically they would view their own failure as the fault of the college. The college charged an exorbitant amount of money to take them, and they pre-vetted them with ACTs, SATs, high-school grades, etc. So now it is on the college to live up to their end.

      To give an analogy, the current model is like an exercise class that you buy a pass for and go take for an hour for a couple months – providing whatever level of effort you feel comfortable with. Your proposed model is like hiring a personal trainer who can fire you for not doing the prep work they give you one too many times. Using the exercise class pricing model for that would result in lots of hurt feelings from fired clients, regardless of how they feel about personal responsibility when asked.

    • Nancy Lebovitz says:

      I look at the amount of money there is in running a high-prestige university and the lack of new high-prestige universities, and I conclude that it’s not feasible to start one.

      After all, you’re asking promising students to bet that this new university will turn out to be high-prestige, or even continue to exist.

      Or are there new high-prestige universities I haven’t heard of? What’s the situation in China and India?

      Existing high prestige universities could expand by adding new campuses (Harvard in California) but that doesn’t seem to be happening either.

      • Protagoras says:

        Schools can move their rankings, but it’s generally slow and expensive; the usual strategy is to spend a lot of money bringing in famous faculty (some of whom can be lured to lower status positions by sufficiently above market salaries). NYU is rather famous for having done this, and is much better regarded these days than it was half a century ago. But it’s still probably not what one would call elite, and I don’t know of a similar case where a school moved from non-elite to elite via any deliberate process in a reasonable time. A limitation of the NYU model is that they specifically targeted faculty in fields where the typical pay rates were low, in order to make the strategy affordable; this also limits how effective the strategy can be, but removing that restriction would make the cost enormous.

        • Douglas Knight says:

          NYU is famous for hiring faculty, but it seems to me that it is not very famous for what seems to me a much more dramatic thing, moving from the Bronx to Greenwich Village.

          • BBA says:

            That wasn’t an intentional effort to raise the school’s prestige. NYU was near bankruptcy and sold the Bronx campus as a last-ditch effort to stay afloat. It worked out well for them, but it’s hard to predict which neighborhoods will gentrify and become desirable and which will stay depressed. If circumstances had been different, NYU could have sold the Greenwich Village campus (then home to the graduate and professional schools) and consolidated in the Bronx instead, and then who knows how their prestige push would’ve turned out.

          • Douglas Knight says:

            Sure, but in retrospect we know which way it affected the prestige and you have to factor that in to the trajectory, even if the effect size is difficult to know.

            Anyhow, my main point had nothing to do with prestige, but was just that this seems to be unknown and I find it hard to understand how it isn’t widely known.

            ————

            Maybe I don’t understand how it segregated undergrads from grad students. Did professors visit both campuses or were professors assigned to only undergrad and graduate teaching?
            (Isn’t there a related fight going on at Harvard, where it has transferred professional schools to Boston, and now it wants to move biology labs, but it’s difficult because researchers talk to undergrads?)

          • BBA says:

            this seems to be unknown and I find it hard to understand how it isn’t widely known.

            Partly it’s that the Manhattan campus is the original location and NYU has branded itself on being in Washington Square for almost 200 years, and partly it’s that NYU undergrad was an obscure commuter college in the Bronx and only achieved its current prominence after it returned to Manhattan. The medical and law schools were historically well-regarded, but they’ve always been in Manhattan.

            Maybe I don’t understand how it segregated undergrads from grad students. Did professors visit both campuses or were professors assigned to only undergrad and graduate teaching?

            I’m not sure how it worked either. It might have been that the Ph.D. students were in the Bronx; there just weren’t many of them compared to, say, master’s students at the School of Education. Also there was a much smaller undergrad college in Washington Square during the Bronx years… maybe someone who was around at the time can fill us in.

            I also think there may be some level of “conservation of prestige” going on – the rapid decline of CCNY happened shortly before the rapid rise of NYU, maybe it was just filling a vacuum.

          • Douglas Knight says:

            If by “PhD students were in the Bronx” you mean that Washington Square was just professional schools, no, that wasn’t it. For a concrete example, the Courant Institute of Mathematics was in Washington Square before the war. But it’s possible that the PhD students had to commute between campuses so that the professors didn’t have to.

            If NYU reset its undergrad school and was able to wipe out its old reputation and lever the grad and professional reputation into the undergrad reputation of what was effectively a new school, that’s yet a third strategy.

            I’m skeptical of the connection to CCNY. NYU’s rise was mainly about getting people to come from far away.

      • rlms says:

        Existing high prestige universities could expand by adding new campuses (Harvard in California) but that doesn’t seem to be happening either.

        It’s been attempted a couple of times internationally, and apparently CMU did it in California.

    • chrisminor0008 says:

      Part of the value in prestigious schools is socializing with other extremely talented individuals. If statistically, there’s not a critical mass of these uber-talented students until the fourth year, that’s not enough time to bounce ideas off each other.

    • Watchman says:

      Surely there’s a flaw in your logic here? 96% of students graduating from a university with high entrance requirements does not indicate it’s an easy university to pass. It might indicate the students are well-off enough to guarantee fees, or that the institution can provide good scholarship support, but most likely it indicates Yale et al are working as planned.

      Note that Oxford University has a near 99% graduation rate and Cambridge 97%. Neither is easy by any means, as both require intensive work at a high level. But the students recruited are those able to cope with this, and therefore thrive (I say this as someone who failed an Oxford entrance interview and is in hindsight grateful to the tutors for that decision). Less able students would not cope so well, and failure and stress would cause a higher drop-out rate. This can be seen by observing that the drop-out rates in the UK universities that take lower-ability students are often above 30%, even on objectively easier programmes, because lower grades give much less clear signalling about whether students have the academic ability to cope with a degree programme.

      If you swapped a cohort of Yale students with their counterparts from, say, the University of North Carolina at Chapel Hill or California State University Monterey Bay (to select two not awful institutions of which I am peripherally aware), keeping the same teaching, don’t you think that the graduation rate would plummet? Yale provides programmes that are easy enough to pass (not necessarily pass well) if you are the sort of student who has the academic ability to get into Yale. It’s a system not designed to eliminate able students but to educate them, and it therefore is going to aim to be passable by most of the students they can recruit. This is the same logic as all universities (other than the odd exceptions noted above) use: educate your students, only getting rid of those who can’t or won’t learn, rather than act as some form of academic death match to create a (false – academic aptitude is not necessarily a transferable skill) intellectual elite.

      You seem to assume a high graduation rate is a weakness. I’d argue it’s a university doing its job well, and that rationally it is better to have 96% of a high-achieving cohort educated at a high level than x% of a mixed-ability cohort passing an education set at an arbitrary level designed to cause people to fail.

      Nice question though. Makes you think about what universities actually do (or should do).

      • theodidactus says:

        It is really worth reiterating, as some people have pointed out earlier along in the thread, that people have wildly different ideas about what education should “do”…obviously it can “do” more than one thing, but many goals are completely at odds with one another.

        Freddie Deboer had a good piece on this a while back, iirc, but I can’t seem to find it.

        I generally won’t get into lengthy debates on education policy with people unless we can first have a conversation about what education is “for”

        • Watchman says:

          Indeed. I won’t disagree with that although I’ll put a marker down that someone needs to make a very strong case for the idea that we should be aiming to fail people in an education system, as I’ve not seen this.

          Note also that universities clearly believe education is for producing people with degrees…

      • albatross11 says:

        Watchman:

        I think this is exactly the question.

        Model #1: With the same level of effort and ability, Yale is easier to graduate from than (say) UNC. Their highly-competitive admissions guarantee that they get top students, but then they don’t require too much from them to graduate.

        Model #2: With the same level of effort and ability, Yale is harder to graduate from than (say) UNC. Their highly competitive admissions ensure that the students who get in can get through the harder classes, however.

        Model #3: This is just demonstrating the advantages of tracking. If you make sure that everyone in your class is ready to do serious college-level work, then you can be a lot more efficient in your classes–you don’t have to go back and review basic stuff, or spend a couple semesters making sure your students can write a coherent sentence. So the classes at Yale can move at the pace a college-level class should move at, whereas the classes at UNC or University of Maryland have to do a lot more review and hand-holding to get people up to speed in those first couple years.

        How would we decide which of these models (if any) is a good model for reality?

        ETA: One way to distinguish would be to look at how different the outcomes are for people admitted with lower qualifications–legacies, affirmative action admissions, athletic admissions, people whose parents gave huge donations, etc.

        • Watchman says:

          I was going to suggest that examining non-standard admissions might help, but realised that in the UK, at least, I know some of these groups (disadvantaged backgrounds especially), if selected according to sensible criteria, get marks in line with the cohort as a whole. We don’t have legacies and donor kids, or athletic scholarships, so I can’t comment on how those work.

          • Tarpitz says:

            I’d say rather that we have very few. As for what happens to them, why do you think they call it a Gentleman’s Third?

          • albatross11 says:

            The claim I’ve read in several places (but this isn’t my area, so I don’t know how true it is) is that students admitted via affirmative action tend to cluster in less demanding majors. Thomas Sowell pointed out at some point that there were a lot of black kids who would have gotten an EE degree from State U, but instead ended up with a Sociology degree from Stanford thanks to affirmative action. (That is, they came in planning to get an EE degree, found themselves massively outclassed in those classes, and ended up switching to a much easier major.)

      • quaelegit says:

        I don’t know much about UNC, but it’s the state flagship, so I would expect at least a significant minority of the students to be Yale-caliber.

        On the other hand, my high school classmates who went to CSUMB were B-average or C-average students, and even aside from academic strength Yale would be less helpful to their goals (local industry oriented).

        (My vague impression was that CSUMB is one of the less “prestigious/rigorous” Cal States — like if you’re a good student but want to stick with the Cal State system, you want to go to the Cal Polys, Long Beach, San Jose State, a few others.)

    • Steve? says:

      The undergrad business program at the University of Michigan used to work this way. You came to UM as an undeclared major and applied to the B-school after your first year was done. If you didn’t do well enough in the pre-reqs (e.g. Econ 101), you wouldn’t get in. I had at least one friend who didn’t get in and transferred elsewhere because he specifically wanted a business degree. I think one other friend didn’t make it in and decided to stay at UM and major in Econ. A few years back the program changed and now you can be admitted directly into the B-school. My (unconfirmed) assumption is that prospective students didn’t like the risk of coming to UM and then not making it into the program and preferred places where they knew they had a high chance of graduating with a business degree (however questionable that decision might be…).

      Some of the other more specialized programs also had this sort of first-year screening. I know engineering did and so did architecture. I don’t remember friends being forced out of engineering — my guess is that the criteria for getting into the program (good grades in intro Calc, Chem, Physics, first-year engineering courses) lined up pretty well with self-selection. In engineering there was a second cut in terms of declaring majors. I do remember people wanting to major in CS but not having a 3.2 GPA or whatever the cut-off was.

      Presumably this is common across other high-ranking non-Ivies.

      Doing it at the college/major level is a bit softer than at the university level. It gives students who don’t make the cut a chance to stay in their current community in a less demanding major or transfer to stick with their preferred major.

    • j1000000 says:

      Maybe a school could do that, but I’d imagine the question is why. Yale/Harvard still attract the best students, those students still go on to great things, and the schools make so much money they basically run hedge funds, so from their point of view there’s no problem to solve, right? The kids who weren’t let in were the marginal ones — and in many cases marginal athletes — so it’s no tragedy they simply had to go to Georgetown or wherever they ended up.

      Meanwhile, your hypothetical school would instantly run into culture war issues — consider the NY Times’ recently renewed focus on the demographics at Stuyvesant. The kids who can just go to Harvard anyways probably wouldn’t want to bother with a controversial school.

    • Nabil ad Dajjal says:

      Some of the better state schools do something similar to this.

      They have fairly loose admissions criteria and low tuition, at least for state residents, leading to huge class sizes. Most of those students slide through in easy programs, a smaller number can’t hack it and drop out, and an even smaller number excel in tougher programs. The latter group has a good shot at top graduate and professional schools as their programs have a good reputation within their fields, even if the school’s name normally doesn’t count for much.

      It’s not exactly what you’re talking about, but it’s instructive.

    • Clutzy says:

      The great downside is that parents (particularly your high performing alumni likely to donate) will not like this system because it creates large deadweight loss for them, and also would prevent their kids from graduating.

    • Randy M says:

      Is there some reason this wouldn’t work? Or might there be some great downside I am missing?

      Yeah, a fairly obvious one, at least in the in-person colleges. Physical space is limited. Unless the college grows its personnel and infrastructure to a stupid degree, it won’t have space for this. If it does so, then it probably won’t have the same quality of instruction. And it will have trouble paying off all this infrastructure after enrollment drops. So instead what they do is try to only admit students that they think have a good chance of graduating. That seems fair.

      Secondly, with a high drop-out rate, prospective students will start to look elsewhere. Eventually it might reach an equilibrium back to the status quo–but maybe they want some of the not too bright but rich students to attend and donate later?

    • Edward Scizorhands says:

      I skimmed the responses and didn’t see this; sorry if a dupe.

      Admission by lottery.

      Here is a somewhat older story with a link to an even older essay https://www.nytimes.com/roomfordebate/2015/03/31/how-to-improve-the-college-admissions-process/do-college-admissions-by-lottery

      Here it is updated for the modern mess:

      https://www.npr.org/2019/03/27/705477877/what-if-elite-colleges-switched-to-a-lottery-for-admissions

      Better would be to set a threshold for “these are the people we think can succeed,” then put all the people who pass that bar into an urn, and draw N names.
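
      Mechanically this is trivial. Here’s a minimal sketch of the threshold-then-draw idea, purely illustrative (the applicant fields, the cutoff, and the seat count are made up, not anything a real admissions office uses):

      ```python
      import random

      def lottery_admit(applicants, meets_threshold, n_seats, seed=None):
          """Filter to applicants judged able to succeed, then draw
          n_seats of them uniformly at random (the names-in-an-urn step)."""
          pool = [a for a in applicants if meets_threshold(a)]
          if len(pool) <= n_seats:
              return pool                                    # everyone over the bar gets in
          return random.Random(seed).sample(pool, n_seats)   # otherwise draw N names at random

      # Hypothetical example: roughly 3000 over-the-bar applicants competing for 1000 seats.
      applicants = [{"id": i, "score": random.randint(1000, 1600)} for i in range(9000)]
      admitted = lottery_admit(applicants, lambda a: a["score"] >= 1400, n_seats=1000)
      print(len(admitted))  # 1000
      ```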

      It benefits the students, who can stop the psychotic zero-sum credential race where they need to put in more and more and more effort. It stops people with super-degrees from lording it over other people, which I think is good (even though I have one of those super-degrees).

      It doesn’t benefit the schools, directly, and probably harms them a little. But since schools pay a lot of lip service to putting the needs of society over their own, perhaps one of them can be shamed or coerced into making this their policy.

      • The Nybbler says:

        Better would be to set a threshold for “these are the people we think can succeed,” then put all the people who pass that bar into an urn, and draw N names.

        Fails for at least two reasons. One is CW, the other is the definition of “we think can succeed”. Unless you go with relentlessly objective measures like standardized test scores, you’ve just moved all the thumbs to those measures.

        • Edward Scizorhands says:

          > the other is the definition of “we think can succeed”.

          I don’t think this is as hard as you believe.

          First, I wasn’t trying to make a 100% objective measure. Harvard can declare Malia Obama capable of succeeding even if they don’t think she is[1]. But even if she gets into the lottery that doesn’t mean she’ll get picked. Similarly, it gets a lot stupider to bribe people to only get a chance at admission.

          Second, if you can only admit 1000 people to Best School, and 3000 capable students want in, they have to do a zero-sum credential race. It would spare everyone a lot of time and heartache to just do it randomly.

          The schools know the profile of who can pass and who can’t. MIT’s admission policies, at least 20 years ago, were explicitly to let in NAMs that would succeed, not just the best fraction of them.

          [1] I just picked a famous person. As the daughter of two HLS graduates, she probably can pass.

    • eqdw says:

      There’s already a school that does this. It’s either Cal Tech or Cal Poly, I forget which one b/c I always mix these two up. As I understand it, they make admissions decisions based almost entirely on test scores.

      • quaelegit says:

        That sounds like Cal Poly, which is part of the Cal State University system. I’m pretty sure the CSU application doesn’t even require essays; they just look at your test scores and HS grades (well, for athletics or performing arts they presumably look at relevant skills).

        I don’t know for sure but it wouldn’t surprise me if many non-flagship state schools only look at test scores and grades.

    • JohnNV says:

      University rowing functions something like this. I rowed varsity crew all 4 years at my university, and while there are hundreds of new applicants each year, the rigor of waking up at 4:30am 6 days a week for hard practices and punishing winter workouts when it’s too cold to row means that the attrition rate in the first year is brutal. We ended up with 12 people making it through to the end of the 4-year program, but we were probably some of the 12 most serious and committed athletes anywhere.

    • Chris P says:

      Similar to Switzerland (as mentioned above), this is how a lot of higher education works in Germany. An important difference is that admission happens by subject at each university, not just by university. Also, you generally only study your subject and some ancillary stuff, e.g. if you study some science, there are no language/humanities/social sciences etc. requirements.

      Popular and resource-intensive subjects have selective admission (medicine most of all; others are psychology, and business at the best universities), but most STEM subjects have extremely low requirements or none at all, aside from the school diploma you need to go to university.

      So basically almost everyone who has graduated high school with the Abitur (equivalent to the Swiss Matura mentioned above) can start studying e.g. computer science, physics, maths, or engineering at the best universities in the country. (And I’d say German STEM education is generally considered to be quite good.)

      Those courses often have first-year dropout rates of 30 to over 50%. So they’re basically working off that model – let in basically everyone who is interested, then make it obvious to the student very quickly whether they’ll be able to succeed. It’s not uncommon for the occasional STEM exam to have 2/3rds of all students fail. A side effect is that there’s little to no grade inflation in those disciplines.

      Part of the reasoning behind this is that someone might be great at mathy-sciency stuff, but received otherwise mediocre grades in school (in languages, humanities, arts), and therefore has an “unimpressive” diploma. This person might very well excel in their chosen subject.

      But: The cost of going to university is negligible compared to the US (~1500/year, about what pozorvlak mentioned for the ETH Zurich), and students can get financial aid from the government depending on their parents’ income (i.e., children of wealthy parents can’t get any money, but the parents are required by law to finance their children’s higher education up to a certain extent). So if you try and fail, there’s not much lost except time.

    • John Schilling says:

      Is there some reason this wouldn’t work? Or might there be some great downside I am missing?

      Aside from the obvious inefficiencies others have pointed out, this simply means that the extreme lengths students and their parents go to in their quest for elite-university admissions will become equally extreme lengths devoted to making sure the precious, precious heirs of the elite don’t flunk out. So: six-figure budgets devoted to bribing TAs and adjunct professors (and maybe not-so-adjunct), threats and intimidation against professors giving bad grades to the wrong students, harassment of rival students who might set the curve too high, professional-grade cheating on exams and classwork that can’t be as standardized and rigorously proctored as e.g. SATs, and other things I haven’t thought of yet but which I can’t imagine will be helpful for the students who are actually trying to learn something. This has the potential to break higher education as anything but who’s-the-better-cheater signalling, so no thanks.

    • What if we turned that around? Could one run a prestigious institution that admitted pretty much anyone, but set the bar high academically, with a demanding curriculum and uncompromising grading standards?

      The experiment has been done, and ran for well over a thousand years—Imperial China’s examination system.

      Almost anybody could take the first level of exams, and anyone could study for them–no university admission involved. There were three stages of the exams, and the second stage had a pass rate of about 1%. The third stage produced about 200 to 300 degrees from as many as 8000 candidates.

      One of history’s more successful societies.

      • Douglas Knight says:

        Other people say one of history’s worst catastrophes.

        • I don’t think the examination system is what led to some of the worst bits of Chinese history. And, all in all, they certainly aren’t doing too bad compared to the rest of the world.

        • Protagoras says:

          The examination system was instituted under the Tang, and the Tang and Song constituted high points of Chinese history. The system continued under the Yuan, the Ming, and the Qing, and the two periods under foreign domination were bad times for China, while the Ming also had more bad times than good. But it is not at all clear how the examination system was responsible for China coming under foreign domination, or for the mediocrity of the Ming (the latter seems to have been a result of bad luck in emperors). Most of the historical criticism of the examination system focused on the state of China under the Qing, a period when the foreign emperors seem to have been the actual sources of most of China’s problems.

      • johan_larson says:

        How did the Chinese deal with the problems of faking test results or candidates trying to bribe the testing officials that John mentions elsewhere in this thread?

    • I tried to interest my law school in a variant of this, back when law school applications had dropped sharply and law schools were in serious financial trouble. My proposal was for the school to lower its admissions standards enough to get, say, 25% more accepted students. At the end of the first year, offer all students in the bottom quarter of the class the option of withdrawing and having a full tuition refund.

      If you tune the numbers right, the school ends up with the same net income, since the refunds balance the extra tuition from the additional students. One basis for attracting students was the bar passage rate, and it was generally believed that first-year grades were a better predictor of eventual success than information available on entering, so the bar passage rate goes up. The problem of students spending a fortune and three years only to fail the bar and be unable to ever practice is reduced. The only cost is having to accommodate the extra students in the first year, but class size had shrunk substantially so the school had excess resources in teachers and classrooms.
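
      To make “tune the numbers right” concrete, here is a back-of-the-envelope sketch; the class size and tuition figures are made up for illustration, not taken from the actual proposal:

      ```python
      def first_year_net_change(n_baseline, tuition, expand_frac, refund_frac):
          """Change in first-year tuition income versus the old class size.

          expand_frac: extra admits as a fraction of the baseline class
          refund_frac: fraction of the enlarged class that takes the full refund
          """
          extra_revenue = n_baseline * expand_frac * tuition
          refunds = n_baseline * (1 + expand_frac) * refund_frac * tuition
          return extra_revenue - refunds

      # Admitting 25% more students breaks even on first-year tuition if
      # 0.25 / 1.25 = 20% of the enlarged class actually takes the refund offer;
      # fewer takers means a gain, and the extra students who stay keep paying
      # tuition in years two and three.
      print(first_year_net_change(n_baseline=200, tuition=50_000,
                                  expand_frac=0.25, refund_frac=0.20))  # 0.0
      ```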

      I did not persuade them. Looking back at it, one reason may have been the general conservatism of institutions, but another might be the fact that one of the things that goes into the U.S. News and World Report rating, which is an important factor in applicant decisions, is the average LSAT of the entering class, which under my system would go down.

    • BBA says:

      I think the real issue is, we don’t know what high school is for. In Europe (broadly speaking) they have tracks – a university-bound student will attend a school that grants a standard diploma (Abitur, Baccalaureat, Matura) which is sufficient for university admission, while other students will attend vocational/technical schools and enter the workforce directly. In Asia (broadly speaking) the standardized test is the sole determinant of university admission, and grueling test prep regimes are the norm.

      But in America, we don’t like tracks. We like to think everyone has equal potential and nobody should attend a “lesser” school, so everyone attends a (nominally) college-preparatory high school and earns the same kind of diploma. We also don’t like centralized authority, so there’s no national standard for a diploma; it varies state to state, district to district, school to school, so all you can really say it means is that you went to class for 12 years and didn’t get expelled. (There have been efforts like NCLB and Common Core to change this, but they are wildly unpopular at every level. And God help us if we ever try to create a national curriculum – just trying to work out how to teach the Civil War would spark another civil war.)

      So in order to have some basis for comparison, we have standardized tests – SAT, ACT, AP, IB. But they’re designed to be used in combination with the high school transcript, rather than as credentials in themselves. For whatever reason (mine is probably different from yours), students of certain ethnic backgrounds tend to outperform others on these tests, and wealthier students can afford test prep classes that poorer students can’t, so in the name of equal opportunity we can’t rely solely on them. Thus, admissions are “holistic” and the current fail parade results.

      And here’s the kicker – I agree with almost all the decisions that brought us here. It doesn’t make sense to move to a European- or Asian-style system. But at the same time, there’s no way to build any kind of remotely sensible college admissions on top of the incoherence that is American K-12 education.

      …and then there’s Canada, where applying to universities is remarkably simple and there’s no standardized test to fret over. I don’t know how they do it – are high school grades really that consistent from school to school? How can we do that here, and do we want to?

      • dndnrsn says:

        I don’t know that much about Canadian high schools, however, my anecdotal understanding is that in Canada, at the university level, there’s far more “compression” than in the US – we have 2 or 3 schools that get into top-30 or whatever rankings worldwide, and there’s much less of a gap between those schools and the worst universities in Canada than there is between the best and worst universities in the US. I remember getting into university in Canada as remarkably stressless and with very little credentialism. I imagine that it’s because going to UWO or Queen’s isn’t really that much worse than going to U of T. Probably also a factor is that the best universities in Canada are all public – private universities are just not that much of a thing here – so the way they make their money is different.

        • johan_larson says:

          The way I like to think about it is that the Canadian university system is like the US state-level university system: reasonably affordable, open to all but the worst students, and ranging in quality from somewhat questionable to really quite good. Apparently the admissions experience is quite similar too. If you only apply in-state in the US and aren’t seeking financial aid, it’s not that hard.

          But the US then adds a private system that is much more variable. The best private US schools are the most prestigious in the world, but the worst are apparently pretty dismal. They are also much more expensive. And the admissions process to the top schools is a zoo, since they are in such demand.

          • dndnrsn says:

            All the “somewhat questionable” schools I can think of in Canada are private. The public universities here are at minimum “meh.”

          • BBA says:

            I wonder if the Dartmouth College decision 200 years ago set the US down this path. It didn’t have much legal impact, since states just included “we reserve the right to modify” clauses in later corporate charters, but the moral precedent against government takeovers of private universities has stuck with us. Oxford and Cambridge became public, but Harvard and Yale (and Dartmouth) never did. (OK, there’s Rutgers and a few other exceptions, but none of them are in the upper echelons.)

      • johan_larson says:

        …are high school grades really that consistent from school to school?

        Education in Canada is a provincial responsibility, so each province does things in its own way. At least when I was going through, there were provincial standards about what was to be taught in each grade and each subject. But there were no common province-wide tests, at least in my province (Ontario.)

        Various comments from university professors made it clear to me that nominally identical courses were actually taught at rather different levels from school to school. The universities dealt with this in two ways. First, they somehow adjusted grades depending on the high school you had gone to. Second, they did a certain amount of review in first year. I remember first term of first year being pretty much review of what I had learned in the senior-year college-preparatory courses. I guess that means my high school did a good job. But some students apparently experience much more of a shock going from high school to first year.

    • Lord Nelson says:

      I expect the instruction offered by a college such as Yale is of high quality, but it has no particular reputation for particularly high standards or difficulty.

      I would refute this based on personal experience.

      I have an engineering degree from an Ivy League university. Freshman year, the Ivy League classes were relatively easy, with one or two exceptions, and getting A’s was not a problem. Starting in my sophomore year, there were several classes where I slid by with B’s. Starting in junior year, the classes became difficult enough that I had to devote most of my time to them. One computer science class in particular (which was a graduate-level course that we were required to take as juniors) was difficult enough that I would have failed, had my professor not let me take an Incomplete and redo my worst assignment over the following summer, after which she changed my grade to a C-.

      The Ivy League classes also covered far more than their counterparts at the state university. For example, I took a freshman level econ course at the Ivy League (which was challenging enough to be interesting), then a sophomore/junior level econ course over the summer at the state school (which was complete review and I got perfect scores on tests without even trying). The junior/senior level math course that I took at said state school was also about 80-90% review of material covered in my freshman year at the Ivy League school.

      I know that one anecdote does not prove anything, but at least in my experience, the STEM classes at Ivy League schools are pretty rigorous. The big difference is that, as long as you’re trying your best, the Ivy Leagues will jump through hoops to ensure that you graduate without failing any courses. They want you to remember them fondly and donate lots of money as an alumnus, after all.

      • quaelegit says:

        It’s probable that you’re smarter than me, and comparing anecdote to anecdote doesn’t mean squat, but… your engineering curriculum sounds much easier than mine if you made it through 2 whole years before you had to devote most of your time to your classes!

      • j1000000 says:

        I assume everyone understands that the STEM classes are intense at these great schools, and that the reason graduation numbers are so high is because under-qualified matriculants choose the less rigorous majors. I’m under the impression that Ivy League schools are notorious for trying to whittle down pre-med numbers, especially.

        I once met a guy from Princeton who went to a good med school, but he’d gotten there after majoring in English for undergrad then taking a much less demanding “post-bach” program to fulfill med school pre-reqs. He said Princeton pre-med was too competitive and his grades would’ve been too poor to get into a good med school, if he’d even survived the track at all. But because he was smart enough to get into Princeton in the first place, his MCAT scores were more than good enough when he was in competition with non-Princeton students. (That final sentence was only implied; he was the self-aware, “I went to school in New Jersey” type.)

    • Worley says:

      There are various complications:

      I’m told that the University of Iowa is easy to get in to, but the professors of the freshman classes are required to fail half of the students. So the freshman year cuts the class size by a factor of 4. OTOH it’s a state school, so you’re not risking a lot of money by attending. And also, parents of kids who aren’t admitted will complain to their legislators, but parents of kids who flunk out will not.

      There is a huge prestige competition in affluent suburbs over what college your kid attends. This prestige can be traded off against the amount of tuition that affluent parents will pay. So if you are a school that is well-known enough to be in this competition, there are financial incentives to make your school appear to be as hard to get into as possible.

      • acymetric says:

        I’m told that the University of Iowa is easy to get in to, but the professors of the freshman classes are required to fail half of the students.

        It seems highly unlikely to me that this is actual policy.

        Some searching concludes that it is in fact false: more than 80% of freshmen return for their second year at the University of Iowa.

  51. nameless1 says:

    Weighted blankets and similar things, weighted vests, compression clothing (specialized, but perhaps also the ones from athletic stores) subthread. Anything. Have you ever heard of them? They were originally meant for autistic people, but it is increasingly recognized that they are just a good generic stress-reduction tool. What is your experience, and what else do you have to say about them?

    Fun: one study found that ADHD kids liked weighted vests so much that they were acting out just to get them, so originally the vests did not work at calming them down; they only worked once the kids were allowed to wear them all the time.

    I have ordered myself a compression t-shirt, but I don’t understand weighted vests. Don’t they put all the weight on your shoulders, effectively being the same as a heavy backpack, which I hated as a kid?

    • pozorvlak says:

      I have a weighted blanket, and love it. I don’t have any rigorous double-blind data or anything to back this up, but it certainly feels like it helps me and my partner sleep, especially in the summer when it’s too warm for a duvet.

      I haven’t heard of weighted vests being used for stress-reduction, only for athletic training. Looking at Amazon, the heavier ones all seem to have chest straps and/or waist belts, which should take a lot of the weight off your shoulders. This should also be true of backpacks, by the way! If you’re carrying heavy loads on your back, make sure you have a backpack with a waist belt, and ideally a chest strap as well – most of the weight should be on your hips rather than your shoulders.

    • bullseye says:

      Hikers’ backpacks have a built-in belt that puts the weight on your hips. I looked at pictures of weighted vests just now and it looks like they might put the weight on your chest or belly.

    • FrankistGeorgist says:

      I’ve always slept quite well my whole life, and even though I currently live basically on top of a major Manhattan Avenue I thought I still slept like a rock. But I’ve always enjoyed the weight of people on top of me during sex/cuddling, and since I’m utterly burnt out on dating I decided to get a weighted blanket.

      The effect was so extreme I actually wondered if I’d been autistic or anxious for my whole life and never noticed it. I was blissed out for a solid week, not a care in the world. The compression shirt followed to extend the effect to the day. It pulled me most of the way out of a huge spiritual/physical/let’s-not-get-this-diagnosed-that-sounds-expensive funk. I actually still spend parts of the day completely giddy. I sleep like the dead and dream in a much more fantastical way than I have in years, and often lucidly, which I used to be able to do but had basically lost. Sex drive returned, and acid reflux from the restriction of the compression shirt showed me I was eating way too much (next thing to work on).

      Both the compression shirt and the weighted blanket also give me something to push against with my stomach for deep breathing. I am a much much too shallow breather, and being able to push against a force – however gentle – with my diaphragm, has been life changing for all the reasons deep breathing is usually said to be good.

      Only thing is the weighted blanket gets holy hell hot.

    • J Mann says:

      I tried a weighted blanket, but other than being much warmer, didn’t notice much difference either way. If it gets cold enough to try, I might try doubling it.

    • broblawsky says:

      I used one and didn’t notice a substantial difference in stress or anxiety, but the blanket I used may have been too light for a person of my weight and size.

    • RavenclawPrefect says:

      Suppose that someone has heard about all this hype, and wants to actually purchase a weighted X, but they seem to be expensive enough that making a poor choice is semi-costly. Which items do people recommend or anti-recommend? When looking at size and weight, how much of a function of my various measurements is it? I’ve seen ~10% of your body weight mentioned for blankets (and a comment that more than this is good too), but I don’t know if this is universally accepted.

      • FrankistGeorgist says:

        I went for 10% of my body weight, rounded up. They seem to come in about 5 lb intervals, so that meant 30 lbs, and indeed it was expensive: $169 on Amazon. (Currently out of stock in that weight, but this set.)

        I expected to be disappointed since I’d enjoyed the weight of a full human body on me and figured there couldn’t be a weighted blanket that could recreate the feeling, but the more even spread and compression of the blanket is a wonderful feeling in its own right. Like being swaddled or hugged.

        A friend also suggested that buying a large bean bag chair would probably have the same effect, since that’s basically what a weighted blanket is, but looking now it seems bean bag chairs are much more expensive than I thought.

        You also don’t need to get one the size of your bed. Mine is basically a full size, because the weight keeps them in place and any excess would just drape off the bed, so there’s perhaps some saving there.

        • dmolling says:

          I’ve tried out a weighted blanket recently and had small, but definitely positive results – less moving around during the night and I believe faster falling asleep.

          I never thought about compression shirts – any particular recommendations?

          • FrankistGeorgist says:

            I don’t have great recommendations there, I’m afraid. My current strategy has been to buy cheap compression shirts from Gotoly that are more for hiding fatness. Since the biggest benefit for me is the deeper breathing, the focus on the stomach is fine. They wear out quickly, though, as I suppose they must, being under tension by definition. So I go for cheapness and replaceability.

            I struggle to think of what the difference is between a shaping compression shirt and a medical one, though, besides distribution of the pressure. I got a CalmWear once but it was much too small sadly.

            More than anything I think you should feel comfortable with the material.

    • sandoratthezoo says:

      We got a weighted blanket and used it for months. My wife liked it, but I hated it, and we eventually got rid of it because it was seriously hurting my ability to get enough sleep at night.

    • Armadillo Daffodil says:

      I got a weighted blanket about four months ago, and have been using it almost every night. My experience is that it definitely has a calming effect, but the magnitude is not that big. The first morning after I’d slept with it on the whole night through (it takes a bit of getting used to, so it’s better to start gently), the effect was comparable to 5 or 10 mg of diazepam – I felt very very relaxed, and the effect didn’t wear off until after lunchtime.

      In the beginning I was concerned over dependence or addiction, that I would start to have a hard time sleeping anywhere except my own bed. There has been some habituation, to the extent that regular duvets now feel comically light-weight, but thankfully I can easily get to sleep anywhere.

      It would be interesting if someone did some rigorous research on weighted blankets. I would like to know what kind of person it works for, what kind of person it doesn’t work for, and why.

    • Rebecca Friedman says:

      Weighted blankets only – I don’t know about weighted vests/compression clothing (though this thread is interesting).

      I got a weighted blanket a year or so back, and it works very well for me. The downside is pretty much just habituation – I find it much harder to sleep without it (or other heavy blankets at minimum), to the point of hauling the thing through airports on a few occasions just so I could get enough sleep (though my sleep with it is still better* than my sleep without it ever was). I have always slept better with significant weight on top of me, which is why I tried it – the only time I can remember starting to fall asleep without wanting to since being very small indeed was the time child-me crawled into a folded-over futon, which was heavy enough to send me halfway to sleep before I realized and crawled out, and I have never slept well during the summer. (Heat = no blankets = no sleep.) That said, I also know people who weighted blankets don’t work for, who find them uncomfortable. Anecdotally, the best evidence for whether it works is whether having heavy blankets (or presumably other weight) on top of you works. If you don’t like that, you probably won’t like the blanket either.

      I got mine from an SSC link: https://www.etsy.com/shop/AutisticRabbit#about. I’ve been quite happy with it thus far. The only downside is it is not washable, though you can get a cover that is. I get the impression this is a common thing for weighted blankets, but I could be wrong.

      * Note: “Sleep better” = “be able to get to sleep”; the blanket does not, so far as I can tell, actually cut the hours I need to sleep or anything like that. It just shifts time-trying-to-fall-asleep from 1-3h at worst to 30m at worst.

    • Lord Nelson says:

      I have autism and anxiety and find that weighted blankets are really comforting. It doesn’t have to be a blanket–anything that weighs at least 10-15 lbs and distributes the weight across my torso and upper legs works–but I’m just going to say “weighted blanket” for simplicity. If I’m stressed or anxious, the extra weight helps calm me down significantly. If I’m not stressed or anxious, it still puts me into a more relaxed state (and puts me to sleep if I’m not careful).

      Even when I was a kid, before I knew that weighted blankets were A Thing, I took advantage of the benefits. I was the weird kid who slept under a full bed of blankets, even in the summer, because overheating was preferable to sleeping without any weight on top of me.

      My only complaint about weighted blankets is how expensive they are. I’m too much of a cheapskate to buy a pre-made one, and I don’t have the skills to make one from scratch. My mother said she’d make me one for Christmas a few years back, but alas, those plans unraveled.

      • Nancy Lebovitz says:

        I wonder what’s involved in making a weighted blanket. What skills are required?

        • Nancy Lebovitz says:

          So I looked it up, and it seems as though the only skill needed is minimal ability to use a sewing machine. Minimal because the weighted blanket doesn’t have to look nice.

          At least some maker spaces have sewing machines.

          I’m not sure whether making a weighted blanket will save much money– the cheapest ones seem to be about 30 or 40 dollars.

          • Lord Nelson says:

            Wow, weighted blankets are that cheap now? Last I checked (which, admittedly, was about two years ago) a weighted blanket of the correct size and weight for an adult cost about $150, whereas making one cost around $30-$40.

            The demand must have increased dramatically if the market changed that rapidly.

  52. AlphaGamma says:

    This is of course the modern Opentathlon Thread, otherwise known as the 19th-century action-hero event as it tests the skills required of a young officer attempting to return to his unit through enemy lines- cross-country running, swimming, riding an unfamiliar horse (because he’s just liberated it!) and fighting with sword and pistol.

    The ancient pentathlon was sprint (one stade, about 180m), wrestling, long jump, javelin and discus. A version of this was held at some early modern Olympic Games, with a 1500 metre run replacing the wrestling.

    The modern pentathlon (not to be confused with the Modern Pentathlon) is held at indoor competitions instead of the (women’s) heptathlon, as indoor javelin throwing is impractical. It also removes the 200m race from the heptathlon, leaving 60m hurdles, high jump, long jump, shot put and 800m. Men compete in the decathlon outdoors, and a different heptathlon indoors.

    • Chalid says:

      So what would the event look like if we were testing the skills of the 20th or 21st century military?

      Swimming, sprinting, and distance running still seem appropriate though perhaps these should be done with a heavy load. Instead of javelin, you might have shot-put as it is more like throwing a grenade. And maybe a crawling race? Competitive digging?

      • johan_larson says:

        Running, swimming, grenade-throw, rifle shooting seem like pretty obvious choices. I’m not sure what the fifth event should be, though.

        Orienteering?
        Hand-to-hand combat?
        An obstacle course?

        • Furslid says:

          Would a stealth component be appropriate? It seems really useful in modern warfare, but I’m not sure how to measure it.

          Maybe mix it up with navigation. The competitor must get from point A to point B in random terrain guarded/patrolled by either neutral parties or the other competitors with cameras. Points are lost if the searchers can locate the competitor. More points if they can get a picture demonstrating they could have shot them.

        • Watchman says:

          Night orienteering (a seriously fun variant, if like me you enjoy running through dark forests at night). Most modern warfare is nocturnal.

        • AlphaGamma says:

          Possibly put in a combined event – modern pentathlon combines running and pistol shooting in one event, though it works slightly differently from biathlon.

          Perhaps have orienteering while carrying a rifle; at certain points competitors return to the shooting range and have to hit some targets.

          (As for ways to combine running and shooting: in biathlon, competitors race carrying their weapon, and if they miss they must either complete an extra “penalty loop” or have a penalty added to their final time. Meanwhile, in modern pentathlon they do not carry their pistols – at the shooting range, they must either hit 5 targets or wait 50 seconds before they can start running again, and there is no additional penalty for a miss.)

          • Garrett says:

            Have runners start with the pistol, 1 empty magazine and no ammunition. At the shooting stations there is a tray of loose ammo of a standard caliber (say, 9mm). When shooting, you first load rounds into the magazine by hand, choosing how many to load. You keep shooting until all 5 targets are hit. If you load 5 rounds and only hit 4 targets, you’ll need to eject and load at least 1 more round in order to drop that last target. If you load more rounds than you needed, you either eject the extra rounds before running again or get stuck with the extra weight.

      • deciusbrutus says:

        Drone operation?

      • AlphaGamma says:

        Swimming, sprinting, and distance running still seem appropriate though perhaps these should be done with a heavy load.

        I would like to see a modern version of the Hoplitodromos.

      • dodrian says:

        Starcraft, Counter Strike, Rocket League, Dota and Fortnite.

      • bean says:

        Hmm….
        The first four are pretty easy:
        Obstacle course
        Distance march/run with rucksack
        Rifle shooting
        Kill house (rifle in a more tactical environment, graded on both speed and accuracy)

        Not so sure on the last one. If Doing Paperwork or Suicide Prevention Training are disallowed for not being athletic enough, swimming wouldn’t be a bad choice. Maybe some form of land nav/orienteering course would work. Or a first-aid event. If all else fails, test pistol skills, too.

      • Incurian says:

        So what would the event look like if we were testing the skills of the 20th or 21st century military?

        Probably like this: http://www.bestrangercompetition.com/

    • John Schilling says:

      The Olympics are primarily for entertainment, and the Modern Pentathlon has more to do with being a Swashbuckling Action Hero out of Dumas et al than with any prioritized ranking of actual martial skills. So the Post-Modern Pentathlon will presumably have more to do with John Wick or James Bond than with SOCOM.

      In order:

      Fencing gets replaced with Mixed Martial Arts, details TBD.

      Pistol Shooting is folded into an IPSC Three-Gun match, probably spaced along the running course.

      The running component will be upgraded to Parkour, perhaps using the natural urban terrain of the host city.

      The equestrian event is replaced with motorcycle racing, either motocross or street racing using the host city’s streets.

      Swimming is tricky, because it doesn’t seem to fit into the common action-hero skillset. Possibly we keep it anyway for tradition and martial utility, but I’d be open to a thematically-appropriate replacement. Any suggestions?

      • johan_larson says:

        Skydiving?

        • John Schilling says:

          Thematically appropriate but hard to do as a competitive individual sport.

          I did consider a HALO jump to the start point of the Parkour course, with the obstacles laid out to severely handicap anyone who doesn’t land right on target. Minimum time to ground plus precision landing gives you a head start over the competition. But that looks like several sorts of hazard lining up to kill the competitors, and I think they frown on that in the Olympics.

      • dndnrsn says:

        Underwater obstacle course, maybe.

      • Nornagest says:

        Trash talk and one-liners, scored like gymnastics.

        • John Schilling says:

          I think we have a winner.

        • dndnrsn says:

          Surely this would be integrated into the other events?

          • Nornagest says:

            I’m thinking the trash talk and one-liners would take place over the course of the other events but be scored separately. You might come in third in the parkour segment, but if you come up with a really good quip when #4 misses a roll and breaks an ankle, the Russian judge might give you the nod later on over #1 and #2. This would then be incorporated into the final scoring in some relatively balanced way.

          • Eugene Dawn says:

            This reminds me of the sword-fighting in the Monkey Island games, where sword thrusts must be accompanied by insults (rhyming insults when fighting at sea), and the quality of the insults determines the winner.

  53. liskantope says:

    One of these days maybe we’ll have an AI that has learned how to detect culture-war-like comments and acts as a filter to prevent them from getting posted (rather like how the presence of a tabooed term for a certain ideology automatically disqualifies a comment from appearing here); until then, privately emailing warnings to people and then checking that they comply afterwards sounds like it would be a major headache.

    • Murphy says:

      Shadow-banning commenters and posts seems to be more and more of a thing.

      Basically: outright ban people and they get angry and move somewhere else, taking their ad revenue with them.

      So more and more sites are switching to shadow bans and similar: everything looks the same to the user… but their posts start getting hidden, or moved to the bottom of all discussion, and don’t get highlighted to anyone except those who really go looking.

      And so they continue to rant… but they get fewer and fewer replies.

      • brad says:

        I don’t think it’s about ad revenue. It’s about coming back with an even worse sockpuppet.

        • RavenclawPrefect says:

          At least on reddit, this is the motivation for shadowbanning users on most subs; it basically only happens to spammers and repeat trolls who come back on new accounts, because it doesn’t notify them and means the sockpuppet turnover is slower.

      • Edward Scizorhands says:

        Shadowbans are about cowardly admins who don’t want the personal confrontation of telling someone they are kicked off. They can’t even be bothered to toss them in the oubliette.

        If the person really is a monster trying to destroy you, sure, shadowban away, but most people aren’t that monster, and it’s a shitty way to treat people who aren’t monsters. But once you have that tool to avoid the confrontation, you will come up with a way of deciding that the people you have to manage really are monsters.

      • Reasoner says:

        The more general version of this idea is to use NLP to detect the level of outrage in a comment and sort the comment thread accordingly.
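
        A minimal sketch of that sorting idea, assuming a generic sentiment scorer (NLTK’s VADER here) as a crude stand-in for an actual outrage detector, which would have to be built separately:

          # Sketch: sort comments so the least "outraged" ones float to the top.
          # Uses NLTK's VADER sentiment scorer as a crude proxy for outrage.
          import nltk
          from nltk.sentiment.vader import SentimentIntensityAnalyzer

          nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
          analyzer = SentimentIntensityAnalyzer()

          def outrage_score(comment: str) -> float:
              # VADER's "neg" score (0..1) stands in for outrage level.
              return analyzer.polarity_scores(comment)["neg"]

          comments = [
              "This is an outrageous, dishonest take and you should be ashamed.",
              "Interesting point; here is a source that complicates it a bit.",
          ]
          for c in sorted(comments, key=outrage_score):
              print(f"{outrage_score(c):.2f}  {c}")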

      • Bugmaster says:

        If I knew that SSC engages in shadowbanning people, then I’d stop reading the comment sections — because I could never be sure if the discussions were genuine or indirectly scripted. Perhaps this is the end goal, I’m not sure…

    • Enkidum says:

      privately emailing warnings to people and then checking that they comply afterwards sounds like it would be a major headache

      I guess? But I think there are many of us who read most comments on most open threads, and it wouldn’t be that much work to fire off a quick boilerplate email. Scott presumably doesn’t read everything, but even 20% + user flagging would work well enough, I’d imagine.

  54. kaakitwitaasota says:

    No Comment of the Week?

  55. rlms says:

    Let me know what you think.

    Sounds good, as long as it hits those dreadful other people and not me.

  56. googolplexbyte says:

    How about a buddy system rather than a ban? You can only post a culture war response if you can convince someone else to post it in their own words.

  57. liskantope says:

    Here’s another linguistics-related suggestion for a linkpost: The Small Island Where 500 People Speak Nine Languages.

  58. Frog-like Sensations says:

    There’s something I’m confused about regarding Google Stadia and cloud gaming more generally.

    As a non-gaming-pc-haver, I’m intrigued by the prospect of cloud gaming but worried the input latency will be unplayably high. In connection with this, people often claim that the speed of light on its own guarantees that the latency will be too high, but as far as I can tell this is just wrong.

    60 frames per second is widely considered good enough for the vast majority of even hardcore games (competitive shooters are the main exception). That comes out to roughly a frame every 17 milliseconds. So for the speed of light on its own to impose less than a frame of latency, the light must travel from your computer to the server and back within 17 ms.

    In 17 ms, light can travel over 3000 miles. Apparently light only moves ~2/3 its vacuum speed in fiber optic cables, so we can lower that to 2000 miles. This means that Google can place a server 1000 miles away from you and the speed of light on its own will impose less than a single frame of latency. And obviously it’s well within Google’s abilities to create enough servers to ensure that 90% of Americans are way closer to a server than that.
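
    (As a rough back-of-the-envelope check of that arithmetic, here is a minimal Python sketch; the 60 fps target and the ~2/3-of-c figure for fiber are the assumptions stated above.)

      # Rough check of the speed-of-light latency budget described above.
      C_MILES_PER_SEC = 186_282        # speed of light in vacuum
      FIBER_FACTOR = 2 / 3             # light in fiber at roughly 2/3 c (assumption above)
      FRAME_BUDGET_S = 1 / 60          # one 60 fps frame

      one_way_miles = C_MILES_PER_SEC * FIBER_FACTOR * FRAME_BUDGET_S / 2  # round trip halved
      print(f"Frame budget: {FRAME_BUDGET_S * 1000:.1f} ms")
      print(f"Max server distance (round trip within one frame): {one_way_miles:.0f} miles")
      # Prints roughly 1035 miles, matching the ~1000-mile figure above.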

    Now, I have no idea how surmountable the other sources of input latency are, and so for all I know cloud gaming is independently doomed. But the speed of light does not seem to be that much of an issue.

    Am I missing something obvious here?

    • tossrock says:

      The speed of light is not necessarily an issue. I think half-educated people phrase it that way to gussy up their knowledge of the fact that latency is an issue. And latency is definitely a substantial issue. Real latency is determined much more by the number of hops, and thus the number of times a packet has to be processed by the various routers and switches between your computer and the server in question. Right now, my ping to google.com is between a low of 25ms and a high of 166ms, with an average around 90.

      So, the speed of light is an issue in the sense that it consumes some of the latency budget, but the amount it consumes is not that high compared to being routed around the internet. Cloud gaming can work, especially for less latency-sensitive titles, but there are definitely big technical hurdles. For the twitchiest games it may never beat having the silicon rendering the pixels next to the screen they’re being rendered to.

    • deciusbrutus says:

      That’s not the speed-of-light limit for ‘one frame’ of latency. That’s the speed-of-light limit for 1.5 *additional* frames of input latency. (You are operating on data that is ~8ms out of date, and it will be ~16ms before you see the result of your input.)

      But the internet doesn’t happen at the speed of light, because you don’t have a dedicated fiber optic line to the cloud system. Use ping times instead. So people are wrong in their explanation for why the latency is unacceptable, but right in that the latency is indeed unacceptable. There are also throughput concerns: while high-resolution video can be encoded and decoded fast enough to stream, the compression and decompression still have a pipeline time, which is added to all of the other times in determining total latency.

      That’s why the esports championships have the competitors colocated: to remove any effect associated with network status from the equation.

    • Murphy says:

      I don’t know if you play any online FPS shooters but once people hit a ping of 200ms the game experience is severely degraded.

      Now, let’s imagine my keyboard and mouse are 2000 miles from the graphics card.

      I press up on my keyboard, it takes 17ms to reach the server and 17ms for the result to travel back.

      But there are more delays. I have to get a screen’s worth of data from the graphics card; there’s compression, and tricks using the difference between frames… but let’s assume the worst case, where something like an in-game grenade updates the whole screen.

      My screen is 1200×1900, so 2,280,000 pixels.

      When running locally that’s going over a 10.2Gbps HDMI cable.

      But 10.2Gbps for a single connection is not practical over the internet.

      So we need to compress that image in various ways. You know what compression takes? Time. So we add in some more milliseconds for each frame to be processed and compressed, then decompressed at the user’s end.

      Now add in stutter and line congestion. We hit the evening and everyone on my street starts streaming HD Netflix.

      The congestion affects both upstream and downstream and it affects high bandwidth connections more.

      Online FPS games manage because they’re typically sending no video data at all, just a tiny stream of data updating coordinates and positions.

      Most of these problems remain even if the remote server is only 1 mile away. Light speed is only part of the problem.
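
      (To put rough numbers on the compression point, a small sketch of the uncompressed-bandwidth arithmetic; the 24-bit colour depth and 60 fps are assumptions not stated above.)

        # Raw, uncompressed bandwidth for the screen size mentioned above.
        WIDTH, HEIGHT = 1900, 1200       # 2,280,000 pixels, as stated above
        BITS_PER_PIXEL = 24              # assumed 24-bit colour
        FPS = 60                         # assumed frame rate

        pixels = WIDTH * HEIGHT
        raw_gbps = pixels * BITS_PER_PIXEL * FPS / 1e9
        print(f"{pixels:,} pixels -> {raw_gbps:.2f} Gbps uncompressed")
        # ~3.3 Gbps before compression -- far beyond a typical home connection,
        # which is why heavy (and latency-adding) compression is unavoidable.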

    • Enkidum says:

      30 FPS is fine for most mere mortals playing most games (emphasis on “most”). But even getting that reliably will be a challenge given existing infrastructure.

    • nadbor says:

      Online gaming including FPS already exists, is very popular and has been for decades. The latency argument doesn’t make any sense because it applies exactly the same to Google’s project as well as all existing online gaming.

      What is different this time is the *throughput* requirement, not latency. It is much harder to achieve consistently low latency when you have to send the entire screen’s worth of pixels compared to just a handful of coordinates of players in an FPS. Nevertheless, I’m pretty sure that Google’s engineers were not born yesterday, are aware of the challenges and have a plan to deal with them.

      • rlms says:

        it applies exactly the same to Google’s project as well as all existing online gaming.

        No it doesn’t; normal games can do client-side prediction to reduce apparent latency.

        • nadbor says:

          Fair enough. This is harder.

          But client-side prediction can only get you so far. It will completely smooth out a ping spike only if no interaction with other players is going on at that moment (which is most of the time). So it looks to me like streaming gaming requires latency to be as low *all of the time* as it is just *most of the time* in traditional gaming, to give the same experience.

          Is this true?

          • rlms says:

            I’m not really sure what you mean. My perspective is that there are things like moving your mouse to change direction that require x ms of latency for the game to be fun and can be done client-side, and things like seeing the position of an opponent that require y ms and can’t be done client-side. If x < y, then a streaming service with z ms of total latency is going to have problems whenever z > x, which is possible even when z < y – and z < y is all the traditional model requires.

          • Incurian says:

            I agree with rlms here. A user can put up with quite a bit of server lag, but input lag could be horrendous (especially in VR, where users expect their display to reflect their proprioception instantly); when I submit an input I expect to see feedback immediately. Possibly Google will come up with some clever way to run some things cheaply on the client and reduce apparent input lag.

          • acymetric says:

            @Incurian

            But you’re effectively just streaming video from the server, so there would be no way to reflect user input client side without traveling to the server, rendering, and traveling back.

          • Incurian says:

            That’s why it would have to be clever. I have no idea how it would be done, but I wouldn’t discount the possibility based on my lack of imagination.

          • nadbor says:

            I guess my theory was that x = y but x-things (moving mouse) happen all the time while y-things (shooting, being hit) happen occasionally.

            Either way, plenty of people game online and get a ping of 20ms, so that’s all the proof you need that a 20ms ping is physically possible.

            Which is not to say it is easy to achieve it consistently and at 10000x the throughput and (God forbid) outside major urban areas.

          • HeelBearCub says:

            It might be useful to examine PVP gaming to understand how much of an advantage is perceived to come from reducing latency. PVP gamers will happily pay thousands of dollars to increase their frames from 60 FPS to 144 or even 250 FPS. They will also pay for a low latency monitor, reducing latency from a more typical 5 ms to < 1 ms. They will do both of these things even if they are playing on a 60 Hz monitor. That’s how much they want to reduce input lag.

            Once you play a game at 144 FPS on a 144 Hz monitor, you will not want to go back. You can sense the difference 100%.

            Or we can look at online musical collaboration and realize that once we get past 35 ms of desync, it becomes impossible to play together.

            Now, on the other end of this are console games played on a TV at 30 FPS with a 50 ms output lag on the TV.

            I’d say that “games as streaming” are most likely a competitor for the console market, but robust local hardware will always have a place.
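
            (For a rough sense of the per-frame times behind those numbers, a small sketch; the frame rates and monitor-latency figures are the ones quoted above.)

              # Frame time alone at the frame rates mentioned above.
              for fps in (30, 60, 144, 250):
                  print(f"{fps:>3} FPS -> {1000 / fps:.1f} ms per frame")
              # 30 -> 33.3 ms, 60 -> 16.7 ms, 144 -> 6.9 ms, 250 -> 4.0 ms.
              # Going from 60 to 144 FPS shaves ~10 ms off every frame; swapping a ~5 ms
              # monitor for a <1 ms one saves a few more. Small numbers individually, but
              # they add up against an input-to-photon budget of only a few tens of ms.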

          • acymetric says:

            I’d say that “games as streaming” are most likely a competitor for the console market, but robust local hardware will always have a place.

            I would also like to believe that eventually there will be a backlash against everything being a subscription model and people will want to go back to owning stuff (I’ve been there for like 10 years already) but I’m not sure what it would take to reach that point or if we ever will.

          • Clutzy says:

            It might be useful to examine PVP gaming to understand how much of an advantage is perceived to come from reducing latency. PVP gamers will happily pay thousands of dollars to increase their frames from 60 FPS to 144 or even 250 FPS. They will also pay for a low latency monitor, reducing latency from a more typical 5 ms to < 1 ms. They will do both of these things even if they are playing on a 60 Hz monitor. That’s how much they want to reduce input lag.

            One of the difficulties is teasing out latency from inconsistency, because they are so often correlated. It’s true that an upper bound of around 250 ms seems to be what makes it nearly impossible to be competitive in games like CS and Quake, with slightly higher limits for slower games like EQ and WoW. However, almost no one has a 250 ms connection that is consistent, just based on how the internet works. It’s usually a 150-500 ms connection with spikes, and that just murders you.

          • HeelBearCub says:

            @Clutzy:
            If your ping to the server is 150 ms you have a pretty cruddy internet (overall network setup) or you are playing on a server on another continent.

            But (and this is a big but) 150 ms of lag from when an enemy action affects you is very, very different from 150 ms of input lag. 150 ms of enemy desync is playable. 150 ms of input lag is going to feel like everything you do is happening in tar.

            If you snap your mouse or controller to the side and you have that much lag until your camera starts to move on screen, the only game you will want to play is something involving naval fleet maneuvers.

          • Clutzy says:

            I’m just referring to what I know about old CS:GO non-LAN competitions. Often you’d have American and Russian players playing against each other (or against EU teams) in these competitions, and at certain points it was manageable, but very often the lag spikes just ruined everything. Obviously all the big stuff has always been LAN for that reason, but many qualifiers used to work like that (although now it’s big enough that everyone can usually do regional qualifiers).

          • HeelBearCub says:

            @Clutzy:
            You are right that variable “desync” is harder to deal with than steady desync. I’d generally take higher but steady desync over lower but greatly variable desync.

            This also is true for frame rates BTW.

            But this doesn’t really have anything to do with the question at hand, which is about input lag. Having your client respond immediately to inputs is much more important than how delayed your view of the server state (and other clients) is. They are apples and pears: similar, but quite different.

            As long as the view I have on screen is consistent, and my controls cause nearly instant response on screen, I can play. But if I have to snap my camera view ~90 degrees to respond to an incoming threat, it will be “very hard” to stop my turn precisely if the camera doesn’t even start to move until after my mouse movement has stopped.

          • deciusbrutus says:

            Not discussed: forms of gaming that are more sensitive to hardware weakness than to input lag.

            Dwarf Fortress can overpower anyone’s PC with pathfinding and temperature calculations, but would tolerate 100ms input lag pretty well.

            Large Civilization games will chug on credible hardware, but again will tolerate high input lag.

            The service doesn’t have to be judged only by the specific games that would run worst on it.

          • HeelBearCub says:

            @deciusbrutus:
            Absolutely, 100%. That’s why I said in the beginning that there are certainly plenty of examples of people happily playing console games in high-input-lag environments (consoles on a high-lag TV being an easy example). The game itself determines how much or how little input lag you are willing to tolerate. I explicitly pointed out that streaming computing services could definitely threaten the console market because of this.

            I just was pointing out that low input lag is definitely highly prized in many cases, and that this is always an advantage for (more) local hardware.

          • deciusbrutus says:

            Console games have a very particular following; because the typical console controller is a thumbstick that does not allow very precise control, console games can’t require very precise analog controls.

            Console games optimized for streaming could simply increase the size of their timing windows to mask the input lag. New AAA titles could probably even afford to make their controls retroact, so that they seem responsive.

          • HeelBearCub says:

            @deciusbrutus:
            The fact that controller-based games use a “keep moving until I say stop” approach (as opposed to a mouse-based “move to here” control scheme) does help them. Some amount of interpolation (prediction) of when the user is likely to stop can also help.

            But if you have 150ms of lag between stopping your controls and the screen responding, you ARE going to feel that, much in the same way that a speaker whose own voice is echoed back to them will feel it. Games that I play on a controller on the PC, like Rocket League or Dark Souls, feel much better to play than they did on console. In fact, Dark Souls Remastered changed the experience of playing Blighttown from one of the most frustrating areas in the game (due to lag induced by frame rate issues) to nothing worse than the normal “fuck you, this is Dark Souls.”

            Input lag is just a bitch to deal with, and there is no way around that in a game that tests your reactions to novel events. You need some indication of what you have done in order to adjust your future actions, and when that feedback is delayed, it limits you.

      • JPNunez says:

        I assume the plan is “serve the people who can use this today, and expect this creates demand for better infrastructure, while expecting to keep a first mover advantage until then”.

        • woah77 says:

          Yeah, that mirrors my thoughts on it. Someone asked me about it at my convention meeting on Saturday and my response was basically “The infrastructure doesn’t exist yet.” Although my prediction was a little more pessimistic than yours was. I expect that what will happen is that it’ll flop and then over the next five years or so our infrastructure will upgrade and someone will try again and succeed.

          • JPNunez says:

            It can flop, and Google’s track record says they might cancel it after a while. My prediction is that it will stay alive for a decade without taking over the console market, but by then the streaming market may be different enough for them to not be important anymore.

            If anything, MS seems way better poised to take it all. They have the cloud infrastructure too, plus a current console, and may ride the market on both sides without committing to either, letting people use whatever fits them better. My prediction is that in the end, streaming’s biggest player will be MS – “the end” being whenever streaming gaming brings in twice the revenue of consoles.

      • Murphy says:

        Online FPS games typically run 2 copies of the map, one on the client and one on the server, the server copy of course sans actual graphics.

        FPS games then get to cheat: if the connection stutters while you run down a hallway, you may never notice, because you continue running down the local copy of the hallway until the client and server catch up with each other. It’s only if the connection really cuts out and the server starts disagreeing with the local client, or the time steps get bigger than the server will allow, that players start lagging badly enough to really notice.

        Hell, it’s not unusual in FPS games for deaths to involve the server making some slight edits to recent history as far as the clients are concerned. Bob shoots at Mike and the server agrees he hit him; meanwhile Mike thinks he made it round the corner, but then the server updates his client that he’s dead on the floor a few steps back.

        Try to do that with screen frames and you’re gonna have a bad time.
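
        (A toy Python sketch of that client/server split, to make the “two copies of the map” idea concrete; this illustrates the general prediction-and-reconciliation technique, not any particular engine’s netcode.)

          # Toy model: the client moves its local copy immediately, remembers its
          # inputs, and reconciles when an authoritative server state arrives late.
          from dataclasses import dataclass, field

          @dataclass
          class Client:
              position: float = 0.0                         # local (predicted) copy
              pending: list = field(default_factory=list)   # inputs not yet acked by server

              def apply_input(self, seq: int, move: float):
                  self.position += move                     # predict instantly, no waiting on the network
                  self.pending.append((seq, move))

              def on_server_state(self, acked_seq: int, server_pos: float):
                  # Server is authoritative: rewind to its state...
                  self.position = server_pos
                  # ...drop acknowledged inputs, and replay the ones still in flight.
                  self.pending = [(s, m) for s, m in self.pending if s > acked_seq]
                  for _, move in self.pending:
                      self.position += move

          client = Client()
          client.apply_input(1, +1.0)   # runs down the hallway; screen responds at once
          client.apply_input(2, +1.0)
          # A late server update arrives, acknowledging input 1 but correcting the
          # position slightly (say the server decided a collision stopped part of the move).
          client.on_server_state(acked_seq=1, server_pos=0.8)
          print(client.position)        # 1.8: the server's correction plus the replayed input 2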

    • Vitor says:

      For most day-to-day stuff, the speed of light is so many orders of magnitude above any other speed that it’s not worth thinking about. If someone told you that crossing the Atlantic will take you at least 50ms no matter what mode of travel you use, you would look at them like they’re crazy.

      The speed of light is not the only limiting factor in cloud gaming, as others have pointed out. However, it is definitely relevant enough that you actually need to include it in your calculations.

      • Murphy says:

        Yep, in network computing it can yield some interesting behavior.

        There’s a famous old story of the 500-mile email.

        https://www.ibiblio.org/harris/500milemail.html

        Here’s a problem that *sounded* impossible… I almost regret posting the
        story to a wide audience, because it makes a great tale over drinks at a
        conference. 🙂 The story is slightly altered in order to protect the
        guilty, elide over irrelevant and boring details, and generally make the
        whole thing more entertaining.

        I was working in a job running the campus email system some years ago when
        I got a call from the chairman of the statistics department.

        “We’re having a problem sending email out of the department.”

        “What’s the problem?” I asked.

        “We can’t send mail more than 500 miles,” the chairman explained.

        I choked on my latte. “Come again?”

        “We can’t send mail farther than 500 miles from here,” he repeated. “A
        little bit more, actually. Call it 520 miles. But no farther.”

        “Um… Email really doesn’t work that way, generally,” I said, trying to
        keep panic out of my voice. One doesn’t display panic when speaking to a
        department chairman, even of a relatively impoverished department like
        statistics. “What makes you think you can’t send mail more than 500
        miles?”

        “It’s not what I *think*,” the chairman replied testily. “You see, when
        we first noticed this happening, a few days ago–”

        “You waited a few DAYS?” I interrupted, a tremor tinging my voice. “And
        you couldn’t send email this whole time?”

        “We could send email. Just not more than–”

        “–500 miles, yes,” I finished for him, “I got that. But why didn’t you
        call earlier?”

        “Well, we hadn’t collected enough data to be sure of what was going on
        until just now.” Right. This is the chairman of *statistics*. “Anyway, I
        asked one of the geostatisticians to look into it–”

        “Geostatisticians…”

        “–yes, and she’s produced a map showing the radius within which we can
        send email to be slightly more than 500 miles. There are a number of
        destinations within that radius that we can’t reach, either, or reach
        sporadically, but we can never email farther than this radius.”

        “I see,” I said, and put my head in my hands. “When did this start? A
        few days ago, you said, but did anything change in your systems at that
        time?”

        “Well, the consultant came in and patched our server and rebooted it.
        But I called him, and he said he didn’t touch the mail system.”

        “Okay, let me take a look, and I’ll call you back,” I said, scarcely
        believing that I was playing along. It wasn’t April Fool’s Day. I tried
        to remember if someone owed me a practical joke.

        I logged into their department’s server, and sent a few test mails. This
        was in the Research Triangle of North Carolina, and a test mail to my own
        account was delivered without a hitch. Ditto for one sent to Richmond,
        and Atlanta, and Washington. Another to Princeton (400 miles) worked.

        But then I tried to send an email to Memphis (600 miles). It failed.
        Boston, failed. Detroit, failed. I got out my address book and started
        trying to narrow this down. New York (420 miles) worked, but Providence
        (580 miles) failed.

        I was beginning to wonder if I had lost my sanity. I tried emailing a
        friend who lived in North Carolina, but whose ISP was in Seattle.
        Thankfully, it failed. If the problem had had to do with the geography of
        the human recipient and not his mail server, I think I would have broken
        down in tears.

        Having established that–unbelievably–the problem as reported was true,
        and repeatable, I took a look at the sendmail.cf file. It looked fairly
        normal. In fact, it looked familiar.

        I diffed it against the sendmail.cf in my home directory. It hadn’t been
        altered–it was a sendmail.cf I had written. And I was fairly certain I
        hadn’t enabled the “FAIL_MAIL_OVER_500_MILES” option. At a loss, I
        telnetted into the SMTP port. The server happily responded with a SunOS
        sendmail banner.

        Wait a minute… a SunOS sendmail banner? At the time, Sun was still
        shipping Sendmail 5 with its operating system, even though Sendmail 8 was
        fairly mature. Being a good system administrator, I had standardized on
        Sendmail 8. And also being a good system administrator, I had written a
        sendmail.cf that used the nice long self-documenting option and variable
        names available in Sendmail 8 rather than the cryptic punctuation-mark
        codes that had been used in Sendmail 5.

        The pieces fell into place, all at once, and I again choked on the dregs
        of my now-cold latte. When the consultant had “patched the server,” he
        had apparently upgraded the version of SunOS, and in so doing
        *downgraded* Sendmail. The upgrade helpfully left the sendmail.cf
        alone, even though it was now the wrong version.

        It so happens that Sendmail 5–at least, the version that Sun shipped,
        which had some tweaks–could deal with the Sendmail 8 sendmail.cf, as most
        of the rules had at that point remained unaltered. But the new long
        configuration options–those it saw as junk, and skipped. And the
        sendmail binary had no defaults compiled in for most of these, so, finding
        no suitable settings in the sendmail.cf file, they were set to zero.

        One of the settings that was set to zero was the timeout to connect to the
        remote SMTP server. Some experimentation established that on this
        particular machine with its typical load, a zero timeout would abort a
        connect call in slightly over three milliseconds.

        An odd feature of our campus network at the time was that it was 100%
        switched. An outgoing packet wouldn’t incur a router delay until hitting
        the POP and reaching a router on the far side. So time to connect to a
        lightly-loaded remote host on a nearby network would actually largely be
        governed by the speed of light distance to the destination rather than by
        incidental router delays.

        Feeling slightly giddy, I typed into my shell:

        $ units
        1311 units, 63 prefixes

        You have: 3 millilightseconds
        You want: miles
        * 558.84719
        / 0.0017893979

        “500 miles, or a little bit more.”

        Trey Harris
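
        (The punchline is easy to reproduce without the units tool; a quick sketch using the vacuum speed of light, as the story does.)

          # 3 milliseconds of light travel, as in the story's units(1) session.
          C_MILES_PER_SEC = 186_282
          print(f"{C_MILES_PER_SEC * 0.003:.0f} miles")      # ~559 miles
          # Going the other way, 520 miles corresponds to a connect timeout of:
          print(f"{520 / C_MILES_PER_SEC * 1000:.2f} ms")    # ~2.79 ms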

    • Frog-like Sensations says:

      Thanks for all the informative replies.

      I’m certainly interested in all the other more prosaic latency issues facing cloud gaming as well. But as far as I can tell there has been no substantial pushback against the base claim that Google should be capable of making enough servers for the speed of light to not really be part of the problem anymore, at least not a major part. From this lack of pushback, I’m inferring that all the authoritative people saying otherwise are as full of it as I’d suspected.

      • acymetric says:

        But as far as I can tell there has been no substantial pushback against the base claim that Google should be capable of making enough servers for the speed of light to not really be part of the problem anymore, at least not a major part.

        Doesn’t that only follow for games with large, well-distributed player bases, and require that people only play against others on their nearest servers?

        It doesn’t matter how many servers they have worldwide: if I want to play with my friends in Taiwan or wherever, the speed of light is a hard lower limit on latency (plus whatever other latency is introduced).

        • Murphy says:

          Nah, this is for single-player gaming where your graphics card is hundreds of miles from you.

          Any multiplayer-related latency would get layered on top of the latency between your remote graphics card and your screen. Your remote graphics-card server would connect out to the Taiwan server just like your home PC does now.

        • Frog-like Sensations says:

          In addition to Murphy’s response, I’ll add that I’m mostly interested in single-player games anyway, so I’m not that concerned on the multiplayer front. But my taste in single-player games is still such that enough input latency could drive me crazy.

      • Murphy says:

        It largely comes down to how well they can simulate a dedicated 1-meter 10.2Gbps connection over a Mb-scale, [large number]-mile long shared pipe.

        If major companies struggle to get smooth real-time 800*640 low-fps video and audio to work between meetings in their own internal corporate offices with thick corporate pipes, I’m not gonna hold my breath for high-def gaming to work well enough that I wouldn’t spend half my time swearing at the screen.

        They might pull some magic out… but my guess is that it’ll continue to be crap until people have Gb-scale connections running protocols that can guarantee smooth transmission.

        • Frog-like Sensations says:

          Yeah, I’m not exactly throwing my consoles in the trash already (or no longer keeping one eye on GPU prices). From what I’ve heard, the current levels of latency are not something I would have enjoyed playing Celeste or Sekiro with.

          I guess my original post was mostly inspired by the primate part of my brain that is more interested in ensuring people don’t wrongfully gain status through bad science than in the practical matters.

    • dick says:

      One other thing I’d add to this discussion: The average home network connection arguably wasn’t good enough for Facebook when Facebook was built, either.

      • acymetric says:

        What? The original Facebook implementation was incredibly lightweight. Unless you are suggesting that the average home network connection arguably wasn’t good enough for anything (which feels true now but wasn’t true then in practice).

          The key difference in either case is that if your network is too slow for Facebook, Facebook just loads slowly. Same with early editions of YouTube or other video streaming. So you buffer, wait for it to load (maybe go make a sandwich or watch some TV), and then come back and it is ready.

        With live gaming, there is no intermediate “it works, but slowly” there is either “usable” or “not usable”.

        This whole system would also leave people not in or near fairly large urban areas out in the cold.

        • dick says:

          What? The original Facebook implementation was incredibly lightweight.

          Very few early FB users were on home connections. But the relevant point is, “half of your potential customers don’t have a fast enough computer/pipe/etc” is not a death sentence for a new service, as long as the other half do.

          With live gaming, there is no intermediate “it works, but slowly” there is either “usable” or “not usable”.

          Some Quake 3 users never played with a ping over 50 and others never saw one below 200, especially the people in rural areas you mentioned; does that mean it was “usable” or “not usable”? Stadia will presumably work fine if you have a good server nearby and not work well if you don’t, just like Q3, right?

  59. Aapje says:

    Scott,

    Shouldn’t you note that this thread is culture-war free?

  60. Erusian says:

    In the Judgment of Paris, Paris was forced to declare Hera, Aphrodite, or Athena the fairest goddess. Naturally, being Greek Goddesses, they tried to bribe him. Hera offered to make him ruler of all Asia. Athena offered to make him the wisest and most skilled man, in both peace and war, in all the world. And Aphrodite offered him the truest love of the most beautiful woman in the world.

    Which would you choose? Why?

    Edit: For the female-identifying out there (or otherwise not interested in women), you can of course replace ‘most beautiful woman’ with your preference.

    • Evan Þ says:

      “Most beautiful” is underspecified. Unlike the prototypical Greek man, I’m looking for a wife who’s got beautiful character more than one who’s got beautiful appearance. So Aphrodite’s out. That leaves Hera and Athena… Hera’s offer sounds interesting, but I’d probably need Athena’s offer to do much useful with it. And what’s more, I remember what happened to one guy who just might’ve taken Hera up on the deal.

      So Athena it is. And for that matter, I’d say she really is “the fairest” in the other sense of the word.

      • Lambert says:

        Some say a throng of horsemen, others infantry,
        others a fleet is the most beautiful thing
        on this dark earth, but I say it is
        whatever you love.

    • Aapje says:

      Wisest and most skilled, because then you are most skilled at conquering the world and at wooing the most beautiful women. So it is the superior option.

      Being made ruler doesn’t mean that you will be granted the skill to rule. If you are just parachuted into the position, there is a good chance that it doesn’t last long before you are figuratively or literally stabbed in the back.

      • Bugmaster says:

        I was going to post a similar reply, but you beat me to it.

        Additionally, this “ruler of all Asia” proposition reminds me of Lex Luthor’s line from the JLA cartoon: “President? Are you kidding? Do you realize how much power I’d have to give up just to be President?”

      • There is probably no one in any time period who could ever conquer all of Asia, no matter their qualities. And the contemporary world places far more constraints on conquering your weaker neighbors than most of world history did.

    • Robert Jones says:

      Athena’s bribe seems obviously the best (like choosing the orange pill in Scott’s story), but of course I would choose whichever of them was the fairest.

    • There has to be a catch to Athena’s bribe, otherwise you’d be skilled enough in war to conquer all Asia, and skilled enough in peace to rule it, and being so skilled in general would mean that you wouldn’t have much trouble attracting beautiful women, whether or not they are the most beautiful, or whether or not they can offer you the truest love. There has to be a catch, otherwise you’d obviously pick the option that makes you the best at everything.

      • thirqual says:

        This. Athena is as petty as the rest of that sorry bunch.

      • Chalid says:

        Being most skilled in war doesn’t make you skilled enough to actually conquer all Asia. Presumably the second-most-skilled person will beat you if you have a bad day, or if his army is just better or larger.

        And where are you getting an army from anyway? Sure, you could start building one from scratch using your skill in peace, but that’s a lengthy process and there are lots of chances for things to go fatally wrong along the way.

      • @Chalid
        I’m going with the other commenters who point out that receiving Asia by fiat doesn’t mean you are able to keep it. So although being the most skilled doesn’t mean you automatically get to conquer Asia, all other things (other armies being bigger, weather, geopolitics, etc.) being equal, being the most skilled gives you the best chance as far as the things you can actually control. You raise a good point though, because although any given commenter here plonked in charge of a continent would surely screw things up, if you had pre-existing leadership skill sufficient to rule large territories, then taking a shortcut past the military conquest part may be the wiser option.

        On the other hand, we could question whether we even want all of Asia. Besides bragging rights, or the desire to enact your morality in the world, the main upside of having all of Asia would be to live a lavish lifestyle with vast resources at your command. However, if you were the most skilled person on the planet then you needn’t bother with becoming a King or Emperor to have sufficient access to that lifestyle, as there are many ways you could use your skills to make vast wealth even as a lesser lord. You could invent loads of amazing devices unknown to the ancient world and have Kings and Emperors as your benefactors instead.

      • JPNunez says:

        I think the catch goes away if you consider the goddesses will support Paris if chosen.

        So the decision really comes down to Hera (wife of Zeus, the main god) or Athena (daughter of Zeus and carrier of the Aegis and other symbols, and friend of Nike, goddess of Victory). Aphrodite is clearly the worst option.

        Honestly I’d throw in my lot with Athena. Hera’s relationship with Zeus is fragile, and in whatever conflict you end up in, Hera may not be on Zeus’s side anyway – or if she is, the alliance may be subject to betrayals.

    • Peter says:

      Athena. Whichever you pick, there’s going to be trouble from the other two. Skill in war would be remarkably handy in handling that trouble.

    • nameless1 says:

      I am pro-truth, so obviously Aphrodite. Surely the goddess of beauty and sex is objectively the most beautiful, whatever “objectively” means in this context.

      As for the rewards, Athene.

      • JPNunez says:

        Dunno if it follows. Her brother Ares, the god of war, gets his ass handed to him by Athena in the Iliad, so I dunno if they are guaranteed to be supreme in their respective domains.

    • bullseye says:

      Which definition of Asia are we going with?

      • JPNunez says:

        I assume it’s Asia Major and Minor (in the classic meaning), which IIRC is modern Turkey and Syria, maybe a few countries over. Iran is probably too much.

        Given that Paris was secretly Priam’s son, and Troy was in Turkey, I take this to mean Paris takes over Priam’s throne and goes on to conquer the rest of Turkey.

        e: checking, Hera offers dominion over Europe and Asia Minor. Yeah I am gonna take a pass on that. Sounds like asking for trouble.

        • Douglas Knight says:

          I had no idea that anyone ever used the phrase “Asia Major.” I assumed that “Asia Minor” meant “the small thing that earlier peoples meant by Asia.”

          I can’t find anyone saying that Asia Major meant the Levant. Some sources say that Asia Major meant Mesopotamia or Persia or both. Would Mesopotamia really include the Levant? I guess Mesopotamia+Persia is big enough that you might just throw in the Levant. (These sources don’t make clear who uses it this way.)

          This book says that it was first used in the 4th century and meant all of Asia, except Asia Minor. Here’s his list: “Sarmatia Asiatica with all the Scythian tribes, Colchis, Iberia, Albania, Armenia, Assyria, Babylonia, Media, Susiana, Persis, Ariana, Hyrcania, Margiana, Bactriana, Sogdiana, India, and the country of the Sinae and Serica.” The Levant is a glaring omission. I guess he must include it in Assyria, which can mean too many things (as the author notes!), which is a good reason not to use it in lists like this. [Maybe I shouldn’t have assumed that you meant the Levant by “Syria.”]

          I’m pretty sure that in the preclassical period Asia meant Anatolia. In the classical period some people extended it to something bigger, but there was a big range of usage. “Major” was never used until late antiquity and was never popular.

      • Erusian says:

        The Ancient Greek definition: either all of Asia today (including the northern steppes, China, and India), or the area from Turkey to Persia to Arabia, though explicitly not including Egypt or any part of Africa.

        Another way to think of it: The Persian Empire minus its Greek and Egyptian territories. Or a third to half of the world in the classical Greek view.

    • deciusbrutus says:

      The three finalists are disqualified for attempted bribery. I’m keeping the apple.

    • Protagoras says:

      They all play favorites (though Aphrodite’s kind of a flake). It’s probably most faithful to the original to take Aphrodite as offering the most desirable woman in the world, but her offer still isn’t worth it; Athena and Hera are vengeful and will screw you over for not picking them, and Aphrodite can’t protect you (which of course describes the fate of Paris when he makes that choice in the original story). But if you choose Hera, you can probably count on her taking care of you; you don’t have to worry about getting overthrown the next day when you have the blessing of the queen of the gods. You can count on your love life sucking, and having Athena out to get you is going to be annoying as well, but that is probably more survivable than having both Athena and Hera working together against you. Having said that, in the end I pick Athena. With the queen of the gods actively undermining you, your prospects for success as a conqueror aren’t as good as some are suggesting, and your love life will still suck. But you’ll have the wisdom to make good use of what you can get.

      On the bribery/corruption angle, it occurs to me that if you are supposed to be judging the nature of the goddess, rather than being completely shallow, all of the bribes are tied to their natures. So perhaps it isn’t unfair of them to be offering those bribes.

    • Joseph Greenwood says:

      By “Asia” did Hera mean the continent or “Asia Minor”?

    • honoredb says:

      With the benefit of “hindsight”, it seems like the catch here is that whichever goddess you choose will want you to be legendary for her gifts, which makes it into a bit of a curse (see also the picnic that being the Chosen People of Yahweh turned out to be). Paris choosing Aphrodite means that not only does Helen love him, but a war is fought over Helen’s beauty, which does not end well for Paris. Odysseus being the favorite of Athena means that he gets endless opportunities to demonstrate his ability to overcome challenges (it’s amazing how often he just straight-up cries with self-pity in the Odyssey). Being famous for ruling all of Asia at least means you won’t rule it only nominally and briefly–either you’re famous for your skill in conquering it, or for your longevity or choices as a ruler.

      Honestly I’m probably screwed no matter what; my best chance of getting out in one piece might be to declare that they are all the fairest because beauty is subjective, and “prove” it by finding three other people able to sincerely declare that each goddess is fairest. Then throw myself on the mercy of the contestants.

      • Tim van Beek says:

        “I cannot tell, you are all wonderful, I have to go now”.

        I can’t speak from experience whether this works with vain, vengeful immortals, but it kinda works with real people like e.g. women choosing dresses for a wedding.

    • Furslid says:

      I’d pick Hera’s offer. It’s the sure bet, assuming that I’m already marginally competent.

      I don’t want to pick Aphrodite. The most beautiful woman in the world is nice, but not super important. Anyone in the top .5% is probably good enough to make me happy. In addition, ancient societies were either polygamous or accepted concubines and she only offered me one.

      Athena’s offer is tempting, but loses out to Hera’s. Being the wisest and most skilled doesn’t assure victory. Victory also requires starting resources and position, and those are Hera’s domain. I’d rather trust my existing moderate skills with great resources than have great skill and lack resources.

      In addition, Hera is the most vindictive of the three goddesses and the one I’m most afraid of upsetting.

      • Protagoras says:

        Athena is extremely vindictive. I suppose there are more stories about Hera exercising that trait, but I submit that it is more because people mostly knew better than to screw with Athena, while in Hera’s case there was at least one person (Zeus, obviously) who just wasn’t scared of pissing her off.

        • Furslid says:

          None of the Greek gods are nice. Obviously I’d be sacrificing to all three of them afterwards.

        • Theodoric says:

          I remember in Cryptonomicon, it was stated that Athena was less of a shithead than your typical Greek god. Did Stephenson mess up (or just not care)?

          • Eric Rall says:

            Athena’s vindictiveness, AFAIK, seems pretty narrowly targeted to people who were party to what were, in the context of that culture, fairly major insults against her.

            Specifically, Arachne persistently bragged of being a better weaver than Athena (who had weaving as part of her Goddess Portfolio). Athena responded first by warning Arachne (in human guise) that she was offending against a major god and urging her to withdraw the insult; then, when Arachne refused, she challenged her to a weaving contest to prove her boast. Arachne’s entry in the contest was a tapestry depicting all the jackass things the various Olympians did to mortals (effectively doubling down on her “insulting the gods” motif), and only then did Athena lash out at her (driving her to commit suicide, then reviving her and turning her into a spider).

            The Medusa story is particularly harsh to modern readers, since we read Medusa as an innocent victim who shouldn’t be blamed for her unwilling participation in Poseidon’s act of defiling one of Athena’s temples. The classical Greeks, however, would probably have agreed with Athena that, willing or no, Medusa was part of the defiling of the temple and thus catches a share of the punishment for it (the whole punishment, in this case, since Athena wasn’t in a position to punish Poseidon).

            In both cases, she comes off badly to modern eyes largely because our norms of honor and morality have moved on quite a bit from those of Greek and Roman writers and audiences 2000-3000 years ago. And even by modern standards, she compares favorably to the likes of Zeus or Ares, whose misdeeds are a lot more numerous and arbitrary than Athena’s.

    • J Mann says:

      Athena for the reasons stated. I would find being wise and skilled very congenial, I could probably accomplish a lot of good, and wisdom and skill would be most helpful in responding to the blowback from choosing any one of the three.

    • Tim van Beek says:

      As a side note, does anybody think that Athena’s offer is out of character? Because she should have known that no man would choose wisdom if love is an alternative.
      There is this joke that says “God was unjust with his distribution of talents, except for intelligence, because everybody is happy with his/hers.”
      Does anybody know a story/myth/fairytale where the protagonist actually wishes for more intelligence or wisdom?

      P.S.: In case you, dear reader, are an exception, please substitute “no man would” with “most people wouldn’t” etc.

      • J Mann says:

        Well, there’s the classic example.

      • Protagoras says:

        Didn’t Odin sacrifice his eye for wisdom?

        • Nornagest says:

          It’s ambiguous. Some versions of the story make it sound like he got more knowledge out of the deal, not necessarily more wisdom; it’s usually how he learns the runes, for example. The Hávamál is one of the clearer versions:

          I know that I hung
          upon a windy tree
          for nine whole nights,
          wounded with a spear
          and given to Othinn,
          myself to myself for me;
          on that tree
          I knew nothing
          of what kind of roots it came from.

          They cheered me with a loaf
          and not with any horn,
          I investigated down below,
          I took up the runes,
          screaming I took them,
          and I fell back from there.

          • Le Maistre Chat says:

            I know that I hung
            upon a windy tree
            for nine whole nights,
            wounded with a spear
            and given to Othinn,
            myself to myself for me;

            Hanging on a tree, wounded with a spear, as a sacrifice to himself.
            Is Odin what happens when a Player Character tries to game the same rules Christ followed?

          • Nornagest says:

            It’s been pointed out before. And it’s probably not a coincidence. But it’s hard to tell exactly how much Christian influence there is in our sources for Norse paganism, since most of them were written down centuries after the fact by Christians, and even the stuff that wasn’t was written long after contact with Christianity (and even with Islam, by way of the Varangians and others).

            Baldr gets a lot of attention for this, too — a son of the chief god, associated with light and spring, conspicuously good-natured among the grim and vicious Norse pantheon, killed before his time by malice and treachery, who will return after the end of the world and usher in a new golden age.

          • Le Maistre Chat says:

            There’s a lot to unpack here, and almost no evidence to do it with.
            It’s hard to tell exactly how much Christian influence there is in the Norse material, since it was written down so late.
            How similar was the Woden of the 90s AD, when Tacitus mentions him as chief Germanic god, to the character of Odin we have written down?
            How long before the 90s AD did Woden replace Tiw as the head of the Germanic pantheon?

          • Nornagest says:

            How similar was the Woden of the 90s AD, when Tacitus mentions him as chief Germanic god, to the character of Odin we have written down?
            How long before the 90s AD did Woden replace Tiw as the head of the Germanic pantheon?

            There seem to be some differences between the pantheon we see in Tacitus’s Germania and the one we see in the late Norse material, but that doesn’t necessarily mean much — for one thing, Tacitus took a characteristically Roman syncretic view of the Germanic pantheon (he refers to Wotan as Mercury and Ziu [Týr] as Mars; his Hercules is probably Donar, and he also mentions Isis, whose identity is anyone’s guess). For another, he doesn’t go into much detail about actual beliefs, spending more time on ritual, which he probably considered more significant. And finally, he’s not talking about quite the same ethnic group, and pre-Christian religion in most places showed quite a bit of regional variation.

            As to when Woden became the chief god, about all we have to go on there is placenames, and those are tough to date. Dedications to Týr (under various names) appear throughout Scandinavia and the British Isles, though.

    • Conrad Honcho says:

      I pick Aphrodite. Ruling Asia sounds like a big headache, and as an engineer I don’t have much use for skill “in peace and war.”

    • 10240 says:

      The one with the biggest tits.
      (Wait, that’s a different joke.)

    • The Nybbler says:

      We know picking Aphrodite turned out badly. Hera is infamous for her jealousy and temper; making a deal with her is sure to end badly sooner rather than later, when she detects some real or imagined slight. And we’re a genre-savvy people now; we know this is a triple-bind; picking Athena will work out about as long as it takes Hera and Aphrodite to gang up on us.

      So “Paris? No, no, I’m London. Paris is downstairs, I’ll go get him for you. Bye!” (Then I change my name to “York” or “Berlin”)

      • Nick says:

        So “Paris? No, no, I’m London. Paris is downstairs, I’ll go get him for you. Bye!” (Then I change my name to “York” or “Berlin”)

        If your name is Stockholm you’re just screwed. But if your name is Lima maybe they’ll take mercy on you?

        • Le Maistre Chat says:

          If his parents had dubbed that boy Stockholm instead of Paris, someone would have raptio-ed him, rather than him taking Helen.

    • FrankistGeorgist says:

      I would shock everyone by offering the Apple to Persephone. Then, when any/all of the 3 goddesses contrived to smite me, I’d maybe get a good deal in the afterlife. Maybe not Elysian fields, but we could have tea and pomegranates every once in a while or something.

      To take your request seriously, Hera because she’s the closest to Zeus and seems like you’d want to be on his good side. Play up that only she’s fit for him etc etc. Try and play both sides. Get Asia, do my best to institute meritocracy and order and squishy enlightenment values and maybe a land value tax while I had divine support and then when I felt the tide turn convert to a Hellenic-Judeo-Zoroastrianism of my own devising and get myself smote. See how long it lasts by asking people as they show up in the afterlife. Like playing a game of civ, basically.

      • JPNunez says:

        Gotta keep in mind that the whole Trojan War is a long play by Zeus to get rid of demigods and overpopulation.

        You may want to play along and send as many demigods against each other in the most balanced setups to maximize glorious deaths and avoid having Zeus concoct a huge war on your doorstep.

        If all else fails, just organize a huge demigod single elimination tournament. You are guaranteed to end up with a single demigod, and you can poison him or something.

        • Le Maistre Chat says:

          We have a disproportionate number of coders here. Who will code this fighting game?
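
          Not the fighting game itself, but the bracket logic is simple enough to sketch. A minimal, purely illustrative Python sketch; the demigod names and the coin-flip “duel” stand-in are assumptions for the example, not anything from the thread:

            import random

            def single_elimination(contestants, rng=None):
                """Run a single-elimination bracket until one contestant remains.
                Each 'duel' here is a coin flip, standing in for the actual fighting game."""
                rng = rng or random.Random()
                pool = list(contestants)
                while len(pool) > 1:
                    rng.shuffle(pool)
                    winners = []
                    # Pair contestants off; an odd one out gets a bye to the next round.
                    for i in range(0, len(pool) - 1, 2):
                        winners.append(pool[i] if rng.random() < 0.5 else pool[i + 1])
                    if len(pool) % 2:
                        winners.append(pool[-1])
                    pool = winners
                return pool[0]  # exactly one demigod left standing, guaranteed

            if __name__ == "__main__":
                demigods = ["Achilles", "Herakles", "Perseus", "Theseus", "Aeneas"]
                print(single_elimination(demigods, rng=random.Random(42)))

          Single elimination with byes roughly halves the field each round, so any starting pool ends with exactly one survivor; the poisoning step is left as an exercise.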

          • JPNunez says:

            It’s probably a bad idea. A few years ago (a decade?) Tecmo Koei used their know how of battle games about Chinese and Japanese dynasties to make Troy: Legends of War or something like that. A game about the Troyan War where you battle thousands of enemies. It super flopped.

            For whatever reason, the occidental world does not see much value in the Iliad/Odyssey, beyond the Brad Pitt movie. Netflix had a series adaptation a couple of years ago, which made waves by making Achilles, Zeus, and a couple of other characters, black people. That aspect was ok, but the series itself fell flat on some other aspects.

            This is where I make some comment about Confucianism, but maybe the Iliad is just too alien for us now. And there’s the issue of copyright, where companies will not be as easily compelled to invest in characters they don’t own (Here’s where people mention stuff like Sherlock Holmes). It’s probable that in the modern world, feminism has made Homer less appealing, where women are mostly trophies and at best they remain at home and trick suitors. Or are Gods I guess.

            Or maybe the Iliad is just too short, and companies instead need Marvel sized universes to invest in. Of course, mythology is a lot bigger than Homer, but at some point you run out of mythic stories to tell.

            It’s really weird, cause I saw some data suggesting that Fantasy is a better genre for videogames than Scifi, not only in occident, but in the orient too. They seem to be easier to get into for people. Maybe Greek Myths somehow does not fit into the “fantasy” category, and only Tolkien derivatives do.

          • AG says:

            My guess is that the Trojan War isn’t good-and-evil enough to work with modern storytelling sensibilities. The grandeur of the big battle is at odds with how everyone involved is a selfish ass (except Hector), and characters that are closer to traditional heroes in other parts of their own history are pretty much just selfish asses in the context of the Trojan War.

            There are certainly stories (movies/TV/games) that can work with a “this is a genre soap opera” setup, but they don’t usually combine it with a big city siege and battle. See all of the Rome-based shows that have been popular, rooted as they are in all the backstabbing politics.

            The Aeneid and Odyssey are much more protagonist/antagonist based, with our heroes against an obstacle and a clearly defined desirable ending, and they’re journey narratives instead of a battle narrative, so they fare better.

            Btw, that swipe against feminism was entirely unnecessary. People were fine with the recent God of War: Parenting Edition and AC: Odyssey, and Xena is still beloved. There’s nothing inherent to the sword-and-sandal genre that feminists dislike.

          • John Schilling says:

            My guess is that the Trojan War isn’t good-and-evil enough to work with modern storytelling sensibilities. The grandeur of the big battle is at odds with how everyone involved is a selfish ass (except Hector), and characters that are closer to traditional heroes in other parts of their own history are pretty much just selfish asses in the context of the Trojan War.

            The 2004 movie version did a fairly good job, and was fairly well received IIRC, with a bunch of selfish asses all around and the primary conflict between Achilles as a sympathetic selfish ass and Agamemnon as a decidedly unsympathetic one. Hector, Priam, and Odysseus were the only generally selfless and honorable ones.

            What they did away with, and what probably doesn’t work with modern sensibilities, was the part where the Gods A: existed and B: were petty enough to use mortals as their proxies in their Olympian rivalries.

          • The Nybbler says:

            There’s two not-too-long-ago TV serieses based on a time “When the ancient gods were petty and cruel, and they plagued mankind with suffering”, but their Trojan War take was pretty far off.

          • John Schilling says:

            Fair enough, and I suppose two generations’ worth of “Clash of the Titans” fits into that paradigm as well. I think the difference is that we now tend to split “myths and legends” into one category of half-forgotten history and a separate category of made-up fantasy, and since Schliemann, the Trojan War gets put into the former category.

          • AG says:

            Yeah, “the Powers that Be are selfish asses” is actually a pretty popular trope nowadays, but the difference is that they are then contrasted and pitted against our genuinely heroic protagonists, who chafe at being pawns of the petty divine.

            The difference between the 1981 Clash of the Titans and the 2010 remake is pretty enlightening. The former does feature the gods being petty and Perseus just kind of going along with it, while the 2010 remake makes things much more starkly good-and-evil.
            But you can’t make the Trojan War into Star Wars.

            Just realized, though:

            just organize a huge demigod single elimination tournament

            Fate Stay What?

          • @John Schilling

            What they did away with, and what probably doesn’t work with modern sensibilities, was the part where the Gods A: existed and B: were petty enough to use mortals as their proxies in their Olympian rivalries.

            I really dislike this modern tendency of subtracting the mythical elements of mythological stories. I want an ancient tale to feel alien, and part of the reason for their morality feeling so different is that they believed there were squabbling supernatural agencies behind the fabric of the world, and that you could gain the favor of one or the ire of another. Without that it becomes sterile and I’d rather just watch a movie with a modern setting where characters do things because “it’s the right thing to do”.

            I also think that religious motivations for things are underplayed in movies set in the middle ages. There’ll be some window dressing of religion, but they’ll always translate it into secular logic in order that we think the hero’s and villain’s motivations make sense.

          • JPNunez says:

            @AG

            Not taking a swipe at feminism, but at Homer. You gotta admit that the Iliad ain’t the most feminist of stories. Whole chapters can go by without a woman appearing. The most proactive woman is Hera, who takes matters into her own hands by…seducing her husband, and modern adaptations that remove the gods have to build up the women’s roles in the saga (in the Brad Pitt movie, Briseis gets a ton more screen time, and the BBC series gives Helen a lot more to do and brings in the Amazons; it has gods, but their role is still minor). The Odyssey fares similarly.

            I don’t doubt that there are great sword and sandal/mythology stories with tons of great women out there, but when talking about Homer in particular, the record is not that good.

            Fate Stay What?

            AFAIK Fate/stay night centers on Arthurian legends. I suspect part of the issue with fantasy leaving out Greek mythology is that Britain has its own legendarium in the Arthurian stories, which in time evolved into the Tolkien/Howard stories that form the base of modern fantasy, which in turn makes Greek legends less attractive. Those British stories were inherited by America, which makes most of the media we consume nowadays. As for the rest of Europe, I don’t know. I suspect that Spain got the short end of the stick, because Don Quijote basically killed the romantic knight genre in the language, with nothing to replace it.

          • AG says:

            @JPNunez

            The Fate champions come from all mythologies, as well as historical figures. Arthur is one of the protagonists, but other notable characters include Alexander the Great, Gilgamesh, Medusa, Medea, and Herakles.

            And that doesn’t even get into the spinoffs.

            And Greek/Roman mythology still has a fair foothold in pop culture. Most notably, you have the Percy Jackson novels, but also the whole Wonder Woman section of DC. Most supernatural genre shows (Buffy et al.) tend to reference them, too, with Lost Girl outright featuring the Greek gods in the flesh. King Arthur is not a part of the SHAZAM acronym (that’s Solomon, Hercules, Atlas, Zeus, Achilles, and Mercury).
            And even medieval fantasy tends to steal the gladiator concept for its world-building, as well as reaching for the Greco-Roman pantheons before Norse/Egyptian/Asian ones.

            In literature, my impression is that swords-and-sandals fantasy is more likely to be written by lady authors, for whatever reason. So you’ll find more examples in YA, again because of the popularity of the gladiator concept.

            Also, the Iliad and Odyssey are still fairly popular as fodder for non-mythological modern retellings (1, 2, 3). So people still like it as a story, less as fantasy.
            No such resurgence for Jason and the Argonauts, though. That one seems to have really fallen through the cracks.

          • Dan L says:

            This thread is hitting some of my weirdly specific buttons.

            @ JPNunez:

            Not taking a swipe at feminism, but at Homer. You gotta admit that the Iliad ain’t the most feminist of stories. Whole chapters can go by without a woman appearing. The most proactive woman is Hera, who takes matters into her own hands by…seducing her husband, and modern adaptations that remove the gods have to build up the women’s roles in the saga (in the Brad Pitt movie, Briseis gets a ton more screen time, and the BBC series gives Helen a lot more to do and brings in the Amazons; it has gods, but their role is still minor). The Odyssey fares similarly.

            I can and have written long-form defenses of Homer as a promoter of strong female figures. While it is definitely true that the gender roles are not symmetrical and that there are plenty of disposable women, there are plenty of disposable men as well and it’s not obvious that the asymmetry is endorsed. Also, Athena.

            The Odyssey includes several very blatant shots at the Patriarch(y). Calypso in 5:129-160 is the most direct, but I’m a fan of the depiction of Arete.

            @ AG:

            Fate Stay What?

            The Grail Wars and the Servant system in general are unsuited to actually lowering the population of demigods, since the system only pulls copies from the historical record* and can’t actually result in heroes being erased from that same record**.

            * Exception: Alaya is a cheating bastard.

            ** Exception: Ab, frevbhfyl, vg’f n fcbvyre.

        • FrankistGeorgist says:

          Hell, if anything, I’d try and keep up a good correspondence with Hera. “Hey how’s the husband? Run off again? If that lady comes into my Kingdom I’ll make sure to have her killed. Oh what’s that, people aren’t burning the right part of the cow? How many squab are equivalent to a cow? Too much god blood distracting people from sacrifices and the laws of hospitality? People these days!” then quietly try and align my Asian Empire to fly under the radar of the Gods.

          Admittedly, that’s when having Athena’s wisdom would help, since Greek Mythology is not brimming with evidence of how humans and gods can totally work things out to their mutual benefit.

    • broblawsky says:

      Presumably, Athena’s blessing includes being the world’s greatest philosopher, which seems like the best route for attaining happiness in this life and enduring fame in history, a la Plato or Socrates. More people can identify Socrates correctly than Mithradates, after all.

      • Le Maistre Chat says:

        This. If I were a guy in Paris’s place, I’d choose the only goddess offering to make me a better person.

      • Erusian says:

        More people can identify Socrates correctly than Mithradates, after all.

        I’m not sure this is comparing like to like. Socrates is probably more comparable to Alexander the Great than Mithradates. And I suspect there’s closer parity there.

    • John Schilling says:

      Athena’s gift is more broadly valuable, and can be pressed into service as a partial substitute for the other two. Also, Athena is a more reliable patron which, coupled with the use of her gift, would seem to offer better odds of surviving the attentions of the other two vengeful goddesses than any other combination. But I think I will be trying to keep a low profile going forward, rather than trying to conquer Asiatic domains and woo Spartan princesses through my augmented skills.

      A tactful “none of the above” would be the safe answer, but there’s no fun in that and not much chance of anyone remembering my name in ten thousand years.

    • Athena.

      I don’t want to rule things, and beauty is not very high up on my list of desiderata for a partner.

    • Randy M says:

      If we’re going by the rewards, and not the personalities of the givers, I’d go with Aphrodite, then Athena, then Hera.
      Assuming I was not already married, in which case I’d cross Aphrodite off the list. And also assuming that the truest love of the most beautiful woman means an enduring, nurturing love and not simply the most intense transient lust.
      I’m wise enough to rule my own life (not that I’d turn down more wisdom if offered, of course) and have no ambitions of ruling Asia. Reliable romantic companionship adds the most to my life.

    • Simulated Knave says:

      You don’t take the bribe.

      That’s why Paris is punished with tragedy and ruin. Because when asked to judge a contest by the gods themselves, because of his reputation for fairness in a previous contest involving gods, he took a bribe.

      You need not judge the contest. Zeus refused to, and as a mere mortal you can certainly take your cue from him. Or you can judge the contest by whatever standard you want. Just do not take the bribe. Make a clean decision, uninfluenced by the gifts. Preferably find a polite way to refuse the gifts before making the decision.

      Personally, based on what I’ve seen of statues, I’d pick Athena. And that’s probably the gift I’d pick, too. But picking any of the gifts will lead you to ruin. Refuse the gifts, then judge.

    • Theodoric says:

      Athena. I might not conquer anything, I might not get the most beautiful woman, but if I am the wisest and most skilled at whatever I attempt (“in both peace and war”) I would probably have a pretty nice life.

    • cassander says:

      Seems like if I were the wisest and most skilled, I could conquer Asia if I wanted, and if I weren’t wise and skilled, I’d have trouble keeping it. Ditto wooing the most beautiful woman in the world, but that’s a close-run second.

    • Jiro says:

      “I define the fairest goddess as the one most capable of protecting me from the other goddesses”. Add enough reasoning that it doesn’t sound as blunt as that.

      • Protagoras says:

        Honestly, Athena and Hera are probably close enough to a tie in that department that you need a tiebreaker. Both are quite protective of their own, and while Athena is personally more formidable, Hera has better political connections. And similarly Athena’s gift makes you good at protecting yourself and Hera’s gift gives you lots of minions to protect you. I don’t see an easy answer as to which will end up protecting you better (maybe just go with whichever style you’re more comfortable with). But, yes, this is the reason Paris was an idiot; Aphrodite is the blazingly obvious wrong choice.

        • Aapje says:

          Are Greek Gods deterred by bodyguards? It seems to me that individual skill or getting help is key.

          You don’t send minions against Medusa, but use a mirror.

  61. Plumber says:

    From The New York Times today

    “Why the Cool Kids Are Playing Dungeons & Dragons
    Fighting the dragon queen Tiamat is a much more satisfying way to spend time with my friends than social media ever was.

    By Annalee Newitz

    Ms. Newitz is a science journalist and novelist.

    I started playing Dungeons & Dragons right around the time I completely gave up on Facebook. It was a little less than a year ago, as the first stories broke about the Cambridge Analytica scandal. I was sick of the social media idea of friendship, defined as likes or shares or “X knows the same 50 people you know.” So when my friend Kate suggested we start a game of Dungeons & Dragons, I thought, “Yes, I’m going to get together with people face-to-face, without any hearting or retweeting, and we’re going to eat chips and fight those damn cultists who are trying to resurrect the evil, five-headed dragon queen Tiamat.”

    Until then, I had played a little D&D as an adult, but I hadn’t joined a group that met regularly. But I am basically the target demographic for “Stranger Things.” Like the characters on that show, I played D&D in the 1980s with a group of geeky guys every day at lunch throughout the sixth grade, slaying vegepygmies in a crashed spaceship and meeting the great demon Lolth in her sticky transdimensional web.

    Kate became our dungeon master, the narrator of our adventure, who sets the scene using maps, dice, flowery language and silly accents. We were joined by seven other friends around my dining room table, eager to take on the roles of fighting monk, rogue, sorcerer, warlock, paladin, bard and cleric. As soon as Kate told us to fill out our character sheets, I remembered the feeling of sheer awesomeness that had drawn me to the game when I was 11. I was about to become an Aarakocra cleric, a bird person with a divine connection to nature who could call down lightning, raise winds, grow plants from the barren earth and heal the dying with a touch.

    But D&D isn’t only about inventing a more badass version of myself, with wings and magic powers instead of sneakers and a laptop. I was also drawn to the idea of building a social group whose baseline assumption was that we’d see one another regularly. There’s a sense of purpose to the gathering.

    Using a few maps spread on the table, we chart our course, explaining to Kate and one another what we want to do next. And when Kate leaves us on a cliffhanger, there’s no “Hey, I’ll text you later and maybe we can meet up.” Of course we’ll meet up again. The point of the game isn’t to win; it’s to go adventuring together.

    Wizards of the Coast, the parent company of Dungeons & Dragons, reported that 8.6 million people played the game in 2017, its biggest year of sales in two decades. That mark was eclipsed in 2018, when D&D sales reportedly grew 30 percent. All of those D&D consumers are snapping up the Fifth Edition, a new rule set released in 2014 that emphasizes a flexible approach to combat and decision-making. New players don’t need to learn as many arcane rules to get started, and sales of D&D starter kits skyrocketed.

    Adding to the newfound popularity are thousands of D&D games broadcast on YouTube and the live-stream service Twitch. “Critical Role,” a popular livestream and podcast, features actors playing the game.

    This surge of interest is no doubt also inspired by shows like “Stranger Things” and the D&D-esque world of “Game of Thrones.” We want to escape into fantasy worlds where we know who the bad guys are and our spells to banish evil actually work. In this way, D&D is similar to online games like World of Warcraft, where people take on imaginary identities, form a guild and shout at one another using headsets while fighting orcs.

    What makes D & D different is that we can never forget about the human beings behind the avatars. When a member of my group makes a bad choice, I can’t look into his face and shout insults the way I would if we were playing online. He’s a person, and my friend, even if he also inexplicably decided to open an obviously booby-trapped trunk, get a faceful of poison and use up my last remaining healing spell.

    Annalee Newitz (@Annaleen), a science journalist, is the founder of the science fiction website io9 and the author of a novel, “Autonomous.””

    • J Mann says:

      New players don’t need to learn as many arcane rules to get started

      Well, if you’re going to play an arcane caster, you should at least learn your spell save DC, the concentration mechanic, and spell components, which a surprising number of people don’t. You can depend on your GM for the more esoteric stuff like casting more than one spell in the same turn and/or round. 🙂

    • Walter says:

      8 players? I roll to disbelieve on how often they are able to get this game happening. I bet this is monthly, if that.

      • J Mann says:

        7 players and a DM. I think it’s great that she found an adult table with friends, and it sounds like she liked it, but we can infer:

        1) That campaign either rarely met or was willing to meet without 1-3 players depending on schedule.

        2) Unless Kate is a very experienced DM, combats took forever and were extremely easy most of the time, with occasional moments of outright deadliness.

        3) Unless Kate is a great DM, players spent a lot of time building dice towers.

        • Nick says:

          It says that “we,” i.e., DM Kate and player Annalee, were joined by 7 other friends. That’s 9 total.

        • Watchman says:

          I’m now trying to visualise the best way to build a tower of d4s. Or how to get a character with better attacks maybe…

          • Nick says:

            This is a known packing problem!

            I confess to building a lot of dice towers in my day—I’ve been in one-shots with 12+ players and several of our regular campaigns back in college were 7+ players. I always go cube, octahedron, decahedron, dodecahedron, icosahedron, tetrahedron. Le Maistre Chat and Nornagest may have heard my dice tower fall near the mic last night too, sorry about that. 😀

          • woah77 says:

            I’m slowly phasing out my cube dice for Better D6s. Namely dodecahedron D6s. Because they’re more evenly random.

          • thevoiceofthevoid says:

            Top to bottom:
            d4
            d6
            d8
            d12
            d20

            d10s aren’t platonic solids, they can go sit off to the side. Unless I’m going for a real challenge, in which case the d10 and d100 go between the d8 and the d12.

          • LHN says:

            I always kind of hated the way tetrahedral dice (don’t) roll. I recently bought a few of these, which would stack reasonably nicely.

            https://i.warosu.org/data/tg/img/0491/35/1472881934378.jpg

            (There are also eight-sided dice with 1-4 twice, but I like the Roman numerals as an easy way to distinguish them from normal 12-siders.)

          • John Schilling says:

            OK, so how much does it cost to get a set of those made up with four sides marked “IV” and only two marked “I”, and another set vice versa, and how long will it take for the average GM to note that you’re swapping those dice in and out of service for the really important rolls?

            Actually, that would be pretty low-return given how rarely the D4 is used on really important rolls, but now that I think about it a D20 with two “18” faces and no “3” might go unnoticed for quite a while. Fiddling with the “1” or “20” might be more conspicuous.

          • bean says:

            Note to self: If gaming with John, bring my own dice.

          • John Schilling says:

            Diplomacy is notably dice-free. Just saying.

          • bean says:

            I’ve been there and done that, and I don’t feel particularly eager to be backstabbed by you again.

          • Aapje says:

            @bean

            You should have picked Athena.

            Just saying…

          • J Mann says:

            @John Schilling – dice cheating is very enticing in DnD. It’s not that hard, but if other players and the GM start to notice unusual rolls at particularly important moments (a) they watch pretty closely and (b) they tend to react pretty strongly.

            Rigged dice would have the advantage that your good luck wouldn’t be confined to key moments, but my bet is they would raise Bayesian hackles pretty quickly.

          • Nancy Lebovitz says:

            Would a smallish advantage like substituting an 18 for a 3 be that likely to get noticed?
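
            One way to put a rough number on it: a purely illustrative simulation of how long the two-18s/no-3 die from upthread would take to give itself away to a GM literally keeping Bayesian score. The ~100:1 evidence threshold and the sequential likelihood-ratio framing are assumptions for the sketch, not anything anyone here proposed:

              import math
              import random

              FAIR_P = 1 / 20

              def roll_loaded_d20(rng):
                  # Hypothetical crooked d20 from upthread: two "18" faces, no "3".
                  faces = [f for f in range(1, 21) if f != 3] + [18]
                  return rng.choice(faces)

              def rolls_until_caught(threshold=math.log(100), max_rolls=10_000, seed=0):
                  """Roll the crooked die until a running log-likelihood ratio
                  (loaded vs. fair) crosses the evidence threshold (default ~100:1)."""
                  rng = random.Random(seed)
                  llr = 0.0
                  for n in range(1, max_rolls + 1):
                      face = roll_loaded_d20(rng)
                      p_loaded = 2 / 20 if face == 18 else FAIR_P  # a 3 never appears
                      llr += math.log(p_loaded / FAIR_P)
                      if llr >= threshold:
                          return n
                  return max_rolls

              if __name__ == "__main__":
                  samples = sorted(rolls_until_caught(seed=s) for s in range(1000))
                  print("median rolls before ~100:1 evidence:", samples[len(samples) // 2])

            Against this particular loaded-die hypothesis only the surplus 18s move the likelihood ratio (each is worth ln 2), so crossing 100:1 takes about seven of them, which at a 1-in-10 rate is on the order of 70 rolls on average; the conspicuously absent 3 only becomes damning once someone tests the broader hypothesis that the die is off in some unspecified way.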

      • Le Maistre Chat says:

        3-4 players + DM is the sweet spot for D&D. DM Kate could be running two games a week in the same setting for people with different schedules!

        • Nick says:

          I think four players is ideal, and I’d hard cap at six personally.

          • woah77 says:

            This. After 4 it becomes exponentially harder to give everyone a reasonable amount of your attention. After 6 you have at least two players consistently checked out at all times. Unless you’re running a LARP, in which case none of these apply, but instead you get cliques of 4-6 people.

          • Le Maistre Chat says:

            @Nick: If I were lucky enough to have seven players, I wouldn’t kick someone out if they all managed to show up for a session. I would not, however, deliberately build a group that big out of people whose schedules normally all line up!

          • Nick says:

            @Le Maistre Chat: Yeah, I think it’s a better idea to run multiple campaigns rather than a single large one. Gives players more variety too, and it cuts down on the eternal problem of having too many books and systems to run.

          • Nornagest says:

            You could kick it old school and run two parallel campaigns in the same setting, with the two groups as rivals.

            I’ve always kind of wanted to see the Head of Vecna incident live.

          • Walter says:

            4 players is ideal. 3 players is playable, 5 is really pushing it. 6 or 2 are right out.

          • Le Maistre Chat says:

            @Nornagest:

            You could kick it old school and run two parallel campaigns in the same setting, with the two groups as rivals.

            Yes, that’s what I want to do.

          • Nornagest says:

            4 players is ideal. 3 players is playable, 5 is really pushing it. 6 or 2 are right out.

            Absent a really good DM, I’d take five over three in any edition of D&D. Three players is probably optimal for keeping combat flowing, but most adventures aren’t written to deal with a party that’s missing one of the four basic classes or a close equivalent — take out the thief and you can’t deal with locks or traps, take out the mage and you don’t have battlefield control. Old school is actually more flexible, partly because you can fill the gaps with henchmen and hirelings and partly because you can usually get away with being more creative, but it bogs down less in the first place so it can handle a larger table more easily.

            I’ve seen other games successfully run with a lot more than four players. Paranoia plays well all the way up to eight or so, but that’s partly because a third of them will be dead at any given time.

          • Le Maistre Chat says:

            @Nornagest: You know I like giving each Old School D&D player 2 characters, to keep each player engaged in case their PC dies. 3.0 codified this as the Leadership feat, but using Leadership at even a 3-player table became horrifically tedious. In my ACKS game, we could probably recruit 2 more players who control 2 characters each before combat took as long as my small 3.x table did.
            In 5E, it’s almost impossible for PCs to die, so 4 players with tank-healer-wizard-rogue is close to ideal.

            For the unfamiliar: Nornagest plays in my Adventurer, Conqueror, King (Basic/Expert with more classes, more economics & D20 Thief skills, basically) Discord game as dual tanks, Nick is a Cleric with an attack dog and our third player covers both the Mage and Thief PCs.

    • Nabil ad Dajjal says:

      I have no data, just anecdotes, but it feels like D&D podcasts have reeled in a ton of new players and 5e’s streamlined rules have helped keep some of them in the hobby.

      That said, the rules are still very intimidating to a lot of would-be players. I’ve mostly been recruiting graduate students and FAANG employees for my games, and a lot of them still struggle to keep track of their bonuses or spell slots. These aren’t dummies, but while the book-keeping is greatly reduced from 3.X, it’s still a huge challenge for new players.

      I know this will never happen, but I would kill for a Basic line for 5e. The starter rules don’t really qualify: they’re just as fiddly in play as the full game, only with fewer options. If there was a simpler version with less of what my girlfriend calls “secret math,” it would help to ease people in and reduce attrition.

      • J Mann says:

        I made some 1-2 page “getting started” sheets for each of the players in my daughter’s Level 3 one-shot, and they went over well. I had two sections:

        1) A brief explanation of a player’s action economy – actions, bonus, movement, and reactions.

        2) A list of what that character could do during each phase, with some quick reference stats for each choice.

        3) If I did it over, I’d add a section with a couple sentences about role playing and a quick rundown of their stats and proficiencies, plus maybe background and some backstory.

        • Nabil ad Dajjal says:

          If you still have them and feel comfortable putting them up somewhere I’d love to see them.

          • J Mann says:

            I’ll see if I can find them tonight and reach out.

          • J Mann says:

            Well, now that I’ve found them, they’re definitely a first draft. I threw them together an hour ahead of a one-shot.

            Nate’s sheet is for someone who never played before – we gave him a level 3 dragonborn fighter (champion), and the sheet explains the combat action economy and what his character can do in each phase of a round.

            The other sheets are for other level 3 characters whose players had played before – I didn’t explain the action economy, but just summarized most of their combat options.

            As I said, if I had it to do over again, I’d probably make 3 clean one page sheets.

            1) Explaining out-of-combat role play and highlighting where the character’s abilities and skills were strong and weak, plus maybe some backstory prompts.

            2) Explaining the in-combat action economy.

            3) Explaining that character’s options during each phase of combat.

        • FrankistGeorgist says:

          I would also be interested in seeing these. I have some newbies to induct and I’ve never DMed so I’m looking for many diverse opinions on what information new players should see.

          • J Mann says:

            Thanks, I’ll try to find them.

            So as not to oversell: I was trying to solve a specific problem, which is that players in one-shots often have no idea what their options are during combat and fall back to “I cast that one cantrip again” or “I swing my sword.” So the guides are really focused on the specific combat mechanics of a level 3 druid, assassin, champion, etc., and not on how to role play, but I bet you could make a quick page on that too.

      • Le Maistre Chat says:

        The B/X or BECMI rules from the ’80s are still really great for this. Look up the retroclone Dark Dungeons for a free version.

        • Nabil ad Dajjal says:

          I mean, if I want to run the Rules Cyclopedia I’ll just run it; the only change I noticed in Dark Dungeons is converting the attack tables to BAB.

          But yeah, I’d like to see an updated version of Basic which incorporates some of the things 5e got right, most notably the Advantage / Disadvantage mechanic. I’ve played around with the idea of making one and nailed down the basic mechanics but my attempt at writing it died when I realized that nobody would ever actually play it.

          • dndnrsn says:

            What, you can’t just force rules you wrote on your group? I’ve successfully done that. So far, going well.

      • Walter says:

        I am with you on the ‘podcasts/streams lured in a new generation’ thing.

        I think it’s the fact that the saved media makes it impossible not to notice that these people are actually having fun. RPGs are fun, and people like fun. Then you get the ‘They ought to have…’ effect, and before you know it a game is starting up.

    • Randy M says:

      If parents of the ’80s had anticipated the unholy terror of social media, they would have been pushing D&D on their children left and right instead of banning ‘those evil books’.

    • Plumber says:

      Well I’m envious of Annalee Newitz for getting to play a form of D&D again, I got a little bit in (besides play-by-post which is a poor substitute) after my older son turned the age I was in ’78 when I started playing, but before my younger son was born – if I was to try any gaming now I’d wind up divorced or murdered.

      I still buy, read a bit, and then hide game books – but that’s not the same.

  62. CheshireCat says:

    What are some things that help people with anhedonia (specifically the emotional blunting kind)? I’ve been trying to treat my own for a long time, with little success. I don’t have much money so things like ECT are out of the question, but any suggestions help.

    My main symptom is a general lack of emotionality. I just don’t feel much. The severity of it waxes and wanes, usually being exacerbated by stress, but it persists in one form or another regardless of my life circumstances.

    Things I’m interested in trying or have heard helps:

    – Parnate (MAOI)
    – NSI-189 (Experimental, neurogenic)
    – Sarcosine + NAC (Stack, don’t even know)
    – Uridine stack (Aka the ‘Mr Happy Stack’, dubious name but pretty common)
    – Rexulti/Vraylar (Atypical antipsychotics)
    – Rhodiola Rosea (Adaptogen)
    – Salvia microdosing (Something something downregulating kappa opioid receptors)
    – Curcumin (Anti-inflammatory, antioxidant)
    – Meditation (Meditation)

    Things I’ve tried:

    – Ketamine (Self-administered, helps a bit but is inconsistent)
    – Exercise (May help a little but is inconsistent at best)
    – Therapy (3 different providers, several years)
    – Zoloft
    – Wellbutrin SR and XR (Helped a little until it started making me feel bad every time I took it)
    – Trintellix
    – Celexa
    – Remeron
    – Mushrooms (Low and high doses)
    – Ayahuasca
    – Weed
    – Lactobacillus Reuteri 6475 yogurt (Oxytocinergic-activity-increasing bacteria I guess)
    – Having a girlfriend lol
    – A variety of vitamin supplements — Vitamin D, B complex, magnesium, fish oil, vitamin C, folate
    – Various other supplements — SAMe, L-methylfolate, inulin, creatine, L-tyrosine, L-Theanine, ashwagandha, Alpha-Lipoic Acid, St. John’s Wort

    Been trying to address this issue for quite a few years now. I don’t want to say I’m getting desperate, but I will say my tolerance for risk is steadily increasing. I’ve posted about this a few times already but am always looking for more ideas.

    • Core says:

      Consider tianeptine for your next pharmaceutical to try—better safety profile than Parnate, and one of the SSC Nootropics surveys ranked it rather highly. The internet consensus is that it’s subtly mood-lifting without the blunting of an SSRI. Anecdata: it helped get me out of a multi-year rut of anhedonia. N=1, so there is a chance it just spontaneously resolved, but people online report similar things. YMMV.

      It’s unscheduled in the US (and most other places), so you can buy it off the clearnet easily and cheaply. Mechanism of action might be mu-opioid related. There’s some stories out there of people experiencing withdrawal after taking far above the therapeutic dose, but the risks are fairly low otherwise.

      As far as the other things, I’d give exercise another shot, especially if medications/supplements help your energy levels enough to do so. If the process of getting ready to exercise outside is too much, a bodyweight routine in your room works. Otherwise, find some kind of cardio that you find intrinsically fun. Anecdata: I also thought exercise didn’t help much, but I realized I just hated running. Bicycling worked much better. I generally prefer strength training for health reasons, but cardio seems better for mood. Bonus points for cardio that lets you see the outdoors.

      Also, check on your sleep duration and quality. Going from 4.5hrs a night to 7.5hrs on a regular schedule probably did more for my mood than anything else. Use melatonin as needed to achieve that.

      • Cariyaga says:

        What type of tianeptine is best? There’s sodium, sulfate, etc. for sale. Not OP, but I have similar symptoms, as well as motivational issues that make things I can purchase online legally ideal.

        • Core says:

          Sodium and sulfate are the two major kinds. Sodium had clinical trials run on it and is the one that can actually be prescribed in the EU. Sulfate is supposed to have a longer half-life, avoiding thrice-daily dosing and lowering abuse potential, but I’m not clear that it’s been proven to be as effective as sodium. Last I checked, sulfate was more research chemical than actual pharma.

          Prescribing guidelines for sodium are 12.5mg t.i.d., so you can make a solution in water and dose volumetrically if you buy powder. Or if you can find tablets, even better. It’s a shame that it’s become more difficult to source (see belvarine’s reply), but at least it’s still legal in nearly all states, and you can still find reasonably good vendors on the clearnet, though some of them might want to be paid in BTC.

      • belvarine says:

        Unfortunately, since Michigan scheduled tianeptine, most major payment processors have refused to work with vendors selling the substance, so it’s very difficult, if not impossible, to obtain tianeptine from reputable online US vendors with adequate quality control these days.

        You may be able to discuss a prescription with your psychiatrist, but since tianeptine was discontinued after trials in the US, major pharmaceutical companies don’t manufacture it here, and you may have a difficult time convincing anyone to prescribe it to you.

        Note: None of the above applies to Europe, where you can get a tianeptine prescription.

      • CheshireCat says:

        Tianeptine worries me a bit because it seems like a quintessential “treating the symptoms” kinda thing, given the short half-life (the need for repeat dosing) and the potential for abuse. I’m mostly looking for long-term cures.

        Sounds like people’s reports on it are very positive though, so I’ll look into it. Thanks!

        I’m still exercising, though I do way less cardio than I should. I’ve mostly been sticking to weight lifting recently, I will try to work in more cardio and HIIT if I can though. I hate running, but bicycling is the shit.

        Sleep is super important and something I’ve been meaning to look into, since I’ve always needed like 9-10+ hours to feel rested. I don’t know how to ascertain my sleep quality but I think I’m getting enough. I do have issues with insomnia when I’m not taking meds for it though — melatonin as recommended by SSC doesn’t help, sadly. I want to try a weighted blanket soon.

        • Rebecca Friedman says:

          I don’t have anhedonia, but I do sleep 9-10 hours – or did, up until ~3 weeks ago when I started taking, of all things, Allegra. (For an entirely unrelated reason.) So far the best explanation anyone’s given me is “if an antihistamine is helping, it might be sleep apnea” – have you been tested for that? – but honestly I don’t really know why it works, just that it sure seems to. Anyway, this probably won’t help, there are lots of different reasons to sleep 9-10 hours and I gather some people just do, but I figured I’d throw it out there just in case.

          (I do not know if it interacts with anything else you’re taking; this is not medical advice, and the sample size is one, so take it with the requisite grain(s) of salt.)

          IME weighted blankets help with insomnia but not sleep duration, but the effect varies widely across people. If you sleep better in winter/with heavier blankets and have worse insomnia in summer/with lighter blankets, that is strong evidence that a weighted blanket would help.

          Good luck – that’s quite an extensive list. I really hope something works!

          • CheshireCat says:

            I was actually tested for sleep apnea as part of a clinical trial, and I came out clean. I’m also below average in weight and nobody’s told me I snore, so I doubt that’s it.

            Although the antihistamine bit is interesting, I take Remeron to help with sleep and that has antihistamine properties iirc. I’ll consider trying some at a later date.

            Posting this shortly after having slept for 11 hours… I definitely could’ve slept more. Maybe I should be alarmed that nearly half my life is spent asleep when I can afford to do so.

            Thank you for your well wishing, I appreciate it. If I ever find a cure the first thing I’ll do is post about it in an open thread here.

    • nameless1 says:

      Haven’t tried but glycine worths a look. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3756114/

      I find it interesting how the purported effects of glycine are reversing rather exactly the typical civilizational illnesses. Glycine is mostly from collagen, which means eating the less appetizing parts of animals, not just steaks. Could a glycine deficiency play a big part in our civilizational illnesses?

      • CheshireCat says:

        Interesting. I added TMG (Trimethylglycine) to the list. Nice thing about that is it’s a methyl donor, which I guess means that it supports the function of other neurochemical systems in addition to being a source of glycine. Thanks for the tip!

    • FrankistGeorgist says:

      As mentioned above a weighted blanket started a virtuous cycle for me by greatly improving my sleep hygiene.

      I’m pretty good about trying supplements one at a time for a couple months at a time, and like most people I agree they’re basically bunk. Some exceptions:

      Fish oil does make my incredibly dry skin only very dry
      Vitamin D really does make doctors stop saying I should take vitamin D
      ZMA before bed occasionally, to relax and sleep better, and as part of a general curiosity about sex supplements that I won’t go into here but can vouch for.
      MSM – I didn’t even know what it did; my bodybuilder brother left it behind by accident, but it had a noticeable effect on my joints and body aches. Basically it made me feel younger and made exercise and life a lot easier. (Full disclosure, I’ve heard this is basically just garlic, but even with my love for the stuff I’ve not been able to recreate the effect naturally.)
      Curcumin – a much milder version of the above. I think both are anti-inflammatories? I take both now, and when I’m away for weeks and don’t take them, I notice the difference.

      I do LSD once every 4-6 months and always feel nicely cleaned out emotionally by it. Weed, if anything, exacerbated my anhedonia. I like kratom for a nice light body high that I find centering, but it’s super addictive, so that’ll depend on how addiction-prone you are. It’s legal in most states, though. I do it once every other week or thereabouts.

      If you do the keto thing you won’t fix anhedonia, but you’ll be almost entirely without blood sugar fluctuations for days, and I found that eerie and novel. I can see why people say it gives them a “clear head.” But I found keto joyless as anything, and it came with the purity-status games all diets do, which was quite unappealing.

      Exercise is obviously objectively good but I still have no desire to do it, so I do it only out of some sense of obligation. But yeah, obviously exercise, I’m just not the one to say it totally worked for me. Sleep hygiene though. My god. I worship at the altar of Hypnos now for just how normal good sleep makes me feel. ZMA, no phone before bed, dimming lights and listening to brown noise and boring audiobooks to drown out the screaming nightmare of a city outside. That and deep breathing, which is halfway to meditation so maybe.

      I am coming out of it though, and the anhedonia dogged me from before the city and to my surprise I might even conquer it before I leave, under honestly the worst circumstances for improving mental health outside of San Francisco, on this island the shape of a neurosis.

      • CheshireCat says:

        Weighted blanket is definitely on my list of things to try, though arguably it should be higher as I can never seem to get an adequate amount of sleep.

        I doubt LSD will work; serotonergic psychedelics just kind of hurt at high doses. My experiences with shrooms and ayahuasca have made me less than optimistic about the prospects of similar drugs; I don’t seem to derive any lasting insight or emotional refreshment from them. That said, I do have a few tabs which I plan to try eventually.

        Keto is interesting, though I doubt it’d help me. I also want to try gluten-free or low-wheat, because why not.

        I am coming out of it though, and the anhedonia dogged me from before the city and to my surprise I might even conquer it before I leave, under honestly the worst circumstances for improving mental health outside of San Francisco, on this island the shape of a neurosis.

        This and the vitamin D thing gave me a chuckle

        Thanks for the tips, I think your glowing reviews are enough that I’ll be looking into buying a weighted blanket soon…

    • Incurian says:

      Spend more time outside.

  63. toastengineer says:

    Cons: I would be tempted to use it much more than I use current bans

    I say “good,” but then I’m a “k-line them all, God will sort the banned”-ist in general. I would like to see a more democratic process for this though, where a user who isn’t on the opposite side from the user in question has to say “yeah, that guy has crossed the line,” since this kind of ban blurs the bright line of whether or not you’re censoring people. Maybe this is impractical.

    I’m not sure how to do it without it being an administrative headache for me.

    Honestly, I think it would be valuable even if it were just a public Naughty List; it directly attacks the disciplined user’s status, serves as a “don’t listen to that guy, he doesn’t represent us” to outside observers, and has value even just as an official “hey stop that.” I mean, I learned to drive by always doing the first thing that occurred to me and listening for horns…

    • pdbarnlsey says:

      I would probably rather be temporarily banned than publicly shamed.

      Perhaps the message could be a warning “I am considering banning you based on your contributions on culture war topics”. I feel like that would do the trick somewhere north of 90% of the time, but perhaps I’m easily cowed.

      • Enkidum says:

        Ditto. I’ve sometimes wondered if I’m crossing the line and a confirmation to that effect would pretty much stop the behaviour in its tracks.

      • JohnBuridan says:

        I think the email warning would work (and it would test community response to soft power). I do this with debate students when their arguments get too heated.

        An outright ban would work too, but could result in aforementioned headaches, especially since you might feel committed to it once you begin the process.

        Start with email warnings. If those don’t work, move to a three month culture war ban system. If that causes too many headaches, we can figure out how to reduce the headache from there.

    • silver_swift says:

      > Honestly, I think it would be valuable even if it were just a public Naughty List

      Let’s not go for the public shaming route, that tends to enflame conversation rather than cooling it down.

      Which users are banned can/should be public information, but ‘Hey, knock it off’ style warnings are much more effective when given in private.

    • k-line them all

      I just wanted to post my appreciation for the terminology. IRC represent. *fistbump*

  64. “Let me know what you think.”

    My inclination is against.

    I’m not in your position and don’t know how serious the threat to the blog is from attackers pointing out reasonable but politically incorrect comments. But since part of what makes the blog so good is the wide range of positions offered, I’m worried about the potential danger of selectively suppressing such.

    One problem with the secrecy of the approach you describe is that if you are making decisions which many contributors to the blog would consider misguided, we will never find out about the decisions, so you will never find out our view of them.

  65. Robert Jones says:

    I expect 3 to work badly.

    As described, it doesn’t appear that anyone other than you (Scott) and the subject of the ban would be aware of the ban. Therefore only those people could police the ban. I’m sure that most SSC participants would self-police effectively, but I suspect that the subset who break rules in the context of CW topics are less likely to do so. If the subjects fail to self-police effectively, then you (Scott) are faced with the impossible task of personally checking that they’re complying with the restriction.

    This is exacerbated by the fact that “culture war” really isn’t a clearly defined category (and in an SSC context seems to mean something slightly different from what it normally means), so it would be difficult even for a good-faith participant to be confident they were complying with the ban, and the person enforcing it would potentially be faced with a whole series of difficult decisions as to whether comments were straying into forbidden territory.

    The suggestion appears to be aimed at dealing with users who break the rules in a CW context but otherwise make a positive contribution to the blog. I’m afraid I’m not entirely clear who these people are, so I may not have a good understanding of the problem, but perhaps you could make more use of warnings or short bans? You could even say in a warning, “If you find yourself incapable of posting on this topic in a constructive way [or whatever] then you may wish to consider refraining from posting on this topic at all, because if you continue as you are I will have to ban you, which would be unfortunate as I value your other contributions.”

    • Robert Jones says:

      I’ve now worked out what DavidFriedman is referring to. If 3 is in fact a response to that episode, I think it’s bad for a further reason.

      If (because of problems you’ve previously referred to or other reasons) you prefer that people not discuss certain topics on this blog, you should ban those topics. You should not allow discussion of the topics while secretly banning people who take one side of the argument from participating. That would obviously be antithetical to our ideals of rational discourse.

      • kaathewise says:

        You should not allow discussion of the topics while secretly banning people who take one side of the argument from participating.

        This is a strawman, it’s nowhere near what Scott is suggesting.

        • Clutzy says:

          Based on recent public banning behavior it would seem to be the likely result. You know Bayesian reasoning and all.

    • Frederic Mari says:

      “If you find yourself incapable of posting on this topic in a constructive way [or whatever] then you may wish to consider refraining from posting on this topic at all, because if you continue as you are I will have to ban you, which would be unfortunate as I value your other contributions.”

      Seems pretty good halfway house solution to me.

      I was banned for 3 months for making a disrespectful (but obviously totally fair and deserved) comment about Trump.

      Had I known that Trump’s honour was a sacred cow on SSC, I would have abstained from making a crude comment about him/his abilities. I mean, I get it. There’s no great intellectual contribution to be made by Trump supporters and myself trading insults in the comment section. I thought I was being witty but I get it.

      Now, as I said, the 3-month ban didn’t really affect me much since I’m not a regular contributor (I’ve read the blog regularly since I discovered it last year and like to catch up on long-ago topics, but I don’t comment often), but it kinda stung – I was being super witty, goddamnit!

      A ‘fair warning’ would have been a decent ‘Strike 1. Strike 2 and you’re out’ method.

      • 10240 says:

        I suspect the problem with your comment wasn’t insulting Trump’s honor but that comments with such tone steer the conversation towards Trump opponents and supporters trading insults, and away from Trump opponents and supporters having a constructive debate.

      • Nick says:

        If anyone would like to decide for themselves whether Frederic’s comment was fair, deserved, or super witty, they can read it here.

        • Thanks.

          I don’t think that post made any valuable contribution to the discussion—just insults without argument.

        • Frederic Mari says:

          Hmmm. Re-reading, yeah, okay, not my best and not exactly witty. Oh well. Selective memory bias.

          I shouldn’t have done it. OTOH, I do think I was making a valid point in the first sentence.

          Anyhooo… back to moderation tactics: I think I’d have reacted correctly to a warning and dialed it down as a consequence, and found it a slightly better experience than a straight ban.

          That said, the ban also worked in my case, so it’s hard to argue too strenuously for a different moderation policy.

          • Aapje says:

            OTOH, I do think I was making a valid point in the first sentence.

            You probably meant paragraph, because I doubt you are referring to “Hahahahahaha.”

            It’s quite a poor paragraph, which includes circular reasoning. You argue that Trump is racist by arguing that his racism will make him treat Norwegian migrants better than non-whites, but you don’t actually seem to have any evidence that he did/does treat Norwegian migrants better.

            If you want to argue supposed hypocrisy in the future (elsewhere), I suggest you only make claims that you can actually substantiate and then provide the evidence. Then others can evaluate the evidence.

            In many cases, what seems hypocritical to some, seems like a case of disparate situations that require a disparate response to others.

            By giving the evidence, those who disagree can then point out why they think the situations are disparate. You can then argue they are not (or agree). This allows for (semi-)good debate.

            Extremely vague assertions or judgments based on mind-reading don’t allow for good debate.

          • rlms says:

            That’s a very isolated and frankly silly demand for rigour. Trump’s ban on immigration from Muslim countries and lack of ban on immigration from Norwegian countries (clear evidence that “he did/does treat Norwegian migrants better”) should be common knowledge. It’s true that for some reasonable definitions of “racist” you can’t infer from that evidence that Trump is racist, but Frederic’s comment didn’t do that. The only crime he committed was being coarse and not particularly interesting, and thereby failing to meet “kind” and “necessary”.

          • albatross11 says:

            Nitpick:

            Trump talked about a ban on immigration from Muslim countries or by Muslims, but the actual executive order he put out banned immigration from five countries that were already under heavy immigration restrictions because of the risk of terrorists coming in via the immigration system.

            Also, Muslim isn’t a race, it’s a religion. For example, a policy of refusing immigration by Iraqi Muslims but permitting it by Iraqi Christians might be unconstitutional, but it wouldn’t be racist.

          • Aapje says:

            @rlms

            Frederic made a claim that Trump distinguishes between and makes or seeks to make different policy for these three racial groups:
            – Brown/black immigrants
            – Chinese/Eastern Asian immigrants
            – Norwegian immigrants

            I’ve never seen this claim before or since, so it’s not a claim that people can be expected to be familiar with. The particular policy you refer to has been argued (with some evidence provided) to be limited to a subset of Muslim countries for legal reasons, whereas Trump would have preferred to extend it to all Muslim countries. However, I’ve never seen anyone provide evidence that Trump’s policy had a racial intent, or that he would specifically prefer to restrict migration by brown/black people. So it is far from obvious that the evidence you argue is relevant actually is relevant to the claim at hand.

            Another issue with the claim is that it shifts between a racial claim and a nationalist claim. At one end we have skin color (brown/black), then for the second group it shifts to a national/regional group (Chinese/Eastern Asian), and then for the last group it shifts once more, fully to nationality (Norwegian). This doesn’t add up to a coherent claim about Trump’s views or policies. Is the claim that Trump discriminates by skin color, by religion, or by culture?

            One way to troll is to play games like this: one writes things that are extremely ambiguous, even to the point of being contradictory, while suggesting a certain interpretation. Then, when the other person tries to rebut this interpretation with specific objections, one claims to have been strawmanned, that the other person’s interpretation is due to bias, etc.

            Now, the intent behind such writing doesn’t have to be trolling, but I would argue in favor of being very intolerant of such sloppy writing, whether it’s intentional or accidental.

          • Lambert says:

            This is the non-cw thread, right?

          • HeelBearCub says:

            … the actual executive order he put out banned immigration from five countries that were already under heavy immigration restrictions because of the risk of terrorists coming in via the immigration system.

            The bolded part is merely the pretextual reason.

            The (overwhelmingly likely by Bayesian logic) actual reason is that Trump pre-committed to a ban on Muslims. He then stated that he had banned Muslims.

          • Frederic Mari says:

            I would like to reply and justify my claims, but 1- I was only citing my case as an argument about moderation (and I’ve unintentionally derailed that) and 2- I don’t want to break the no-CW rule on this thread.

            Any idea of where on the blog I might reply? Would you gents follow me back to the thread with my initial comment?

          • Nick says:

            The 125.25 thread is up now and permits CW discussion.

          • albatross11 says:

            HeelBearCub:

            Those five nations were already under heavier immigration restrictions from the Obama administration, as I understand it.

          • Frederic Mari says:

            @aapje @albatross11 @rlms and whoever else might be interested: It’s done.

            See https://slatestarcodex.com/2019/04/10/open-thread-125-25/#comment-740380

      • bean says:

        First, this is the CW-free thread, so please refrain from insisting that your original insult was valid and witty.

        Second, I think the ban had a lot more to do with tone than with content. Trump’s honor isn’t a sacred cow here. Posts that read like low-effort drive-bys are generally not liked, particularly when they come from new posters, who don’t have a record of good contributions to balance against.

      • Deiseach says:

        Had I known that Trump’s honour was a sacred cow on SSC, I would have abstained from making a crude comment about him/his abilities.

        It’s not that his honour is a sacred cow, or even about him in particular. It’s that you make that remark, then I call Alexandria Ocasio-Cortez “Horseface”, then everyone else chips in with their favourite insult for their least favourite personality, and we end up like the nastier parts of Reddit.

        The best way to avoid all that is not to start in the first place.

        • Dan L says:

          I can appreciate a good bit of apophasis, but it tends to be guilty of the sins it decries. CW-free thread, too.

    • 10240 says:

      Another issue is that other commenters wouldn’t see, by example, what sorts of culture war comments Scott considers bad.

      Is there any reason not to post these content-specific ban notices as the usual bold red public comment, rather than (or in addition to) by e-mail?